Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
We developed algorithms based on the sequential probability ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the … using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio … the more complicated problems, in particular those where no clear mean can be established.
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
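The stopping rule described here is compact enough to sketch. Below is a minimal illustration (not the authors' code) of SPRT termination under a 2PL IRT model; the item parameters, the abilities theta0/theta1 bracketing the classification bound, and the error rates are invented for demonstration.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_classify(responses, items, theta0, theta1, alpha=0.05, beta=0.05):
    """Wald SPRT stopping rule for a two-category classification test.

    responses: 0/1 item scores; items: (a, b) parameter pairs.
    theta0/theta1: abilities just below/above the classification bound.
    """
    lower = math.log(beta / (1 - alpha))    # accept 'below the cut'
    upper = math.log((1 - beta) / alpha)    # accept 'above the cut'
    llr = 0.0
    for x, (a, b) in zip(responses, items):
        p0, p1 = p_correct(theta0, a, b), p_correct(theta1, a, b)
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "below"
        if llr >= upper:
            return "above"
    return "continue testing"

items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4), (1.0, 1.0)]
print(sprt_classify([1, 0, 1, 1], items, theta0=-0.2, theta1=0.2))
```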
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
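As a rough illustration of the test structure described (the paper's exact reduced likelihood ratio is not reproduced here), a schematic Wald SPRT over a stream of collision-probability estimates might look like the following; pc_estimates, pc_prior, and the error rates are assumed placeholder inputs, not the paper's formulation.

```python
import math

def wald_conjunction_test(pc_estimates, pc_prior, alpha=1e-3, beta=1e-2):
    """Schematic Wald SPRT over successive collision-probability estimates.

    pc_estimates: sequence of current best estimates of collision probability.
    pc_prior: prior-based estimate of collision probability.
    Returns 'maneuver', 'no_maneuver', or 'continue'.
    """
    A = math.log((1 - beta) / alpha)   # decide the collision risk is real
    B = math.log(beta / (1 - alpha))   # decide the risk is at the base level
    llr = 0.0
    for pc in pc_estimates:
        # One hedged reading of "ratio of current to prior collision probability"
        llr += math.log(pc / pc_prior)
        if llr >= A:
            return "maneuver"
        if llr <= B:
            return "no_maneuver"
    return "continue"

print(wald_conjunction_test([2e-4, 5e-4, 9e-4, 2e-3], pc_prior=1e-4))
```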
Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test
NASA Technical Reports Server (NTRS)
Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara
2016-01-01
The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. Using a modified version of Wald's sequential probability ratio test, we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
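A toy rendering of the idea, an SPRT over noisy range readings with the vehicle slowing while the test is undecided, is sketched below; the Gaussian noise model, thresholds, and speed schedule are illustrative assumptions, not the authors' implementation.

```python
import math

def obstacle_sprt(readings, mu_clear, mu_obstacle, sigma, alpha=0.01, beta=0.01):
    """Run Wald's SPRT on range readings; yield (state, speed_scale) per step.

    While the test is undecided the vehicle slows, buying time to accumulate
    evidence -- the mechanism by which false positives are traded for speed.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for r in readings:
        # Gaussian log-likelihood ratio increment: obstacle vs. clear
        llr += ((r - mu_clear) ** 2 - (r - mu_obstacle) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            yield "obstacle", 0.0          # stop the vehicle
            return
        if llr <= lower:
            yield "clear", 1.0             # resume full speed
            return
        # undecided: scale speed down as evidence for an obstacle grows
        frac = (llr - lower) / (upper - lower)
        yield "undecided", max(0.2, 1.0 - frac)

for state, speed in obstacle_sprt([3.4, 3.1, 2.8], mu_clear=5.0,
                                  mu_obstacle=3.0, sigma=1.0):
    print(state, round(speed, 2))
```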
A detailed description of the sequential probability ratio test for 2-IMU FDI
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E. (Livermore, CA); Valentine, John D. (Redwood City, CA); Beauchamp, Brock R. (San Ramon, CA)
2007-07-17
A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
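The patent text outlines four processing steps. A compact sketch of the statistical core, a Poisson SPRT on count data with an exponentially weighted running background estimate, is given below; the rates, error levels, and forgetting factor are illustrative assumptions, not the patented mechanism.

```python
import math

def radiation_sprt(counts, source_rate, alpha=1e-4, beta=1e-2,
                   bg_init=10.0, forget=0.05):
    """Poisson SPRT for 'background + source' vs. 'background only'.

    counts: per-interval gross counts. The background rate is re-estimated
    (EWMA) whenever the test decides 'background', echoing the idea of
    adjusting the models as background knowledge accumulates.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    bg, llr = bg_init, 0.0
    for k in counts:
        lam0, lam1 = bg, bg + source_rate
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)  # Poisson LLR increment
        if llr >= upper:
            yield "alarm"
            llr = 0.0
        elif llr <= lower:
            yield "background"
            bg = (1 - forget) * bg + forget * k  # update background estimate
            llr = 0.0
        else:
            yield "testing"

for state in radiation_sprt([9, 12, 10, 25, 30, 11], source_rate=15.0):
    print(state)
```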
Computerized Classification Testing with the Rasch Model
ERIC Educational Resources Information Center
Eggen, Theo J. H. M.
2011-01-01
If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
Spiegelhalter, David; Grigg, Olivia; Kinsman, Robin; Treasure, Tom
2003-02-01
To investigate the use of the risk-adjusted sequential probability ratio test in monitoring the cumulative occurrence of adverse clinical outcomes. Retrospective analysis of three longitudinal datasets: patients aged 65 years and over under the care of Harold Shipman between 1979 and 1997, patients under 1 year of age undergoing paediatric heart surgery at the Bristol Royal Infirmary between 1984 and 1995, and adult patients receiving cardiac surgery from a team of cardiac surgeons in London, UK. Outcome measures were annual and 30-day mortality rates. Using reasonable boundaries, the procedure could have indicated an 'alarm' in Bristol after publication of the 1991 Cardiac Surgical Register, and in 1985 or 1997 for Harold Shipman depending on the data source and the comparator. The cardiac surgeons showed no significant deviation from expected performance. The risk-adjusted sequential probability ratio test is simple to implement, can be applied in a variety of contexts, and might have been useful to detect specific instances of past divergent performance. The use of this and related techniques deserves further attention in the context of prospectively monitoring adverse clinical outcomes.
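The risk-adjusted SPRT weights each case by its predicted risk. A minimal sketch following the standard log-likelihood-ratio formulation for an inflated odds ratio is shown below; the outcomes and predicted risks would come from a case-mix model and are placeholders here.

```python
import math

def risk_adjusted_sprt(outcomes, risks, odds_ratio=2.0, alpha=0.01, beta=0.01):
    """Cumulative risk-adjusted SPRT for adverse-outcome monitoring.

    outcomes: 1 if the adverse event occurred for a case, else 0.
    risks: predicted probability of the event for each case.
    Signals 'alarm' if performance matches odds inflated by odds_ratio;
    'in_control' resets the monitoring statistic.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for y, p in zip(outcomes, risks):
        # LLR increment for H1: odds of the event multiplied by odds_ratio
        llr += y * math.log(odds_ratio) - math.log(1 - p + odds_ratio * p)
        if llr >= upper:
            yield "alarm"
            llr = 0.0
        elif llr <= lower:
            yield "in_control"
            llr = 0.0
        else:
            yield "monitoring"

print(list(risk_adjusted_sprt([0, 1, 1, 0, 1],
                              [0.05, 0.10, 0.08, 0.20, 0.07])))
```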
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about which process is occurring, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases toward zero and the number of hypotheses increases toward infinity. It also remains valid under certain special constraints on the probability, such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on average, when the probability of erroneous partial solutions is low.
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT; Wald, 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
1984-06-01
SEQUENTIAL TESTING (Bldg. A, Room C), 1300-1545: A TRUNCATED SEQUENTIAL PROBABILITY RATIO TEST … Keywords: suicide; optical data; operational testing; reliability; random numbers; bootstrap methods; missing data; sequential testing; fire support; complex computer model; carcinogenesis studies. … contributed papers can be ascertained from the titles of the …
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
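In the same spirit as those programs, the first few statistics (lag-1 transitional frequencies, expected frequencies, transitional probabilities, and adjusted residuals) can be computed in a few lines. The sketch below uses the common contingency-table form of the adjusted residual, which is an assumption about which variant the programs implement.

```python
from collections import Counter
import math

def lag1_stats(codes):
    """Lag-1 transitional frequencies, probabilities, and adjusted residuals."""
    pairs = list(zip(codes, codes[1:]))
    n = len(pairs)
    freq = Counter(pairs)
    given = Counter(a for a, _ in pairs)   # totals for antecedent codes
    target = Counter(b for _, b in pairs)  # totals for consequent codes
    stats = {}
    for (a, b), f in freq.items():
        expected = given[a] * target[b] / n
        prob = f / given[a]                # transitional probability P(b | a)
        denom = math.sqrt(expected * (1 - given[a] / n) * (1 - target[b] / n))
        stats[(a, b)] = {"freq": f, "expected": round(expected, 3),
                         "prob": round(prob, 3),
                         "adj_resid": round((f - expected) / denom, 3)}
    return stats

print(lag1_stats(list("ABABBAABAB"))[("A", "B")])
```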
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
Expert system for online surveillance of nuclear reactor coolant pumps
Gross, Kenny C.; Singer, Ralph M.; Humenik, Keith E.
1993-01-01
An expert system for online surveillance of nuclear reactor coolant pumps. This system provides a means for early detection of pump or sensor degradation. Degradation is determined through the use of a statistical analysis technique, sequential probability ratio test, applied to information from several sensors which are responsive to differing physical parameters. The results of sequential testing of the data provide the operator with an early warning of possible sensor or pump failure.
Statistical characteristics of the sequential detection of signals in correlated noise
NASA Astrophysics Data System (ADS)
Averochkin, V. A.; Baranov, P. E.
1985-10-01
A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2001-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
1999-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
Mutual Information Item Selection in Adaptive Classification Testing
ERIC Educational Resources Information Center
Weissman, Alexander
2007-01-01
A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered with using other local-…
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
The United States' Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to have comparable Type I and Type II error rates as the current fixed-sample binomial test. Policymakers might consider efficient alternatives to the current procedure, such as the SPRT. Copyright © 2017 Elsevier Ltd. All rights reserved.
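The efficiency argument can be reproduced in a small simulation: a binomial SPRT testing an acceptable exceedance rate p0 against an impaired rate p1 typically stops well before a fixed-sample test would. The rates, error levels, and true impairment level below are illustrative, not California's regulatory values.

```python
import math
import random

def binomial_sprt(p0, p1, alpha=0.05, beta=0.1, max_n=500, p_true=0.25, seed=1):
    """Draw Bernoulli exceedance samples one at a time until the SPRT decides."""
    rng = random.Random(seed)
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    while lower < llr < upper and n < max_n:
        x = rng.random() < p_true            # one monitoring sample exceeds or not
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        n += 1
    if llr >= upper:
        decision = "impaired"
    elif llr <= lower:
        decision = "not impaired"
    else:
        decision = "undecided"
    return decision, n

print(binomial_sprt(p0=0.10, p1=0.25))   # (decision, samples used)
```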
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e., an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing that represents a radionuclide as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process repeats for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
ERIC Educational Resources Information Center
Lau, C. Allen; Wang, Tianyou
The purposes of this study were to: (1) extend the sequential probability ratio testing (SPRT) procedure to polytomous item response theory (IRT) models in computerized classification testing (CCT); (2) compare polytomous items with dichotomous items using the SPRT procedure for their accuracy and efficiency; (3) study a direct approach in…
EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.
ERIC Educational Resources Information Center
Frick, Theodore W.; And Others
Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules and normalize the transition probabilities in the matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations, and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments; the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
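The n-step computation in this abstract is standard Markov-chain arithmetic: normalize single-step transition counts into a stochastic matrix, then raise it to the n-th power. A small sketch with invented counts:

```python
import numpy as np

# Single-step transition counts between three locations (illustrative values)
counts = np.array([[0, 8, 2],
                   [5, 0, 5],
                   [9, 1, 0]], dtype=float)

# Normalize rows to obtain the one-step transition probability matrix
P = counts / counts.sum(axis=1, keepdims=True)

# n-step transition probabilities of a stationary chain: P raised to the n
n = 3
Pn = np.linalg.matrix_power(P, n)
print(Pn[0])   # probabilities of reaching each location from location 0 in 3 steps
```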
Decay modes of the Hoyle state in 12C
NASA Astrophysics Data System (ADS)
Zheng, H.; Bonasera, A.; Huang, M.; Zhang, S.
2018-04-01
Recent experimental results give an upper limit of less than 0.043% (95% C.L.) on the direct decay of the Hoyle state into 3α relative to the sequential decay into 8Be + α. We performed one- and two-dimensional tunneling calculations to estimate this ratio and found it to be more than one order of magnitude smaller than the experimental limit, depending on the range of the nuclear force. This is within high-statistics experimental capabilities. Our results can also be tested by measuring the decay modes of high excitation energy states of 12C, where the ratio of direct to sequential decay might reach 10% at E*(12C) = 10.3 MeV. The link between a Bose-Einstein condensate (BEC) and the direct decay of the Hoyle state is also addressed. We discuss a hypothetical 'Efimov state' at E*(12C) = 7.458 MeV, which would decay mainly sequentially into 3α of equal energies: a counterintuitive result of tunneling. Such a state, if it exists, is at least 8 orders of magnitude less probable than the Hoyle state, thus below the sensitivity of recent and past experiments.
Chan, Cheng Leng; Rudrappa, Sowmya; Ang, Pei San; Li, Shu Chuen; Evans, Stephen J W
2017-08-01
The ability to detect safety concerns from spontaneous adverse drug reaction reports in a timely and efficient manner remains important in public health. This paper explores the behaviour of the Sequential Probability Ratio Test (SPRT) and its ability to detect signals of disproportionate reporting (SDRs) in the Singapore context. We used the SPRT with a combination of two hypothesised relative risks (hRRs) of 2 and 4.1 to detect signals of both common and rare adverse events in our small database. We compared the SPRT with other methods in terms of the number of signals detected and whether labelled adverse drug reactions were detected or the reaction terms were considered serious. The other methods used were the reporting odds ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN) and Gamma Poisson Shrinker (GPS). The SPRT produced 2187 signals in common with all methods, 268 unique signals, and 70 signals in common with at least one other method; it did not produce signals in 178 cases where two other methods detected them, and 403 signals were unique to one of the other methods. In terms of sensitivity, the ROR performed better than the other methods, but the SPRT found more new signals. The performances of the methods were similar for negative predictive value and specificity. Using a combination of hRRs for the SPRT could be a useful screening tool for regulatory agencies, and more detailed investigation of the medical utility of the system is merited.
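Screening with a hypothesised relative risk amounts to a Poisson SPRT against expected report counts. A sketch of that core computation is below; the observed/expected counts are placeholders, and only the hRR pair of 2 and 4.1 comes from the abstract.

```python
import math

def sdr_sprt(observed, expected, h_rr, alpha=0.05, beta=0.1):
    """Poisson SPRT for disproportionate reporting of one drug-event pair.

    observed: cumulative report counts over successive periods.
    expected: matching cumulative expected counts under no association.
    Returns 'signal', 'no_signal', or 'continue'.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    for obs, exp in zip(observed, expected):
        # Poisson log-likelihood ratio at relative risk h_rr vs. 1
        llr = obs * math.log(h_rr) - (h_rr - 1) * exp
        if llr >= upper:
            return "signal"
        if llr <= lower:
            return "no_signal"
    return "continue"

# Screen with both hypothesised relative risks, as in the paper
for h_rr in (2.0, 4.1):
    print(h_rr, sdr_sprt(observed=[3, 7, 12], expected=[1.1, 2.4, 3.8], h_rr=h_rr))
```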
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
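Wald's binomial plan used here reduces to two parallel decision lines in the (sample number, cumulative infested plants) plane. The sketch below computes those lines from the boundary proportions and error rates reported in the abstract (0.05, 0.15, alpha = beta = 0.1); the tally-threshold and resampling machinery are omitted.

```python
import math

def wald_stop_lines(p0, p1, alpha, beta):
    """Slope and intercepts of Wald's sequential sampling decision lines."""
    log_ratio = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    slope = math.log((1 - p0) / (1 - p1)) / log_ratio
    h1 = math.log((1 - beta) / alpha) / log_ratio   # 'treat' line intercept
    h0 = math.log(beta / (1 - alpha)) / log_ratio   # 'no treat' line intercept
    return slope, h0, h1

slope, h0, h1 = wald_stop_lines(p0=0.05, p1=0.15, alpha=0.1, beta=0.1)
for n in (10, 20, 40):
    print(f"n={n}: treat if infested >= {slope * n + h1:.1f}, "
          f"stop sampling if <= {slope * n + h0:.1f}")
```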
Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T
2007-01-01
Sequential sampling is characterized by using samples of variable sizes, and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with each plot comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 samples for stands with low and high infestation, respectively.
Sequential ranging integration times in the presence of CW interference in the ranging channel
NASA Technical Reports Server (NTRS)
Mathur, Ashok; Nguyen, Tien
1986-01-01
The Deep Space Network (DSN), managed by the Jet Propulsion Laboratory for NASA, is used primarily for communication with interplanetary spacecraft. The high sensitivity required to achieve planetary communications makes the DSN very susceptible to radio-frequency interference (RFI). In this paper, an analytical model is presented of the performance degradation of the DSN sequential ranging subsystem in the presence of downlink CW interference in the ranging channel. A trade-off between the ranging component integration times and the ranging signal-to-noise ratio to achieve a desired level of range measurement accuracy and the probability of error in the code components is also presented. Numerical results presented illustrate the required trade-offs under various interference conditions.
NASA Technical Reports Server (NTRS)
Braun, W. R.
1981-01-01
Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
Millroth, Philip; Guath, Mona; Juslin, Peter
2018-06-07
The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This holds also for monetarily incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with a low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
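Under the Poisson assumption, the SPRT increment per time bin has a closed form. A toy two-alternative version follows; the rates, bin width, and error levels are invented, and spikes are generated by Bernoulli thinning as an approximation to a Poisson process.

```python
import math
import random

def spike_train_sprt(rate0, rate1, dt=0.002, alpha=0.02, beta=0.02, seed=0):
    """Decide which Poisson rate generated a spike train, one bin at a time."""
    rng = random.Random(seed)
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr, t = 0.0, 0.0
    while lower < llr < upper:
        # simulate the true process (rate1) with at most one spike per small bin
        k = 1 if rng.random() < rate1 * dt else 0
        # Poisson log-likelihood ratio increment for rate1 vs. rate0
        llr += k * math.log(rate1 / rate0) - (rate1 - rate0) * dt
        t += dt
    return ("rate1" if llr >= upper else "rate0"), round(t, 3)

print(spike_train_sprt(rate0=20.0, rate1=40.0))   # (decision, seconds elapsed)
```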
NASA Astrophysics Data System (ADS)
Piatyszek, E.; Voignier, P.; Graillot, D.
2000-05-01
One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged to the receiving water during rainy events. To meet these goals, managers have to equip sewer networks with real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or a sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists in comparing the sensor response with a forecast of that response. The forecast is provided by a model, more precisely by a state estimator: a Kalman filter. This Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with the binary sequential probability ratio test of Wald. Moreover, by crossing available information on several nodes of the network, a diagnosis of the detected anomalies is carried out. This method provided encouraging results during the analysis of several rains on the sewer network of Seine-Saint-Denis County, France.
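The detection scheme pairs a Kalman filter's innovation sequence with Wald's binary SPRT. A compressed scalar illustration is given below, with a random-walk flow model and a hypothesized innovation mean shift standing in for the paper's sewer-network model; all dynamics, noise levels, and thresholds are invented.

```python
import math

def innovation_sprt(measurements, q=0.5, r=4.0, shift=3.0, alpha=0.05, beta=0.05):
    """Scalar Kalman filter whose innovations feed a Wald SPRT for a mean shift.

    H0: innovations are zero-mean (sensor healthy); H1: mean shifted by `shift`.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    x, p, llr = measurements[0], 1.0, 0.0
    for z in measurements[1:]:
        p += q                      # predict (random-walk flow model)
        s = p + r                   # innovation variance
        nu = z - x                  # innovation
        k = p / s
        x += k * nu                 # measurement update of the state estimate
        p *= (1 - k)
        # Gaussian LLR increment: N(shift, s) vs. N(0, s) for the innovation
        llr += (shift / s) * (nu - shift / 2)
        if llr >= upper:
            return "fault detected"
        if llr <= lower:
            llr = 0.0               # evidence of health; keep monitoring
    return "no decision"

print(innovation_sprt([10.0, 10.2, 9.9, 10.1, 18.0, 18.3, 18.1, 18.4]))
```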
Perlis, Roy H.; Patrick, Amanda; Smoller, Jordan W.; Wang, Philip S.
2009-01-01
The potential of personalized medicine to transform the treatment of mood disorders has been widely touted in psychiatry, but has not been quantified. We estimated the costs and benefits of a putative pharmacogenetic test for antidepressant response in the treatment of major depressive disorder (MDD) from the societal perspective. Specifically, we performed cost-effectiveness analyses using state-transition probability models incorporating probabilities from the multicenter STAR*D effectiveness study of MDD. Costs and quality-adjusted life years were compared for sequential antidepressant trials, with or without guidance from a pharmacogenetic test for differential response to selective serotonin reuptake inhibitors (SSRIs). Likely SSRI responders received an SSRI, while likely nonresponders received the norepinephrine/dopamine reuptake inhibitor bupropion. For a 40-year-old with major depressive disorder, applying the pharmacogenetic test and using the non-SSRI bupropion for those at higher risk for nonresponse cost $93,520 per additional quality-adjusted life-year (QALY) compared with treating all patients with an SSRI first and switching sequentially in the case of nonremission. Cost/QALY dropped below $50,000 for tests with remission rate ratios as low as 1.5, corresponding to odds ratios ~1.8–2.0. Tests for differential antidepressant response could thus become cost-effective under certain circumstances. These circumstances, particularly availability of alternative treatment strategies and test effect sizes, can be estimated and should be considered before these tests are broadly applied in clinical settings. PMID:19494805
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isselhardt, Brett H.
2011-09-01
Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure relative uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to provide a distinction between uranium atoms and potential isobars without the aid of chemical purification and separation. We explore the laser parameters critical to the ionization process and their effects on the measured isotope ratio. Specifically, the use of broad-bandwidth lasers with automated feedback control of wavelength was applied to the measurement of 235U/238U ratios to decrease laser-induced isotopic fractionation. By broadening the bandwidth of the first laser in a 3-color, 3-photon ionization process from 1.8 GHz to about 10 GHz, the variation in sequential relative isotope abundance measurements decreased from >10% to less than 0.5%. This procedure was demonstrated for the direct interrogation of uranium oxide targets with essentially no sample preparation. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variation in laser parameters on the measured isotope ratio. This work demonstrates that RIMS can be used for the robust measurement of uranium isotope ratios.
Dry minor mergers and size evolution of high-z compact massive early-type galaxies
NASA Astrophysics Data System (ADS)
Oogi, Taira; Habe, Asao
2012-09-01
Recent observations show evidence that high-z (z ~ 2 - 3) early-type galaxies (ETGs) are much more compact than those with comparable mass at z ~ 0. The dry merger scenario is one of the most probable ones that can explain such size evolution. However, previous studies based on this scenario have not succeeded in explaining the properties of both high-z compact massive ETGs and local ETGs consistently. We investigate the effects of sequential, multiple dry minor (stellar mass ratio M2/M1 < 1/4) mergers on the size evolution of compact massive ETGs. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. We show that sequential minor mergers of compact satellite galaxies are the most efficient at growing the size and decreasing the velocity dispersion of compact massive ETGs. The change of stellar size and density of the merger remnant is consistent with recent observations. Furthermore, we construct merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Database, and estimate the size growth of the galaxies by dry minor mergers. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained in the case of sequential minor mergers in our simulations.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Carpenter, J. Russell
2016-01-01
The cadence of proximity operations for the OSIRIS-REx mission may have an extra induced challenge given the potential of the detection of a natural satellite orbiting the asteroid Bennu. Current ground radar observations for object detection orbiting Bennu show no found objects within bounds of specific size and rotation rates. If a natural satellite is detected during approach, a different proximity operation cadence will need to be implemented as well as a collision avoidance strategy for mission success. A collision avoidance strategy will be analyzed using the Wald Sequential Probability Ratio Test.
Posterior error probability in the Mu-II Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-II Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and gave false indications of error on 0.2% of the acquisitions.
Technical Reports Prepared Under Contract N00014-76-C-0475.
1987-05-29
Technical Report No. / Title / Author / Date: 264, Approximations to Densities in Geometric Probability — H. Solomon, M.A. Stephens (10/27/78); 265, Sequential …; …, Certain Multivariate Normal Probabilities — S. Iyengar (8/12/82); 323, EDF Statistics for Testing for the Gamma Distribution with … — M.A. Stephens (8/13/82); …, … Nets (…-20-85); 360, Random Sequential Coding by Hamming Distance — Yoshiaki Itoh, Herbert Solomon (07-11-85); 361, Transforming Censored Samples and Testing Fit …
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Extended target recognition in cognitive radar networks.
Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin
2010-01-01
We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
Developing subject-specific classifiers to recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class; we term it balanced threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% and the average decision time to be 2.77 s, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
Matheny, Michael E; Normand, Sharon-Lise T; Gross, Thomas P; Marinac-Dabic, Danica; Loyo-Berrios, Nilsa; Vidi, Venkatesan D; Donnelly, Sharon; Resnic, Frederic S
2011-12-14
Automated adverse outcome surveillance tools and methods have potential utility in quality improvement and medical product surveillance activities. Their use for assessing hospital performance on the basis of patient outcomes has received little attention. We compared risk-adjusted sequential probability ratio testing (RA-SPRT) implemented in an automated tool to Massachusetts public reports of 30-day mortality after isolated coronary artery bypass graft surgery. A total of 23,020 isolated adult coronary artery bypass surgery admissions performed in Massachusetts hospitals between January 1, 2002 and September 30, 2007 were retrospectively re-evaluated. The RA-SPRT method was implemented within an automated surveillance tool to identify hospital outliers in yearly increments. We used an overall type I error rate of 0.05, an overall type II error rate of 0.10, and a threshold that signaled if the odds of dying 30 days after surgery were at least twice those expected. Annual hospital outlier status, based on the state-reported classification, was considered the gold standard. An event was defined as at least one occurrence of a higher-than-expected hospital mortality rate during a given year. We examined a total of 83 hospital-year observations. The RA-SPRT method alerted on 6 events among three hospitals for 30-day mortality compared with 5 events among two hospitals using the state public reports, yielding a sensitivity of 100% (5/5) and specificity of 98.8% (79/80). The automated RA-SPRT method performed well, detecting all of the true institutional outliers with a small false-positive alerting rate. Such a system could provide confidential automated notification to local institutions in advance of public reporting, providing opportunities for earlier quality improvement interventions.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.
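The termination rule described, stop once the largest posterior model probability clears a threshold, can be stated in a few lines. A sketch for two Gaussian-mean models follows; the priors, threshold, and data are assumed values, and the information-maximizing design step is omitted.

```python
import math

def sequential_model_choice(data, means, priors, sigma=1.0, threshold=0.95):
    """Update posterior model probabilities after each observation.

    Stops and returns the winning model index once one posterior exceeds
    `threshold`; otherwise returns (None, running posteriors).
    """
    post = list(priors)
    for y in data:
        likes = [math.exp(-0.5 * ((y - m) / sigma) ** 2) for m in means]
        post = [p * l for p, l in zip(post, likes)]
        total = sum(post)
        post = [p / total for p in post]        # normalize posteriors
        best = max(range(len(post)), key=post.__getitem__)
        if post[best] >= threshold:
            return best, post
    return None, post

print(sequential_model_choice([0.9, 1.4, 0.7, 1.2, 1.1, 1.3],
                              means=[0.0, 1.0], priors=[0.5, 0.5]))
```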
Buffer management for sequential decoding. [block erasure probability reduction
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.
Performance Analysis of Ranging Techniques for the KPLO Mission
NASA Astrophysics Data System (ADS)
Park, Sungjoon; Moon, Sangman
2018-03-01
In this study, the performance of ranging techniques for the Korea Pathfinder Lunar Orbiter (KPLO) space communication system is investigated. KPLO is the first lunar mission of Korea, and pseudo-noise (PN) ranging will be used to support the mission along with sequential ranging. We compared the performance of both ranging techniques using the criteria of accuracy, acquisition probability, and measurement time. First, we investigated the end-to-end accuracy error of a ranging technique, incorporating all sources of error such as those from ground stations and the spacecraft communication system. This study demonstrates that increasing the clock frequency of the ranging system is not required when the dominant factor in accuracy error is independent of the thermal noise of the ranging technique being used. Based on this understanding of ranging accuracy, the measurement times of PN and sequential ranging are further investigated and compared, with both techniques satisfying the accuracy and acquisition requirements. We demonstrate that PN ranging performs better than sequential ranging in the signal-to-noise ratio (SNR) regime where KPLO will be operating, and we found that the T2B (weighted-voting balanced Tausworthe, voting v = 2) code is the best choice among the PN codes available for the KPLO mission.
Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.
Remillard, Gilbert
2011-07-01
There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.
Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S
2013-05-01
This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to the historical branded product at four health plans from April to October 2009. The outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with a 1.5-month delay, and nonsequential analysis (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7%, respectively, with the 1.5-month delay. We did not identify signals for either outcome regardless of extract timeframe, and only slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by the delay in analysis, when using a historical control group, differential accrual between exposures and outcomes may theoretically bias near real-time risk estimates toward the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.
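For orientation, the maximized SPRT used in these analyses replaces Wald's fixed alternative with the maximum-likelihood alternative at each look. A minimal Poisson-form sketch (the study's exposure-matched binomial details differ, and the critical value below is only a placeholder):

```python
import math

def poisson_maxsprt_llr(observed, expected):
    """Maximized SPRT log-likelihood ratio for Poisson counts: the MLE of the
    relative risk is observed/expected, and the LLR is zero whenever that
    estimate is <= 1 (one-sided test for excess risk)."""
    if observed <= expected:
        return 0.0
    return observed * math.log(observed / expected) - (observed - expected)

def surveil(monthly_obs, monthly_exp, critical_value=3.0):
    """Signal when the cumulative LLR crosses a critical value; in practice
    the critical value is computed from the exact sequential distribution
    for the target alpha, not the placeholder used here."""
    obs = exp = 0.0
    for o, e in zip(monthly_obs, monthly_exp):
        obs, exp = obs + o, exp + e
        if poisson_maxsprt_llr(obs, exp) >= critical_value:
            return "signal"
    return "no signal"
```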
The Role of Orthotactic Probability in Incidental and Intentional Vocabulary Acquisition in L1 and L2
ERIC Educational Resources Information Center
Bordag, Denisa; Kirschenbaum, Amit; Rogahn, Maria; Tschirner, Erwin
2017-01-01
Four experiments were conducted to examine the role of orthotactic probability, i.e. the sequential letter probability, in the early stages of vocabulary acquisition by adult native speakers and advanced learners of German. The results show different effects for orthographic probability in incidental and intentional vocabulary acquisition: Whereas…
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy
1993-01-01
Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
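A loose sketch of the idea for a Gaussian mean with known variance (our simplification; the paper's algebra, compound 'no change' probability, and restart logic are more detailed):

```python
import numpy as np
from scipy import stats

def no_change_ratio(existing, new_obs, sigma):
    """Likelihood of the new observations under the existing mean relative to
    the mean re-estimated from the augmented sample; a steadily shrinking
    ratio signals a parameter change."""
    mu_old = np.mean(existing)
    mu_new = np.mean(np.concatenate([existing, new_obs]))
    ll_old = stats.norm.logpdf(new_obs, mu_old, sigma).sum()
    ll_new = stats.norm.logpdf(new_obs, mu_new, sigma).sum()
    return float(np.exp(ll_old - ll_new))  # <= 1; small values suggest change
```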
NASA Astrophysics Data System (ADS)
Basieva, Irina; Khrennikov, Andrei
2015-10-01
In this paper we study the possibility of using quantum observables to describe a combination of the order effect with sequential reproducibility for quantum measurements. By the order effect we mean a dependence of the probability distributions (of measurement results) on the order of measurements. We consider two types of sequential reproducibility: adjacent reproducibility (A-A) (the standard perfect repeatability) and separated reproducibility (A-B-A). The first is reproducibility with probability 1 of the result of a measurement of some observable A measured twice, one A measurement after the other. The second, A-B-A, is reproducibility with probability 1 of the result of an A measurement when another quantum observable B is measured between the two A's. Heuristically, it is clear that the second type of reproducibility is complementary to the order effect. We show that, surprisingly, this need not be the case. The order effect can coexist with separated reproducibility as well as adjacent reproducibility for both observables A and B. However, an additional constraint in the form of separated reproducibility of the B-A-B type makes this coexistence impossible. The problem under consideration was motivated by attempts to apply the quantum formalism outside of physics, especially in cognitive psychology and psychophysics. However, it is also important for the foundations of quantum physics as a part of the problem of the structure of sequential quantum measurements.
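The order effect itself is easy to exhibit numerically with non-commuting projectors; a minimal qubit example using Lüders' rule (our illustration, not the paper's construction):

```python
import numpy as np

def seq_prob(rho, p_first, p_then):
    """P(first outcome, then second outcome) for projective measurements:
    Tr(P2 P1 rho P1 P2), by Lueders' rule."""
    return float(np.real(np.trace(p_then @ p_first @ rho @ p_first @ p_then)))

A = np.array([[1.0, 0.0], [0.0, 0.0]])          # projector onto |0>
B = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])    # projector onto |+>
rho = np.array([[0.9, 0.1], [0.1, 0.1]])        # a valid density matrix

print(seq_prob(rho, A, B), seq_prob(rho, B, A))  # 0.45 vs 0.30: order effect
```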
Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex
Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire
2015-01-01
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
Optimal sequential measurements for bipartite state discrimination
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.; Weir, Graeme
2017-05-01
State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
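The joint-measurement benchmark mentioned here is the Helstrom probability; a compact numerical check of that bound (standard textbook formula, not code from the paper):

```python
import numpy as np

def helstrom_success(rho0, rho1, prior0=0.5):
    """Optimal probability of correctly discriminating two density matrices
    with an unrestricted (joint) measurement:
    P = 1/2 * (1 + || (1 - prior0)*rho1 - prior0*rho0 ||_1)."""
    gamma = (1 - prior0) * rho1 - prior0 * rho0
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 + trace_norm)
```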
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
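For readers unfamiliar with the method, the SIR filter named in the abstract is the standard predict-correct-resample cycle; a generic sketch (not Gove's stand-growth model, and all function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, observation, transition, likelihood):
    """One SIR cycle: propagate particles through the state model (predict),
    reweight by the observation likelihood (correct), then resample."""
    particles = transition(particles)                       # predict
    weights = weights * likelihood(observation, particles)  # correct
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```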
Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior
Borrero, Carrie S.W; Borrero, John C
2008-01-01
We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
Predicted sequence of cortical tau and amyloid-β deposition in Alzheimer disease spectrum.
Cho, Hanna; Lee, Hye Sun; Choi, Jae Yong; Lee, Jae Hoon; Ryu, Young Hoon; Lee, Myung Sik; Lyoo, Chul Hyoung
2018-04-17
We investigated the sequential order of tau and amyloid-β (Aβ) deposition in the Alzheimer disease spectrum using a conditional probability method. Two hundred twenty participants underwent 18F-flortaucipir and 18F-florbetaben positron emission tomography scans and neuropsychological tests. The presence of tau and Aβ in each region and impairment in each cognitive domain were determined by Z-score cutoffs. By comparing pairs of conditional probabilities, the sequential order of tau and Aβ deposition was determined. The probability of the presence of tau in the entorhinal cortex was higher than that of Aβ in all cortical regions, and in the medial temporal cortices, the probability of the presence of tau was higher than that of Aβ. Conversely, in the remaining neocortex above the inferior temporal cortex, the probability of the presence of Aβ was always higher than that of tau. Tau pathology in the entorhinal cortex may appear earlier than neocortical Aβ and may spread in the absence of Aβ within the neighboring medial temporal regions. However, Aβ may be required for massive tau deposition in the distant cortical areas. Copyright © 2018 Elsevier Inc. All rights reserved.
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geard, C.R.
1983-01-01
In root meristems of Tradescantia clone 02 (developed by Sparrow and his colleagues for mutation studies), X-rays interfere with the progression of cells through the cell cycle and induce chromosomal aberrations in a dose-dependent manner consistent with linear-quadratic kinetics. Sequential mitotic cell accumulations after irradiation indicate that sensitivity to aberration induction is probably greatest in cells from late S to early G2, with chromatid interchanges the most frequent aberration type and all aberrations consistent with initiation from the interaction between two lesions. The ratio of the coefficients in the linear (α) and the quadratic (β) terms (α/β) is equal to the dose average of specific energy produced by individual particles in the site where interaction takes place. The ratio α/β for chromosomal aberrations is similar to that previously found for X-ray-induced mutation in Tradescantia stamen hairs, supporting the proposal that radiation-induced mutational events are due to chromosomal aberrations with interaction distances of about 1 μm. Abrahamson and co-workers have noted that both α/β ratios appear to be related to nuclear target size and are similar for chromosomal and mutational endpoints in the same organism. These findings support this concept; however, it is apparent that any situation which diminishes yield at high doses (e.g., mitotic delay) will probably affect the β component. 23 references, 5 figures, 2 tables.
System For Surveillance Of Spectral Signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2004-10-12
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
System For Surveillance Of Spectral Signals
Gross, Kenneth C.; Wegerich, Stephan; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2003-04-22
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
System for surveillance of spectral signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2006-02-14
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
System for surveillance of spectral signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2001-01-01
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test (SPRT) methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
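Across these related patents the pipeline is the same: transform signal and reference to the frequency domain, form a difference function, and run an SPRT over the accumulating residuals. A minimal sketch of that pipeline under simplifying assumptions (Gaussian residuals with a known alternative shift; our construction, not the patented implementation):

```python
import numpy as np

def frequency_residual(signal, reference):
    """Difference of magnitude spectra: one reading of the patents'
    frequency-domain difference function."""
    return np.abs(np.fft.rfft(signal)) - np.abs(np.fft.rfft(reference))

def sprt_mean_shift(residuals, mu1, sigma, alpha=0.001, beta=0.001):
    """Wald SPRT: residuals ~ N(0, sigma^2) under H0 vs N(mu1, sigma^2) under H1."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for x in residuals:
        llr += (mu1 / sigma**2) * (x - mu1 / 2.0)
        if llr >= upper:
            return "alarm"
        if llr <= lower:
            return "normal"
    return "undecided"
```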
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
Analyzing multicomponent receptive fields from neural responses to natural stimuli
Rowekamp, Ryan; Sharpee, Tatyana O
2011-01-01
The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916
Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D
2011-12-01
Striacosta albicosta (Smith) (Lepidoptera: Noctuidae), is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn currently are used by crop consultants, including 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during mid-milk-stage corn. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test, and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.
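The underlying decision rule is Wald's binomial SPRT on a per-plant basis; a minimal sketch with illustrative hypothesis values (p0, p1, and the error rates below are ours, not the published plan's parameters):

```python
import math

def binomial_sprt(plants, p0=0.02, p1=0.08, alpha=0.10, beta=0.10):
    """`plants` is an iterable of 0/1 flags (plant infested with egg masses).
    Returns a treat / no-treat decision and the number of plants inspected."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    for n, infested in enumerate(plants, start=1):
        llr += math.log(p1 / p0) if infested else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return ("treat", n)
        if llr <= lower:
            return ("no treat", n)
    return ("keep sampling", n)
```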
Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.
Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric
2017-02-01
Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting the AVS method. © 2017 European Society of Endocrinology.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
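One way to realize the fitted-density idea in these patents is to estimate the normal-operation residual density nonparametrically and define the fault hypothesis as a shifted copy; the sketch below does exactly that (our simplification; the patented hypotheses and test dynamics are more elaborate):

```python
import numpy as np
from scipy import stats

def make_llr(train_residuals, fault_shift):
    """Fit a kernel density to residuals from normal operation and return a
    log-likelihood-ratio function for use in a Wald-style sequential test."""
    f0 = stats.gaussian_kde(train_residuals)
    def llr(x):
        return float(np.log(f0(x - fault_shift)[0]) - np.log(f0(x)[0]))
    return llr
```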
Bansal, A; Kapoor, R; Singh, S K; Kumar, N; Oinam, A S; Sharma, S C
2012-07-01
Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma of the prostate: standard three-dimensional conformal radiotherapy (3DCRT) followed by an intensity-modulated radiotherapy (IMRT) boost (sequential-IMRT) versus simultaneous integrated boost IMRT (SIB-IMRT). Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison, an SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction; however, the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose-volume histograms. Also, the Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCPs of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel were obtained with the sequential-IMRT and SIB-IMRT plans, respectively. For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves a lower NTCP than sequential-IMRT.
Human Inferences about Sequences: A Minimal Transition Probability Model
2016-01-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
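A minimal sketch of this model family: estimate a first-order transition matrix from leaky (exponentially forgotten) counts, where the single free parameter is the forgetting factor (our parameterization; the paper's exact formulation may differ):

```python
import numpy as np

def leaky_transition_estimate(seq, omega=0.95):
    """Track transition probabilities of a binary sequence with leaky counts;
    returns the estimated matrix and the surprise (-log2 p) of each symbol."""
    counts = np.ones((2, 2))            # Dirichlet prior of one pseudo-count
    surprise = []
    for prev, nxt in zip(seq, seq[1:]):
        p = counts[prev, nxt] / counts[prev].sum()
        surprise.append(-np.log2(p))    # predictive surprise before updating
        counts *= omega                 # leak: exponential down-weighting
        counts[prev, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True), surprise
```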
Adaptive sequential Bayesian classification using Page's test
NASA Astrophysics Data System (ADS)
Lynch, Robert S., Jr.; Willett, Peter K.
2002-03-01
In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
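The repeated-SPRT structure described above is exactly Page's cumulative sum recursion; a minimal sketch (names ours):

```python
def pages_test(llr_increments, upper_threshold):
    """Page's test: a repeated SPRT whose lower log-threshold is clamped at
    zero; returns the stopping time when the upper log-threshold (set by the
    desired mean time between false alerts) is first reached, else None."""
    s = 0.0
    for t, inc in enumerate(llr_increments, start=1):
        s = max(0.0, s + inc)
        if s >= upper_threshold:
            return t
    return None
```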
Exact Tests for the Rasch Model via Sequential Importance Sampling
ERIC Educational Resources Information Center
Chen, Yuguo; Small, Dylan
2005-01-01
Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…
Learning in Reverse: Eight-Month-Old Infants Track Backward Transitional Probabilities
ERIC Educational Resources Information Center
Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.
2009-01-01
Numerous recent studies suggest that human learners, including both infants and adults, readily track sequential statistics computed between adjacent elements. One such statistic, transitional probability, is typically calculated as the likelihood that one element predicts another. However, little is known about whether listeners are sensitive to…
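In the usual notation, the forward statistic conditions on the preceding element and the backward statistic on the following one (standard definitions, stated here for concreteness):

```latex
\[
  \mathrm{TP}_{\mathrm{forward}}(Y \mid X) = \frac{P(XY)}{P(X)}, \qquad
  \mathrm{TP}_{\mathrm{backward}}(X \mid Y) = \frac{P(XY)}{P(Y)}
\]
```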
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is important to minimize the sample size even when the null hypothesis is not rejected; in post-market drug and vaccine safety surveillance, that is not important. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
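One simple way to realize the convex-versus-concave distinction is the power family of spending functions (used here only to illustrate the shapes; the paper's families may differ):

```python
def power_spending(t, alpha=0.05, rho=2.0):
    """Cumulative error spent by information fraction t in [0, 1]:
    alpha(t) = alpha * t**rho. rho > 1 gives the convex shapes conventional
    in clinical trials; rho < 1 gives the concave shapes recommended here
    for post-market safety surveillance."""
    return alpha * t ** rho
```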
A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.
Baron, Jonathan; Gürçay, Burcu
2017-05-01
The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.
Jang, Cheng-Shin
2008-03-15
This work probabilistically explored a safe utilization ratio (UR) of groundwater in fish ponds located in blackfoot disease hyperendemic areas, in terms of the regulation of arsenic (As) concentrations. Sequential indicator simulation was used to reproduce As concentrations in groundwater and to propagate their uncertainty. Corresponding URs of groundwater were obtained from the mass-balance relationship between reproduced As concentrations in groundwater and the As regulation in farmed fish ponds. Three levels were adopted to evaluate the UR: UR ≥ 0.5, 0.5 > UR ≥ 0.1, and UR < 0.1. The high probability of the UR ≥ 0.5 level is present in the northern and southern regions, where groundwater can be a major water source. The high probability of the 0.5 > UR ≥ 0.1 level is mainly distributed in the central-coastal, central-eastern and southeastern regions, where groundwater should be considered a subordinate water source. Where available, extra surface water should have priority in meeting the aquacultural needs of regions with a high probability of the UR ≥ 0.5 and 0.5 > UR ≥ 0.1 levels. In regions with a high probability of the UR < 0.1 level, in the central-coastal and southwestern regions, groundwater utilization should be reduced substantially or even prohibited completely to avoid adverse effects on human health.
Protein classification using sequential pattern mining.
Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I
2006-01-01
Protein classification in terms of fold recognition can be employed to determine the structural and functional properties of a newly discovered protein. In this work, sequential pattern mining (SPM) is utilized for sequence-based fold recognition. One of the most efficient SPM algorithms, cSPADE, is employed for protein primary structure analysis. A classifier then uses the extracted sequential patterns to classify proteins of unknown structure into the appropriate fold category. The proposed methodology exhibited an overall accuracy of 36% in a multi-class problem of 17 candidate categories. The classification performance reaches up to 65% when the three most probable protein folds are considered.
An Alternative Approach to the Total Probability Formula. Classroom Notes
ERIC Educational Resources Information Center
Wu, Dane W. Wu; Bangerter, Laura M.
2004-01-01
Given a set of urns, each filled with a mix of black chips and white chips, what is the probability of drawing a black chip from the last urn after some sequential random shifts of chips among the urns? The Total Probability Formula (TPF) is the common tool to solve such a problem. However, when the number of urns is more than two and the number…
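For reference, the TPF in question is the standard decomposition over a partition {A_i} of the sample space; with sequential shifts, B is the final draw and the A_i enumerate the possible compositions of the last urn:

```latex
\[
  P(B) \;=\; \sum_{i} P(B \mid A_i)\, P(A_i)
\]
```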
DOE Office of Scientific and Technical Information (OSTI.GOV)
Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.
1990-08-01
This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
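A minimal numerical sketch of the ratio-value computation (our reading of the approach; the weighting details may differ from the paper's):

```python
import numpy as np

def ratio_value(chrom_a, chrom_b, noise_var_a, noise_var_b, window):
    """Point-by-point ratio of two baseline-corrected chromatograms over the
    pure-elution window, combined by a variance-weighted average."""
    r = chrom_b[window] / chrom_a[window]
    # first-order error propagation for the variance of each ratio point
    var_r = (noise_var_b + r**2 * noise_var_a) / chrom_a[window] ** 2
    w = 1.0 / var_r
    return float(np.sum(w * r) / np.sum(w))
```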
ERIC Educational Resources Information Center
Abla, Dilshat; Okanoya, Kazuo
2008-01-01
Word segmentation, that is, discovering the boundaries between words that are embedded in a continuous speech stream, is an important faculty for language learners; humans solve this task partly by calculating transitional probabilities between sounds. Behavioral and ERP studies suggest that detection of sequential probabilities (statistical…
Zhang, Ying; Ji, Yajie; Li, Jianwei; Lei, Li; Wu, Siyu; Zuo, Wenjia; Jia, Xiaoqing; Wang, Yujie; Mo, Miao; Zhang, Na; Shen, Zhenzhou; Wu, Jiong; Shao, Zhimin; Liu, Guangyu
2018-04-01
To investigate ovarian function and therapeutic efficacy among estrogen receptor (ER)-positive, premenopausal breast cancer patients treated with gonadotropin-releasing hormone agonist (GnRHa) and chemotherapy simultaneously or sequentially. This study was a phase 3, open-label, parallel, randomized controlled trial (NCT01712893). Two hundred sixteen premenopausal patients (under 45 years) diagnosed with invasive ER-positive breast cancer were enrolled from July 2009 to May 2013 and randomized at a 1:1 ratio to receive (neo)adjuvant chemotherapy combined with sequential or simultaneous GnRHa treatment. All patients were advised to receive GnRHa for at least 2 years. The primary outcome was the incidence of early menopause, defined as amenorrhea lasting longer than 12 months after the last chemotherapy or GnRHa dose, with postmenopausal or unknown follicle-stimulating hormone and estradiol levels. The menstrual resumption period and survivals were the secondary endpoints. The median follow-up time was 56.9 months (IQR 49.5-72.4 months). One hundred and eight patients were enrolled in each group. Among them, 92 and 78 patients had complete primary endpoint data in the sequential and simultaneous groups, respectively. The rates of early menopause were 22.8% (21/92) in the sequential group and 23.1% (18/78) in the simultaneous group [simultaneous vs. sequential: OR 1.01 (95% CI 0.50-2.08), p = 0.969; age-adjusted OR 1.13 (95% CI 0.54-2.37), p = 0.737]. The median menstruation resumption period was 12.0 (95% CI 9.3-14.7) months and 10.3 (95% CI 8.2-12.4) months for the sequential and simultaneous groups, respectively [HR 0.83 (95% CI 0.59-1.16), p = 0.274; age-adjusted HR 0.90 (95% CI 0.64-1.27), p = 0.567]. No significant differences were evident for disease-free survival (p = 0.290) or overall survival (p = 0.514) between the two groups. For ER-positive premenopausal patients, the sequential use of GnRHa and chemotherapy showed ovarian preservation and survival outcomes that were no worse than simultaneous use. The application of GnRHa can probably be delayed until menstruation resumption after chemotherapy.
Cost-Utility Analysis of Cochlear Implantation in Australian Adults.
Foteff, Chris; Kennedy, Steven; Milton, Abul Hasnat; Deger, Melike; Payk, Florian; Sanderson, Georgina
2016-06-01
Sequential and simultaneous bilateral cochlear implants are emerging as appropriate treatment options for Australian adults with sensory deficits in both cochleae. Current funding of Australian public hospitals does not provide for simultaneous bilateral cochlear implantation (CI) as a separate surgical procedure. Previous cost-effectiveness studies of sequential and simultaneous bilateral CI assumed that 100% of unilaterally treated patients transition to a sequential bilateral CI. This assumption does not place cochlear implantation in the context of the generally treated population. When mutually exclusive treatment options exist, such as unilateral CI, sequential bilateral CI, and simultaneous bilateral CI, the mean costs of the treated populations are weighted in the calculation of incremental cost-utility ratios. The objective was to evaluate the cost-utility of bilateral hearing aids (HAs) compared with unilateral, sequential, and simultaneous bilateral CI in Australian adults with bilateral severe to profound sensorineural hearing loss. The study design was a cost-utility analysis using secondary sources as inputs to a Markov model, taking an Australian health care perspective over a lifetime horizon, with costs and outcomes discounted 5% annually. Bilateral HAs as treatment for bilateral severe to profound sensorineural hearing loss were compared with unilateral, sequential, and simultaneous bilateral CI, with outcomes measured as incremental costs per quality-adjusted life year (AUD/QALY). When compared with bilateral hearing aids, the incremental cost-utility ratio for the CI treatment population was AUD11,160/QALY. The incremental cost-utility ratio was weighted according to the number of patients treated unilaterally, sequentially, and simultaneously, as these were mutually exclusive treatment options. No peer-reviewed articles have reported the incremental analysis of cochlear implantation in a continuum of care for surgically treated populations with bilateral severe to profound sensorineural hearing loss. Unilateral, sequential, and simultaneous bilateral CI were cost-effective when compared with bilateral hearing aids. Technologies that reduce the total number of visits for a patient could introduce additional cost efficiencies into clinical practice.
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
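The constant-peak-width ("gradient") case is straightforward to reproduce stochastically: place the component peaks at random and count the trials in which every gap exceeds the reciprocal of the peak capacity (our sketch, ignoring end effects):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_separation(m, peak_capacity, trials=100_000):
    """Monte Carlo probability that m randomly placed peaks are all resolved
    on a unit retention axis with constant peak width 1/peak_capacity."""
    positions = np.sort(rng.random((trials, m)), axis=1)
    gaps = np.diff(positions, axis=1)
    return float(np.mean((gaps > 1.0 / peak_capacity).all(axis=1)))
```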
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Ganapathy, Kavina; Sowmithra, Sowmithra; Bhonde, Ramesh; Datta, Indrani
2016-07-16
The neuron-glia ratio is of prime importance for maintaining the physiological homeostasis of neuronal and glial cells, and especially crucial for dopaminergic neurons because a reduction in glial density has been reported in postmortem reports of brains affected by Parkinson's disease. We thus aimed at developing an in vitro midbrain culture which would replicate a similar neuron-glia ratio to that in in vivo adult midbrain while containing a similar number of dopaminergic neurons. A sequential culture technique was adopted to achieve this. Neural progenitors (NPs) were generated by the hanging-drop method and propagated as 3D neurospheres followed by the derivation of outgrowth from these neurospheres on a chosen extracellular matrix. The highest proliferation was observed in neurospheres from day in vitro (DIV) 5 through MTT and FACS analysis of Ki67 expression. FACS analysis using annexin/propidium iodide showed an increase in the apoptotic population from DIV 8. DIV 5 neurospheres were therefore selected for deriving the differentiated outgrowth of midbrain on a poly-L-lysine-coated surface. Quantitative RT-PCR showed comparable gene expressions of the mature neuronal marker β-tubulin III, glial marker GFAP and dopaminergic marker tyrosine hydroxylase (TH) as compared to in vivo adult rat midbrain. The FACS analysis showed a similar neuron-glia ratio obtained by the sequential culture in comparison to adult rat midbrain. The yield of β-tubulin III and TH was distinctly higher in the sequential culture in comparison to 2D culture, which showed a higher yield of GFAP immunopositive cells. Functional characterization indicated that both the constitutive and inducible (KCl and ATP) release of dopamine was distinctly higher in the sequential culture than the 2D culture. Thus, the sequential culture technique succeeded in the initial enrichment of NPs in 3D neurospheres, which in turn resulted in an optimal attainment of the neuron-glia ratio on outgrowth culture from these neurospheres. © 2016 S. Karger AG, Basel.
Who is most affected by prenatal alcohol exposure: Boys or girls?
May, Philip A; Tabachnick, Barbara; Hasken, Julie M; Marais, Anna-Susan; de Vries, Marlene M; Barnard, Ronel; Joubert, Belinda; Cloete, Marise; Botha, Isobel; Kalberg, Wendy O; Buckley, David; Burroughs, Zachary R; Bezuidenhout, Heidre; Robinson, Luther K; Manning, Melanie A; Adnams, Colleen M; Seedat, Soraya; Parry, Charles D H; Hoyme, H Eugene
2017-08-01
To examine outcomes among boys and girls that are associated with prenatal alcohol exposure. Boys and girls with fetal alcohol spectrum disorders (FASD) and randomly-selected controls were compared on a variety of physical and neurobehavioral traits. Sex ratios indicated that heavy maternal binge drinking may have significantly diminished viability to birth and survival of boys postpartum more than girls by age seven. Case control comparisons of a variety of physical and neurobehavioral traits at age seven indicate that both sexes were affected similarly for a majority of variables. However, alcohol-exposed girls had significantly more dysmorphology overall than boys and performed significantly worse on non-verbal IQ tests than males. A three-step sequential regression analysis, controlling for multiple covariates, further indicated that dysmorphology among girls was significantly more associated with five maternal drinking variables and three distal maternal risk factors. However, the overall model, which included five associated neurobehavioral measures at step three, was not significant (p = 0.09, two-tailed test). A separate sequential logistic regression analysis of predictors of a FASD diagnosis, however, indicated significantly more negative outcomes overall for girls than boys (Nagelkerke R² = 0.42 for boys and 0.54 for girls, z = -2.9, p = 0.004). Boys and girls had mostly similar outcomes when prenatal alcohol exposure was linked to poor physical and neurocognitive development. Nevertheless, sex ratios implicate lower viability and survival of males by first grade, and girls have more dysmorphology and neurocognitive impairment than boys resulting in a higher probability of a FASD diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
Bansal, A.; Kapoor, R.; Singh, S. K.; Kumar, N.; Oinam, A. S.; Sharma, S. C.
2012-01-01
Aims: Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma of the prostate: standard three-dimensional conformal radiotherapy (3DCRT) followed by an intensity-modulated radiotherapy (IMRT) boost (sequential-IMRT), versus simultaneous integrated boost IMRT (SIB-IMRT). Material and Methods: Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison an SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies, but with different dose per fraction; the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose–volume histograms, and the tumor control probability (TCP) and normal tissue complication probability (NTCP) obtained with the two plans were also compared. Results: The volume of rectum receiving 70 Gy or more (V>70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to the bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCPs of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel were seen with the sequential-IMRT and SIB-IMRT plans, respectively. Conclusions: For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves a lower NTCP than sequential-IMRT. PMID:23204659
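The abstract does not state which NTCP model was used; a common choice for such dose–volume comparisons is the Lyman-Kutcher-Burman model, sketched below. The rectal DVH and the parameter values (TD50 = 76.9 Gy, m = 0.13, n = 0.09, standard literature values for rectum) are assumptions for illustration, not numbers from the paper.

```python
import math

def lkb_ntcp(d_eff, td50, m):
    """Lyman model: NTCP = Phi((D_eff - TD50) / (m * TD50))."""
    t = (d_eff - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def gen_eud(dose_bins, vol_fractions, n):
    """Generalised equivalent uniform dose from a differential DVH."""
    return sum(v * d ** (1.0 / n) for d, v in zip(dose_bins, vol_fractions)) ** n

# Illustrative rectal differential DVH (dose in Gy, fractional volume).
dose = [30.0, 50.0, 70.0, 75.0]
vol = [0.40, 0.30, 0.20, 0.10]
d_eff = gen_eud(dose, vol, n=0.09)   # small n: rectum behaves serially
print(f"gEUD = {d_eff:.1f} Gy, NTCP = {lkb_ntcp(d_eff, td50=76.9, m=0.13):.3f}")
```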
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase, and subsequently to design that next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value, instead of the clinically relevant value, of an effect in sample size calculations. We considered both the fixed-effect and the random-effects models of meta-analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases emerges as a new and important research area necessary for successful evidence-based science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
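Sequential decision bias can be reproduced in a few lines: if the decision to run another study depends on the current pooled estimate, the pooled estimate averaged over many replications is biased. The stopping rule, effect size and study sizes below are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT, N, SD = 0.2, 50, 1.0

def pooled_estimate(stop_when_significant, max_studies=5):
    """Fixed-effect meta-analysis of up to max_studies studies, optionally
    stopping the sequence as soon as the pooled estimate is 'significant'."""
    effects, weights = [], []
    for _ in range(max_studies):
        effects.append(rng.normal(TRUE_EFFECT, SD, N).mean())
        weights.append(N / SD**2)
        est = np.average(effects, weights=weights)
        se = (1.0 / np.sum(weights)) ** 0.5
        if stop_when_significant and abs(est) > 1.96 * se:
            break
    return est

fixed = np.mean([pooled_estimate(False) for _ in range(10_000)])
sequential = np.mean([pooled_estimate(True) for _ in range(10_000)])
print(f"fixed design: {fixed:.3f}; sequential stopping: {sequential:.3f} "
      f"(true effect = {TRUE_EFFECT})")
```

The sequential arm overestimates the true effect because extreme early estimates preferentially terminate the accumulation of evidence.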
The Effects of the Previous Outcome on Probabilistic Choice in Rats
Marshall, Andrew T.; Kirkpatrick, Kimberly
2014-01-01
This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915
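The winning model, an exponentially decaying value function updated per trial, corresponds to a standard delta-rule update. The sketch below shows how such an update, combined with a softmax choice rule, makes choice of the uncertain option track its reward probability; the learning rate, temperature and simulation details are illustrative assumptions, not the authors' model code.

```python
import math
import random

def simulate_choices(p_uncertain, n_trials=500, alpha=0.15, tau=1.5, seed=0):
    """Delta-rule value updating (an exponentially decaying memory of past
    rewards, updated once per trial) with softmax choice between a certain
    3-pellet option and an uncertain 9-pellet option."""
    rng = random.Random(seed)
    v_certain, v_uncertain = 3.0, 3.0      # initial value estimates (pellets)
    n_uncertain = 0
    for _ in range(n_trials):
        p_choose = 1.0 / (1.0 + math.exp(-(v_uncertain - v_certain) / tau))
        if rng.random() < p_choose:        # choose the uncertain option
            n_uncertain += 1
            reward = 9.0 if rng.random() < p_uncertain else 0.0
            v_uncertain += alpha * (reward - v_uncertain)
        else:
            v_certain += alpha * (3.0 - v_certain)
    return n_uncertain / n_trials

for p in (0.1, 0.33, 0.67, 0.9):
    print(f"P(uncertain food) = {p:.2f}: chose uncertain on "
          f"{simulate_choices(p):.0%} of trials")
```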
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.
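For contrast with the exact method, the naive way to obtain these performance figures is Monte Carlo simulation of the sequential test itself. The sketch below runs a Wald SPRT on Bernoulli observations and estimates its false-alarm probability and average detection time by brute force; the parameter values are illustrative assumptions, not the paper's intrusion model.

```python
import math
import random

def sprt_run(p_true, p0=0.3, p1=0.7, alpha=0.01, beta=0.01, rng=random):
    """One Wald SPRT on Bernoulli observations; returns (alarm?, n used)."""
    up, lo = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    while lo < llr < up:
        n += 1
        if rng.random() < p_true:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
    return llr >= up, n

random.seed(0)
runs = [sprt_run(p_true=0.3) for _ in range(20_000)]      # H0 is true
p_false_alarm = sum(alarm for alarm, _ in runs) / len(runs)
avg_time = sum(n for _, n in runs) / len(runs)
print(f"false alarm ~ {p_false_alarm:.4f} (design 0.01), "
      f"mean detection time ~ {avg_time:.1f} samples")
```

An exact method computes these same two quantities analytically, to any pre-specified accuracy, without the sampling noise of such a simulation.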
A computational theory for the classification of natural biosonar targets based on a spike code.
Müller, Rolf
2003-08-01
A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of echoes (84 800) from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution scales and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained a reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential processing of echo trains by biological sonar systems. The dimensions of the representation are the first moments of duration and amplitude location of these interspike intervals as well as their number. All three quantities are readily reconciled with known principles of neural signal representation, since they correspond to the centre of gravity of excitation on a neural map and the total amount of excitation.
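A simplified version of the transcription stage might look as follows; the linear filtering step is omitted, and the toy echo, sampling rate and threshold values are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def spike_code(echo, thresholds, fs):
    """Half-wave rectify an echo and return, per threshold, the interspike
    intervals between successive upward threshold crossings."""
    rectified = np.maximum(echo, 0.0)          # half-wave rectification
    intervals = {}
    for th in thresholds:
        above = rectified >= th
        crossing_times = np.flatnonzero(~above[:-1] & above[1:]) / fs
        intervals[th] = np.diff(crossing_times)  # interspike intervals
    return intervals

fs = 400_000                                   # 400 kHz sampling (assumed)
t = np.arange(0.0, 0.005, 1.0 / fs)
echo = np.sin(2 * np.pi * 60_000 * t) * np.exp(-t / 0.002)  # toy decaying echo
isi = spike_code(echo, thresholds=[0.2, 0.5, 0.8], fs=fs)
print({th: len(v) for th, v in isi.items()})
```

Summary statistics of such intervals (their count, and the first moments of their durations and amplitude locations) form the kind of low-dimensional feature vector the paper feeds to its sequential probability ratio test.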
The impact of eyewitness identifications from simultaneous and sequential lineups.
Wright, Daniel B
2007-10-01
Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.
Signal Detection and Monitoring Based on Longitudinal Healthcare Data
Suling, Marc; Pigeot, Iris
2012-01-01
Post-marketing detection and surveillance of potential safety hazards are crucial tasks in pharmacovigilance. To uncover such safety risks, a wide set of techniques has been developed for spontaneous reporting data and, more recently, for longitudinal data. This paper gives a broad overview of the signal detection process and introduces some types of data sources typically used. The most commonly applied signal detection algorithms are presented, covering simple frequentist methods like the proportional reporting ratio or the reporting odds ratio, more advanced Bayesian techniques for spontaneous and longitudinal data, e.g., the Bayesian Confidence Propagation Neural Network or the Multi-item Gamma-Poisson Shrinker, and methods developed for longitudinal data only, like the IC temporal pattern detection. Additionally, the problem of adjustment for underlying confounding is discussed and the most common strategies to automatically identify false-positive signals are addressed. A drug monitoring technique based on Wald’s sequential probability ratio test is presented. For each method, a real-life application is given, and a wide set of literature for further reading is referenced. PMID:24300373
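For count data of this kind, a Wald-style drug-monitoring test can be sketched as a Poisson SPRT on observed versus expected adverse-event counts. The relative risk under the alternative and the monthly counts below are illustrative assumptions, not a published surveillance configuration.

```python
import math

def poisson_sprt(obs_counts, exp_counts, rr1=2.0, alpha=0.05, beta=0.10):
    """Wald SPRT on accumulating adverse-event counts.
    H0: observed ~ Poisson(E); H1: observed ~ Poisson(rr1 * E)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for period, (c, e) in enumerate(zip(obs_counts, exp_counts), start=1):
        # Poisson log-likelihood ratio: c*ln(rr1) - (rr1 - 1)*E
        llr += c * math.log(rr1) - (rr1 - 1.0) * e
        if llr >= upper:
            return f"signal at period {period}"
        if llr <= lower:
            return f"no signal (H0 accepted at period {period})"
    return "continue monitoring"

# Monthly observed vs. expected counts for one drug-event pair (illustrative).
print(poisson_sprt(obs_counts=[4, 6, 9, 12], exp_counts=[3.0, 3.2, 3.1, 3.3]))
```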
Sewsynker-Sukai, Yeshona; Gueguim Kana, E B
2017-11-01
This study presents a sequential sodium phosphate dodecahydrate (Na₃PO₄·12H₂O) and zinc chloride (ZnCl₂) pretreatment to enhance delignification and enzymatic saccharification of corn cobs. The effects of the process parameters Na₃PO₄·12H₂O concentration (5-15%), ZnCl₂ concentration (1-5%) and solid-to-liquid ratio (5-15%) on the reducing sugar yield from corn cobs were investigated. The sequential pretreatment model was developed and optimized with a high coefficient of determination (0.94). A maximum reducing sugar yield of 1.10 ± 0.01 g/g was obtained with 14.02% Na₃PO₄·12H₂O, 3.65% ZnCl₂ and a 5% solid-to-liquid ratio. Scanning electron microscopy (SEM) and Fourier transform infrared (FTIR) analysis showed major lignocellulosic structural changes after the optimized sequential pretreatment, with 63.61% delignification. In addition, a 10-fold increase in the sugar yield was observed compared to previous reports on the same substrate. This sequential pretreatment strategy was efficient for enhancing enzymatic saccharification of corn cobs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Some sequential, distribution-free pattern classification procedures with applications
NASA Technical Reports Server (NTRS)
Poage, J. L.
1971-01-01
Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.
A Bayesian sequential processor approach to spectroscopic portal system decisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sale, K; Candy, J; Breitfeller, E
The development of faster, more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
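The "decide as soon as statistically justified" idea can be illustrated with a stripped-down sequential Bayesian update on event arrival times alone (the processor described above also uses energy deposits); all rates, the decision threshold and the function name are illustrative assumptions.

```python
import numpy as np

def sequential_posterior(arrivals, background_rate, source_rate, prior=0.5):
    """Update P(source present) after each detected event.
    Under H0 events arrive as a Poisson process with rate b; under H1 with
    rate b + s, so inter-arrival times are exponential in both cases."""
    post = prior
    for dt in np.diff(arrivals):
        rate1 = background_rate + source_rate
        l1 = rate1 * np.exp(-rate1 * dt)                    # likelihood | H1
        l0 = background_rate * np.exp(-background_rate * dt)  # likelihood | H0
        post = post * l1 / (post * l1 + (1 - post) * l0)    # Bayes update
        if post > 0.999:            # declare detection as soon as justified
            return post, "detect"
    return post, "no decision"

rng = np.random.default_rng(2)
arrivals = np.cumsum(rng.exponential(1 / 25.0, size=200))  # ~25 events/s
print(sequential_posterior(arrivals, background_rate=10.0, source_rate=15.0))
```

The decision time is itself random: quiet data terminate slowly, strongly source-like data terminate after only a handful of events, which is the advantage over a fixed counting interval.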
ANALYSES OF RESPONSE–STIMULUS SEQUENCES IN DESCRIPTIVE OBSERVATIONS
Samaha, Andrew L; Vollmer, Timothy R; Borrero, Carrie; Sloman, Kimberly; Pipkin, Claire St. Peter; Bourret, Jason
2009-01-01
Descriptive observations were conducted to record problem behavior displayed by participants and to record antecedents and consequences delivered by caregivers. Next, functional analyses were conducted to identify reinforcers for problem behavior. Then, using data from the descriptive observations, lag-sequential analyses were conducted to examine changes in the probability of environmental events across time in relation to occurrences of problem behavior. The results of the lag-sequential analyses were interpreted in light of the results of functional analyses. Results suggested that events identified as reinforcers in a functional analysis followed behavior in idiosyncratic ways: after a range of delays and frequencies. Thus, it is possible that naturally occurring reinforcement contingencies are arranged in ways different from those typically evaluated in applied research. Further, these complex response–stimulus relations can be represented by lag-sequential analyses. However, limitations to the lag-sequential analysis are evident. PMID:19949537
Auditory Discrimination of Frequency Ratios: The Octave Singularity
ERIC Educational Resources Information Center
Bonnard, Damien; Micheyl, Christophe; Semal, Catherine; Dauman, Rene; Demany, Laurent
2013-01-01
Sensitivity to frequency ratios is essential for the perceptual processing of complex sounds and the appreciation of music. This study assessed the effect of ratio simplicity on ratio discrimination for pure tones presented either simultaneously or sequentially. Each stimulus consisted of four 100-ms pure tones, equally spaced in terms of…
Cost-effectiveness of allopurinol and febuxostat for the management of gout.
Jutkowitz, Eric; Choi, Hyon K; Pizzi, Laura T; Kuntz, Karen M
2014-11-04
Gout is the most common inflammatory arthritis in the United States. To evaluate the cost-effectiveness of urate-lowering treatment strategies for the management of gout. Markov model. Published literature and expert opinion. Patients for whom allopurinol or febuxostat is a suitable initial urate-lowering treatment. Lifetime. Health care payer. 5 urate-lowering treatment strategies were evaluated: no treatment; allopurinol- or febuxostat-only therapy; allopurinol-febuxostat sequential therapy; and febuxostat-allopurinol sequential therapy. Two dosing scenarios were investigated: fixed dose (80 mg of febuxostat daily, 0.80 success rate; 300 mg of allopurinol daily, 0.39 success rate) and dose escalation (≤120 mg of febuxostat daily, 0.82 success rate; ≤800 mg of allopurinol daily, 0.78 success rate). Discounted costs, discounted quality-adjusted life-years, and incremental cost-effectiveness ratios. In both dosing scenarios, allopurinol-only therapy was cost-saving. Dose-escalation allopurinol-febuxostat sequential therapy was more costly but more effective than dose-escalation allopurinol therapy, with an incremental cost-effectiveness ratio of $39 400 per quality-adjusted life-year. The relative rankings of treatments did not change. Our results were relatively sensitive to several potential variations of model assumptions; however, the cost-effectiveness ratios of dose escalation with allopurinol-febuxostat sequential therapy remained lower than the willingness-to-pay threshold of $109 000 per quality-adjusted life-year. Long-term outcome data for patients with gout, including medication adherence, are limited. Allopurinol single therapy is cost-saving compared with no treatment. Dose-escalation allopurinol-febuxostat sequential therapy is cost-effective compared with accepted willingness-to-pay thresholds. Agency for Healthcare Research and Quality.
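The comparison rests on the incremental cost-effectiveness ratio; a minimal sketch with made-up lifetime totals (the numbers below are illustrative assumptions, not the paper's model outputs):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy B vs. A:
    extra cost per extra quality-adjusted life-year."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Illustrative discounted lifetime totals per patient.
allopurinol = (12_000.0, 14.10)              # (cost in $, QALYs)
sequential = (13_200.0, 14.13)               # allopurinol -> febuxostat
ratio = icer(*allopurinol, *sequential)
print(f"ICER = ${ratio:,.0f}/QALY; cost-effective below a "
      f"$109,000/QALY willingness-to-pay threshold")
```

A strategy is labelled cost-saving when it is both cheaper and at least as effective, so no ratio need be computed; otherwise the ICER is compared against the willingness-to-pay threshold, as in the abstract.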
Kennard, Betsy D; Emslie, Graham J; Mayes, Taryn L; Nakonezny, Paul A; Jones, Jessica M; Foxwell, Aleksandra A; King, Jessica
2014-10-01
The authors evaluated a sequential treatment strategy of fluoxetine and relapse-prevention cognitive-behavioral therapy (CBT) to determine effects on remission and relapse in youths with major depressive disorder. Youths 8-17 years of age with major depression were treated openly with fluoxetine for 6 weeks. Those with an adequate response (defined as a reduction of 50% or more on the Children's Depression Rating Scale-Revised [CDRS-R]) were randomly assigned to receive continued medication management alone or continued medication management plus CBT for an additional 6 months. The CBT was modified to address residual symptoms and was supplemented by well-being therapy. Primary outcome measures were time to remission (with remission defined as a CDRS-R score of 28 or less) and rate of relapse (with relapse defined as either a CDRS-R score of 40 or more with a history of 2 weeks of symptom worsening, or clinical deterioration). Of the 200 participants enrolled in acute-phase treatment, 144 were assigned to continuation treatment with medication management alone (N=69) or medication management plus CBT (N=75). During the 30-week continuation treatment period, time to remission did not differ significantly between treatment groups (hazard ratio=1.26, 95% CI=0.87, 1.82). However, the medication management plus CBT group had a significantly lower risk of relapse than the medication management only group (hazard ratio=0.31, 95% CI=0.13, 0.75). The estimated probability of relapse by week 30 was lower with medication management plus CBT than with medication management only (9% compared with 26.5%). Continuation-phase relapse-prevention CBT was effective in reducing the risk of relapse but not in accelerating time to remission in children and adolescents with major depressive disorder.
Howard, James H.; Howard, Darlene V.; Dennis, Nancy A.; Kelly, Andrew J.
2008-01-01
Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here we report four experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT people respond only to the last target event in a series of discrete, three-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results revealed that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the second cue alone. We conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing. PMID:18763897
Ringdén, Olle; Labopin, Myriam; Schmid, Christoph; Sadeghi, Behnam; Polge, Emmanuelle; Tischer, Johanna; Ganser, Arnold; Michallet, Mauricette; Kanz, Lothar; Schwerdtfeger, Rainer; Nagler, Arnon; Mohty, Mohamad
2017-02-01
This study analysed the outcome of 267 patients with relapsed/refractory acute myeloid leukaemia (AML) who received sequential chemotherapy including fludarabine, cytarabine and amsacrine followed by reduced-intensity conditioning (RIC) and allogeneic haematopoietic stem cell transplantation (HSCT). The transplants in 77 patients were from matched sibling donors (MSDs) and those in 190 patients were from matched unrelated donors. Most patients (94·3%) were given anti-T-cell antibodies. The incidence of acute graft-versus-host disease (GVHD) of grades II-IV was 32·1% and that of chronic GVHD was 30·2%. The 3-year probability of non-relapse mortality (NRM) was 25·9%, that of relapse was 48·5%, that of GVHD-free and relapse-free survival (GRFS) was 17·8% and that of leukaemia-free survival (LFS) was 25·6%. In multivariate analysis, unrelated donor recipients more frequently had acute GVHD of grades II-IV [hazard ratio (HR) = 1·98, P = 0·017] and suffered fewer relapses (HR = 0·62, P = 0·01) than MSD recipients. Treatment with anti-T-cell antibodies reduced NRM (HR = 0·35, P = 0·01) and improved survival (HR = 0·49, P = 0·01), GRFS (HR = 0·37, P = 0·0004) and LFS (HR = 0·46, P = 0·005). Thus, sequential chemotherapy followed by RIC HSCT and the use of anti-T-cell antibodies seems promising in patients with refractory AML. © 2016 John Wiley & Sons Ltd.
Cost-effectiveness of pediatric bilateral cochlear implantation in Spain.
Pérez-Martín, Jorge; Artaso, Miguel A; Díez, Francisco J
2017-12-01
To determine the incremental cost-effectiveness of bilateral versus unilateral cochlear implantation for 1-year-old children suffering from bilateral sensorineural severe to profound hearing loss from the perspective of the Spanish public health system. Cost-utility analysis. We conducted a general-population survey to estimate the quality-of-life increase contributed by the second implant. We built a Markov influence diagram and evaluated it for a life-long time horizon with a 3% discount rate in the base case. The incremental cost-effectiveness ratio of simultaneous bilateral implantation with respect to unilateral implantation for 1-year-old children with severe to profound deafness is €10,323 per quality-adjusted life year (QALY). For sequential bilateral implantation, it rises to €11,733/QALY. Both options are cost-effective for the Spanish health system, whose willingness to pay is estimated at around €30,000/QALY. The probabilistic sensitivity analysis shows that the probability of bilateral implantation being cost-effective reaches 100% for that cost-effectiveness threshold. Bilateral implantation is clearly cost-effective for the population considered. If possible, it should be done simultaneously (i.e., in one surgical operation), because it is as safe and effective as sequential implantation, and saves costs for the system and for users and their families. Sequential implantation is also cost-effective for children who have received the first implant recently, but it is difficult to determine when it ceases to be so because of the lack of detailed data. These results are specific for Spain, but the model can easily be adapted to other countries. 2C. Laryngoscope, 127:2866-2872, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security, such as the detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possibly millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have a short identification time and low memory and processing costs. Most importantly, it should be able to guarantee a user-specified identification accuracy. We develop two such methods. The first method is based on a fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore a second, novel method based on a truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove low-rate flows and identify high-rate flows at an early stage, which reduces the memory cost and identification time respectively. According to the way the parameters in TSPRT are determined, two versions of TSPRT are proposed: TSPRT-M, which is suitable when low memory cost is preferred, and TSPRT-T, which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement, as compared to previously proposed methods.
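A minimal sketch of the truncation idea (the packet-share parameters and truncation rule below are illustrative assumptions, not the paper's TSPRT-M/TSPRT-T designs): sampled packets are tested sequentially, and if neither boundary is reached by the truncation point, the decision falls to the sign of the accumulated log-likelihood ratio.

```python
import math
import random

def truncated_sprt(sampled, p0=0.01, p1=0.05, alpha=0.01, beta=0.01, max_n=500):
    """SPRT over sampled packets (1 = packet belongs to the tracked flow),
    truncated: force a decision after max_n samples."""
    up, lo = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(sampled, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= up:
            return "high-rate", n
        if llr <= lo:
            return "low-rate", n
        if n >= max_n:                      # truncation point reached
            return ("high-rate" if llr > 0.0 else "low-rate"), n
    return "undecided", n

random.seed(0)
packets = (1 if random.random() < 0.06 else 0 for _ in range(10_000))
print(truncated_sprt(packets))   # a flow holding ~6% of sampled packets
```

Truncation bounds both the identification time and the per-flow memory, at the cost of a small, quantifiable loss of accuracy relative to the open-ended test.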
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by comparing SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: instead of the homography matrix, the fundamental matrix generated by the 8-point algorithm is used as the model; the sample is selected by a random block selecting method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
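The SPRT step inside RANSAC serves as an early-exit model verification: a hypothesis that quickly accumulates outliers is abandoned before all correspondences are checked. A sketch in the spirit of that test follows; the parameter names eps, delta and A, and the toy line-fitting example, are assumptions for illustration, not this paper's implementation.

```python
import random

def sprt_verify(is_inlier, points, eps=0.5, delta=0.05, A=100.0):
    """Early-exit model verification in the spirit of SPRT-based RANSAC.
    eps: expected inlier fraction under a good model; delta: under a bad
    model; A: rejection threshold on the likelihood ratio."""
    lam, inliers = 1.0, 0
    order = random.sample(range(len(points)), len(points))
    for tested, idx in enumerate(order, start=1):
        if is_inlier(points[idx]):
            inliers += 1
            lam *= delta / eps              # evidence for a good model
        else:
            lam *= (1 - delta) / (1 - eps)  # evidence against it
        if lam > A:
            return False, inliers, tested   # reject early, skip the rest
    return True, inliers, len(points)

random.seed(0)
pts = [(x, 2.0 * x + random.gauss(0.0, 0.1)) for x in range(100)]
good = lambda p: abs(p[1] - 2.0 * p[0]) < 0.3          # true line y = 2x
bad = lambda p: abs(p[1] - (0.5 * p[0] + 3.0)) < 0.3   # wrong hypothesis
print("bad model:", sprt_verify(bad, pts))
print("good model:", sprt_verify(good, pts))
```

Bad hypotheses are rejected after only a handful of tested points, which is where the running-time saving comes from.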
Localisation in a Growth Model with Interaction
NASA Astrophysics Data System (ADS)
Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.
2018-05-01
This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that the bit error rate (BER) performance of BHD is close to that of Turbo codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes an iterative version of BHD possible.
Diagnostic causal reasoning with verbal information.
Meder, Björn; Mayrhofer, Ralf
2017-08-01
In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people's inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., "X occasionally causes A"). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes' prior probabilities and the effects' likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework. Copyright © 2017 Elsevier Inc. All rights reserved.
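A sequential diagnostic update with temporal weighting can be sketched as a log-space Bayes rule in which each likelihood term is exponentiated by a weight: weights below one for earlier items yield recency, above one yield primacy. The mapping of verbal terms to numbers and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def sequential_diagnosis(effects, likelihoods, prior, weight=1.0):
    """Posterior over candidate causes given a sequence of observed effects.
    likelihoods[c][e] = P(effect e | cause c). `weight` exponentiates the
    likelihood of earlier items: weight < 1 -> recency, > 1 -> primacy."""
    log_post = np.log(np.asarray(prior, dtype=float))
    n = len(effects)
    for i, e in enumerate(effects):
        w = weight ** (n - 1 - i)           # earlier evidence, higher power
        log_post = log_post + w * np.log([likelihoods[c][e]
                                          for c in range(len(prior))])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Verbal terms mapped to illustrative numerical equivalents: cause A
# "occasionally" (0.2) and cause B "frequently" (0.7) produces the effect.
lik = [{1: 0.2, 0: 0.8}, {1: 0.7, 0: 0.3}]
print("equal weighting:", sequential_diagnosis([1, 1, 0], lik, [0.5, 0.5]))
print("recency (w=0.8):", sequential_diagnosis([1, 1, 0], lik, [0.5, 0.5], 0.8))
```

With weight = 1 this reduces to the standard posterior implied by the priors and likelihoods, which is the benchmark the subjects' judgments were compared against.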
NASA Astrophysics Data System (ADS)
Yin, Kedong; Liu, Hao; Harrison, Paul J.
2017-05-01
We hypothesize that phytoplankton have a sequential nutrient uptake strategy to maintain nutrient stoichiometry and high primary productivity in the water column. According to this hypothesis, phytoplankton take up the most limiting nutrient first until depletion, continue to draw down non-limiting nutrients and then take up the most limiting nutrient rapidly when it becomes available. These processes would result in the variation of ambient nutrient ratios in the water column around the Redfield ratio. We used high-resolution continuous vertical profiles of nutrients, nutrient ratios and on-board ship incubation experiments to test this hypothesis in the Strait of Georgia. At the surface in summer, ambient NO₃⁻ was depleted with excess PO₄³⁻ and SiO₄⁻ remaining, and as a result, both N : P and N : Si ratios were low. The two ratios increased to about 10 : 1 and 0.45 : 1, respectively, at 20 m. Time series of vertical profiles showed that the leftover PO₄³⁻ continued to be removed, resulting in additional phosphorus storage by phytoplankton. The N : P ratios at the nutricline in vertical profiles responded differently to mixing events. Field incubation of seawater samples also demonstrated the sequential uptake of NO₃⁻ (the most limiting nutrient) and then PO₄³⁻ and SiO₄⁻ (the non-limiting nutrients). This sequential uptake strategy allows phytoplankton to acquire additional cellular phosphorus and silicon when they are available and to wait for nitrogen to become available through frequent mixing of NO₃⁻ (or pulsed regenerated NH₄⁺). Thus, phytoplankton are able to maintain high productivity and balance nutrient stoichiometry by taking advantage of vigorous mixing regimes with their capacity for stoichiometric plasticity. To our knowledge, this is the first study to show the in situ dynamics of continuous vertical profiles of N : P and N : Si ratios, which can provide insight into the in situ dynamics of nutrient stoichiometry in the water column and the inference of the transient status of phytoplankton nutrient stoichiometry in the coastal ocean.
USDA-ARS?s Scientific Manuscript database
We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...
Sequential and simultaneous choices: testing the diet selection and sequential choice models.
Freidin, Esteban; Aw, Justine; Kacelnik, Alex
2009-03-01
We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.
Meisters, Julia; Diedenhofen, Birk; Musch, Jochen
2018-04-20
For decades, sequential lineups have been considered superior to simultaneous lineups in the context of eyewitness identification. However, most of the research leading to this conclusion was based on the analysis of diagnosticity ratios that do not control for the respondent's response criterion. Recent research based on the analysis of ROC curves has found either equal discriminability for sequential and simultaneous lineups, or higher discriminability for simultaneous lineups. Some evidence for potential position effects and for criterion shifts in sequential lineups has also been reported. Using ROC curve analysis, we investigated the effects of the suspect's position on discriminability and response criteria in both simultaneous and sequential lineups. We found that sequential lineups suffered from an unwanted position effect. Respondents employed a strict criterion for the earliest lineup positions, and shifted to a more liberal criterion for later positions. No position effects and no criterion shifts were observed in simultaneous lineups. This result suggests that sequential lineups are not superior to simultaneous lineups, and may give rise to unwanted position effects that have to be considered when conducting police lineups.
Analysis of SET pulses propagation probabilities in sequential circuits
NASA Astrophysics Data System (ADS)
Cai, Shuo; Yu, Fei; Yang, Yiqun
2018-05-01
As the feature size of CMOS transistors scales down, single event transients (SETs) have become an important consideration in designing logic circuits. Much research has been done on analyzing the impact of SETs, but it is difficult to account for the numerous contributing factors. We present a new approach for analyzing SET pulse propagation probabilities (SPPs). It considers all masking effects and uses SET pulse propagation probability matrices (SPPMs) to represent the SPPs in the current cycle. Based on matrix union operations, the SPPs in consecutive cycles can be calculated. Experimental results show that our approach is practicable and efficient.
Greenberg, E. Robert; Anderson, Garnet L.; Morgan, Douglas R.; Torres, Javier; Chey, William D.; Bravo, Luis Eduardo; Dominguez, Ricardo L.; Ferreccio, Catterina; Herrero, Rolando; Lazcano-Ponce, Eduardo C.; Meza-Montenegro, Mercedes María; Peña, Rodolfo; Peña, Edgar M.; Salazar-Martínez, Eduardo; Correa, Pelayo; Martínez, María Elena; Valdivieso, Manuel; Goodman, Gary E.; Crowley, John J.; Baker, Laurence H.
2011-01-01
Summary Background Evidence from Europe, Asia, and North America suggests that standard three-drug regimens of a proton pump inhibitor plus amoxicillin and clarithromycin are significantly less effective for eradicating Helicobacter pylori (H. pylori) infection than five-day concomitant and ten-day sequential four-drug regimens that include a nitroimidazole. These four-drug regimens also entail fewer antibiotic doses and thus may be suitable for eradication programs in low-resource settings. Studies are limited from Latin America, however, where the burden of H. pylori-associated diseases is high. Methods We randomised 1463 men and women ages 21–65 selected from general populations in Chile, Colombia, Costa Rica, Honduras, Nicaragua, and Mexico (two sites) who tested positive for H. pylori by a urea breath test (UBT) to: 14 days of lansoprazole, amoxicillin, and clarithromycin (standard therapy); five days of lansoprazole, amoxicillin, clarithromycin, and metronidazole (concomitant therapy); or five days of lansoprazole and amoxicillin followed by five of lansoprazole, clarithromycin, and metronidazole (sequential therapy). Eradication was assessed by UBT six–eight weeks after randomisation. Findings In intention-to-treat analyses, the probability of eradication with standard therapy was 82·2%, which was 8·6% higher (95% adjusted CI: 2·6%, 14·5%) than with concomitant therapy (73·6%) and 5·6% higher (95% adjusted CI: −0·04%, 11·6%) than with sequential therapy (76·5%). In analyses limited to the 1314 participants who adhered to their assigned therapy, the probabilities of eradication were 87·1%, 78·7%, and 81·1% with standard, concomitant, and sequential therapies, respectively. Neither four-drug regimen was significantly better than standard triple therapy in any of the seven sites. Interpretation Standard 14-day triple-drug therapy is preferable to five-day concomitant or ten-day sequential four-drug regimens as empiric therapy for H. pylori among diverse Latin American populations. Funding Bill & Melinda Gates Foundation and US National Institutes of Health. PMID:21777974
Tai, Yiping; McBride, Murray B; Li, Zhian
2013-03-30
In the present study, we evaluated a commonly employed modified Bureau Communautaire de Référence (BCR) 3-step sequential extraction procedure for its ability to distinguish forms of solid-phase Pb in soils with different sources and histories of contamination. When the modified BCR test was applied to mineral soils spiked with three forms of Pb (pyromorphite, hydrocerussite and nitrate salt), the added Pb was highly susceptible to dissolution in the operationally defined "reducible" or "oxide" fraction regardless of form. When three different materials (mineral soil, organic soil and goethite) were spiked with soluble Pb nitrate, the BCR sequential extraction profiles revealed that soil organic matter was capable of retaining Pb in more stable and acid-resistant forms than silicate clay minerals or goethite. However, the BCR sequential extraction for field-collected soils with known and different sources of Pb contamination was not sufficiently discriminatory in the dissolution of soil Pb phases to allow soil Pb forms to be "fingerprinted" by this method. It is concluded that standard sequential extraction procedures are probably not very useful in predicting lability and bioavailability of Pb in contaminated soils. Copyright © 2013 Elsevier B.V. All rights reserved.
Sequential state discrimination and requirement of quantum dissonance
NASA Astrophysics Data System (ADS)
Pang, Chao-Qian; Zhang, Fu-Lin; Xu, Li-Fang; Liang, Mai-Lin; Chen, Jing-Ling
2013-11-01
We study the procedure for sequential unambiguous state discrimination. A qubit is prepared in one of two possible states and measured by two observers, Bob and Charlie, sequentially. A necessary condition for the state to be unambiguously discriminated by Charlie is the absence of entanglement between the principal qubit, prepared by Alice, and Bob's auxiliary system. In general, the procedure for both Bob and Charlie to conclusively distinguish between the two nonorthogonal states relies on the availability of quantum discord, which is precisely the quantum dissonance when entanglement is absent. In Bob's measurement, the left discord is positively correlated with the information extracted by Bob, and the right discord enhances the information left to Charlie. When their product achieves its maximum, the probability for both Bob and Charlie to identify the state achieves its optimal value.
Elmer, Jonathan; Scutella, Michael; Pullalarevu, Raghevesh; Wang, Bo; Vaghasia, Nishit; Trzeciak, Stephen; Rosario-Rivera, Bedda L.; Guyette, Francis X.; Rittenberger, Jon C.; Dezfulian, Cameron
2014-01-01
Purpose Previous observational studies have inconsistently associated early hyperoxia with worse outcomes after cardiac arrest and have methodological limitations. We tested this association using a high-resolution database controlling for multiple disease-specific markers of severity of illness and care processes. Methods This was a retrospective analysis of a single-center, prospective registry of consecutive cardiac arrest patients. We included patients who survived and were mechanically ventilated ≥24 h after arrest. Our main exposure was arterial oxygen tension (PaO2), which we categorized hourly for 24 hours as severe hyperoxia (>300 mmHg), moderate or probable hyperoxia (101-299 mmHg), normoxia (60-100 mmHg) or hypoxia (<60 mmHg). We controlled for Utstein-style covariates, markers of disease severity and markers of care responsiveness. We performed unadjusted and multiple logistic regression to test the association between oxygen exposure and survival to discharge, and used ordered logistic regression to test the association of oxygen exposure with neurological outcome and Sequential Organ Failure Assessment (SOFA) score at 24 h. Results Of 184 patients, 36% were exposed to severe hyperoxia and overall mortality was 54%. Severe hyperoxia, but not moderate or probable hyperoxia, was associated with decreased survival in both unadjusted and adjusted analyses (adjusted odds ratio (OR) for survival 0.83 per hour of exposure, P=0.04). Moderate or probable hyperoxia was not associated with survival but was associated with an improved SOFA score at 24 h (OR 0.92, P<0.01). Conclusion Severe hyperoxia was independently associated with decreased survival to hospital discharge. Moderate or probable hyperoxia was not associated with decreased survival and was associated with improved organ function at 24 h. PMID:25472570
Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description
ERIC Educational Resources Information Center
Hogarth, Robin M.; Soyer, Emre
2011-01-01
Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…
Olusesi, A D; Oyeniran, O
2017-05-01
Few studies have compared bilateral same-day with staged tympanoplasty using cartilage graft materials. A prospective randomised observational study was performed of 38 chronic suppurative otitis media patients (76 ears) who were assigned to undergo bilateral sequential same-day tympanoplasty (18 patients, 36 ears) or bilateral sequential tympanoplasty performed 3 months apart (20 patients, 40 ears). Disease duration, intra-operative findings, combined duration of surgery, post-operative graft appearance at 6 weeks, post-operative complications, re-do rate and relative cost of surgery were recorded. Tympanic membrane perforations were predominantly subtotal (p = 0.36, odds ratio = 0.75). Most grafts were harvested from the conchal cartilage and fewer from the tragus (p = 0.59, odds ratio = 1.016). Types of complication, post-operative hearing gain and revision rates were similar in both patient groups. Surgical outcomes are not significantly different for same-day and bilateral cartilage tympanoplasty, but same-day surgery has the added benefit of a lower cost.
Three-body dissociation of OCS³⁺: Separating sequential and concerted pathways
NASA Astrophysics Data System (ADS)
Kumar, Herendra; Bhatt, Pragya; Safvan, C. P.; Rajput, Jyoti
2018-02-01
Events from the sequential and concerted modes of the fragmentation of OCS³⁺ that result in coincident detection of the fragments C⁺, O⁺ and S⁺ have been separated using a newly proposed representation. An ion beam of 1.8 MeV Xe⁹⁺ is used to make the triply charged molecular ion, with the fragments being detected by a recoil ion momentum spectrometer. By separating events belonging exclusively to the sequential mode of breakup, the electronic states of the intermediate molecular ion (CO²⁺ or CS²⁺) involved are determined, and from the kinetic energy release spectra, it is shown that the low-lying excited states of the parent OCS³⁺ are responsible for this mechanism. An estimate of the branching ratios of events coming from the sequential versus the concerted mode is presented.
Sequential Versus Concurrent Trastuzumab in Adjuvant Chemotherapy for Breast Cancer
Perez, Edith A.; Suman, Vera J.; Davidson, Nancy E.; Gralow, Julie R.; Kaufman, Peter A.; Visscher, Daniel W.; Chen, Beiyun; Ingle, James N.; Dakhil, Shaker R.; Zujewski, JoAnne; Moreno-Aspitia, Alvaro; Pisansky, Thomas M.; Jenkins, Robert B.
2011-01-01
Purpose NCCTG (North Central Cancer Treatment Group) N9831 is the only randomized phase III trial evaluating trastuzumab added sequentially or used concurrently with chemotherapy in resected stages I to III invasive human epidermal growth factor receptor 2–positive breast cancer. Patients and Methods Patients received doxorubicin and cyclophosphamide every 3 weeks for four cycles, followed by paclitaxel weekly for 12 weeks (arm A), paclitaxel plus sequential trastuzumab weekly for 52 weeks (arm B), or paclitaxel plus concurrent trastuzumab for 12 weeks followed by trastuzumab for 40 weeks (arm C). The primary end point was disease-free survival (DFS). Results Comparison of arm A (n = 1,087) and arm B (n = 1,097), with 6-year median follow-up and 390 events, revealed 5-year DFS rates of 71.8% and 80.1%, respectively. DFS was significantly increased with trastuzumab added sequentially to paclitaxel (log-rank P < .001; arm B/arm A hazard ratio [HR], 0.69; 95% CI, 0.57 to 0.85). Comparison of arm B (n = 954) and arm C (n = 949), with 6-year median follow-up and 313 events, revealed 5-year DFS rates of 80.1% and 84.4%, respectively. There was an increase in DFS with concurrent trastuzumab and paclitaxel relative to sequential administration (arm C/arm B HR, 0.77; 99.9% CI, 0.53 to 1.11), but the P value (.02) did not cross the prespecified O'Brien-Fleming boundary (.00116) for the interim analysis. Conclusion DFS was significantly improved with 52 weeks of trastuzumab added to adjuvant chemotherapy. On the basis of a positive risk-benefit ratio, we recommend that trastuzumab be incorporated into a concurrent regimen with taxane chemotherapy as an important standard-of-care treatment alternative to a sequential regimen. PMID:22042958
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
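A minimal single-arm sketch of such a predictive-probability computation (beta-binomial with a uniform prior; the response threshold, success cutoff and interim counts are illustrative assumptions, and scipy supplies the beta tail probability):

```python
from math import exp, lgamma

from scipy.stats import beta

def log_beta_fn(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(k, n, a, b):
    """Posterior-predictive probability of k successes in n future patients."""
    return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + log_beta_fn(a + k, b + n - k) - log_beta_fn(a, b))

def predictive_probability(successes, n_now, n_max, p0=0.5, cut=0.975):
    """P(final analysis succeeds | interim data): sum, over possible future
    outcomes, of the chance that the final posterior gives P(p > p0) > cut."""
    a, b = 1.0 + successes, 1.0 + n_now - successes  # Beta(1, 1) prior
    n_rem = n_max - n_now
    return sum(beta_binom_pmf(k, n_rem, a, b)
               for k in range(n_rem + 1)
               if beta.sf(p0, a + k, b + n_rem - k) > cut)

# Interim look: 14 responders among 20 patients, 30 still to enrol.
print(f"predictive probability = {predictive_probability(14, 20, 50):.3f}")
```

Unlike a posterior probability computed at the interim, this quantity explicitly averages over the data remaining to be observed, which is the property the paper highlights for futility monitoring.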
Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation.
Kong, Danyang; Asplund, Christopher L; Ling, Aiqing; Chee, Michael W L
2015-08-01
Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD-vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of the temporal distribution on foreperiod and sequential effects. Sleep laboratory. There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Despite behavioral lapses and slower response times, sleep-deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. © 2015 Associated Professional Sleep Societies, LLC.
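The foreperiod effect is usually attributed to the rising hazard of a uniformly distributed foreperiod, which an exponential distribution removes; a short computation makes the contrast concrete (the distribution parameters below are illustrative, not the study's timings):

```python
import numpy as np

# Hazard: conditional probability density that the stimulus occurs at t,
# given it has not occurred yet. For U(0, 10 s) this is f/(1-F) = 1/(10-t),
# which rises with waiting time and so rewards re-preparation; for an
# exponential foreperiod the hazard is constant, removing that incentive.
t = np.arange(1.0, 10.0)
uniform_hazard = 1.0 / (10.0 - t)
exponential_hazard = np.full_like(t, 0.25)   # Exp with mean 4 s
for ti, uh, eh in zip(t, uniform_hazard, exponential_hazard):
    print(f"t = {ti:3.0f} s   uniform: {uh:.2f}   exponential: {eh:.2f}")
```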
Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2012-01-01
We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
Sequential dynamics in visual short-term memory.
Kool, Wouter; Conway, Andrew R A; Turk-Browne, Nicholas B
2014-10-01
Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects.
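Component estimates of this kind are typically obtained by maximising the likelihood of a mixture model over continuous-report errors. Below is a sketch of such a likelihood (von Mises target and swap components plus uniform guessing, in the spirit of the models fit here), which one would hand to a numerical optimiser; the synthetic data and parameter values are illustrative assumptions.

```python
import numpy as np

def mixture_loglik(errors, nontarget_offsets, p_target, p_swap, kappa):
    """Log-likelihood of continuous-report errors (radians) under a
    three-component mixture: von Mises around the target, von Mises around
    each non-target (swap responses), and uniform guessing."""
    p_guess = 1.0 - p_target - p_swap
    vm = lambda x: np.exp(kappa * np.cos(x)) / (2.0 * np.pi * np.i0(kappa))
    swap_like = vm(nontarget_offsets).mean(axis=1)   # average over non-targets
    like = p_target * vm(errors) + p_swap * swap_like + p_guess / (2.0 * np.pi)
    return np.log(like).sum()

rng = np.random.default_rng(0)
errors = rng.vonmises(0.0, 8.0, size=300)             # mostly on-target reports
offsets = rng.uniform(-np.pi, np.pi, size=(300, 2))   # errors to two non-targets
print(mixture_loglik(errors, offsets, p_target=0.7, p_swap=0.1, kappa=8.0))
```

Fitting p_target, p_swap and kappa separately at each serial position is what allows a recency effect to be attributed to swapping rather than to guessing, as in Experiment 3.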
NASA Astrophysics Data System (ADS)
Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.
2017-05-01
We employ an adaptive measurement system, based on a sequential hypothesis testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
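The core of such an adaptive design is choosing the next source/view to maximize the information gained about the threat hypothesis. A toy greedy selector under a binary-outcome simplification (illustrative only, not the testbed's actual criterion; the per-view detection statistics are assumed known):

```python
import numpy as np

def posterior(prior, p_h1, p_h0):
    """Bayes update for P(threat) after one binary measurement outcome."""
    num = prior * p_h1
    return num / (num + (1.0 - prior) * p_h0)

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def pick_next_view(prior, views):
    """views: list of (P(+|threat), P(+|clear)) for each candidate source/view.
    Greedily choose the view minimizing expected posterior entropy."""
    best, best_h = None, np.inf
    for i, (tp, fp) in enumerate(views):
        p_pos = prior * tp + (1.0 - prior) * fp       # predictive prob. of '+'
        h = (p_pos * entropy(posterior(prior, tp, fp))
             + (1.0 - p_pos) * entropy(posterior(prior, 1.0 - tp, 1.0 - fp)))
        if h < best_h:
            best, best_h = i, h
    return best

# three hypothetical views with different discrimination power
print(pick_next_view(0.2, [(0.6, 0.4), (0.9, 0.2), (0.7, 0.5)]))   # picks 1
```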
Mining sequential patterns for protein fold recognition.
Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I
2008-02-01
Protein data contain discriminative patterns that can be used in many beneficial applications if they are defined correctly. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to the determination of the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequences. A classifier uses the extracted sequential patterns to classify proteins into the appropriate fold category. For training and evaluating the proposed method we used the protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
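For reference, the serial kernel being parallelized can be stated compactly. A minimal unconditional sequential Gaussian simulation on a 1D grid, with an assumed exponential covariance and simple kriging (a sketch only, far from GSLIB's feature set):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgs_1d(n=100, corr_len=10.0, sill=1.0, max_nbrs=8):
    """Unconditional SGS: visit grid nodes along a random path, simple-krige
    from already-simulated neighbors under an exponential covariance, and
    draw each value from the resulting conditional Gaussian."""
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
    values = np.full(n, np.nan)
    for i in rng.permutation(n):
        known = np.flatnonzero(~np.isnan(values))
        if known.size == 0:
            values[i] = rng.normal(0.0, np.sqrt(sill))
            continue
        nbrs = known[np.argsort(np.abs(known - i))[:max_nbrs]]
        C = cov(nbrs[:, None] - nbrs[None, :])   # neighbor-neighbor covariance
        c = cov(nbrs - i)                        # neighbor-target covariance
        lam = np.linalg.solve(C, c)              # simple kriging weights
        mean = lam @ values[nbrs]
        var = max(sill - lam @ c, 1e-10)         # simple kriging variance
        values[i] = rng.normal(mean, np.sqrt(var))
    return values
```

The sequential dependence of each node on previously simulated nodes is exactly what makes parallelization of these algorithms nontrivial.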
Automated segmentation of dental CBCT image with prior-guided sequential random forests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first layer of random forest classifier that can select discriminative features for segmentation. Based on the first layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors' method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
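The layer-by-layer training loop is essentially an auto-context cascade. A simplified voxel-wise sketch with scikit-learn (binary mandible-vs-background labels assumed; the paper additionally uses neighborhood context features extracted from the probability maps, which are omitted here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_sequential_forests(appearance, prior_prob, labels, n_layers=3):
    """Auto-context cascade: each layer sees the appearance features plus the
    probability map from the previous layer (layer 0 uses the atlas prior).

    appearance: (n_voxels, n_features); prior_prob, labels: (n_voxels,).
    Returns the list of trained forests; at test time they are applied in
    the same sequence, each feeding its probability map to the next."""
    forests, prob = [], prior_prob
    for _ in range(n_layers):
        X = np.column_stack([appearance, prob])
        rf = RandomForestClassifier(n_estimators=50).fit(X, labels)
        prob = rf.predict_proba(X)[:, 1]   # updated probability map
        forests.append(rf)
    return forests
```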
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias, with step-size h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
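The telescoping identity itself is easy to illustrate when i.i.d. sampling is possible; the paper's SMC machinery replaces exactly the per-level sampling step. A toy MLMC estimator in Python, with a synthetic coupled functional standing in for the coupled PDE solves:

```python
import numpy as np

rng = np.random.default_rng(0)

def level_samples(l, n):
    """Toy coupled samples: f_l and f_{l-1} share the randomness z, and the
    level-dependent 'discretisation error' shrinks like 2**-l (a stand-in
    for coupled PDE solves at step sizes h_l)."""
    z = rng.standard_normal(n)
    fine = np.sin(z) + 0.1 * 2.0 ** -l * rng.standard_normal(n)
    if l == 0:
        return fine
    coarse = np.sin(z) + 0.1 * 2.0 ** -(l - 1) * rng.standard_normal(n)
    return fine - coarse          # increment in the telescoping sum

def mlmc_estimate(L=4, n_per_level=(4000, 2000, 1000, 500, 250)):
    """E[f_L] = E[f_0] + sum over l of E[f_l - f_{l-1}]."""
    return sum(level_samples(l, n_per_level[l]).mean() for l in range(L + 1))

print(mlmc_estimate())   # estimates E[sin(Z)] = 0 for Z ~ N(0, 1)
```

Because the increment variance shrinks with the level, fewer samples are needed at the expensive fine levels, which is the source of the computational saving.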
Polanco, Patricio M; Ding, Ying; Knox, Jordan M; Ramalingam, Lekshmi; Jones, Heather; Hogg, Melissa E; Zureikat, Amer H; Holtzman, Matthew P; Pingpank, James; Ahrendt, Steven; Zeh, Herbert J; Bartlett, David L; Choudry, Haroon A
2015-05-01
Cytoreductive surgery (CRS) with hyperthermic intraperitoneal chemoperfusion (HIPEC) is routinely used to treat certain peritoneal carcinomatoses (PC), but it can be associated with relatively high complication rates, prolonged hospital length of stay, and potential mortality. Our objective was to determine the learning curve (LC) of CRS/HIPEC in our high-volume institution. A total of 370 patients with PC from mucinous appendiceal neoplasms (MAN = 282), malignant peritoneal mesothelioma (MPM = 60), and gastric cancer (GC = 24) were studied. Outcomes analyzed included incomplete cytoreduction (IC), severe morbidity (SM), 60-day mortality, progression-free survival (PFS), and overall survival (OS). The risk-adjusted sequential probability ratio test (RA-SPRT) was employed to assess the LC of CRS/HIPEC for IC and SM using prespecified odds ratio (OR) boundaries derived from previously published data. Risk-adjusted cumulative average probability (RA-CAP) was used to analyze 1-year PFS and 2-year OS. Complete cytoreduction, severe morbidity, and 60-day mortality rates were 84.2%, 30%, and 1.9%, respectively. A higher simplified peritoneal cancer index was the major independent risk factor for IC, whereas high-grade histology, IC, and diagnosis of MPM and GC (compared with MAN) were predictors of SM after CRS/HIPEC (p < 0.05). RA-SPRT showed that approximately 180 cases are needed to achieve the lowest risk of IC and SM. Ninety cases were needed to achieve a steady 1-year PFS and 2-year OS in RA-CAP plots. The completeness of cytoreduction, morbidity, and mortality rates for CRS/HIPEC at our institution are comparable to previously reported data. Approximately 180 and 90 procedures are required to improve operative and oncologic outcomes, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramirez Aviles, Camila A.; Rao, Nageswara S.
We consider the problem of inferring the operational state of a reactor facility by using measurements from a radiation sensor network, which is deployed around the facility's ventilation stack. The radiation emissions from the stack decay with distance, and the corresponding measurements are inherently random with parameters determined by radiation intensity levels at the sensor locations. We fuse measurements from network sensors to estimate the intensity at the stack, and use this estimate in a one-sided Sequential Probability Ratio Test (SPRT) to infer the on/off state of the reactor facility. We demonstrate the superior performance of this method over conventional majority vote fusers and individual sensors using (i) test measurements from a network of NaI sensors, and (ii) emulated measurements using radioactive effluents collected at a reactor facility stack. We analytically quantify the performance improvements of individual sensors and their networks with adaptive thresholds over those with fixed ones, by using the packing number of the radiation intensity space.
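With Poisson-distributed counts, the one-sided SPRT takes a particularly simple form. A sketch under assumed per-interval rates (lam_off for background only, lam_on for background plus the estimated stack contribution at the sensor; the error tolerances are illustrative):

```python
import math

def poisson_sprt(counts, lam_off, lam_on, alpha=0.001, beta=0.1):
    """One-sided SPRT for reactor on/off from per-interval sensor counts.
    The Poisson log-likelihood ratio for one interval with count k is
    k * log(lam_on / lam_off) - (lam_on - lam_off)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for k in counts:
        llr += k * math.log(lam_on / lam_off) - (lam_on - lam_off)
        if llr >= upper:
            return "on"
        if llr <= lower:
            return "off"
    return "undecided"
```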
Quantifying the abundance of co-occurring conifers along Inland Northwest (USA) climate gradients.
Rehfeldt, Gerald E; Ferguson, Dennis E; Crookston, Nicholas L
2008-08-01
The occurrence and abundance of conifers along climate gradients in the Inland Northwest (USA) was assessed using data from 5082 field plots, 81% of which were forested. Analyses using the Random Forests classification tree revealed that the sequential distribution of species along an altitudinal gradient could be predicted with reasonable accuracy from a single climate variable, a growing-season dryness index, calculated from the ratio of degree-days >5 degrees C that accumulate in the frost-free season to the summer precipitation. While the appearance and departure of species in an ascending altitudinal sequence were closely related to the dryness index, the departure was most easily visualized in relation to negative degree-days (degree-days < 0 degrees C). The results were in close agreement with the works of descriptive ecologists. A Weibull response function was used to predict from climate variables the abundance and occurrence probabilities of each species, using binned data. The fit of the models was excellent, generally accounting for >90% of the variance among 100 classes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ramirez Aviles, Camila A.
We consider the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network deployed around the facility's ventilation off-gas stack. The intensity of stack emissions decays with distance, and the sensor counts or measurements are inherently random with parameters determined by the intensity at the sensor's location. We utilize the measurements to estimate the intensity at the stack, and use it in a one-sided Sequential Probability Ratio Test (SPRT) to infer on/off status of the reactor. We demonstrate the superior performance of this method over conventional majority fusers and individual sensors using (i) test measurements from a network of 21 NaI detectors, and (ii) effluence measurements collected at the stack of a reactor facility. We also analytically establish the superior detection performance of the network over individual sensors with fixed and adaptive thresholds by utilizing the Poisson distribution of the counts. We quantify the performance improvements of the network detection over individual sensors using the packing number of the intensity space.
Wang, Lu; Liao, Shengjin; Ruan, Yong-Ling
2013-01-01
Seed development depends on coordination among embryo, endosperm and seed coat. Endosperm undergoes nuclear division soon after fertilization, whereas embryo remains quiescent for a while. Such a developmental sequence is of great importance for proper seed development. However, the underlying mechanism remains unclear. Recent results on the cellular domain- and stage-specific expression of invertase genes in cotton and Arabidopsis revealed that cell wall invertase may positively and specifically regulate nuclear division of endosperm after fertilization, thereby playing a role in determining the sequential development of endosperm and embryo, probably through glucose signaling.
Anota, Amélie; Mouillet, Guillaume; Trouilloud, Isabelle; Dupont-Gossart, Anne-Claire; Artru, Pascal; Lecomte, Thierry; Zaanan, Aziz; Gauthier, Mélanie; Fein, Francine; Dubreuil, Olivier; Paget-Bailly, Sophie; Taieb, Julien; Bonnetain, Franck
2015-01-01
Background A randomized multicenter phase II trial was conducted to assess the sequential treatment strategy using FOLFIRI.3 and gemcitabine alternately (Arm 2) compared to gemcitabine alone (Arm 1) in patients with metastatic, non-pre-treated pancreatic adenocarcinoma. The primary endpoint was the progression-free survival (PFS) rate at 6 months. It concluded that the sequential treatment strategy appears to be feasible and effective, with a PFS rate of 43.5% in Arm 2 at 6 months (26.1% in Arm 1). This paper reports the results of the longitudinal analysis of health-related quality of life (HRQoL) as a secondary endpoint of this study. Methods HRQoL was evaluated using the EORTC QLQ-C30 at baseline and every two months until the end of the study or death. HRQoL deterioration-free survival (QFS) was defined as the time from randomization to a first significant deterioration as compared to the baseline score with no further significant improvement, or death. A propensity score was estimated comparing characteristics of partial and complete responders. Analyses were repeated with the inverse probability weighting method using the propensity score. Multivariate Cox regression analyses were performed to identify independent factors influencing QFS. Results 98 patients were included between 2007 and 2011. Adjusting on the propensity score, patients of Arm 2 presented a longer QFS of Global Health Status (Hazard Ratio: 0.52 [0.31-0.85]), emotional functioning (0.35 [0.21-0.59]) and pain (0.50 [0.31-0.81]) than those of Arm 1. Conclusion Patients of Arm 2 presented a better HRQoL with a longer QFS than those of Arm 1. Moreover, the propensity score method allows missing data to be taken into account according to patients' characteristics. Trial registration information Eudract N° 2006-005703-34. (Name of the Trial: FIRGEM). PMID:26010884
Cañas, Fernando; Pérez-Solá, Víctor; Díaz, Silvia; Rejas, Javier
2007-01-01
This study aimed to assess the cost effectiveness of ziprasidone versus haloperidol in sequential intramuscular (IM)/oral treatment of patients with exacerbation of schizophrenia in Spain. A cost-effectiveness analysis from the hospital perspective was performed. Length of stay, study medication and use of concomitant drugs were calculated using data from the ZIMO trial. The effectiveness of treatment was determined by the percentage of responders (reduction in baseline Brief Psychiatric Rating Scale [BPRS] negative symptoms subscale ≥30%). Economic assessment included estimation of mean (95% CI) total costs, cost per responder and the incremental cost-effectiveness ratio (ICER) per additional responder. The economic uncertainty level was controlled by resampling and calculation of cost-effectiveness acceptability curves. A total of 325 patients (ziprasidone n = 255, haloperidol n = 70) were included in this economic subanalysis. Ziprasidone showed a significantly higher responder rate compared with haloperidol (71% vs 56%, respectively; p = 0.023). Mean total costs were €3582 (95% CI 3226, 3937) for ziprasidone and €2953 (95% CI 2471, 3436) for haloperidol (p = 0.039), mainly due to a higher ziprasidone acquisition cost. However, costs per responder were lower with ziprasidone (€5045 [95% CI 4211, 6020]) than with haloperidol (€5302 [95% CI 3666, 7791]), with a cost per additional responder (ICER) for ziprasidone of €4095 (95% CI -130, 22 231). The acceptability curve showed an ICER cut-off value of €13,891 at the 95% cost-effectiveness probability level for ≥30% reduction in BPRS negative symptoms. Compared with haloperidol, ziprasidone was significantly better at controlling psychotic negative symptoms in acute psychoses. The extra cost of ziprasidone was offset by a higher effectiveness rate, yielding a lower cost per responder. In light of the social benefit (less family burden and greater restoration of productivity), the incremental cost per additional responder with sequential IM/oral ziprasidone should be considered cost effective in patients with exacerbation of schizophrenia in Spain.
ERIC Educational Resources Information Center
Ashworth, Stephanie R.
2013-01-01
The study examined the relationship between secondary public school principals' emotional intelligence and school performance. The correlational study employed an explanatory sequential mixed methods model. The non-probability sample consisted of 105 secondary public school principals in Texas. The emotional intelligence characteristics of the…
Lee, Pin-Rou; Kho, Stephanie Hui Chern; Yu, Bin; Curran, Philip; Liu, Shao-Quan
2013-01-01
Summary The growth kinetics and fermentation performance of Williopsis saturnus and Saccharomyces cerevisiae at ratios of 10:1, 1:1 and 1:10 (W.:S.) were studied in papaya juice with initial 7-day fermentation by W. saturnus, followed by S. cerevisiae. The growth kinetics of W. saturnus were similar at all ratios, but its maximum cell count decreased as the proportion of S. cerevisiae was increased. Conversely, there was an early death of S. cerevisiae at the ratio of 10:1. Williopsis saturnus was the dominant yeast at 10:1 ratio that produced papaya wine with elevated concentrations of acetate esters. On the other hand, 1:1 and 1:10 ratios allowed the coexistence of both yeasts which enabled the flavour-enhancing potential of W. saturnus as well as the ethyl ester and alcohol-producing abilities of S. cerevisiae. In particular, 1:1 and 1:10 ratios resulted in production of more ethyl esters, alcohols and 2-phenylethyl acetate. However, the persistence of both yeasts at 1:1 and 1:10 ratios led to formation of high levels of acetic acid. The findings suggest that yeast ratio is a critical factor for sequential fermentation of papaya wine by W. saturnus and S. cerevisiae as a strategy to modulate papaya wine flavour. PMID:23171032
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
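A generic tempered SMC sampler conveys the mechanics of drawing parameters from a posterior (the paper's additional twist, estimating the intractable partition function alongside the interactions, is omitted). A self-contained sketch for a 1D Gaussian-mean posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_prior(theta):
    return -0.5 * theta ** 2                     # N(0, 1) prior

def log_lik(theta, data):
    return -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1)

def smc_sampler(data, n=2000, n_temps=20):
    """Tempered SMC: bridge from the prior to the posterior via likelihood^t,
    with reweighting, multinomial resampling, and one Metropolis move per step."""
    temps = np.linspace(0.0, 1.0, n_temps + 1)
    theta = rng.standard_normal(n)               # start with prior draws
    for t0, t1 in zip(temps[:-1], temps[1:]):
        logw = (t1 - t0) * log_lik(theta, data)  # tempering increment
        w = np.exp(logw - logw.max()); w /= w.sum()
        theta = theta[rng.choice(n, size=n, p=w)]
        prop = theta + 0.5 * rng.standard_normal(n)
        log_acc = (log_prior(prop) + t1 * log_lik(prop, data)
                   - log_prior(theta) - t1 * log_lik(theta, data))
        theta = np.where(np.log(rng.random(n)) < log_acc, prop, theta)
    return theta

samples = smc_sampler(rng.normal(1.5, 1.0, size=25))
print(samples.mean(), samples.std())             # posterior mean and spread
```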
Fundamental Interactions in Gasoline Compression Ignition Engines with Fuel Stratification
NASA Astrophysics Data System (ADS)
Wolk, Benjamin Matthew
Transportation accounted for 28% of the total U.S. energy demand in 2011, with 93% of U.S. transportation energy coming from petroleum. The large impact of the transportation sector on global climate change necessitates more-efficient, cleaner-burning internal combustion engine operating strategies. One such strategy that has received substantial research attention in the last decade is Homogeneous Charge Compression Ignition (HCCI). Although the efficiency and emissions benefits of HCCI are well established, practical limits on the operating range of HCCI engines have inhibited their application in consumer vehicles. One such limit is at high load, where the pressure rise rate in the combustion chamber becomes excessively large. Fuel stratification is a potential strategy for reducing the maximum pressure rise rate in HCCI engines. The aim is to introduce reactivity gradients through fuel stratification to promote sequential auto-ignition rather than a bulk-ignition, as in the homogeneous case. A gasoline-fueled compression ignition engine with fuel stratification is termed a Gasoline Compression Ignition (GCI) engine. Although a reasonable amount of experimental research has been performed for fuel stratification in GCI engines, a clear understanding of how the fundamental in-cylinder processes of fuel spray evaporation, mixing, and heat release contribute to the observed phenomena is lacking. Of particular interest is gasoline's pressure sensitive low-temperature chemistry and how it impacts the sequential auto-ignition of the stratified charge. In order to computationally study GCI with fuel stratification using three-dimensional computational fluid dynamics (CFD) and chemical kinetics, two reduced mechanisms have been developed. The reduced mechanisms were developed from a large, detailed mechanism with about 1400 species for a 4-component gasoline surrogate. The two versions of the reduced mechanism developed in this work are: (1) a 96-species version and (2) a 98-species version including nitric oxide formation reactions. Development of reduced mechanisms is necessary because the detailed mechanism is computationally prohibitive in three-dimensional CFD and chemical kinetics simulations. Simulations of Partial Fuel Stratification (PFS), a GCI strategy, have been performed using CONVERGE with the 96-species reduced mechanism developed in this work for a 4-component gasoline surrogate. Comparison is made to experimental data from the Sandia HCCI/GCI engine at a compression ratio 14:1 at intake pressures of 1 bar and 2 bar. Analysis of the heat release and temperature in the different equivalence ratio regions reveals that sequential auto-ignition of the stratified charge occurs in order of increasing equivalence ratio for 1 bar intake pressure and in order of decreasing equivalence ratio for 2 bar intake pressure. Increased low- and intermediate-temperature heat release with increasing equivalence ratio at 2 bar intake pressure compensates for decreased temperatures in higher-equivalence ratio regions due to evaporative cooling from the liquid fuel spray and decreased compression heating from lower values of the ratio of specific heats. The presence of low- and intermediate-temperature heat release at 2 bar intake pressure alters the temperature distribution of the mixture stratification before hot-ignition, promoting the desired sequential auto-ignition. 
At 1 bar intake pressure, the sequential auto-ignition occurs in the reverse order compared to 2 bar intake pressure and too fast for useful reduction of the maximum pressure rise rate compared to HCCI. Additionally, the premixed portion of the charge auto-ignites before the highest-equivalence ratio regions. Conversely, at 2 bar intake pressure, the premixed portion of the charge auto-ignites last, after the higher-equivalence ratio regions. More importantly, the sequential auto-ignition occurs over a longer time period for 2 bar intake pressure than at 1 bar intake pressure such that a sizable reduction in the maximum pressure rise rate compared to HCCI can be achieved.
Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation
NASA Astrophysics Data System (ADS)
Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.
2018-03-01
A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O+ + C+ + S+ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO2+ or CS2+, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS3+ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geard, C.R.
1983-01-01
In root meristems of Tradescantia clone 02 (developed by Sparrow and his colleagues for mutation studies), X-rays interfere with the progression of cells through the cell cycle and induce chromosomal aberrations in a dose-dependent manner consistent with linear-quadratic kinetics. Sequential mitotic cell accumulations after irradiation indicate that sensitivity to aberration induction is probably greatest in cells from late S to early G2, with chromatid interchanges the most frequent aberration type and all aberrations consistent with initiation from the interaction between two lesions. The ratio of the coefficients in the linear (α) and the quadratic (β) terms (α/β) is equal to the dose average of specific energy produced by individual particles in the site where interaction takes place. The ratio α/β for chromosomal aberrations is similar to that previously found for X-ray-induced mutation in Tradescantia stamen hairs, supporting the proposal that radiation-induced mutational events are due to chromosomal aberrations with interaction distances of about 1 μm. Abrahmson and co-workers have noted that both α/β ratios appear to be related to nuclear target size and are similar for chromosomal and mutational endpoints in the same organism. These findings support this concept; however, it is apparent that any situation which diminishes yield at high doses (e.g., mitotic delay) will primarily affect the β component, resulting in low assessments of interaction site diameters.
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
ERIC Educational Resources Information Center
LaSota, Robin Rae
2013-01-01
My dissertation utilizes an explanatory, sequential mixed-methods research design to assess factors influencing community college students' transfer probability to baccalaureate-granting institutions and to present promising practices in colleges and states directed at improving upward transfer, particularly for low-income and first-generation…
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
2012-01-01
Action potentials at the neurons and graded signals at the synapses are primary codes in the brain. In terms of their functional interaction, the studies were focused on the influence of presynaptic spike patterns on synaptic activities. How the synapse dynamics quantitatively regulates the encoding of postsynaptic digital spikes remains unclear. We investigated this question at unitary glutamatergic synapses on cortical GABAergic neurons, especially the quantitative influences of release probability on synapse dynamics and neuronal encoding. Glutamate release probability and synaptic strength are proportionally upregulated by presynaptic sequential spikes. The upregulation of release probability and the efficiency of probability-driven synaptic facilitation are strengthened by elevating presynaptic spike frequency and Ca2+. The upregulation of release probability improves spike capacity and timing precision at postsynaptic neuron. These results suggest that the upregulation of presynaptic glutamate release facilitates a conversion of synaptic analogue signals into digital spikes in postsynaptic neurons, i.e., a functional compatibility between presynaptic and postsynaptic partners. PMID:22852823
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
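The perturbation analysis can be phrased as re-evaluating the tree under a user-supplied mapping of each stated probability. A small recursive sketch (the data layout and the perturbation form are our assumptions, not the paper's software):

```python
def expected_value(node, perturb=lambda p: p):
    """Evaluate a decision tree under perturbed probabilities.

    node is either a terminal payoff (float), ('decision', {action: subtree}),
    or ('chance', [(probability, subtree), ...]). perturb maps each stated
    probability to a perturbed one; branch weights are then renormalized."""
    if isinstance(node, (int, float)):
        return float(node)
    kind, body = node
    if kind == "decision":
        return max(expected_value(child, perturb) for child in body.values())
    weights = [perturb(p) for p, _ in body]
    total = sum(weights)
    return sum(w / total * expected_value(child, perturb)
               for w, (_, child) in zip(weights, body))

tree = ("decision", {"risky": ("chance", [(0.6, 100.0), (0.4, -50.0)]),
                     "safe": 30.0})
print(expected_value(tree))                    # nominal optimum: 40.0
print(expected_value(tree, lambda p: p ** 2))  # a mode-favoring perturbation
```

Re-running the evaluation under pessimistic, optimistic, or mode-favoring mappings and checking whether the optimal strategy changes is the essence of the stability analysis described above.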
Models based on value and probability in health improve shared decision making.
Ortendahl, Monica
2008-10-01
Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated values in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, as is usual in clinical work, yields the model known as subjective expected utility. Estimated values and probabilities are involved sequentially for every step in the decision-making process. Introducing decision-analytic modelling gives a more complete picture of the variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
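The subjective-expected-utility rule itself is one line: weight each outcome's value by its estimated probability and pick the option with the largest sum. A minimal illustration (the numbers are hypothetical):

```python
def subjective_expected_utility(options):
    """options: {option: [(probability, value), ...]} with doctor- or
    patient-estimated probabilities and values; returns the best option."""
    seu = {name: sum(p * v for p, v in outcomes)
           for name, outcomes in options.items()}
    return max(seu, key=seu.get), seu

# hypothetical treatment decision with elicited values on a 0-100 scale
best, scores = subjective_expected_utility({
    "treat": [(0.7, 80.0), (0.3, 20.0)],   # works / side effects
    "watch": [(0.5, 60.0), (0.5, 40.0)],
})
print(best, scores)   # treat: 62.0, watch: 50.0
```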
Language experience changes subsequent learning
Onnis, Luca; Thiessen, Erik
2013-01-01
What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510
Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi
2011-06-01
To clarify whether rapid naming ability itself is a main underpinning factor of rapid automatized naming tests (RAN) and how deep an influence the discrete decoding process has on reading, we performed discrete naming tasks and discrete hiragana reading tasks as well as sequential naming tasks and sequential hiragana reading tasks with 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming. But no correlation was found between reading tasks and discrete naming tasks. The influence of rapid naming ability of objects and colors upon reading seemed relatively small, and multi-item processing may work in relation to these. In contrast, in the digit naming task there was moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming. There was moderate correlation between reading tasks and sequential digit naming tasks. Digit rapid naming ability has more direct effect on reading while its effect on RAN is relatively limited. The ratio of how rapid naming ability influences RAN and reading seems to vary according to kind of the stimuli used. An assumption about components in RAN which influence reading is discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Sequential measurement of conjugate variables as an alternative quantum state tomography.
Di Lorenzo, Antonio
2013-01-04
It is shown how it is possible to reconstruct the initial state of a one-dimensional system by sequentially measuring two conjugate variables. The procedure relies on the quasicharacteristic function, the Fourier transform of the Wigner quasiprobability. The proper characteristic function obtained by Fourier transforming the experimentally accessible joint probability of observing "position" then "momentum" (or vice versa) can be expressed as a product of the quasicharacteristic functions of the two detectors and the unknown one of the quantum system. This allows state reconstruction through the sequence (1) data collection, (2) Fourier transform, (3) algebraic operation, and (4) inverse Fourier transform. The strength of the measurement should be intermediate for the procedure to work.
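Schematically, the four-step reconstruction is four array operations. A numpy sketch on a uniform grid (grid conventions, normalization, and operator-ordering subtleties are glossed over; the detector quasicharacteristic functions are assumed known from calibration):

```python
import numpy as np

def reconstruct_state(joint_prob, det1_char, det2_char, eps=1e-12):
    """Schematic reconstruction of (a discretized analogue of) the system's
    quasiprobability from sequential conjugate-variable measurements.

    joint_prob: 2D histogram of the measured joint probability of
    'position then momentum' outcomes on a uniform grid.
    det1_char, det2_char: detector quasicharacteristic functions on the
    conjugate grid."""
    measured_char = np.fft.fft2(joint_prob)                      # step 2
    system_char = measured_char / (det1_char * det2_char + eps)  # step 3
    return np.fft.ifft2(system_char)                             # step 4
```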
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shusharina, N; Khan, F; Sharp, G
Purpose: To determine the dose level and timing of the boost in locally advanced lung cancer patients with confirmed tumor recurrence, comparing different boosting strategies by the impact of dose escalation on improvement of the therapeutic ratio. Methods: We selected eighteen patients with advanced NSCLC and confirmed recurrence. For each patient, a base IMRT plan to 60 Gy prescribed to PTV was created. Then we compared three dose escalation strategies: a uniform escalation to the original PTV, and an escalation to a PET-defined target planned sequentially and concurrently. The PET-defined targets were delineated by biologically-weighed regions on a pre-treatment 18F-FDG PET. The maximal achievable dose, without violating the OAR constraints, was identified for each boosting method. The EUD for the target, spinal cord, combined lung, and esophagus was compared for each plan. Results: The average prescribed dose was 70.4±13.9 Gy for the uniform boost, 88.5±15.9 Gy for the sequential boost and 89.1±16.5 Gy for the concurrent boost. The size of the boost planning volume was 12.8% (range: 1.4 - 27.9%) of the PTV. The most prescription-limiting dose constraint was the V70 of the esophagus. The EUD within the target increased by 10.6 Gy for the uniform boost, by 31.4 Gy for the sequential boost and by 38.2 Gy for the concurrent boost. The EUD for OARs increased by the following amounts: spinal cord, 3.1 Gy for uniform boost, 2.8 Gy for sequential boost, 5.8 Gy for concurrent boost; combined lung, 1.6 Gy for uniform, 1.1 Gy for sequential, 2.8 Gy for concurrent; esophagus, 4.2 Gy for uniform, 1.3 Gy for sequential, 5.6 Gy for concurrent. Conclusion: Dose escalation to a biologically-weighed gross tumor volume defined on a pre-treatment 18F-FDG PET may provide an improved therapeutic ratio without breaching predefined OAR constraints. A sequential boost provides better sparing of OARs as compared with a concurrent boost.
Hassan, Wafaa S; Elmasry, Manal S; Elsayed, Heba M; Zidan, Dalia W
2018-09-05
In accordance with International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines, six novel, simple and precise sequential spectrophotometric methods were developed and validated for the simultaneous analysis of Ribavirin (RIB), Sofosbuvir (SOF), and Daclatasvir (DAC) in their mixture without prior separation steps. These drugs are co-administered for treatment of Hepatitis C virus (HCV). HCV is the cause of hepatitis C and some cancers such as liver cancer (hepatocellular carcinoma) and lymphomas in humans. These techniques consist of several sequential steps using zero, ratio and/or derivative spectra. DAC was first determined through direct spectrophotometry at 313.7 nm without any interference from the other two drugs, while RIB and SOF can be determined after ratio subtraction through five methods: the ratio difference spectrophotometric method, successive derivative ratio method, constant center method, isoabsorptive method at 238.8 nm, and mean centering of the ratio spectra (MCR) at 224 nm and 258 nm for RIB and SOF, respectively. The calibration curves are linear over the concentration ranges of (6-42), (10-70) and (4-16) μg/mL for RIB, SOF, and DAC, respectively. This method was successfully applied to commercial pharmaceutical preparations of the drugs, spiked human urine, and spiked human plasma. The above methods are very simple methods that were developed for the simultaneous determination of binary and ternary mixtures and thus enhance the signal-to-noise ratio. The method has been successfully applied to the simultaneous analysis of RIB, SOF, and DAC in laboratory-prepared mixtures. The obtained results are statistically compared with those obtained by the official or reported methods, showing no significant difference with respect to accuracy and precision at p = 0.05. Copyright © 2018 Elsevier B.V. All rights reserved.
Risk-sensitive reinforcement learning.
Shen, Yun; Tobia, Michael J; Sommer, Tobias; Obermayer, Klaus
2014-07-01
We derive a family of risk-sensitive reinforcement learning methods for agents, who face sequential decision-making tasks in uncertain environments. By applying a utility function to the temporal difference (TD) error, nonlinear transformations are effectively applied not only to the received rewards but also to the true transition probabilities of the underlying Markov decision process. When appropriate utility functions are chosen, the agents' behaviors express key features of human behavior as predicted by prospect theory (Kahneman & Tversky, 1979), for example, different risk preferences for gains and losses, as well as the shape of subjective probability curves. We derive a risk-sensitive Q-learning algorithm, which is necessary for modeling human behavior when transition probabilities are unknown, and prove its convergence. As a proof of principle for the applicability of the new framework, we apply it to quantify human behavior in a sequential investment task. We find that the risk-sensitive variant provides a significantly better fit to the behavioral data and that it leads to an interpretation of the subject's responses that is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals shows a significant correlation of the risk-sensitive TD error with BOLD signal change in the ventral striatum. In addition we find a significant correlation of the risk-sensitive Q-values with neural activity in the striatum, cingulate cortex, and insula that is not present if standard Q-values are used.
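The key modification to standard Q-learning is passing the TD error through a utility function before the update. A sketch with a piecewise-linear, loss-averse utility (the environment interface and constants are hypothetical, not the paper's task):

```python
import numpy as np

def utility(td, k_gain=0.8, k_loss=1.2):
    """Prospect-theory-style piecewise-linear utility on the TD error:
    losses loom larger than gains."""
    return k_gain * td if td >= 0.0 else k_loss * td

def risk_sensitive_q_learning(env_step, n_states, n_actions,
                              episodes=500, gamma=0.95, lr=0.1, eps=0.1):
    """Tabular Q-learning with a utility-shaped TD update.
    env_step(s, a) -> (next_state, reward, done) is an assumed interface."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += lr * utility(target - Q[s, a])   # utility on the TD error
            s = s2
    return Q
```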
Simulation Model of Mobile Detection Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edmunds, T; Faissol, D; Yao, Y
2009-01-27
In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest inbound boat for inspection and initiate an intercept course. Once within an operational range for the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of the background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, configured in this simulation with a target false positive probability of 0.001 and a false negative probability of 0.1. This test is utilized when the mobile detector maintains a constant range to the vessel being inspected. Finally, a variation of the sequential probability ratio test that is more appropriate when source and detector are not at constant range is available [Nelson 2005]. Each patrol boat in the fleet can be assigned a particular zone of the bay, or all boats can be assigned to monitor the entire bay. Boats assigned to a zone will only intercept and inspect other boats when they enter their zone. In our example simulation, each of two patrol boats operates in a 5 km by 5 km zone. Other parameters for this example include: (1) Detection range - 15 m range maintained between patrol boat and inspected boat; (2) Inbound boat arrival rate - Poisson process with mean arrival rate of 30 boats per hour; (3) Speed of boats to be inspected - random between 4.5 and 9 knots; (4) Patrol boat speed - 10 knots; (5) Number of detectors per patrol boat - four 2-inch x 4-inch x 16-inch NaI detectors; (6) Background radiation - 40 counts/sec per detector; and (7) Detector response due to radiation source at 1 meter - 1,589 counts/sec per detector. Simulation results indicate that two patrol boats are able to detect the source 81% of the time without zones and 90% of the time with zones. The average distances between the source and target at the end of the simulation are 5,866 m and 5,712 m for non-zoned and zoned patrols, respectively. Of the sources that did not reach the target, the average distance to the target is 7,305 m and 6,441 m, respectively. Note that a design trade-off exists.
While zoned patrols provide a higher probability of detection, the non-zoned patrols tend to detect the source farther from its target. Figure 1 displays the location of the source at the end of 1,000 simulations for the 5 x 10 km bay simulation. The simulation model and analysis described here can be used to determine the number of mobile detectors one would need to deploy in order to have a reasonable chance of detecting a source in transit. By fixing the source speed to zero, the same model could be used to estimate how long it would take to detect a stationary source. For example, the model could predict how long it would take plant staff performing assigned duties while carrying dosimeters to discover a contaminated spot in the facility.
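Of the detection algorithms listed, the k-sigma rule is the simplest to state. A sketch, together with an inverse-square scaling of the 1 m calibration figure quoted above (the 1/r^2 falloff is our simplifying assumption, ignoring attenuation and detector geometry):

```python
import math

def k_sigma_alarm(window_counts, bkg_rate, window_sec, k=3.0):
    """Alarm if summed counts in the window exceed the background mean
    plus k standard deviations (Poisson background: sigma = sqrt(mean))."""
    mean = bkg_rate * window_sec
    return sum(window_counts) > mean + k * math.sqrt(mean)

def source_rate(r_m, rate_at_1m=1589.0):
    """Expected source contribution at range r_m, scaling the 1 m response
    by an assumed inverse-square law."""
    return rate_at_1m / r_m ** 2

# e.g. expected signal at the 15 m inspection range, per detector
print(source_rate(15.0))   # about 7.1 counts/sec on a 40 counts/sec background
```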
Mean-field crack networks on desiccated films and their applications: Girl with a Pearl Earring.
Flores, J C
2017-02-15
Usual requirements for bulk and fissure energies are considered in obtaining the interdependence among external stress, thickness and area of crack polygons in desiccated films. The average area of crack polygons increases with thickness as a power-law of 4/3. The sequential fragmentation process is characterized by a topological factor related to a scaling finite procedure. Non-sequential overly tensioned (prompt) fragmentation is briefly discussed. Vermeer's painting, Girl with a Pearl Earring, is considered explicitly by using computational image tools and simple experiments and applying the proposed theoretical analysis. In particular, concerning the source of lightened effects on the girl's face, the left/right thickness layer ratio (≈1.34) and the stress ratio (≈1.102) are evaluated. Other master paintings are briefly considered.
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
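The RSA rule of step (ii), one random attachment attempt per time step accepted with a reaction probability from step (i), can be demonstrated in one dimension. A toy sketch (parameters are arbitrary; in the paper, the coupling to bulk diffusion is what sets the reaction probability and the physical length of a time step, which we simply assume here):

```python
import numpy as np

rng = np.random.default_rng(2)

def rsa_1d(line_len=1000.0, mol_len=1.0, p_react=0.3, attempts=50000):
    """1D random sequential adsorption: one attempt per simulation time step.
    p_react: probability the surface chemistry succeeds given the molecule
    hits an empty stretch of surface (the diffusion/reaction factor)."""
    placed = []
    for _ in range(attempts):
        x = rng.uniform(0, line_len - mol_len)
        # reject if the new molecule would overlap any adsorbed one
        if all(abs(x - y) >= mol_len for y in placed) and rng.random() < p_react:
            placed.append(x)
    return len(placed) * mol_len / line_len    # coverage fraction

print(rsa_1d())   # approaches the 1D jamming limit (~0.7476) as attempts grow
```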
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also involved for performance evaluation.
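A bootstrap particle filter for the nonlinear bearing measurement is the workhorse inside such schemes. A single-target sketch without the clutter and data-association layers of the paper (motion and noise parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_bot(bearings, sensor_xy, n=5000, q=0.05, sigma_b=0.02):
    """Bootstrap particle filter for single-target bearings-only tracking.
    State per particle: [x, y, vx, vy]."""
    parts = np.column_stack([rng.uniform(-10, 10, n), rng.uniform(-10, 10, n),
                             rng.normal(0, 1, n), rng.normal(0, 1, n)])
    dt = 1.0
    for z in bearings:
        parts[:, :2] += dt * parts[:, 2:]                 # constant-velocity move
        parts[:, 2:] += q * rng.standard_normal((n, 2))   # process noise
        pred = np.arctan2(parts[:, 1] - sensor_xy[1],     # predicted bearings
                          parts[:, 0] - sensor_xy[0])
        err = (z - pred + np.pi) % (2 * np.pi) - np.pi    # wrap angle error
        w = np.exp(-0.5 * (err / sigma_b) ** 2)
        w /= w.sum()
        parts = parts[rng.choice(n, size=n, p=w)]         # multinomial resampling
    return parts.mean(axis=0)                             # posterior mean state
```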
NASA Astrophysics Data System (ADS)
Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.
2017-12-01
In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term, sustained time series record. Because the time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distributions of seismic event waveforms increase the difficulty of automatic seismic detection, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied in this work. This algorithm, called sequentially discounting AR learning (SDAR), identifies effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event point changes because of the seismic event occurrence. In other words, SDAR finds the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. 3. Discounting property: SDAR introduces a discounting parameter to decrease the influence of past statistics on future data, which makes SDAR robust for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying, long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
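The discounting idea behind SDAR can be illustrated with a short Python sketch: fit a first-order AR model online with exponential forgetting and score each new sample by its negative log-likelihood, so statistical irregularities stand out as change points. The update rules and discounting rate below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def sdar_scores(x, discount=0.02, eps=1e-8):
    """Sequentially discounting AR(1) learning (illustrative sketch).

    Online updates of mean, AR coefficient, and residual variance with
    forgetting factor `discount`; the anomaly score is the negative
    log-likelihood of each new sample under the current model.
    Simplified stand-in for SDAR, not the authors' implementation.
    """
    mu, a, var = x[0], 0.0, 1.0
    c0 = c1 = 1.0  # discounted (auto)covariance estimates
    scores = np.zeros(len(x))
    for t in range(1, len(x)):
        pred = mu + a * (x[t - 1] - mu)
        err = x[t] - pred
        scores[t] = 0.5 * (np.log(2 * np.pi * var) + err**2 / var)
        # Discounted updates: old statistics fade at rate `discount`.
        mu = (1 - discount) * mu + discount * x[t]
        c0 = (1 - discount) * c0 + discount * (x[t] - mu) ** 2
        c1 = (1 - discount) * c1 + discount * (x[t] - mu) * (x[t - 1] - mu)
        a = c1 / (c0 + eps)
        var = (1 - discount) * var + discount * err**2
    return scores

rng = np.random.default_rng(2)
record = rng.normal(scale=0.5, size=600)
record[300:330] += np.sin(np.linspace(0, 6 * np.pi, 30))  # buried "event"
s = sdar_scores(record)
print("change point near sample", int(np.argmax(s)))
```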
Syndrome Surveillance Using Parametric Space-Time Clustering
DOE Office of Scientific and Technical Information (OSTI.GOV)
KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.
2002-11-01
As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect the patient's privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a "health-index" of the people living in this area. As a surrogate to bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation
Kong, Danyang; Asplund, Christopher L.; Ling, Aiqing; Chee, Michael W.L.
2015-01-01
Study Objectives: Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Design: Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of temporal distribution on foreperiod and sequential effects. Setting: Sleep laboratory. Participants: There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Measurements and Results: Despite behavioral lapses and slower response times, sleep deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. Conclusions: The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. Citation: Kong D, Asplund CL, Ling A, Chee MWL. Increased automaticity and altered temporal preparation following sleep deprivation. SLEEP 2015;38(8):1219–1227. PMID:25845689
Sequential Analysis of Mastery Behavior in 6- and 12-Month-Old Infants.
ERIC Educational Resources Information Center
MacTurk, Robert H.; And Others
1987-01-01
Sequences of mastery behavior were analyzed in a sample of 67 infants 6 to 12 months old. Authors computed (a) frequencies of six categories of mastery behavior, transitional probabilities, and z scores for each behavior change, and (b) transitions from a mastery behavior to positive affect. Changes in frequencies and similarity in organization…
Importance and Effectiveness of Student Health Services at a South Texas University
ERIC Educational Resources Information Center
McCaig, Marilyn M.
2013-01-01
The study examined the health needs of students at a south Texas university and documented the utility of the student health center. The descriptive study employed a mixed methods explanatory sequential design (ESD). The non-probability sample consisted of 140 students who utilized the university's health center during the period of March 23-30,…
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
Lang, Brian H H; Woo, Yu-Cho; Chiu, Keith Wan-Hang
2018-03-19
This study assesses the efficacy and safety of sequential high-intensity focused ultrasound (HIFU) ablation in a multinodular goitre (MNG) by comparing it with single HIFU ablation. One hundred and four (84.6%) patients underwent single ablation of a single nodule (group I), while 19 (15.4%) underwent sequential ablation of two relatively dominant nodules in a MNG (group II). Extent of shrinkage per nodule [by volume reduction ratio (VRR)], pain scores (by 0-10 visual analogue scale) during and after ablation, and rates of vocal cord palsy (VCP), skin burn and nausea/vomiting were compared between the two groups. All 19 (100%) sequential ablations completed successfully. The 3- and 6-month VRR of each nodule were comparable between the two groups (p > 0.05), and in group II, the 3- and 6-month VRR between the first and second nodules were comparable (p = 0.710 and p = 0.548, respectively). Pain score was significantly higher in group II in the morning after ablation (2.29 vs 1.15, p = 0.047) and nausea/vomiting occurred significantly more frequently in group II (15.8% vs 0.0%, p = 0.012). However, VCP and skin burn were comparable (p > 0.05). Sequential ablation had efficacy and safety comparable to single ablation. However, patients undergoing sequential ablation are at higher likelihood of pain the following morning and of nausea/vomiting after ablation. • Sequential HIFU ablation is well-tolerated in patients with two dominant thyroid nodules • More pain is experienced in the morning following sequential HIFU ablation • More nausea/vomiting is experienced following sequential HIFU ablation.
Bertoldi, Eduardo G; Stella, Steffen F; Rohde, Luis Eduardo P; Polanczyk, Carisi A
2017-05-04
The aim of this research is to evaluate the relative cost-effectiveness of functional and anatomical strategies for diagnosing stable coronary artery disease (CAD), using exercise (Ex)-ECG, stress echocardiogram (ECHO), single-photon emission CT (SPECT), coronary CT angiography (CTA) or stress cardiac magnetic resonance (C-MRI). Decision-analytical model, comparing strategies of sequential tests for evaluating patients with possible stable angina at low, intermediate and high pretest probability of CAD, from the perspective of a developing nation's public healthcare system. Hypothetical cohort of patients with pretest probability of CAD between 20% and 70%. The primary outcome is cost per correct diagnosis of CAD. The proportion of false-positive or false-negative tests and the number of unnecessary tests performed were also evaluated. Strategies using Ex-ECG as the initial test were the least costly alternatives but generated more frequent false-positive initial tests and false-negative final diagnoses. Strategies based on CTA or ECHO as the initial test were the most attractive and resulted in similar cost-effectiveness ratios (I$ 286 and I$ 305 per correct diagnosis, respectively). A strategy based on C-MRI was highly effective for diagnosing stable CAD, but its high cost resulted in an unfavourable incremental cost-effectiveness ratio (ICER) in moderate-risk and high-risk scenarios. Non-invasive strategies based on SPECT were dominated, that is, costlier and no more effective than the alternatives. An anatomical diagnostic strategy based on CTA is a cost-effective option for CAD diagnosis. Functional strategies performed equally well when based on ECHO. C-MRI yielded an acceptable ICER only at low pretest probability, and SPECT was not cost-effective in our analysis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Penning, Sophie; Chase, J Geoffrey; Preiser, Jean-Charles; Pretty, Christopher G; Signal, Matthew; Mélot, Christian; Desaive, Thomas
2014-06-01
This research evaluates the impact of the achievement of an intermediate target glycemic band on the severity of organ failure and mortality. Daily Sequential Organ Failure Assessment (SOFA) score and the cumulative time in a 4.0 to 7.0 mmol/L band (cTIB) were evaluated daily up to 14 days in 704 participants of the multicentre Glucontrol trial (16 centers) that randomized patients to intensive group A (blood glucose [BG] target: 4.4-6.1 mmol/L) or conventional group B (BG target: 7.8-10.0 mmol/L). Sequential Organ Failure Assessment evolution was measured by the percentage of patients with SOFA less than or equal to 5 on each day, the percentage of individual organ failures, and the percentage of organ failure-free days. Conditional and joint probability analysis of SOFA and cTIB 0.5 or more assessed the impact of achieving the 4.0 to 7.0 mmol/L target glycemic range on organ failure. Odds ratios (OR) compare the odds of death for cTIB 0.5 or more vs cTIB less than 0.5, where a ratio greater than 1.0 indicates an improvement for achieving cTIB 0.5 or more, independent of SOFA or glycemic target. Groups A and B were matched for demographic and severity of illness data. Blood glucose differed between groups A and B (P<.05), as expected. There was no difference in the percentage of patients with SOFA less than or equal to 5, individual organ failures, and organ failure-free days between groups A and B over days 1 to 14. However, 20% to 30% of group A patients failed to achieve cTIB 0.5 or more for all days, and significant crossover confounds interpretation. Mortality OR was greater than 1.0 for patients with cTIB 0.5 or more in both groups, but much higher for group A on all days. There was no difference in organ failure in the Glucontrol study based on intention to treat to different glycemic targets. Actual outcomes and significant crossover indicate that this result may not be due to the difference in target or treatment. Odds ratios associated with achieving an intermediate 4.0 to 7.0 mmol/L range indicated improved outcome. Copyright © 2014 Elsevier Inc. All rights reserved.
Tables of Stark level transition probabilities and branching ratios in hydrogen-like atoms
NASA Technical Reports Server (NTRS)
Omidvar, K.
1980-01-01
The transition probabilities, which are given in terms of n′, k′ and n, k, are tabulated. No additional summing or averaging is necessary. The electric quantum number k plays the role of the angular momentum quantum number l in the presence of an electric field. The branching ratios between Stark levels are also tabulated. Necessary formulas for the transition probabilities and branching ratios are given. Symmetries are discussed and selection rules are given. Some disagreements for some branching ratios are found between the present calculation and the measurement of Mark and Wierl. The transition probability multiplied by the statistical weight of the initial state is called the static intensity J_S, while the branching ratios are called the dynamic intensity J_D.
Lin, Carol Y; Li, Ling
2016-11-07
HPV DNA diagnostic tests for epidemiology monitoring (research purpose) or cervical cancer screening (clinical purpose) have often been considered separately. Women with positive Linear Array (LA) polymerase chain reaction (PCR) research test results typically are neither informed nor referred for colposcopy. Recently, sequential testing using the Hybrid Capture 2 (HC2) HPV clinical test as a triage before genotyping by LA has been adopted for monitoring HPV infections. Also, HC2 has been reported as a more feasible screening approach for cervical cancer in low-resource countries. Thus, knowing the performance of testing strategies incorporating an HPV clinical test (i.e., HC2-only, or HC2 as a triage before genotyping by LA) compared with LA-only testing in measuring HPV prevalence will be informative for public health practice. We conducted a Monte Carlo simulation study. Data were generated using mathematical algorithms. We designated the reported HPV infection prevalence in the U.S. and Latin America as the "true" underlying type-specific HPV prevalence. Analytical sensitivity of HC2 for detecting 14 high-risk (oncogenic) types was considered to be less than that of LA. Estimated-to-true prevalence ratios and percentage reductions were calculated. When the "true" HPV prevalence was designated as the reported prevalence in the U.S., with LA genotyping sensitivity and specificity of (0.95, 0.95), estimated-to-true prevalence ratios of 14 high-risk types were 2.132, 1.056, and 0.958 for LA-only, HC2-only, and sequential testing, respectively. Estimated-to-true prevalence ratios of two vaccine-associated high-risk types were 2.359 and 1.063 for LA-only and sequential testing, respectively. When the designated type-specific prevalence of HPV16 and 18 was reduced by 50%, prevalence estimates using either LA-only or sequential testing were reduced by 18%. Estimated-to-true HPV infection prevalence ratios using the LA-only testing strategy are generally higher than those using HC2-only or HC2 as a triage before genotyping by LA. HPV clinical testing can be incorporated to monitor HPV prevalence or vaccine effectiveness. Caution is needed when comparing apparent prevalence estimates from different testing strategies.
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
The particle filter methods have been widely used to solve inverse problems with sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements, and parameters. In this paper the main focus is the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs, and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate excellent agreement between estimated and exact values.
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
The stellar content of LH 9 and 10 (N11) in the LMC - A case for sequential star formation
NASA Technical Reports Server (NTRS)
Parker, Joel WM.; Garmany, Catharine D.; Massey, Philip; Walborn, Nolan R.
1992-01-01
The young OB associations Lucke-Hodge 9 and 10 are studied with UBV photometry that is independent of reddening to determine the IMF directly from star counts. The temperature and reddening of the stars are determined which, in conjunction with the spectroscopic classification of the earliest stars, is employed to place the stellar groups on the theoretical H-R diagram. Observations are also presented of the highly compact H II region/knot N11A and the multiple system HD 32228, and LH 9 and 10 are compared. The Lyman ionizing flux calculated at 4.9-7.2 x 10^50 per second agrees well with the flux required to generate the H-alpha luminosity of the H II region. LH 10 has a much flatter slope, a higher ratio of higher-mass to lower-mass stars, and greater reddening than LH 9, and LH 10 contains all of the O stars earlier than O6. It is concluded that LH 9 is older than LH 10 and probably contributed to the initiation of star formation in LH 10.
Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization
Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin
2017-01-01
In this paper, we present a novel approach for stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method improves RANSAC in three aspects: first, hypotheses are preferentially generated by sampling the input feature points in order of the ages and similarities of the features; second, hypotheses are evaluated with the SPRT (Sequential Probability Ratio Test), which discards bad hypotheses quickly without verifying all the data points; third, we aggregate the three best hypotheses to obtain the final estimate instead of selecting only the single best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses early, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and New Tsukuba datasets. Experimental results show that the proposed method achieves better results for both speed and accuracy than RANSAC. PMID:29027935
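A sketch of SPRT-based hypothesis evaluation, in the spirit of randomized RANSAC, is given below in Python: points are tested sequentially against a candidate model, and the likelihood ratio of "bad model" versus "good model" is accumulated so that bad hypotheses can be rejected early. The assumed inlier ratios eps_good and eps_bad and the threshold A are hypothetical parameters, not values from the paper.

```python
import numpy as np

def sprt_verify(errors, tol, eps_good=0.6, eps_bad=0.1, A=100.0):
    """Wald SPRT evaluation of a RANSAC hypothesis (illustrative sketch).

    eps_good / eps_bad are assumed inlier ratios under a good / bad model,
    and A is the likelihood-ratio rejection threshold (hypothetical values).
    Returns (accepted, number_of_points_checked).
    """
    lam = 1.0  # likelihood ratio of "bad model" vs "good model"
    for i, e in enumerate(errors, start=1):
        if abs(e) < tol:   # point consistent with the model
            lam *= eps_bad / eps_good
        else:              # point inconsistent with the model
            lam *= (1 - eps_bad) / (1 - eps_good)
        if lam > A:        # strong evidence the model is bad: reject early
            return False, i
    return True, len(errors)

rng = np.random.default_rng(3)
good = rng.normal(scale=0.5, size=200)   # residuals of a correct model
bad = rng.uniform(-10, 10, size=200)     # residuals of a wrong model
print(sprt_verify(good, tol=1.0))        # accepted after all 200 points
print(sprt_verify(bad, tol=1.0))         # rejected after only a few points
```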
How temporal cues can aid colour constancy
Foster, David H.; Amano, Kinjiro; Nascimento, Sérgio M. C.
2007-01-01
Colour constancy assessed by asymmetric simultaneous colour matching usually reveals limited levels of performance in the unadapted eye. Yet observers can readily discriminate illuminant changes on a scene from changes in the spectral reflectances of the surfaces making up the scene. This ability is probably based on judgements of relational colour constancy, in turn based on the physical stability of spatial ratios of cone excitations under illuminant changes. Evidence is presented suggesting that the ability to detect violations in relational colour constancy depends on temporal transient cues. Because colour constancy and relational colour constancy are closely connected, it should be possible to improve estimates of colour constancy by introducing similar transient cues into the matching task. To test this hypothesis, an experiment was performed in which observers made surface-colour matches between patterns presented in the same position in an alternating sequence with period 2 s or, as a control, presented simultaneously, side-by-side. The degree of constancy was significantly higher for sequential presentation, reaching 87% for matches averaged over 20 observers. Temporal cues may offer a useful source of information for making colour-constancy judgements. PMID:17515948
Multi-alternative decision-making with non-stationary inputs.
Nunes, Luana F; Gurney, Kevin
2016-08-01
One of the most widely implemented models for multi-alternative decision-making is the multihypothesis sequential probability ratio test (MSPRT). It is asymptotically optimal, straightforward to implement, and has found application in modelling biological decision-making. However, the MSPRT is limited in application to discrete ('trial-based'), non-time-varying scenarios. By contrast, real world situations will be continuous and entail stimulus non-stationarity. In these circumstances, decision-making mechanisms (like the MSPRT) which work by accumulating evidence, must be able to discard outdated evidence which becomes progressively irrelevant. To address this issue, we introduce a new decision mechanism by augmenting the MSPRT with a rectangular integration window and a transparent decision boundary. This allows selection and de-selection of options as their evidence changes dynamically. Performance was enhanced by adapting the window size to problem difficulty. Further, we present an alternative windowing method which exponentially decays evidence and does not significantly degrade performance, while greatly reducing the memory resources necessary. The methods presented have proven successful at allowing for the MSPRT algorithm to function in a non-stationary environment.
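A minimal Python sketch of the windowed MSPRT idea follows: log-likelihoods are summed over a rectangular window only, so outdated evidence is discarded and the test can re-select options as the stimulus changes. The window size, threshold, and the Bernoulli observation model in the usage portion are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def windowed_msprt(obs_probs, threshold=0.95, window=50):
    """MSPRT over a rectangular evidence window (illustrative sketch).

    obs_probs: array of shape (T, K) with the likelihood of each observation
    under each of K hypotheses. Evidence older than `window` samples is
    discarded, so the test can track non-stationary inputs.
    Yields the selected hypothesis index at each step, or -1 (undecided).
    """
    log_l = np.log(np.asarray(obs_probs))
    T, _ = log_l.shape
    for t in range(T):
        start = max(0, t - window + 1)
        s = log_l[start:t + 1].sum(axis=0)   # windowed log-evidence
        post = np.exp(s - s.max())
        post /= post.sum()                   # softmax = MSPRT statistic
        yield int(post.argmax()) if post.max() > threshold else -1

rng = np.random.default_rng(4)
# Two hypotheses; the true one switches from H0 to H1 halfway through.
p_true = np.concatenate([np.full(200, 0.6), np.full(200, 0.4)])
x = rng.random(400) < p_true                 # Bernoulli observations
probs = np.where(x[:, None],
                 np.array([0.6, 0.4]),       # P(x=1 | H0), P(x=1 | H1)
                 np.array([0.4, 0.6]))
decisions = list(windowed_msprt(probs))
print(decisions[150], decisions[399])        # expect 0, then 1 after the switch
```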
DOE Office of Scientific and Technical Information (OSTI.GOV)
Na, Man Gyun; Oh, Seungrohk
A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor the relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components in input signals into the neuro-fuzzy system. By reducing the dimension of the input space into the neuro-fuzzy system without losing a significant amount of information, the PCA was used to reduce the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and ease the selection of the input signals into the neuro-fuzzy system. By using the residual signals between the estimated signals and the measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presented a fault diagnosis method for key components of satellite, called Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (R_e) and the charge transfer resistance (R_ct) as the key parameters of state estimation. Then, through the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (R_X) and healthy residual value (R_L) of LIBs based on the state estimation of MSET, and then, through the residual values (R_X and R_L) of LIBs, we detected the anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example application of AMM to LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
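The SPRT stage of such residual-based monitoring can be sketched compactly in Python: the healthy hypothesis is a zero-mean Gaussian residual, degradation is a mean shift of assumed magnitude m1, and Wald's boundaries derived from the target error rates terminate the test. This is a generic sketch under those assumptions, not the AMM implementation.

```python
import numpy as np

def sprt_residuals(residuals, sigma, m1, alpha=0.01, beta=0.01):
    """Wald SPRT on estimation residuals (illustrative sketch).

    H0: residuals ~ N(0, sigma^2)   (component healthy)
    H1: residuals ~ N(m1, sigma^2)  (drift of assumed magnitude m1)
    Returns 'healthy', 'degraded', or 'continue'.
    """
    A = np.log((1 - beta) / alpha)   # accept-H1 boundary
    B = np.log(beta / (1 - alpha))   # accept-H0 boundary
    llr = 0.0
    for r in residuals:
        llr += (m1 / sigma**2) * (r - m1 / 2.0)  # Gaussian log-likelihood ratio
        if llr >= A:
            return "degraded"
        if llr <= B:
            return "healthy"
    return "continue"

rng = np.random.default_rng(5)
print(sprt_residuals(rng.normal(0.0, 1.0, 500), sigma=1.0, m1=1.0))  # healthy
print(sprt_residuals(rng.normal(1.0, 1.0, 500), sigma=1.0, m1=1.0))  # degraded
```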
NASA Astrophysics Data System (ADS)
Holmes, Philip; Eckhoff, Philip; Wong-Lin, K. F.; Bogacz, Rafal; Zacksenhouse, Miriam; Cohen, Jonathan D.
2010-03-01
We describe how drift-diffusion (DD) processes - systems familiar in physics - can be used to model evidence accumulation and decision-making in two-alternative, forced choice tasks. We sketch the derivation of these stochastic differential equations from biophysically-detailed models of spiking neurons. DD processes are also continuum limits of the sequential probability ratio test and are therefore optimal in the sense that they deliver decisions of specified accuracy in the shortest possible time. This leaves open the critical balance of accuracy and speed. Using the DD model, we derive a speed-accuracy tradeoff that optimizes reward rate for a simple perceptual decision task, compare human performance with this benchmark, and discuss possible reasons for prevalent sub-optimality, focussing on the question of uncertain estimates of key parameters. We present an alternative theory of robust decisions that allows for uncertainty, and show that its predictions provide better fits to experimental data than a more prevalent account that emphasises a commitment to accuracy. The article illustrates how mathematical models can illuminate the neural basis of cognitive processes.
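A drift-diffusion trial is straightforward to simulate with an Euler-Maruyama loop, which makes the speed-accuracy trade-off easy to explore numerically; the drift, noise, and threshold values in this Python sketch are illustrative assumptions, not fitted parameters.

```python
import numpy as np

def dd_trial(drift, threshold, dt=0.001, sigma=1.0, rng=None):
    """One drift-diffusion decision: integrate evidence to +/- threshold.

    Returns (choice_correct, decision_time). Parameter values are illustrative.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return x > 0, t

rng = np.random.default_rng(6)
trials = [dd_trial(drift=1.0, threshold=1.0, rng=rng) for _ in range(500)]
acc = np.mean([c for c, _ in trials])
rt = np.mean([t for _, t in trials])
print(f"accuracy ~ {acc:.2f}, mean decision time ~ {rt:.2f} s")
# Raising the threshold trades speed for accuracy, the balance that the
# DD model makes explicit.
```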
Hinsin, Duangduean; Pdungsap, Laddawan; Shiowatana, Juwadee
2002-12-06
A continuous-flow extraction system originally developed for sequential extraction was applied to study elemental association of a synthetic metal-doped amorphous iron hydroxide phase. The homogeneity and metal association of the precipitates were evaluated by gradual leaching using the system. Leachate was collected in fractions for determination of elemental concentrations. The result obtained as extractograms indicated that the doped metals were adsorbed more on the outermost surface rather than homogeneously distributed in the precipitates. The continuous-flow extraction method was also used for effective removal of surface adsorbed metals to obtain a homogeneous metal-doped synthetic iron hydroxide by a sequential extraction using acetic acid and small volume of hydroxylamine hydrochloride solution. The system not only ensures complete washing, but the extent of metal immobilization in the synthetic iron hydroxide could be determined with high accuracy from the extractograms. The initial metal/iron mole ratio (M/Fe) in solution affected the M/Fe mole ratio in homogeneous doped iron hydroxide phase. The M/Fe mole ratio of metal incorporation was approximately 0.01-0.02 and 0.03-0.06, for initial solution M/Fe mole ratio of 0.025 and 0.100, respectively.
Water reuse in the l-lysine fermentation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsiao, T.Y.; Glatz, C.E.
1996-02-05
L-Lysine is produced commercially by fermentation. As is typical for fermentation processes, a large amount of liquid waste is generated. To minimize the waste, which is mostly the broth effluent from the cation exchange column used for L-lysine recovery, the authors investigated a strategy of recycling a large fraction of this broth effluent to the subsequent fermentation. This was done on a lab-scale process with Corynebacterium glutamicum ATCC 21253 as the L-lysine-producing organism. Broth effluent from a fermentation in a defined medium was able to replace 75% of the water for the subsequent batch; this recycle ratio was maintained for 3 sequential batches without affecting cell mass and L-lysine production. Broth effluent was recycled at a 50% recycle ratio in a fermentation in a complex medium containing beet molasses. The first recycle batch had an 8% lower final L-lysine level, but 8% higher maximum cell mass. In addition to reducing the volume of liquid waste, this recycle strategy has the additional advantage of utilizing the ammonium desorbed from the ion-exchange column as a nitrogen source in the recycle fermentation. The major problem of recycling the effluent from the complex medium was in the cation-exchange operation, where column capacity was 17% lower for the recycle batch. The loss of column capacity probably results from the buildup of cations competing with L-lysine for binding.
A Novel Ship-Tracking Method for GF-4 Satellite Sequential Images.
Yao, Libo; Liu, Yong; He, You
2018-06-22
The geostationary remote sensing satellite has the capability of wide scanning, persistent observation, and operational response, and has tremendous potential for maritime target surveillance. The GF-4 satellite is the first geostationary orbit (GEO) optical remote sensing satellite with medium resolution in China. In this paper, a novel ship-tracking method for GF-4 satellite sequential imagery is proposed. The algorithm has three stages. First, a local visual saliency map based on local peak signal-to-noise ratio (PSNR) is used to detect ships in a single frame of GF-4 satellite sequential images. Second, accurate positioning of each potential target is achieved by a dynamic correction using the rational polynomial coefficients (RPCs) and automatic identification system (AIS) data of ships. Finally, an improved multiple hypotheses tracking (MHT) algorithm with amplitude information is used to track ships by further removing false targets, and to estimate ships' motion parameters. The algorithm has been tested using GF-4 sequential images and AIS data. The results of the experiment demonstrate that the algorithm achieves good tracking performance on GF-4 satellite sequential images and estimates the motion information of ships accurately.
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Dry minor mergers and size evolution of high-z compact massive early-type galaxies
NASA Astrophysics Data System (ADS)
Oogi, Taira; Habe, Asao
2013-01-01
Recent observations show evidence that high-z (z ˜ 2-3) early-type galaxies (ETGs) are more compact than those with comparable mass at z ˜ 0. Such size evolution is most likely explained by the 'dry merger scenario'. However, previous studies based on this scenario cannot consistently explain the properties of both high-z compact massive ETGs and local ETGs. We investigate the effect of multiple sequential dry minor mergers on the size evolution of compact massive ETGs. From an analysis of the Millennium Simulation Data Base, we show that such minor (stellar mass ratio M2/M1 < 1/4) mergers are extremely common during hierarchical structure formation. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. Typical mass ratios of these minor mergers are 1/20 < M2/M1 ≤ 1/10. We show that sequential minor mergers of compact satellite galaxies are the most efficient at promoting size growth and decreasing the velocity dispersion of compact massive ETGs in our simulations. The change of stellar size and density of the merger remnants is consistent with recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Data Base and estimate the size growth of the galaxies through the dry minor merger scenario. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained during sequential minor mergers in our simulations. However, we note that our numerical result is only valid for merger histories with typical mass ratios between 1/20 and 1/10 with parabolic and head-on orbits, and that our most efficient size-growth efficiency is likely an upper limit.
ERIC Educational Resources Information Center
Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick
2018-01-01
When studying a complex research phenomenon, a mixed methods design allows to answer a broader set of research questions and to tap into different aspects of this phenomenon, compared to a monomethod design. This paper reports on how a sequential equal status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…
ERIC Educational Resources Information Center
Adolph, Stephen C.
2007-01-01
I describe a group exercise that I give to my undergraduate biostatistics class. The exercise involves analyzing a series of 200 consecutive basketball free-throw attempts to determine whether there is any evidence for sequential dependence in the probability of making a free-throw. The students are given the exercise before they have learned the…
NASA Astrophysics Data System (ADS)
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, especially estimation of the unknown model parameters needed to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data must be found.
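A toy Python version of sequential ABC conveys the mechanism: particles drawn from the prior are perturbed and accepted whenever simulated data fall within a shrinking tolerance of the observations, so no likelihood is ever evaluated. The one-parameter "release rate" problem, the kernel width, and the tolerance schedule below are assumptions of this sketch, not the OLAD configuration.

```python
import numpy as np

def abc_smc_1d(observed, prior_sample, simulate, distance,
               eps_schedule=(3.0, 1.5, 0.75), n_particles=300, seed=9):
    """Sequential ABC (population Monte Carlo flavour), illustrative sketch.

    Particles accepted under a shrinking tolerance schedule approximate
    the posterior of the source parameter without likelihood evaluation.
    All arguments are assumptions of this toy example.
    """
    rng = np.random.default_rng(seed)
    particles = np.array([prior_sample(rng) for _ in range(n_particles)])
    for eps in eps_schedule:
        accepted = []
        while len(accepted) < n_particles:
            theta = rng.choice(particles) + rng.normal(scale=0.5)  # perturb
            if distance(simulate(theta, rng), observed) < eps:
                accepted.append(theta)
        particles = np.array(accepted)
    return particles

# Toy problem: infer a release rate from a noisy "sensor" concentration.
true_rate = 4.0
obs = true_rate + np.random.default_rng(0).normal(scale=0.3)
post = abc_smc_1d(
    observed=obs,
    prior_sample=lambda rng: rng.uniform(0.0, 10.0),
    simulate=lambda theta, rng: theta + rng.normal(scale=0.3),
    distance=lambda a, b: abs(a - b),
)
print(f"posterior mean release rate ~ {post.mean():.2f}")
```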
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
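A biased-coin version of such a random walk rule is easy to state in code: de-escalate after a toxicity, otherwise escalate with probability target/(1 - target), which centers dose assignments around the dose whose toxicity rate equals the target quantile. The dose grid and toxicity curve in this Python sketch are hypothetical.

```python
import numpy as np

def random_walk_trial(doses, p_tox, target=0.33, n_patients=30, seed=7):
    """Biased-coin random walk dose allocation (illustrative sketch).

    After a toxicity the dose steps down; after no toxicity it steps up
    with probability b = target / (1 - target), else stays. p_tox gives
    the assumed true toxicity probability at each dose level.
    Returns the sequence of assigned dose-level indices.
    """
    rng = np.random.default_rng(seed)
    b = target / (1.0 - target)
    level, path = 0, []
    for _ in range(n_patients):
        path.append(level)
        tox = rng.random() < p_tox[level]
        if tox:
            level = max(0, level - 1)
        elif rng.random() < b:
            level = min(len(doses) - 1, level + 1)
    return path

doses = [10, 20, 30, 40, 50]            # hypothetical dose levels
p_tox = [0.05, 0.15, 0.30, 0.50, 0.70]  # hypothetical toxicity curve
print(random_walk_trial(doses, p_tox))  # assignments cluster near level 2
```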
Language experience changes subsequent learning.
Onnis, Luca; Thiessen, Erik
2013-02-01
What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. Copyright © 2012 Elsevier B.V. All rights reserved.
A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics
2007-05-01
findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of... Using the likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion... lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each
GOST: A generic ordinal sequential trial design for a treatment trial in an emerging pandemic.
Whitehead, John; Horby, Peter
2017-03-01
Conducting clinical trials to assess experimental treatments for potentially pandemic infectious diseases is challenging. Since many outbreaks of infectious diseases last only six to eight weeks, there is a need for trial designs that can be implemented rapidly in the face of uncertainty. Outbreaks are sudden and unpredictable and so it is essential that as much planning as possible takes place in advance. Statistical aspects of such trial designs should be evaluated and discussed in readiness for implementation. This paper proposes a generic ordinal sequential trial design (GOST) for a randomised clinical trial comparing an experimental treatment for an emerging infectious disease with standard care. The design is intended as an off-the-shelf, ready-to-use robust and flexible option. The primary endpoint is a categorisation of patient outcome according to an ordinal scale. A sequential approach is adopted, stopping as soon as it is clear that the experimental treatment has an advantage or that sufficient advantage is unlikely to be detected. The properties of the design are evaluated using large-sample theory and verified for moderate sized samples using simulation. The trial is powered to detect a generic clinically relevant difference: namely an odds ratio of 2 for better rather than worse outcomes. Total sample sizes (across both treatments) of between 150 and 300 patients prove to be adequate in many cases, but the precise value depends on both the magnitude of the treatment advantage and the nature of the ordinal scale. An advantage of the approach is that any erroneous assumptions made at the design stage about the proportion of patients falling into each outcome category have little effect on the error probabilities of the study, although they can lead to inaccurate forecasts of sample size. It is important and feasible to pre-determine many of the statistical aspects of an efficient trial design in advance of a disease outbreak. The design can then be tailored to the specific disease under study once its nature is better understood.
Cao, Youfang; Liang, Jie
2013-01-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
Cummings, Jeffrey L; Lyketsos, Constantine G; Peskind, Elaine R; Porsteinsson, Anton P; Mintzer, Jacobo E; Scharre, Douglas W; De La Gandara, Jose E; Agronin, Marc; Davis, Charles S; Nguyen, Uyen; Shin, Paul; Tariot, Pierre N; Siffert, João
Agitation is common among patients with Alzheimer disease; safe, effective treatments are lacking. To assess the efficacy, safety, and tolerability of dextromethorphan hydrobromide-quinidine sulfate for Alzheimer disease-related agitation. Phase 2 randomized, multicenter, double-blind, placebo-controlled trial using a sequential parallel comparison design with 2 consecutive 5-week treatment stages conducted August 2012-August 2014. Patients with probable Alzheimer disease, clinically significant agitation (Clinical Global Impressions-Severity agitation score ≥4), and a Mini-Mental State Examination score of 8 to 28 participated at 42 US study sites. Stable dosages of antidepressants, antipsychotics, hypnotics, and antidementia medications were allowed. In stage 1, 220 patients were randomized in a 3:4 ratio to receive dextromethorphan-quinidine (n = 93) or placebo (n = 127). In stage 2, patients receiving dextromethorphan-quinidine continued; those receiving placebo were stratified by response and rerandomized in a 1:1 ratio to dextromethorphan-quinidine (n = 59) or placebo (n = 60). The primary end point was change from baseline on the Neuropsychiatric Inventory (NPI) Agitation/Aggression domain (scale range, 0 [absence of symptoms] to 12 [symptoms occur daily and with marked severity]). A total of 194 patients (88.2%) completed the study. With the sequential parallel comparison design, 152 patients received dextromethorphan-quinidine and 127 received placebo during the study. Analysis combining stages 1 (all patients) and 2 (rerandomized placebo nonresponders) showed significantly reduced NPI Agitation/Aggression scores for dextromethorphan-quinidine vs placebo (ordinary least squares z statistic, -3.95; P < .001). In stage 1, mean NPI Agitation/Aggression scores were reduced from 7.1 to 3.8 with dextromethorphan-quinidine and from 7.0 to 5.3 with placebo. Between-group treatment differences were significant in stage 1 (least squares mean, -1.5; 95% CI, -2.3 to -0.7; P<.001). In stage 2, NPI Agitation/Aggression scores were reduced from 5.8 to 3.8 with dextromethorphan-quinidine and from 6.7 to 5.8 with placebo. Between-group treatment differences were also significant in stage 2 (least squares mean, -1.6; 95% CI, -2.9 to -0.3; P=.02). Adverse events included falls (8.6% for dextromethorphan-quinidine vs 3.9% for placebo), diarrhea (5.9% vs 3.1% respectively), and urinary tract infection (5.3% vs 3.9% respectively). Serious adverse events occurred in 7.9% with dextromethorphan-quinidine vs 4.7% with placebo. Dextromethorphan-quinidine was not associated with cognitive impairment, sedation, or clinically significant QTc prolongation. In this preliminary 10-week phase 2 randomized clinical trial of patients with probable Alzheimer disease, combination dextromethorphan-quinidine demonstrated clinically relevant efficacy for agitation and was generally well tolerated. clinicaltrials.gov Identifier: NCT01584440.
NASA DOE POD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate that inspection systems, personnel, and protocols demonstrate 0.90 POD with 95% confidence at critical flaw sizes (a90/95). The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
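The Wald test underlying DOEPOD can be sketched for a stream of binomial hit/miss inspection outcomes. The snippet below is a generic Wald SPRT, not DOEPOD's full decision logic: the inferior alternative POD of 0.70 and the error rates are illustrative assumptions, and the a90/95 bookkeeping is omitted.

import math

def wald_sprt(outcomes, p0=0.90, p1=0.70, alpha=0.05, beta=0.05):
    """Classic Wald SPRT on hit (1) / miss (0) outcomes, testing the
    demonstrated POD p0 against an assumed inferior alternative p1."""
    a = math.log((1 - beta) / alpha)     # cross above: POD below p0
    b = math.log(beta / (1 - alpha))     # cross below: p0 POD demonstrated
    llr = 0.0
    for n, y in enumerate(outcomes, start=1):
        llr += math.log(p1 / p0) if y else math.log((1 - p1) / (1 - p0))
        if llr >= a:
            return "reject p0: POD below 0.90", n
        if llr <= b:
            return "accept p0: 0.90 POD demonstrated", n
    return "continue testing", len(outcomes)

print(wald_sprt([1] * 29))   # e.g., a run of consecutive hits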
Potential for leaching of arsenic from excavated rock after different drying treatments.
Li, Jining; Kosugi, Tomoya; Riya, Shohei; Hashimoto, Yohey; Hou, Hong; Terada, Akihiko; Hosomi, Masaaki
2016-07-01
Leaching of arsenic (As) from excavated rock subjected to different drying methods is compared using sequential leaching tests and rapid small-scale column tests combined with a sequential extraction procedure. Although the total As content in the rock was low (8.81 mg/kg), its resulting concentration in the leachate when leached at a liquid-to-solid ratio of 10 L/kg exceeded the environmental standard (10 μg/L). As existed mainly in dissolved forms in the leachates. All of the drying procedures applied in this study increased the leaching of As, with freeze-drying leading to the largest increase. Water extraction of As using the two tests showed different leaching behaviors as a function of the liquid-to-solid ratio, and achieved average extractions of up to 35.7% and 25.8% of the total As, respectively. Dissolution of As from the mineral surfaces and subsequent re-adsorption controlled the short-term release of As; dissolution of Fe, Al, and dissolved organic carbon played important roles in long-term As leaching. Results of the sequential extraction procedure showed that use of 0.05 M (NH4)2SO4 underestimates the readily soluble As. Long-term water extraction removed almost all of the non-specifically sorbed As and most of the specifically sorbed As. The concept of pollution potential indices, which are easily determined by the sequential leaching test, is proposed in this study and is considered for possible use in assessing the efficacy of treatment of excavated rocks. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hybrid and concatenated coding applications.
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Odenwalder, J. P.
1972-01-01
Results are presented from a study evaluating the performance and implementation complexity of concatenated and hybrid coding systems for moderate-speed deep-space applications. It is shown that, with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length-8, rate-1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate-1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.
Convolutional coding at 50 Mbps for the Shuttle Ku-band return link
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.
1976-01-01
Error correcting coding is required for the 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs that were considered when selecting the multiplexed Viterbi decoder approach for this application.
Code of Federal Regulations, 2010 CFR
2010-01-01
... that the facts that caused the deficient share-asset ratio no longer exist; and (ii) The likelihood of further depreciation of the share-asset ratio is not probable; and (iii) The return of the share-asset ratio to its normal limits within a reasonable time for the credit union concerned is probable; and (iv...
Jarmolowicz, David P; Sofis, Michael J; Darden, Alexandria C
2016-07-01
Although progressive ratio (PR) schedules have been used to explore the effects of a range of reinforcer parameters (e.g., magnitude, delay), the effects of reinforcer probability remain underexplored. The present project used independently progressing concurrent PR PR schedules to examine effects of reinforcer probability on PR breakpoint (the highest ratio completed prior to a session-terminating 300-s pause) and response allocation. The probability of reinforcement on one lever remained at 100% across all conditions while the probability of reinforcement on the other lever was systematically manipulated (i.e., 100%, 50%, 25%, 12.5%, and a replication of 25%). Breakpoints on the manipulated lever systematically decreased with decreasing reinforcer probabilities while breakpoints on the control lever remained unchanged. Patterns of switching between the two levers were well described by a choice-by-choice unit price model that accounted for the hyperbolic discounting of the value of probabilistic reinforcers. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Longbotham, Pamela J.
2012-01-01
The study examined the impact of participation in an optional flexible year program (OFYP) on academic achievement. The ex post facto study employed an explanatory sequential mixed methods design. The non-probability sample consisted of 163 fifth grade students in an OFYP district and 137 5th graders in a 180-day instructional year school…
The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems
NASA Astrophysics Data System (ADS)
Granmo, Ole-Christoffer
The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
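The Bayesian Learning Automaton resolves this exploration-exploitation balance by keeping a Beta posterior over each arm's reward probability, sampling once from each posterior, and pulling the arm with the larger sample. A minimal sketch of this sampling-based selection rule, with made-up true reward probabilities:

import random

def bayesian_learning_automaton(p_true=(0.65, 0.50), horizon=1000, seed=7):
    """Two-armed Bernoulli bandit: sample each arm's Beta posterior,
    pull the argmax, update the pulled arm's success/failure counts."""
    rng = random.Random(seed)
    counts = [[1, 1], [1, 1]]          # Beta(1, 1) priors: [alpha, beta]
    rewards = 0
    for _ in range(horizon):
        samples = [rng.betavariate(a, b) for a, b in counts]
        arm = max(range(2), key=lambda i: samples[i])
        reward = rng.random() < p_true[arm]
        counts[arm][0 if reward else 1] += 1
        rewards += reward
    return rewards, counts

print(bayesian_learning_automaton())

Arms that look promising get pulled often and their posteriors sharpen, while uncertain arms keep some probability of being sampled highest, so exploration decays automatically.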
Non-linear properties of metallic cellular materials with a negative Poisson's ratio
NASA Technical Reports Server (NTRS)
Choi, J. B.; Lakes, R. S.
1992-01-01
Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.
Modulating surface rheology by electrostatic protein/polysaccharide interactions.
Ganzevles, Renate A; Zinoviadou, Kyriaki; van Vliet, Ton; Cohen, Martien A; de Jongh, Harmen H
2006-11-21
There is a large interest in mixed protein/polysaccharide layers at air-water and oil-water interfaces because of their ability to stabilize foams and emulsions. Mixed protein/polysaccharide adsorbed layers at air-water interfaces can be prepared either by adsorption of soluble protein/polysaccharide complexes or by sequential adsorption of complexes or polysaccharides to a previously formed protein layer. Even though the final protein and polysaccharide bulk concentrations are the same, the behavior of the adsorbed layers can be very different, depending on the method of preparation. The surface shear modulus of a sequentially formed beta-lactoglobulin/pectin layer can be up to a factor of 6 higher than that of a layer made by simultaneous adsorption. Furthermore, the surface dilatational modulus and surface shear modulus strongly (up to factors of 2 and 7, respectively) depend on the bulk beta-lactoglobulin/pectin mixing ratio. On the basis of the surface rheological behavior, a mechanistic understanding of how the structure of the adsorbed layers depends on the protein/polysaccharide interaction in bulk solution, mixing ratio, ionic strength, and order of adsorption to the interface (simultaneous or sequential) is derived. Insight into the effect of protein/polysaccharide interactions on the properties of adsorbed layers provides a solid basis to modulate surface rheological behavior.
Sequential quantum secret sharing in a noisy environment aided with weak measurements
NASA Astrophysics Data System (ADS)
Ray, Maharshi; Chatterjee, Sourav; Chakrabarty, Indranil
2016-05-01
In this work we give a (n,n)-threshold protocol for sequential secret sharing of quantum information for the first time. By sequential secret sharing we refer to a situation where the dealer does not hold all the secrets at the beginning of the protocol; if the dealer wishes to share secrets at subsequent phases, she/he can realize it with the help of our protocol. We first present our protocol for three parties and later generalize it to situations with more (n > 3) parties. Interestingly, we show that our sequential secret sharing protocol requires fewer quantum and classical resources than repeatedly applying existing protocols. Further, in a much more realistic situation, we consider the sharing of qubits through two kinds of noisy channels, namely the phase damping channel (PDC) and the amplitude damping channel (ADC). When we carry out sequential secret sharing in the presence of noise, we observe that the fidelity of secret sharing at the kth iteration is independent of the effect of noise at the (k - 1)th iteration. In the case of the ADC, we have seen that the average fidelity of secret sharing drops down to ½, which is equivalent to a random guess of the quantum secret. Interestingly, we find that by applying weak measurements one can enhance the average fidelity. This increase of the average fidelity comes at a certain trade-off with the success probability of the weak measurements.
Impact of a Sequential Intervention on Albumin Utilization in Critical Care.
Lyu, Peter F; Hockenberry, Jason M; Gaydos, Laura M; Howard, David H; Buchman, Timothy G; Murphy, David J
2016-07-01
Literature generally finds no advantage in mortality risk for albumin over cheaper alternatives in many settings. Few studies have combined financial and nonfinancial strategies to reduce albumin overuse. We evaluated the effect of a sequential multifaceted intervention on decreasing albumin use in the ICU and explored the effects of the different strategies. Prospective pre-post cohort study. Eight ICUs at two hospitals in an academic healthcare system. Adult patients admitted to study ICUs from September 2011 to August 2014 (n = 22,004). Over 2 years, providers in study ICUs participated in an intervention to reduce albumin use involving monthly feedback and explicit financial incentives in the first year and internal guidelines and order process changes in the second year. Outcomes measured were albumin orders per ICU admission, direct albumin costs, and mortality. Mean (SD) utilization decreased 37% from 2.7 orders (6.8) per admission during the baseline to 1.7 orders (4.6) during the intervention (p < 0.001). Regression analysis revealed that the intervention was independently associated with 0.9 fewer orders per admission, a 42% relative decrease. This adjusted effect consisted of an 18% reduction in the probability of using any albumin (p < 0.001) and a 29% reduction in the number of orders per admission among patients receiving any (p < 0.001). Secondary analysis revealed that the probability reductions were concurrent with the internal guidelines and order process modification, while the reductions in quantity occurred largely during the financial incentives and feedback period. Estimated cost savings totaled $2.5 million during the 2-year intervention. There was no significant difference in ICU or hospital mortality between baseline and intervention. A sequential intervention achieved significant reductions in ICU albumin use and cost savings without changes in patient outcomes, supporting the combination of financial and nonfinancial strategies to align providers with evidence-based practices.
St. Clair, Caryn; Norwitz, Errol R.; Woensdregt, Karlijn; Cackovic, Michael; Shaw, Julia A.; Malkus, Herbert; Ehrenkranz, Richard A.; Illuzzi, Jessica L.
2011-01-01
We sought to define the risk of neonatal respiratory distress syndrome (RDS) as a function of both lecithin/sphingomyelin (L/S) ratio and gestational age. Amniotic fluid L/S ratio data were collected from consecutive women undergoing amniocentesis for fetal lung maturity at Yale-New Haven Hospital from January 1998 to December 2004. Women were included in the study if they delivered a live-born, singleton, nonanomalous infant within 72 hours of amniocentesis. The probability of RDS was modeled using multivariate logistic regression with L/S ratio and gestational age as predictors. A total of 210 mother-neonate pairs (8 RDS, 202 non-RDS) met criteria for analysis. Both gestational age and L/S ratio were independent predictors of RDS. A probability of RDS of 3% or less was noted at an L/S ratio cutoff of ≥3.4 at 34 weeks, ≥2.6 at 36 weeks, ≥1.6 at 38 weeks, and ≥1.2 at term. Under 34 weeks of gestation, the prevalence of RDS was so high that a probability of 3% or less was not observed by this model. These data describe a means of stratifying the probability of neonatal RDS using both gestational age and the L/S ratio and may aid in clinical decision making concerning the timing of delivery. PMID:18773379
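The model described is a standard two-predictor logistic regression. The sketch below shows how such a model yields an L/S cutoff for a target RDS probability at a given gestational age; the coefficients are hypothetical placeholders, not the fitted Yale values.

import math

def p_rds(ls_ratio, ga_weeks, b0=30.0, b_ls=-2.0, b_ga=-0.8):
    """P(RDS) from a logistic model in L/S ratio and gestational age.
    Coefficients are made-up placeholders, not the study's estimates."""
    return 1.0 / (1.0 + math.exp(-(b0 + b_ls * ls_ratio + b_ga * ga_weeks)))

def ls_cutoff(ga_weeks, target=0.03, b0=30.0, b_ls=-2.0, b_ga=-0.8):
    """Invert the model: smallest L/S ratio with P(RDS) <= target at this GA."""
    logit = math.log(target / (1.0 - target))
    return (logit - b0 - b_ga * ga_weeks) / b_ls

for ga in (34, 36, 38):
    print(ga, "weeks: L/S cutoff", round(ls_cutoff(ga), 2))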
NASA Technical Reports Server (NTRS)
Sellen, J. M., Jr.; Kemp, R. F.; Hall, D. F.
1973-01-01
Doubly to singly charged mercury ion ratios in electron bombardment ion thruster exhaust beams have been determined as functions of bombardment discharge potential, thrust beam current, thrust beam radial position, acceleration-deceleration voltage ratio, and propellant utilization fraction. A mathematical model for two-step ionization processes has been derived, and calculated ion ratios are compared to observed ratios. Production of Hg(++) appears to result primarily from sequential ionization of Hg(+) in the discharge. Experimental and analytical results are presented, and design, construction, and operation features of an electrostatic deflection ion time-of-flight analyzer for the determination of the above-mentioned ratios are reviewed.
Considering User's Access Pattern in Multimedia File Systems
NASA Astrophysics Data System (ADS)
Cho, KyoungWoon; Ryu, YeonSeung; Won, Youjip; Koh, Kern
2002-12-01
Legacy buffer cache management schemes for multimedia servers are grounded in the assumption that applications access multimedia files sequentially. However, the user access pattern may not be sequential in some circumstances; for example, in a distance-learning application, the user may exploit the VCR-like functions (rewind and play) of the system and access particular segments of video repeatedly in the middle of sequential playback. Such a looping reference can cause significant performance degradation of interval-based caching algorithms, and thus an appropriate buffer cache management scheme is required to deliver desirable performance even under workloads that exhibit looping reference behavior. We propose the Adaptive Buffer cache Management (ABM) scheme, which intelligently adapts to the file access characteristics. For each opened file, ABM applies either LRU replacement or interval-based caching, depending on the Looping Reference Indicator, which indicates how strong the temporally localized access pattern is. According to our experiments, ABM exhibits a better buffer cache miss ratio than interval-based caching or LRU, especially when the workload exhibits not only sequential but also looping reference properties.
Violation of the Wiedemann-Franz law in a single-electron transistor.
Kubala, Björn; König, Jürgen; Pekola, Jukka
2008-02-15
We study the influence of Coulomb interaction on the thermoelectric transport coefficients for a metallic single-electron transistor. By performing a perturbation expansion up to second order in the tunnel-barrier conductance, we include sequential and cotunneling processes as well as quantum fluctuations that renormalize the charging energy and the tunnel conductance. We find that Coulomb interaction leads to a strong violation of the Wiedemann-Franz law: the Lorenz ratio becomes gate-voltage dependent for sequential tunneling, and is increased by a factor of 9/5 in the cotunneling regime. Finally, we suggest a measurement scheme for an experimental realization.
Aydogdu, Ibrahim; Tanriverdi, Zeynep; Ertekin, Cumhur
2011-06-01
The aim of this study is to investigate a probable dysfunction of the central pattern generator (CPG) in dysphagic patients with ALS. We investigated 58 patients with ALS, 23 patients with PD, and 33 normal subjects. The laryngeal movements and EMG of the submental muscles were recorded during sequential water swallowing (SWS) of 100 ml of water. The coordination of SWS and respiration was also studied in some normal cases and ALS patients. Normal subjects could complete the SWS optimally within 10 s using 7 swallows, while in dysphagic ALS patients, the total duration and the number of swallows were significantly increased. The novel finding was that the regularity and rhythmicity of the swallowing pattern during SWS was disorganized into an irregular and arrhythmic pattern in 43% of the ALS patients. The duration and speed of swallowing were the most sensitive parameters for the disturbed oropharyngeal motility during SWS. The corticobulbar control of swallowing is insufficient in ALS, and the swallowing CPG cannot work well enough to produce segmental muscle activation and sequential swallowing. CPG dysfunction can result in irregular and arrhythmic sequential swallowing in ALS patients with bulbar plus pseudobulbar types. The arrhythmic SWS pattern can be considered a kind of CPG dysfunction in human ALS cases with dysphagia. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
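For orientation, the classical two-subclass closed-population CIR estimator (the special case this paper generalizes) fits in a few lines; here x denotes one subclass (e.g., males), and p1, p2 are its proportions in samples taken before and after a known removal. This is a sketch of the textbook estimator, not the effort-based generalization of the paper.

def cir_estimate(p1, p2, removals_x, removals_total):
    """Classical change-in-ratio estimator of pre-removal population size:
    N1 = (R_x - R * p2) / (p1 - p2); returns pre- and post-removal sizes."""
    if p1 == p2:
        raise ValueError("subclass proportions must change between samples")
    n1 = (removals_x - removals_total * p2) / (p1 - p2)
    return n1, n1 - removals_total

# e.g., males fall from 50% to 40% after removing 300 animals, 200 of them male
print(cir_estimate(0.50, 0.40, 200, 300))   # (800.0, 500.0)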
Sequential Revision of Belief, Trust Type, and the Order Effect.
Entin, Elliot E; Serfaty, Daniel
2017-05-01
Objective To investigate how people's sequential adjustments to their position are impacted by the source of the information. Background There is an extensive body of research on how the order in which new information is received affects people's final views and decisions, as well as research on how they adjust their views in light of new information. Method Seventy college-aged students, 60% of whom were women, each completed one of eight randomly distributed booklets implementing the eight between-subjects treatment conditions created by crossing two levels of information source with four order conditions. Based on the information provided, participants estimated the probability of an attack, the dependent measure. Results Confirming information from an expert intelligence officer increased the attack probability estimate from the initial position significantly more than confirming information from a longtime friend. Conversely, disconfirming information from a longtime friend decreased the attack probability significantly more than the same information from an intelligence officer. Conclusion Confirming and disconfirming evidence were weighted differently depending on the information source, either an expert or a close friend. The difference appears to be due to the existence of two kinds of trust: cognition-based trust accorded to an expert and affect-based trust accorded to a close friend. Application Purveyors of information need to understand that it is not only the content of a message that counts: other forces are at work, such as the order in which information is received and the characteristics of the information source.
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by τ0 = 2 l0 (2 - α)/(v α), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
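Inverting Chantry's expression τ0 = 2 l0 (2 - α)/(v α) gives α = 4 l0/(v τ0 + 2 l0). A small sketch of that inversion, with an illustrative chamber geometry and lifetime rather than the paper's measured values:

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66053907e-27 # atomic mass unit, kg

def mean_speed(mass_amu, temp_k=300.0):
    """Mean thermal speed v = sqrt(8 k T / (pi m)) of a gas-phase atom."""
    return math.sqrt(8.0 * K_B * temp_k / (math.pi * mass_amu * AMU))

def sticking_probability(tau0, l0, v):
    """Invert Chantry's tau0 = 2*l0*(2 - alpha)/(v*alpha) for alpha."""
    return 4.0 * l0 / (v * tau0 + 2.0 * l0)

v_cu = mean_speed(63.5)                    # Cu at 300 K, ~316 m/s
print(sticking_probability(tau0=2e-4, l0=0.02, v=v_cu))  # illustrative inputs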
Practical method and device for enhancing pulse contrast ratio for lasers and electron accelerators
Zhang, Shukui; Wilson, Guy
2014-09-23
An apparatus and method for enhancing pulse contrast ratios for drive lasers and electron accelerators. The invention comprises a mechanical dual-shutter system wherein the shutters are placed sequentially in series in a laser beam path. Each shutter of the dual-shutter system has an individually operated trigger for opening and closing the shutter. Because the triggers are operated individually, the delay between opening and closing the first shutter and opening and closing the second shutter is variable, providing variable differential time windows and enhancement of the pulse contrast ratio.
Optimal nonlinear filtering using the finite-volume method
NASA Astrophysics Data System (ADS)
Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.
2018-01-01
Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
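The conservative finite-volume update of the continuity equation is the core numerical ingredient. A minimal 1D upwind sketch, using a constant advection velocity as a stand-in for the paper's pendulum vector field, showing exact probability conservation and the CFL restriction:

import numpy as np

def fv_advect(density, velocity, dx, dt, steps):
    """First-order upwind finite-volume step for d(rho)/dt + v d(rho)/dx = 0
    on a periodic grid; total probability (sum * dx) is conserved exactly."""
    c = velocity * dt / dx
    assert abs(c) <= 1.0, "CFL condition violated"
    rho = density.copy()
    for _ in range(steps):
        flux = velocity * rho
        if velocity >= 0:
            rho = rho - dt / dx * (flux - np.roll(flux, 1))
        else:
            rho = rho - dt / dx * (np.roll(flux, -1) - flux)
    return rho

x = np.linspace(0.0, 1.0, 200, endpoint=False)
rho0 = np.exp(-200.0 * (x - 0.3) ** 2)
rho0 /= rho0.sum() * (x[1] - x[0])             # normalize to a density
rho1 = fv_advect(rho0, velocity=0.5, dx=x[1] - x[0], dt=0.008, steps=100)
print(rho1.sum() * (x[1] - x[0]))              # stays 1.0 up to round-off

Under the CFL condition the update is also positivity-preserving, which is what keeps the intermediate solutions valid density functions.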
Health Monitoring of a Satellite System
NASA Technical Reports Server (NTRS)
Chen, Robert H.; Ng, Hok K.; Speyer, Jason L.; Guntur, Lokeshkumar S.; Carpenter, Russell
2004-01-01
A health monitoring system based on analytical redundancy is developed for satellites on elliptical orbits. First, the dynamics of the satellite, including orbital mechanics and attitude dynamics, is modelled as a periodic system. Then, periodic fault detection filters are designed to detect and identify the satellite's actuator and sensor faults. In addition, parity equations are constructed using the algebraic redundancy relationship among the actuators and sensors. Furthermore, a residual processor is designed to generate the probability of each of the actuator and sensor faults by using a sequential probability test. Finally, the health monitoring system, consisting of periodic fault detection filters, parity equations, and the residual processor, is evaluated in simulation in the presence of disturbances and uncertainty.
Peng, Shu; Pan, Yu-Chen; Wang, Yaling; Xu, Zhe; Chen, Chao; Ding, Dan; Wang, Yongjian; Guo, Dong-Sheng
2017-11-01
The introduction of controlled self-assembly into living organisms opens up desired biomedical applications in wide areas including bioimaging/assays, drug delivery, and tissue engineering. Beyond the enzyme-activated examples reported before, controlled self-assembly under integrated stimuli, especially in the form of sequential input, is unprecedented and ultimately challenging. This study reports a programmable self-assembling strategy in living cells under the sequentially integrated control of both endogenous and exogenous stimuli. Fluorescent polymerized vesicles are constructed by using cholinesterase conversion followed by photopolymerization and thermochromism. Furthermore, as a proof-of-principle application, the cell apoptosis involved in the overexpression of cholinesterase is monitored by virtue of the generated fluorescence, showing potential in screening apoptosis-inducing drugs. The approach exhibits multiple advantages for bioimaging in living cells, including specificity to cholinesterase, red emission, wash-free operation, and a high signal-to-noise ratio.
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. (iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
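The core of the argument is a repeated Bayes update: each positive detection multiplies the prior odds by the instrument's true positive to false positive ratio (the likelihood ratio). A small sketch with hypothetical numbers, assuming conditionally independent repeat measurements:

def posterior_after_detections(prior, lr, n_detections):
    """Posterior probability of indigenous organic carbon after n positive
    detections with a method of likelihood ratio lr (true/false positives)."""
    odds = prior / (1.0 - prior) * lr ** n_detections
    return odds / (1.0 + odds)

# hypothetical: 10% prior probability, instrument with LR = 3
for n in range(5):
    print(n, "detections ->", round(posterior_after_detections(0.10, 3.0, n), 3))
# 0 -> 0.1, 1 -> 0.25, 2 -> 0.5, 3 -> 0.75, 4 -> 0.9

This is why repeated measurements with a modest but greater-than-one likelihood ratio can overcome a less-than-definitive single test.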
Health Professionals Prefer to Communicate Risk-Related Numerical Information Using "1-in-X" Ratios.
Sirota, Miroslav; Juanchich, Marie; Petrova, Dafina; Garcia-Retamero, Rocio; Walasek, Lukasz; Bhatia, Sudeep
2018-04-01
Previous research has shown that format effects, such as the "1-in-X" effect, whereby "1-in-X" ratios lead to a higher perceived probability than "N-in-N*X" ratios, alter perceptions of medical probabilities. We do not know, however, how prevalent this effect is in practice; i.e., how often health professionals use the "1-in-X" ratio. We assembled 4 different sources of evidence, involving experimental work and corpus studies, to examine the use of "1-in-X" and other numerical formats quantifying probability. Our results revealed that the use of the "1-in-X" ratio is prevalent and that health professionals prefer this format over other numerical formats (i.e., the "N-in-N*X", %, and decimal formats). In Study 1, UK family physicians preferred to communicate prenatal risk using a "1-in-X" ratio (80.4%, n = 131) across different risk levels and regardless of patients' numeracy levels. In Study 2, a sample from the UK adult population (n = 203) reported that most GPs (60.6%) preferred to use "1-in-X" ratios compared with other formats. In Study 3, "1-in-X" ratios were the most commonly used format in a set of randomly sampled drug leaflets describing the risk of side effects (100%, n = 94). In Study 4, the "1-in-X" format was the most commonly used numerical expression of medical probabilities or frequencies on the UK's NHS website (45.7%, n = 2,469 sentences). The prevalent use of "1-in-X" ratios thus magnifies the chances of inflated subjective probabilities. Further research should establish the clinical significance of the "1-in-X" effect.
Adverse Outcomes in Infantile Bilateral Developmental Dysplasia of the Hip.
Morbi, Abigail H M; Carsi, Belen; Gorianinov, Vitalli; Clarke, Nicholas M P
2015-01-01
It is believed that bilateral developmental dysplasia of the hip (DDH) has poorer outcomes, with higher rates of avascular necrosis (AVN) and reintervention, compared with unilateral DDH. However, there is limited evidence in the literature, with few studies looking specifically at bilateral cases. A retrospective review of 36 patients (72 hips) with >4 years of follow-up was performed. The patient population included surgically treated DDH, including late presentations and failures of conservative treatment. The dislocated hips underwent either simultaneous closed reduction, 1 open and 1 closed reduction, or sequential open reduction. AVN and secondary procedures were used as endpoints for analysis, as well as clinical and radiologic outcomes. At the last follow-up, 33% of hips had radiologic signs of AVN. Hips with no ossific nucleus (ON) at the time of surgery had an odds ratio of 3.05 for developing AVN, with a statistically significant association between the 2 variables, whereas open/closed or simultaneous/sequential reduction did not increase the risk for AVN. In addition, 45.8% of those hips required further surgery. The estimated odds ratio of needing additional surgery after simultaneous reduction was 4.04. Clinically, 79.2% of the hips were graded as McKay I, whereas radiologically only 38.8% were Severin I. The AVN rate in bilateral DDH treated surgically is greater than the rate noted in unilateral cases from the same institution undergoing identical protocols. There was no difference in AVN rates between simultaneous and sequential reduction, or between the first and second hip to be sequentially reduced. The presence of an ON decreases the risk for AVN, suggesting that in bilateral cases, awaiting the appearance of the ON is an important tool to reduce the incidence of AVN. Level of evidence: IV.
Spinal cord ischemia after simultaneous and sequential treatment of multilevel aortic disease.
Piffaretti, Gabriele; Bonardelli, Stefano; Bellosta, Raffaello; Mariscalco, Giovanni; Lomazzi, Chiara; Tolenaar, Jip L; Zanotti, Camilla; Guadrini, Cristina; Sarcina, Antonio; Castelli, Patrizio; Trimarchi, Santi
2014-10-01
The aim of the present study is to report a risk analysis for spinal cord injury in a recent cohort of patients with simultaneous and sequential treatment of multilevel aortic disease. We performed a multicenter study with a retrospective data analysis. Simultaneous treatment refers to descending thoracic and infrarenal aortic lesions treated during the same operation, and sequential treatment refers to separate operations. All descending replacements were managed with endovascular repair. Of 4320 patients, multilevel aortic disease was detected in 77 (1.8%). Simultaneous repair was performed in 32 patients (41.5%), and a sequential repair was performed in 45 patients (58.4%). Postoperative spinal cord injury developed in 6 patients (7.8%). At multivariable analysis, the distance of the distal aortic neck from the celiac trunk was the only independent predictor of postoperative spinal cord injury (odds ratio, 0.75; 95% confidence interval, 0.56-0.99; P=.046); open surgical repair of the abdominal aortic disease was associated with a higher risk of spinal cord injury but did not reach statistical significance (odds ratio, 0.16; 95% confidence interval, 0.02-1.06; P=.057). Actuarial survival estimates at 1, 2, and 5 years after the procedure were 80%±5%, 68%±6%, and 63%±7%, respectively. Spinal cord injury did not impair survival (P=.885). In our experience, the risk of spinal cord injury is still substantial at 8% in patients with multilevel aortic disease. The distance of the distal landing zone from the celiac trunk is a significant predictor of spinal cord ischemia. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spallicci, Alessandro D. A. M., E-mail: spallicci@cnrs-orleans.fr
2013-02-20
Gravitational waves coming from supermassive black hole binaries (SMBHBs) are targeted by both the Pulsar Timing Array (PTA) and Space Laser Interferometry (SLI). The possibility of a single SMBHB being tracked first by PTA, through inspiral, and later by SLI, up to merger and ring-down, has been previously suggested. Although the bounding parameters are drawn by the current PTA or the upcoming Square Kilometer Array (SKA), and by the New Gravitational Observatory (NGO), derived from the Laser Interferometer Space Antenna (LISA), this paper also addresses sequential detection beyond specific project constraints. We consider PTA-SKA, which is sensitive from 10^-9 to p × 10^-7 Hz (p = 4, 8), and SLI, which operates from s × 10^-5 up to 1 Hz (s = 1, 3). An SMBHB in the range of 2 × 10^8 to 2 × 10^9 M_Sun (the masses are normalized to a (1 + z) factor, the redshift lying between z = 0.2 and z = 1.5) moves from the PTA-SKA band to the SLI band over a period ranging from two months to fifty years. By combining three supermassive black hole (SMBH)-host relations with three accretion prescriptions, nine astrophysical scenarios are formed. They are then related to three levels of pulsar timing residuals (50, 5, 1 ns), generating 27 cases. For residuals of 1 ns, the sequential detection probability will never be better than 4.7 × 10^-4 yr^-2 or 3.3 × 10^-6 yr^-2 (per year to merger and per year of survey), according to the best and worst astrophysical scenarios, respectively; put differently, this means one sequential detection every 46 or 550 years for an equivalent maximum time to merger and duration of the survey. The chances of sequential detection are further reduced by increasing values of the s parameter (they vanish for s = 10) and of the SLI noise, and by decreasing values of the remnant spin. The spread in the predictions diminishes when timing precision is improved or the SLI low-frequency cutoff is lowered. So while transit times and the SLI signal-to-noise ratio (S/N) may be adequate, the likelihood of sequential detection is severely hampered by the current estimates on the number (just a handful) of individual inspirals observable by PTA-SKA, and to a lesser extent by the wide gap between the pulsar timing and space interferometry bands, and by the severe requirements on pulsar timing residuals. Optimization of future operational scenarios for SKA and SLI is briefly dealt with, since a detection of even a single event would be of paramount importance for the understanding of SMBHBs and of the astrophysical processes connected to their formation and evolution.
Sequential CFAR detectors using a dead-zone limiter
NASA Astrophysics Data System (ADS)
Tantaratana, Sawasd
1990-09-01
The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, the output of which is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the outputs. The test is performed on a reduced set of data (those with absolute value larger than c), with the test statistic being the sum of the signs of the reduced set of data. Both constant and linear boundaries are considered. Numerical results show a significant reduction of the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
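A minimal sketch of the dead-zone-limiter sequential test with constant boundaries; the limiter threshold c and the boundary values here are arbitrary illustrative choices, and the linear-boundary variant considered in the paper is omitted. The CFAR behavior rests on the fact that under zero-mean symmetric noise the nonzero limiter outputs are +1 and -1 with equal probability, whatever the noise power.

import random

def dead_zone(x, c):
    """Dead-zone limiter: -1, 0, or +1 depending on where x falls."""
    return -1 if x < -c else (1 if x > c else 0)

def sequential_dz_test(samples, c=0.5, upper=8, lower=-8):
    """Accumulate limiter outputs; stop at the first boundary crossing.
    Returns (decision, number of samples consumed)."""
    s = 0
    for n, x in enumerate(samples, start=1):
        s += dead_zone(x, c)
        if s >= upper:
            return "H1: signal present", n
        if s <= lower:
            return "H0: noise only", n
    return "continue", len(samples)

rng = random.Random(3)
noisy_signal = [0.6 + rng.gauss(0.0, 1.0) for _ in range(200)]
print(sequential_dz_test(noisy_signal))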
NASA Astrophysics Data System (ADS)
Granade, Christopher; Wiebe, Nathan
2017-08-01
A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms yielding results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles comprising the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence-based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low-circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or fail completely.
Gleason-Busch theorem for sequential measurements
NASA Astrophysics Data System (ADS)
Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah
2017-12-01
Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.
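In standard quantum measurement notation (not specific to this paper), the Kraus structure that the result recovers reads, in LaTeX,

\[
  p(i) = \operatorname{Tr}\!\left(K_i \rho K_i^\dagger\right),
  \qquad
  \rho \;\mapsto\; \rho_i = \frac{K_i \rho K_i^\dagger}{\operatorname{Tr}\!\left(K_i \rho K_i^\dagger\right)},
  \qquad
  \sum_i K_i^\dagger K_i = \mathbb{1},
\]

so the joint probability of outcome i followed by outcome j in a sequential measurement is \( p(i,j) = \operatorname{Tr}\!\left(K_j K_i \rho K_i^\dagger K_j^\dagger\right) \), consistent with the two-time formalism mentioned above.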
Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.
2017-01-09
We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated to a discretization, for instance, in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
A Looping-Based Model for Quenching Repression
Pollak, Yaroslav; Goldberg, Sarah; Amit, Roee
2017-01-01
We model the regulatory role of proteins bound to looped DNA using a simulation in which dsDNA is represented as a self-avoiding chain, and proteins as spherical protrusions. We simulate long self-avoiding chains using a sequential importance sampling Monte-Carlo algorithm, and compute the probabilities for chain looping with and without a protrusion. We find that a protrusion near one of the chain’s termini reduces the probability of looping, even for chains much longer than the protrusion–chain-terminus distance. This effect increases with protrusion size, and decreases with protrusion-terminus distance. The reduced probability of looping can be explained via an eclipse-like model, which provides a novel inhibitory mechanism. We test the eclipse model on two possible transcription-factor occupancy states of the D. melanogaster eve 3/7 enhancer, and show that it provides a possible explanation for the experimentally-observed eve stripe 3 and 7 expression patterns. PMID:28085884
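The chain-growth sampler described here is, at its core, the Rosenbluth sequential-importance-sampling scheme for self-avoiding chains. The sketch below is a bare-bones 2D lattice version (the paper's chains are 3D with spherical protrusions) that estimates the looping probability as the weighted fraction of chains ending adjacent to their origin.

import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def rosenbluth_loop_probability(n_steps=21, n_chains=20000, seed=11):
    """Sequential importance sampling of 2D lattice self-avoiding walks.
    Each step picks uniformly among unoccupied neighbors and multiplies
    the Rosenbluth weight by their count; a chain 'loops' if it ends on a
    site adjacent to the origin (n_steps must be odd for that to occur)."""
    rng = random.Random(seed)
    sum_w = sum_w_loop = 0.0
    for _ in range(n_chains):
        occupied = {(0, 0)}
        x = y = 0
        w = 1.0
        for _ in range(n_steps):
            free = [(x + dx, y + dy) for dx, dy in MOVES
                    if (x + dx, y + dy) not in occupied]
            if not free:
                w = 0.0                  # trapped chain contributes zero weight
                break
            w *= len(free)               # Rosenbluth weight update
            x, y = rng.choice(free)
            occupied.add((x, y))
        sum_w += w
        if w > 0.0 and abs(x) + abs(y) == 1:
            sum_w_loop += w
    return sum_w_loop / sum_w

print(rosenbluth_loop_probability())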
Probability matching and strategy availability.
Koehler, Derek J; James, Greta
2010-09-01
Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
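The expected accuracies that make maximizing the superior strategy are one line of arithmetic: if the more common outcome occurs with probability p, matching predicts each outcome at its own rate, while maximizing always predicts the more common one. A quick check under an assumed p = 0.7:

def expected_accuracy(p):
    """Expected hit rates in a binary prediction task with outcome prob. p."""
    matching = p * p + (1 - p) * (1 - p)   # predict each outcome at its rate
    maximizing = max(p, 1 - p)             # always predict the more common one
    return matching, maximizing

print(expected_accuracy(0.7))   # (0.58, 0.7): maximizing wins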
Gekas, Jean; Gagné, Geneviève; Bujold, Emmanuel; Douillard, Daniel; Forest, Jean-Claude; Reinharz, Daniel; Rousseau, François
2009-02-13
To assess and compare the cost effectiveness of three different strategies for prenatal screening for Down's syndrome (integrated test, sequential screening, and contingent screenings) and to determine the most useful cut-off values for risk. Computer simulations to study integrated, sequential, and contingent screening strategies with various cut-offs leading to 19 potential screening algorithms. The computer simulation was populated with data from the Serum Urine and Ultrasound Screening Study (SURUSS), real unit costs for healthcare interventions, and a population of 110 948 pregnancies from the province of Québec for the year 2001. Cost effectiveness ratios, incremental cost effectiveness ratios, and screening options' outcomes. The contingent screening strategy dominated all other screening options: it had the best cost effectiveness ratio ($C26,833 per case of Down's syndrome) with fewer procedure related euploid miscarriages and unnecessary terminations (respectively, 6 and 16 per 100,000 pregnancies). It also outperformed serum screening at the second trimester. In terms of the incremental cost effectiveness ratio, contingent screening was still dominant: compared with screening based on maternal age alone, the savings were $C30,963 per additional birth with Down's syndrome averted. Contingent screening was the only screening strategy that offered early reassurance to the majority of women (77.81%) in first trimester and minimised costs by limiting retesting during the second trimester (21.05%). For the contingent and sequential screening strategies, the choice of cut-off value for risk in the first trimester test significantly affected the cost effectiveness ratios (respectively, from $C26,833 to $C37,260 and from $C35,215 to $C45,314 per case of Down's syndrome), the number of procedure related euploid miscarriages (from 6 to 46 and from 6 to 45 per 100,000 pregnancies), and the number of unnecessary terminations (from 16 to 26 and from 16 to 25 per 100,000 pregnancies). Contingent screening, with a first trimester cut-off value for high risk of 1 in 9, is the preferred option for prenatal screening of women for pregnancies affected by Down's syndrome.
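The incremental cost-effectiveness ratio behind these comparisons is simple arithmetic: ICER = (cost_A - cost_B) / (effect_A - effect_B). A toy check with made-up totals, not the SURUSS-derived figures of the study:

def icer(cost_a, cost_b, effect_a, effect_b):
    """Incremental cost per additional unit of effect (e.g., per extra case
    of Down's syndrome detected) of strategy A over comparator B."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# hypothetical totals: strategy A detects 120 cases for $C3.1M,
# comparator B detects 90 cases for $C2.2M
print(icer(3_100_000, 2_200_000, 120, 90))   # $C30,000 per additional case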
Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon
2018-05-18
Despite recent advances in the prediction of cochlear implant outcomes, the benefit of bilateral procedures compared with bimodal stimulation, and how to predict speech perception outcomes of sequential bilateral cochlear implantation from bimodal auditory performance in children, remain unclear. This investigation was performed: (1) to determine the benefit of sequential bilateral cochlear implantation and (2) to identify factors associated with its outcome. Observational and retrospective study. We retrospectively analyzed 29 patients who received a sequential cochlear implant following a bimodal-fitting condition. Audiological evaluations comprised the categories of auditory performance scores, speech perception with monosyllabic and disyllabic words, and the Korean version of Ling. Audiological evaluations were performed before the sequential cochlear implant with the bimodal fitting condition (CI1+HA) and one year after the sequential cochlear implant with the bilateral cochlear implant condition (CI1+CI2). The good performance group (GP) was defined as follows: 90% or higher on monosyllabic and disyllabic word tests in the auditory-only condition, or 20% or higher improvement of the scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression. Compared with CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of Ling results. The preoperative categories of auditory performance score was the only factor associated with membership in the GP (odds ratio = 4.38; 95% confidence interval, 1.07-17.93; p = 0.04). Children with limited language development in the bimodal condition should be considered for sequential bilateral cochlear implantation, and the preoperative categories of auditory performance score could be used as a predictor of speech perception after sequential cochlear implantation. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Vojtechova, Iveta; Petrasek, Tomas; Hatalova, Hana; Pistikova, Adela; Vales, Karel; Stuchlik, Ales
2016-05-15
The prevention of engram interference, pattern separation, flexibility, cognitive coordination and spatial navigation are usually studied separately at the behavioral level. Impairment in executive functions is often observed in patients suffering from schizophrenia. We have designed a protocol for assessing these functions all together as behavioral separation. This protocol is based on alternated or sequential training in two tasks testing different hippocampal functions (the Morris water maze and active place avoidance), and alternated or sequential training in two similar environments of the active place avoidance task. In Experiment 1, we tested, in adult rats, whether the performance in two different spatial tasks was affected by their order in sequential learning, or by their day-to-day alternation. In Experiment 2, rats learned to solve the active place avoidance task in two environments either alternately or sequentially. We found that rats are able to acquire both tasks and to discriminate both similar contexts without obvious problems regardless of the order or the alternation. We used two groups of rats, controls and a rat model of psychosis induced by a subchronic intraperitoneal application of 0.08 mg/kg of dizocilpine (MK-801), a non-competitive antagonist of NMDA receptors. Dizocilpine had no selective effect on parallel/sequential learning of tasks/contexts. However, it caused hyperlocomotion and a significant deficit in learning in the active place avoidance task regardless of the task alternation. Cognitive coordination tested by this task is probably more sensitive to dizocilpine than spatial orientation because no hyperactivity or learning impairment was observed in the Morris water maze. Copyright © 2016 Elsevier B.V. All rights reserved.
Kovacs, Gabor G; Xie, Sharon X; Robinson, John L; Lee, Edward B; Smith, Douglas H; Schuck, Theresa; Lee, Virginia M-Y; Trojanowski, John Q
2018-06-11
Aging-related tau astrogliopathy (ARTAG) describes tau pathology in astrocytes in different locations and anatomical regions. In the present study we addressed the question of whether sequential distribution patterns can be recognized for ARTAG or astroglial tau pathologies in both primary FTLD-tauopathies and non-FTLD-tauopathy cases. By evaluating 687 postmortem brains with diverse disorders we identified ARTAG in 455. We evaluated frequencies and hierarchical clustering of anatomical involvement and used conditional probability and logistic regression to model the sequential distribution of ARTAG and astroglial tau pathologies across different brain regions. For subpial and white matter ARTAG we recognize three and two patterns, respectively, each with three stages initiated or ending in the amygdala. Subependymal ARTAG does not show a clear sequential pattern. For grey matter (GM) ARTAG we recognize four stages including a striatal pathway of spreading towards the cortex and/or amygdala, and the brainstem, and an amygdala pathway, which precedes the involvement of the striatum and/or cortex and proceeds towards the brainstem. GM ARTAG and astrocytic plaque pathology in corticobasal degeneration follows a predominantly frontal-parietal cortical to temporal-occipital cortical, to subcortical, to brainstem pathway (four stages). GM ARTAG and tufted astrocyte pathology in progressive supranuclear palsy shows a striatum to frontal-parietal cortical to temporal to occipital, to amygdala, and to brainstem sequence (four stages). In Pick's disease cases with astroglial tau pathology an overlapping pattern with PSP can be appreciated. We conclude that tau-astrogliopathy type-specific sequential patterns cannot be simplified as neuron-based staging systems. The proposed cytopathological and hierarchical stages provide a conceptual approach to identify the initial steps of the pathogenesis of tau pathologies in ARTAG and primary FTLD-tauopathies.
Li, Jining; Kosugi, Tomoya; Riya, Shohei; Hashimoto, Yohey; Hou, Hong; Terada, Akihiko; Hosomi, Masaaki
2018-01-01
Leaching of hazardous trace elements from excavated urban soils during construction of cities has received considerable attention in recent years in Japan. A new concept, the pollution potential leaching index (PPLI), was applied to assess the risk of arsenic (As) leaching from excavated soils. Sequential leaching tests (SLT) with two liquid-to-solid (L/S) ratios (10 and 20 L kg⁻¹) were conducted to determine the PPLI values, which represent the critical cumulative L/S ratios at which the average As concentrations in the cumulative leachates are reduced to critical values (10 or 5 µg L⁻¹). Two models (a logarithmic function model and an empirical two-site first-order leaching model) were compared to estimate the PPLI values. The fractionations of As before and after SLT were extracted according to a five-step sequential extraction procedure. Ten alkaline excavated soils were obtained from different construction projects in Japan. Although their total As contents were low (from 6.75 to 79.4 mg kg⁻¹), the As leaching was not negligible. Different L/S ratios at each step of the SLT had little influence on the cumulative As release or PPLI values. Experimentally determined PPLI values were in agreement with those from model estimations. A five-step SLT with an L/S of 10 L kg⁻¹ at each step, combined with logarithmic function fitting, was suggested for the easy estimation of PPLI. Results of the sequential extraction procedure showed that large portions of more labile As fractions (non-specifically and specifically sorbed fractions) were removed during long-term leaching, and so were small, but non-negligible, portions of strongly bound As fractions. Copyright © 2017 Elsevier Inc. All rights reserved.
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment, and constraint update is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
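As a rough illustration of the SO/DO cycle described in this abstract, the sketch below alternates a deterministic optimization with a reliability check on a toy one-dimensional problem; the objective, constraint, noise model, and margin-update rule are invented stand-ins, and plain Monte Carlo replaces the paper's PCE surrogate and MPP search.

```python
# Minimal RBSO-style loop on a toy 1-D problem; everything here is an
# illustrative assumption, not the paper's implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
target_reliability = 0.95
shift = 0.0                              # deterministic safety margin, updated per cycle

def objective(x):
    return (x[0] - 2.0) ** 2             # deterministic cost

def g(x, xi):
    return 3.0 - x[0] - xi               # require g >= 0; xi is a random parameter

for cycle in range(10):
    # DO step: deterministic optimization with the current margin
    res = minimize(objective, x0=[0.0],
                   constraints={"type": "ineq", "fun": lambda x: g(x, 0.0) - shift})
    x_star = res.x
    # Reliability assessment; the paper uses a nonintrusive PCE surrogate
    # with an MPP search here instead of brute-force Monte Carlo.
    xi = rng.normal(0.0, 0.8, size=20_000)
    reliability = np.mean(g(x_star, xi) >= 0.0)
    if reliability >= target_reliability:
        break
    # Constraint update: shift by the empirical quantile of the uncertainty
    # at the target reliability (a SORA-style rule, our assumption).
    shift = np.quantile(xi, target_reliability)

print(f"x* = {x_star[0]:.3f}, reliability = {reliability:.3f}, cycles = {cycle + 1}")
```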
Sequential infiltration synthesis for advanced lithography
Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing
2015-03-17
A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.
A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.
Zhao, Lei; Mi, Dong; Sun, Yeqing
2017-05-07
The multitarget version of the traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that the usual ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, such as Neyman's type A (N. A.) distribution. Because the Gaussian distribution can be regarded as an approximation of the N. A. distribution in the case of high flux, a multitarget model based on the Gaussian distribution is proposed to describe the cell inactivation effects of low linear energy transfer (LET) radiation with high dose-rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the linear-quadratic (LQ) model in describing the biological effects of low-LET radiation with high dose-rate, and the parameter ratio in the present model can be used as an alternative indicator to reflect the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
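For context, the two reference models named above are commonly written as follows (textbook forms, with n the number of targets, D0 the mean lethal dose, and alpha, beta the LQ coefficients; the authors' Gaussian-based variant is not reproduced here):

```latex
\[
  S_{\text{multitarget}}(D) = 1 - \left(1 - e^{-D/D_{0}}\right)^{n},
  \qquad
  S_{\text{LQ}}(D) = e^{-\alpha D - \beta D^{2}}.
\]
```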
Diagnostic accuracy of FEV1/forced vital capacity ratio z scores in asthmatic patients.
Lambert, Allison; Drummond, M Bradley; Wei, Christine; Irvin, Charles; Kaminsky, David; McCormack, Meredith; Wise, Robert
2015-09-01
The FEV1/forced vital capacity (FVC) ratio is used as a criterion for airflow obstruction; however, the test characteristics of spirometry in the diagnosis of asthma are not well established. The accuracy of a test depends on the pretest probability of disease. We wanted to estimate the FEV1/FVC ratio z score threshold with optimal accuracy for the diagnosis of asthma at different pretest probabilities. Asthmatic patients enrolled in 4 trials from the Asthma Clinical Research Centers were included in this analysis. Measured and predicted FEV1/FVC ratios were obtained, with calculation of z scores for each participant. Across a range of asthma prevalences and z score thresholds, the overall diagnostic accuracy was calculated. One thousand six hundred eight participants were included (mean age, 39 years; 71% female; 61% white). The mean FEV1 percent predicted value was 83% (SD, 15%). In a symptomatic population with a 50% pretest probability of asthma, optimal accuracy (68%) is achieved with a z score threshold of -1.0 (16th percentile), corresponding to a 6 percentage point reduction from the predicted ratio. However, in a screening population with a 5% pretest probability of asthma, the optimal z score is -2.0 (second percentile), corresponding to a 12 percentage point reduction from the predicted ratio. These findings were not altered by markers of disease control. Reduction of the FEV1/FVC ratio can support the diagnosis of asthma; however, the ratio is neither sensitive nor specific enough for diagnostic accuracy. When interpreting spirometric results, the pretest probability is an important consideration in the diagnosis of asthma based on airflow limitation. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
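The percentile correspondences quoted above follow directly from the standard normal distribution, assuming the z scores refer to a Gaussian reference population:

```python
# Quick check of the z score / percentile correspondences in the abstract.
from scipy.stats import norm

for z in (-1.0, -2.0):
    print(f"z = {z:+.1f} lies at the {100 * norm.cdf(z):.1f}th percentile")
# prints ~15.9 for z = -1.0 (the "16th percentile") and ~2.3 for z = -2.0 (the "second")
```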
Wenz, Holger; Maros, Máté E.; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O.; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas
2015-01-01
Objectives To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. Methods 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. CTDIvol identical to the initial 4-slice MDCT was applied. Data were reconstructed using filtered back projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and nucleus caudatus, with signal-to-noise ratios (SNR) subsequently calculated. Results Subjective image quality of all spiral cCT datasets was rated significantly higher compared to the 4-slice MDCT sequential acquisitions (p<0.05). Mean SNR was significantly higher in all spiral compared to sequential cCT datasets, with a mean SNR improvement of 61.65% (p < 0.0024 after Bonferroni correction at α = 0.05). Subjective image quality improved with increasing IR levels. Conclusion Combination of 3rd-generation DSCT spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels. PMID:26288186
Estimation of probability of failure for damage-tolerant aerospace structures
NASA Astrophysics Data System (ADS)
Halbert, Keith
The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair at future inspections. Without these estimates, maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem is collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities; they cannot realistically represent the variety of possible failure and maintenance scenarios; and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems.
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
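To make the hidden-Markov / sequential-importance-sampling idea concrete, here is a minimal particle sketch in which crack growth is a lognormal random walk and a passed inspection reweights particles by their miss probability; the growth law, probability-of-detection (POD) curve, thresholds, and all numbers are hypothetical placeholders, not the dissertation's calibrated models.

```python
# Particle approximation of a hidden crack-growth process with inspections
# that found nothing; all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
crack = np.full(n, 0.5)                  # initial crack length (mm), assumed
weights = np.full(n, 1.0 / n)
a_crit = 25.0                            # failure threshold (mm), assumed

def pod(a):
    # Probability of detection vs. crack length (log-logistic, illustrative)
    return 1.0 / (1.0 + np.exp(-(np.log(a) - np.log(5.0)) / 0.4))

for flight in range(1, 2001):
    crack *= rng.lognormal(0.001, 0.02, size=n)   # stochastic growth per flight
    if flight % 500 == 0:                          # scheduled inspection
        weights *= 1.0 - pod(crack)                # evidence: nothing detected
        weights /= weights.sum()
        idx = rng.choice(n, size=n, p=weights)     # resample against degeneracy
        crack, weights = crack[idx], np.full(n, 1.0 / n)

p_fail = weights[crack >= a_crit].sum()
print(f"P(crack >= {a_crit} mm by flight 2000 | no detections) ~ {p_fail:.1e}")
```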
Brick tunnel randomization and the momentum of the probability mass.
Kuznetsova, Olga M
2015-12-30
The allocation space of an unequal-allocation permuted block randomization can be quite wide. The development of unequal-allocation procedures with a narrower allocation space, however, is complicated by the need to preserve the unconditional allocation ratio at every step (the allocation ratio preserving (ARP) property). When the allocation paths are depicted on the K-dimensional unitary grid, where allocation to the l-th treatment is represented by a step along the l-th axis, l = 1 to K, the ARP property can be expressed in terms of the center of the probability mass after i allocations. Specifically, for an ARP allocation procedure that randomizes subjects to K treatment groups in w1:⋯:wK ratio, w1+⋯+wK = 1, the coordinates of the center of the mass are (w1i,…,wKi). In this paper, the momentum with respect to the center of the probability mass (expected imbalance in treatment assignments) is used to compare ARP procedures in how closely they approximate the target allocation ratio. It is shown that the two-arm and three-arm brick tunnel randomizations (BTR) are the ARP allocation procedures with the tightest allocation space among all allocation procedures with the same allocation ratio; the two-arm BTR is the minimum-momentum two-arm ARP allocation procedure. Resident probabilities of two-arm and three-arm BTR are analytically derived from the coordinates of the center of the probability mass; the existence of the respective transition probabilities is proven. Probability of deterministic assignments with BTR is found generally acceptable. Copyright © 2015 John Wiley & Sons, Ltd.
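As a quick empirical illustration of the two quantities discussed above, the sketch below simulates an ordinary 2:1 permuted block design (textbook blocks, not the brick tunnel procedure) and checks the ARP property, i.e. that the unconditional per-step allocation probability stays at the target ratio, together with the expected absolute imbalance; reading "momentum" as E|N1(i) - w1·i| is our assumption.

```python
# Empirical ARP and expected-imbalance check for a 2:1 permuted block design.
import numpy as np

rng = np.random.default_rng(2)
w1, n_steps, n_sim = 2.0 / 3.0, 12, 20_000
assign = np.empty((n_sim, n_steps), dtype=int)

for s in range(n_sim):
    seq = []
    while len(seq) < n_steps:
        block = [1, 1, 0]                # one 2:1 block: two arm-1, one arm-2
        rng.shuffle(block)
        seq.extend(block)
    assign[s] = seq[:n_steps]

steps = np.arange(1, n_steps + 1)
cum1 = assign.cumsum(axis=1)
print("per-step P(arm 1):", assign.mean(axis=0).round(3))      # ~2/3 at every step (ARP)
print("E|N1(i) - w1*i|:  ", np.abs(cum1 - w1 * steps).mean(axis=0).round(3))
```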
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Dai, Zhenxue; Gong, Huili
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
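For readers unfamiliar with transition probability geostatistics, the continuous-lag model underlying this kind of analysis is often written as a matrix exponential of a rate matrix whose diagonal entries encode hydrofacies mean lengths (the Carle-Fogg formulation); the sketch below uses invented proportions and mean lengths, not the Chaobai River parameters.

```python
# Continuous-lag transition probability model: T(h) = expm(R*h), with the
# diagonal rates set by hydrofacies mean lengths. Values are illustrative.
import numpy as np
from scipy.linalg import expm

props = np.array([0.5, 0.3, 0.2])        # volumetric proportions, 3 hydrofacies
mean_len = np.array([8.0, 5.0, 3.0])     # mean lengths along the lag (m)

R = np.zeros((3, 3))
for j in range(3):
    R[j, j] = -1.0 / mean_len[j]
    off = [k for k in range(3) if k != j]
    # split the leaving rate among the other facies by their proportions
    R[j, off] = (1.0 / mean_len[j]) * props[off] / props[off].sum()

T = expm(R * 10.0)                        # T[j, k] = P(facies k at x+h | j at x), h = 10 m
print(T.round(3), "row sums:", T.sum(axis=1).round(3))
```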
Situation models and memory: the effects of temporal and causal information on recall sequence.
Brownstein, Aaron L; Read, Stephen J
2007-10-01
Participants watched an episode of the television show Cheers on video and then reported free recall. Recall sequence followed the sequence of events in the story; if one concept was observed immediately after another, it was recalled immediately after it. We also made a causal network of the show's story and found that recall sequence followed causal links; effects were recalled immediately after their causes. Recall sequence was more likely to follow causal links than temporal sequence, and most likely to follow causal links that were temporally sequential. Results were similar at 10-minute and 1-week delayed recall. This is the most direct and detailed evidence reported on sequential effects in recall. The causal network also predicted probability of recall; concepts with more links and concepts on the main causal chain were most likely to be recalled. This extends the causal network model to more complex materials than previous research.
Children's sequential information search is sensitive to environmental probabilities.
Nelson, Jonathan D; Divjak, Bojana; Gudmundsdottir, Gudny; Martignon, Laura F; Meder, Björn
2014-01-01
We investigated 4th-grade children's search strategies on sequential search tasks in which the goal is to identify an unknown target object by asking yes-no questions about its features. We used exhaustive search to identify the most efficient question strategies and evaluated the usefulness of children's questions accordingly. Results show that children have good intuitions regarding questions' usefulness and search adaptively, relative to the statistical structure of the task environment. Search was especially efficient in a task environment that was representative of real-world experiences. This suggests that children may use their knowledge of real-world environmental statistics to guide their search behavior. We also compared different related search tasks. We found positive transfer effects from first doing a number search task on a later person search task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
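Since subset simulation is the engine here, a bare-bones version of the standard (Au-Beck) algorithm for estimating a small failure probability is sketched below on a toy limit state; the limit-state function, proposal width, and level settings are illustrative, and this is not the generalized (GSS) variant used in the article.

```python
# Bare-bones subset simulation for P(g(X) <= 0) with standard normal inputs.
import numpy as np
from math import erfc

rng = np.random.default_rng(3)
d, N, p0 = 2, 2000, 0.1

def g(x):                                 # failure event: g(x) <= 0
    return 5.0 - x.sum(axis=-1) / np.sqrt(d)

x = rng.normal(size=(N, d))
gx = g(x)
p_f = 1.0
for level in range(10):
    thresh = np.quantile(gx, p0)
    if thresh <= 0.0:                     # final level reaches the failure domain
        p_f *= np.mean(gx <= 0.0)
        break
    p_f *= p0
    seeds = x[gx <= thresh]               # conditional samples seed the next level
    reps = int(np.ceil(N / len(seeds)))
    x = np.repeat(seeds, reps, axis=0)[:N].copy()
    gx = g(x)
    for _ in range(10):                   # Metropolis moves within {g <= thresh}
        cand = x + 0.8 * rng.normal(size=x.shape)
        gc = g(cand)
        log_acc = 0.5 * (np.sum(x**2, axis=1) - np.sum(cand**2, axis=1))
        ok = (np.log(rng.uniform(size=N)) < log_acc) & (gc <= thresh)
        x[ok], gx[ok] = cand[ok], gc[ok]

print(f"subset estimate ~ {p_f:.1e}; exact {erfc(5 / np.sqrt(2)) / 2:.1e}")
```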
Recent Results with CVD Diamond Trackers
NASA Astrophysics Data System (ADS)
Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Karl, C.; Kass, R.; Knöpfle, K. T.; Krammer, M.; Logiudice, A.; Lu, R.; Manfredi, P. F.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Mishina, M.; Oh, A.; Pan, L. S.; Palmieri, V. G.; Pernicka, M.; Peitz, A.; Pirollo, S.; Polesello, P.; Pretzl, K.; Procario, M.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Runolfsson, O.; Russ, J.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trawick, M.; Trischuk, W.; Vittone, E.; Walsh, A. M.; Wedenig, R.; Weilhammer, P.; White, C.; Ziock, H.; Zoeller, M.; RD42 Collaboration
1999-08-01
We present recent results on the use of Chemical Vapor Deposition (CVD) diamond microstrip detectors for charged particle tracking. A series of detectors was fabricated using 1 × 1 cm² diamonds. Good signal-to-noise ratios were observed using both slow and fast readout electronics. For slow readout electronics, with 2 μs shaping time, the most probable signal-to-noise ratio was 50 to 1. For fast readout electronics, with 25 ns peaking time, the most probable signal-to-noise ratio was 7 to 1. Using the first 2 × 4 cm² diamond from a production CVD reactor with slow readout electronics, the most probable signal-to-noise ratio was 23 to 1. The spatial resolution achieved for the detectors was consistent with the digital resolution expected from the detector pitch.
NASA Astrophysics Data System (ADS)
Liu, Luyao; Feng, Minquan
2018-03-01
[Objective] This study quantitatively evaluated risk probabilities of sudden water pollution accidents under the influence of risk sources, thus providing an important guarantee for risk source identification during water diversion from the Hanjiang River to the Weihe River. [Methods] The research used Bayesian networks to represent the correlation between accidental risk sources. It also adopted the sequential Monte Carlo algorithm to combine water quality simulation with state simulation of risk sources, thereby determining standard-exceeding probabilities of sudden water pollution accidents. [Results] When the upstream inflow was 138.15 m³/s and the average accident duration was 48 h, the probabilities were 0.0416 and 0.0056, respectively. When the upstream inflow was 55.29 m³/s and the average accident duration was 48 h, the probabilities were 0.0225 and 0.0028, respectively. [Conclusions] The research conducted a risk assessment of sudden water pollution accidents, thereby providing an important guarantee for the smooth implementation, operation, and water quality of the Hanjiang-to-Weihe River Diversion Project.
Age, period, and cohort analysis of regular dental care behavior and edentulism: A marginal approach
2011-01-01
Background To analyze the regular dental care behavior and prevalence of edentulism in adult Danes, reported in sequential cross-sectional oral health surveys by the application of a marginal approach to consider the possible clustering effect of birth cohorts. Methods Data from four sequential cross-sectional surveys of non-institutionalized Danes conducted from 1975-2005 comprising 4330 respondents aged 15+ years in 9 birth cohorts were analyzed. The key study variables were seeking dental care on an annual basis (ADC) and edentulism. For the analysis of ADC, survey year, age, gender, socio-economic status (SES) group, denture-wearing, and school dental care (SDC) during childhood were considered. For the analysis of edentulism, only respondents aged 35+ years were included. Survey year, age, gender, SES group, ADC, and SDC during childhood were considered as the independent factors. To take into account the clustering effect of birth cohorts, marginal logistic regressions with an independent correlation structure in generalized estimating equations (GEE) were carried out, with PROC GENMOD in SAS software. Results The overall proportion of people seeking ADC increased from 58.8% in 1975 to 86.7% in 2005, while for respondents aged 35 years or older, the overall prevalence of edentulism (35+ years) decreased from 36.4% in 1975 to 5.0% in 2005. Females, respondents in the higher SES group, in more recent survey years, with no denture, and receiving SDC in all grades during childhood were associated with higher probability of seeking ADC regularly (P < 0.05). The interaction of SDC and age (P < 0.0001) was significant. The probabilities of seeking ADC were even higher among subjects with SDC in all grades and aged 45 years or older. Females, older age group, respondents in earlier survey years, not seeking ADC, lower SES group, and not receiving SDC in all grades were associated with higher probability of being edentulous (P < 0.05). Conclusions With the use of GEE, the potential clustering effect of birth cohorts in sequential cross-sectional oral health survey data could be appropriately considered. The success of Danish dental health policy was demonstrated by a continued increase of regular dental visiting habits and tooth retention in adults because school dental care was provided to Danes in their childhood. PMID:21410991
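A marginal logistic regression with an independence working correlation, as fitted above with PROC GENMOD, can be reproduced in other GEE implementations; the sketch below uses statsmodels on synthetic data, where the variable names echo the paper's but the data and coefficients are fabricated for illustration.

```python
# GEE marginal logistic regression with independence working correlation,
# clustering on birth cohort; the data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.genmod.cov_struct import Independence
from statsmodels.genmod.families import Binomial

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "cohort": rng.integers(0, 9, n),          # 9 birth cohorts act as clusters
    "female": rng.integers(0, 2, n),
    "high_ses": rng.integers(0, 2, n),
    "sdc_all_grades": rng.integers(0, 2, n),  # school dental care in all grades
})
logit = -0.5 + 0.4 * df.female + 0.6 * df.high_ses + 0.8 * df.sdc_all_grades
df["adc"] = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

model = smf.gee("adc ~ female + high_ses + sdc_all_grades", groups="cohort",
                data=df, cov_struct=Independence(), family=Binomial())
print(model.fit().summary())
```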
Anomaly Detection in Dynamic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turcotte, Melissa
2014-10-14
Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and don't fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
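The "simple, conjugate Bayesian models for counting processes" mentioned here have a compact closed form: with a Gamma prior on a Poisson rate, the posterior predictive is negative binomial, so each new count can be scored as it streams in. The toy monitor below (synthetic counts, arbitrary flagging threshold) illustrates the idea.

```python
# Conjugate Gamma-Poisson count monitor scoring each count by its
# negative-binomial posterior predictive; all values are illustrative.
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(5)
a, b = 2.0, 1.0                           # Gamma(shape a, rate b) prior on the rate
counts = np.concatenate([rng.poisson(3.0, 200),   # normal traffic
                         rng.poisson(9.0, 20)])   # anomalous burst at the end

flags = []
for t, c in enumerate(counts):
    # Gamma-Poisson posterior predictive is NB(r = a, p = b / (b + 1))
    if nbinom.pmf(c, a, b / (b + 1.0)) < 1e-4:
        flags.append(t)
    a, b = a + c, b + 1.0                 # conjugate update with the new count
print("flagged time indices:", flags)
```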
NASA Astrophysics Data System (ADS)
Tang, T.; Raub, T. D.; Wang, Z.
2013-12-01
Strontium isotope chemostratigraphy in limestones appears to track a near monotonic rising trend from the tail of the breakup of the supercontinent Rodinia at ~750 Ma into the latest Ediacaran Period at ~550 Ma (Halverson et al., 2007). This offers a potentially powerful tool to date carbonates occurring within this time period of stepwise environmental oxidation. Furthermore, aspects of the Snowball Earth hypothesis predict 87Sr/86Sr excursions of seawater of some magnitude at multiple intervals during this time, because balancing influences from prolonged synglacial hydrothermal input in entombed oceans, deglacial freshwater plumes of varying temperature and salinity, and enhanced postglacial silicate weathering from the continents can leave multiple isotopic trends plausible during these critical intervals. The positions of these deglaciations appear to correlate specifically with marine oxidation events, with cause/effect relations still under investigation. To capitalize on all of these interpretive possibilities, it is crucial to establish a high-precision and high-resolution strontium chemostratigraphic record over both short and long timescales. However, difficulties in establishing such a record based on carbonates are associated with 1) diagenetic influence over extensive geological time; and 2) petrologic complexities of the studied samples. Using a sequential digestion technique, Liu et al. (2013) demonstrated that these hurdles can be overcome, and that primary 87Sr/86Sr ratios of contemporaneous seawater can be obtained from Marinoan cap dolostones (~635 Ma), which have considerably less Sr than limestones, and whose Sr isotope compositions are commonly ignored in the strontium chemostratigraphic record. This leads to several viable, specific interpretations about the origin of Marinoan cap carbonate: either 1) very fast deposition; 2) slow deposition in a long-Sr-residence-time ocean; 3) mid-cap deposition in a freshwater "Glacial Lake Harland" of high-Ca, Sr composition; or 4) brine-influenced diagenesis exploiting specific horizons in the cap. As a broader implication, many extant Sr-isotope chemostratigraphies of Marinoan cap carbonate may be inaccurate, and in general, recrystallized impure (and low-Sr) carbonates, and dolomites in particular, are probably best studied with the serial digestion technique. We apply this sequential digestion technique to another Ediacaran cap carbonate with significant siliciclastic content, the anomalous cap limestone synchronous with deglaciation of the ~581 Ma Gaskiers ice age in Newfoundland's Avalon zone. Although some textural and compositional differences exist between the ~635 Ma Nuccaleena and the ~581 Ma Gaskiers caps, the sequential digestion technique again appears to provide clarity by suggesting less-altered and more-altered fractions from various sample levels. We will discuss implications for the nature of the Gaskiers glaciation and accompanying environmental oxidation, its global correlations, and the utility of the existing Ediacaran Sr-chemostratigraphic reference curve.
Definition and Measurement of Selection Bias: From Constant Ratio to Constant Difference
ERIC Educational Resources Information Center
Cahan, Sorel; Gamliel, Eyal
2006-01-01
Despite its intuitive appeal and popularity, Thorndike's constant ratio (CR) model for unbiased selection is inherently inconsistent in "n"-free selection. Satisfaction of the condition for unbiased selection, when formulated in terms of success/acceptance probabilities, usually precludes satisfaction by the converse probabilities of…
Making Better Use of Bandwidth: Data Compression and Network Management Technologies
2005-01-01
data, the compression would not be a success. A key feature of the Lempel-Ziv family of algorithms is that the... citeseer.nj.nec.com/yu02motion.html. Ziv, J., and A. Lempel, "A Universal Algorithm for Sequential Data Compression," IEEE Transactions on Information Theory, Vol. 23, 1977, pp. 337–342. ...probability models – Lempel-Ziv – Prediction by partial matching. The central component of a lossless compression algorithm
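The Lempel-Ziv idea cited in these fragments builds a phrase dictionary on the fly as the data stream in; a textbook LZ78 encoder (our illustration, not the report's implementation) fits in a few lines.

```python
# Textbook LZ78 encoder illustrating the sequential dictionary idea behind
# the Lempel-Ziv family cited above.
def lz78_encode(s: str):
    dictionary, out, w = {}, [], ""
    for ch in s:
        if w + ch in dictionary:
            w += ch                                  # keep extending the match
        else:
            out.append((dictionary.get(w, 0), ch))   # (phrase index, new symbol)
            dictionary[w + ch] = len(dictionary) + 1
            w = ""
    if w:
        out.append((dictionary[w], ""))              # flush the trailing phrase
    return out

print(lz78_encode("abababababa"))
# Repetitive input yields ever-longer phrases, the source of the compression.
```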
Attractors in complex networks
NASA Astrophysics Data System (ADS)
Rodrigues, Alexandre A. P.
2017-10-01
In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).
External versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.
1983-06-01
tions. Linda is a teacher in elementary school. Linda works in a bookstore and takes Yoga classes. Linda is active in the feminist movement. (F) Linda... sophisticated group consisted of PhD students in the decision science program of the Stanford Business School, all with several advanced courses in... mind by seemingly inconsequential cues. There is a contrast worthy of note between the effectiveness of extensional cues in the health-survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Stephen C.; Bettis Homan, Stephanie; Weiss, Emily A.
2016-01-28
This paper describes the use of cadmium sulfide quantum dots (CdS QDs) as visible-light photocatalysts for the reduction of nitrobenzene to aniline through six sequential photoinduced, proton-coupled electron transfers. At pH 3.6–4.3, the internal quantum yield of photons-to-reducing electrons is 37.1% over 54 h of illumination, with no apparent decrease in catalyst activity. Monitoring of the QD exciton by transient absorption reveals that, for each step in the catalytic cycle, the sacrificial reductant, 3-mercaptopropionic acid, scavenges the excitonic hole in ~5 ps to form QD•–; electron transfer to nitrobenzene or the intermediates nitrosobenzene and phenylhydroxylamine then occurs on the nanosecond time scale. The rate constants for the single-electron transfer reactions are correlated with the driving forces for the corresponding proton-coupled electron transfers. This result suggests, but does not prove, that electron transfer, not proton transfer, is rate-limiting for these reactions. Nuclear magnetic resonance analysis of the QD–molecule systems shows that the photoproduct aniline, left unprotonated, serves as a poison for the QD catalyst by adsorbing to its surface. Performing the reaction at an acidic pH not only encourages aniline to desorb but also increases the probability of protonated intermediates; the latter effect probably ensures that recruitment of protons is not rate-limiting.
Item Selection Criteria with Practical Constraints for Computerized Classification Testing
ERIC Educational Resources Information Center
Lin, Chuan-Ju
2011-01-01
This study compares four item selection criteria for a two-category computerized classification testing: (1) Fisher information (FI), (2) Kullback-Leibler information (KLI), (3) weighted log-odds ratio (WLOR), and (4) mutual information (MI), with respect to the efficiency and accuracy of classification decision using the sequential probability…
NASA Astrophysics Data System (ADS)
Caineta, Júlio; Ribeiro, Sara; Costa, Ana Cristina; Henriques, Roberto; Soares, Amílcar
2014-05-01
Climate data homogenisation is of major importance in monitoring climate change, the validation of weather forecasting, general circulation and regional atmospheric models, modelling of erosion, drought monitoring, among other studies of hydrological and environmental impacts. This is because non-climate factors can cause time series discontinuities which may hide the true climatic signal and patterns, thus potentially biasing the conclusions of those studies. In the last two decades, many methods have been developed to identify and remove these inhomogeneities. One of those is based on geostatistical simulation (DSS, direct sequential simulation), where local probability density functions (pdf) are calculated at candidate monitoring stations, using spatial and temporal neighbouring observations, and then are used for detection of inhomogeneities. This approach has been previously applied to detect inhomogeneities in four precipitation series (wet day count) from a network with 66 monitoring stations located in the southern region of Portugal (1980-2001). This study revealed promising results and the potential advantages of geostatistical techniques for inhomogeneity detection in climate time series. This work extends the case study presented before and investigates the application of the geostatistical stochastic approach to ten precipitation series that were previously classified as inhomogeneous by one of six absolute homogeneity tests (Mann-Kendall test, Wald-Wolfowitz runs test, Von Neumann ratio test, standard normal homogeneity test (SNHT) for a single break, Pettitt test, and Buishand range test). Moreover, a sensitivity analysis was implemented to investigate the number of simulated realisations that should be used to accurately infer the local pdfs. Accordingly, the number of simulations per iteration was increased from 50 to 500, which resulted in a more representative local pdf. A set of default and recommended settings is provided, which will help other users to implement this method. The need for user intervention is reduced to a minimum through the usage of a cross-platform script. Finally, as in the previous study, the results are compared with those from the SNHT, Pettitt and Buishand range tests, which were applied to composite (ratio) reference series. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").
Frei, Christopher R; Burgess, David S
2005-09-01
To evaluate the pharmacodynamics of four intravenous antimicrobial regimens (ceftriaxone 1 g, gatifloxacin 400 mg, levofloxacin 500 mg, and levofloxacin 750 mg, each every 24 hours) against recent Streptococcus pneumoniae isolates. Pharmacodynamic analysis using Monte Carlo simulation. The Surveillance Network (TSN) 2002 database. Streptococcus pneumoniae isolates (7866 isolates) were stratified according to penicillin susceptibilities as follows: susceptible (4593), intermediate (1986), and resistant (1287). Risk analysis software was used to simulate 10,000 patients by integrating published pharmacokinetic parameters, their variability, and minimum inhibitory concentration (MIC) distributions from the TSN database. Probability of target attainment was determined for percentage of time above the MIC (%T > MIC) from 0-100% for ceftriaxone and area under the concentration-time curve (AUC):MIC ratio from 0-150 for the fluoroquinolones. For ceftriaxone, probability of target attainment remained 90% or greater against the three isolate groups until a %T > MIC of 70% or greater, and it remained 90% or greater against susceptible and intermediate isolates over the entire interval (%T > MIC 0-100%). For levofloxacin 500 mg, probability of target attainment was 90% at an AUC:MIC ratio ≤30, but the curve declined sharply with further increases in pharmacodynamic target. Levofloxacin 750 mg achieved a probability of target attainment of 99% at an AUC:MIC ratio ≤30; the probability remained approximately 90% until a target of 70 or greater, when it declined steeply. Gatifloxacin demonstrated a high probability (99%) of target attainment at an AUC:MIC ratio ≤30, and it remained above 90% until a target of 70. Ceftriaxone maintained high probability of target attainment over a broad range of pharmacodynamic targets regardless of penicillin susceptibility (%T > MIC 0-60%). Levofloxacin 500 mg maintained high probability of target attainment for AUC:MIC ratios 0-30, whereas levofloxacin 750 mg and gatifloxacin maintained high probability of target attainment for AUC:MIC ratios 0-60. The rate of decline in the pharmacodynamic curve was most pronounced for the two levofloxacin regimens and more gradual for gatifloxacin and ceftriaxone.
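A probability-of-target-attainment calculation of the kind described here is straightforward to sketch: draw patient exposures and MICs, then count how often the AUC:MIC ratio clears each target. All distributions and values below are invented for illustration, not the study's pharmacokinetic or MIC inputs.

```python
# Illustrative Monte Carlo probability-of-target-attainment calculation.
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
auc = rng.lognormal(np.log(48.0), 0.25, size=n)               # simulated AUC (mg*h/L)
mic = rng.choice([0.5, 1.0, 2.0], size=n, p=[0.6, 0.3, 0.1])  # simulated MIC (mg/L)

for target in (30, 70, 100):
    pta = np.mean(auc / mic >= target)                        # fraction hitting target
    print(f"P(AUC:MIC >= {target}) = {pta:.1%}")
```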
Schulze, Christin; Newell, Ben R
2016-07-01
Cognitive load has previously been found to have a positive effect on strategy selection in repeated risky choice. Specifically, whereas inferior probability matching often prevails under single-task conditions, optimal probability maximizing sometimes dominates when a concurrent task competes for cognitive resources. We examined the extent to which this seemingly beneficial effect of increased task demands hinges on the effort required to implement each of the choice strategies. Probability maximizing typically involves a simple repeated response to a single option, whereas probability matching requires choice proportions to be tracked carefully throughout a sequential choice task. Here, we flipped this pattern by introducing a manipulation that made the implementation of maximizing more taxing and, at the same time, allowed decision makers to probability match via a simple repeated response to a single option. The results from two experiments showed that increasing the implementation effort of probability maximizing resulted in decreased adoption rates of this strategy. This was the case both when decision makers simultaneously learned about the outcome probabilities and responded to a dual task (Exp. 1) and when these two aspects were procedurally separated in two distinct stages (Exp. 2). We conclude that the effort involved in implementing a choice strategy is a key factor in shaping repeated choice under uncertainty. Moreover, highlighting the importance of implementation effort casts new light on the sometimes surprising and inconsistent effects of cognitive load that have previously been reported in the literature.
Wang, Kun; Xu, Feng; Sun, Runcang
2010-01-01
Kraft-AQ pulping lignin was sequentially fractionated by organic solvent extractions, and the molecular properties of each fraction were characterized by chemical degradation, GPC, UV, FT-IR, 13C-NMR and thermal analysis. The average molecular weight and polydispersity of each lignin fraction increased with its hydrogen-bonding capacity (Hildebrand solubility parameter). In addition, the ratio of the non-condensed guaiacyl/syringyl units and the content of β-O-4 linkages increased across the lignin fractions extracted successively with hexane, diethyl ether, methylene chloride, methanol, and dioxane. Furthermore, the presence of condensation reaction products contributed to the higher thermal stability of the larger molecules. PMID:21152286
Wenz, Holger; Maros, Máté E; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Groden, Christoph; Henzler, Thomas
2016-01-01
To prospectively evaluate image quality and organ-specific radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR) in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previously performed tilted non-contrast enhanced sequential cCT acquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinically indicated cCT scan. All spiral cCT examinations were performed on a 3rd generation dual-source CT system using ATCM in the z-axis direction. Images were reconstructed using both FBP and IR (levels 1-5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert scale, and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific radiation dose in all targets including the eye lens. Subjective image quality of spiral cCT datasets with IR reconstruction level 5 was rated significantly higher compared to the sequential cCT acquisitions (p < 0.0001). Mean SNR was significantly higher in all spiral datasets (FBP, IR 1-5) when compared to sequential cCT, with a mean SNR improvement of 44.77% (p < 0.0001). Spiral cCT combined with ATCM and IR allows for significant radiation dose reduction, including a reduced eye lens organ dose, when compared to a tilted sequential cCT, while improving subjective and objective image quality.
NASA Astrophysics Data System (ADS)
Shi, Yi Fang; Park, Seung Hyo; Song, Taek Lyul
2017-12-01
Target tracking using multistatic passive radar in a digital audio/video broadcast (DAB/DVB) network with illuminators of opportunity faces two main challenges. The first is that one has to resolve the measurement-to-illuminator association ambiguity in addition to the conventional association ambiguity between measurements and targets, which introduces a significantly complex three-dimensional (3-D) data association problem among targets, measurements, and illuminators; this is because all the illuminators transmit signals at the same carrier frequency, so that signals transmitted by different illuminators but reflected via the same target become indistinguishable. The other challenge is that only bistatic range and range-rate measurements are available, while angle information is unavailable or of very poor quality. In this paper, the authors propose a new target tracking algorithm that works directly in 3-D Cartesian coordinates, with track management based on the probability of target existence as a track quality measure. The proposed algorithm is termed sequential processing-joint integrated probabilistic data association (SP-JIPDA); it applies a modified sequential processing technique to resolve the additional association ambiguity between measurements and illuminators. The SP-JIPDA algorithm sequentially operates the JIPDA tracker to update each track for each illuminator with all the measurements in the common measurement set at each time. For reasons of fair comparison, the existing modified joint probabilistic data association (MJPDA) algorithm, which addresses the 3-D data association problem via "supertargets" using gate grouping and provides tracks directly in 3-D Cartesian coordinates, is enhanced by incorporating the probability of target existence as an effective track quality measure for track management. Both algorithms deal with nonlinear observations using extended Kalman filtering. A simulation study is performed to verify the superiority of the proposed SP-JIPDA algorithm over the enhanced MJIPDA in this multistatic passive radar system.
Method and apparatus for telemetry adaptive bandwidth compression
NASA Technical Reports Server (NTRS)
Graham, Olin L.
1987-01-01
Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and to the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
Nguyen, Nam-Trung; Huang, Xiaoyang
2006-06-01
Effective and fast mixing is important for many microfluidic applications. In many cases, mixing is limited by molecular diffusion due to constraints of the laminar flow in the microscale regime. According to scaling law, decreasing the mixing path can shorten the mixing time and enhance mixing quality. One of the techniques for reducing the mixing path is sequential segmentation. This technique divides solvent and solute into segments in the axial direction. The so-called Taylor-Aris dispersion can improve axial transport by three orders of magnitude. The mixing path can be controlled by the switching frequency and the mean velocity of the flow. The mixing ratio can be controlled by pulse width modulation of the switching signal. This paper first presents a simple time-dependent one-dimensional analytical model for sequential segmentation. The model considers an arbitrary mixing ratio between solute and solvent as well as the axial Taylor-Aris dispersion. Next, a micromixer was designed and fabricated based on polymeric micromachining. The micromixer was formed by laminating four polymer layers. The layers are micromachined by a CO₂ laser. Switching of the fluid flows was realized by two piezoelectric valves. Mixing experiments were evaluated optically. The concentration profile along the mixing channel agrees qualitatively well with the analytical model. Furthermore, mixing results at different switching frequencies were investigated. Due to the dynamic behavior of the valves and the fluidic system, mixing quality decreases with increasing switching frequency.
NASA Astrophysics Data System (ADS)
Anand, L. F. M.; Gudennavar, S. B.; Bubbly, S. G.; Kerur, B. R.
2015-12-01
The K to L shell total vacancy transfer probabilities of low Z elements Co, Ni, Cu, and Zn are estimated by measuring the K β to K α intensity ratio adopting the 2π-geometry. The target elements were excited by 32.86 keV barium K-shell X-rays from a weak 137Cs γ-ray source. The emitted K-shell X-rays were detected using a low energy HPGe X-ray detector coupled to a 16 k MCA. The measured intensity ratios and the total vacancy transfer probabilities are compared with theoretical results and others' work, establishing a good agreement.
Christenson, Stuart D; Chareonthaitawee, Panithaya; Burnes, John E; Hill, Michael R S; Kemp, Brad J; Khandheria, Bijoy K; Hayes, David L; Gibbons, Raymond J
2008-02-01
Cardiac resynchronization therapy (CRT) can improve left ventricular (LV) hemodynamics and function. Recent data suggest the energy cost of such improvement is favorable. The effects of sequential CRT on myocardial oxidative metabolism (MVO(2)) and efficiency have not been previously assessed. Eight patients with NYHA class III heart failure were studied 196 +/- 180 days after CRT implant. Dynamic [(11)C]acetate positron emission tomography (PET) and echocardiography were performed after 1 hour of: 1) AAI pacing, 2) simultaneous CRT, and 3) sequential CRT. MVO(2) was calculated using the monoexponential clearance rate of [(11)C]acetate (k(mono)). Myocardial efficiency was expressed in terms of the work metabolic index (WMI). P values represent overall significance from repeated measures analysis. Global LV and right ventricular (RV) MVO(2) were not significantly different between pacing modes, but the septal/lateral MVO(2) ratio differed significantly with the change in pacing mode (AAI pacing = 0.696 +/- 0.094 min(-1), simultaneous CRT = 0.975 +/- 0.143 min(-1), and sequential CRT = 0.938 +/- 0.189 min(-1); overall P = 0.001). Stroke volume index (SVI) (AAI pacing = 26.7 +/- 10.4 mL/m(2), simultaneous CRT = 30.6 +/- 11.2 mL/m(2), sequential CRT = 33.5 +/- 12.2 mL/m(2); overall P < 0.001) and WMI (AAI pacing = 3.29 +/- 1.34 mmHg*mL/m(2)*10(6), simultaneous CRT = 4.29 +/- 1.72 mmHg*mL/m(2)*10(6), sequential CRT = 4.79 +/- 1.92 mmHg*mL/m(2)*10(6); overall P = 0.002) also differed between pacing modes. Compared with simultaneous CRT, additional changes in septal/lateral MVO(2), SVI, and WMI with sequential CRT were not statistically significant on post hoc analysis. In this small selected population, CRT increases LV SVI without increasing MVO(2), resulting in improved myocardial efficiency. Additional improvements in LV work, oxidative metabolism, and efficiency from simultaneous to sequential CRT were not significant.
Application of Bayes' theorem for pulse shape discrimination
NASA Astrophysics Data System (ADS)
Monterial, Mateusz; Marleau, Peter; Clarke, Shaun; Pozzi, Sara
2015-09-01
A Bayesian approach is proposed for pulse shape discrimination of photons and neutrons in liquid organic scintillators. Instead of drawing a decision boundary, each pulse is assigned a photon or neutron confidence probability. This allows for photon and neutron classification on an event-by-event basis. The sum of those confidence probabilities is used to estimate the number of photon and neutron instances in the data. An iterative scheme, similar to an expectation-maximization algorithm for Gaussian mixtures, is used to infer the ratio of photons-to-neutrons in each measurement. Therefore, the probability space adapts to data with varying photon-to-neutron ratios. A time-correlated measurement of Am-Be and separate measurements of 137Cs, 60Co and 232Th photon sources were used to construct libraries of neutrons and photons. These libraries were then used to produce synthetic data sets with varying ratios of photons-to-neutrons. The probability-weighted method we implemented maintained a neutron acceptance rate of up to 90% for photon-to-neutron ratios up to 2000, and performed 9% better than the decision boundary approach. Furthermore, the iterative approach appropriately changed the probability space with an increasing number of photons, which kept the neutron population estimate from unrealistically increasing.
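A minimal sketch of the kind of iterative scheme described, assuming per-pulse class-conditional likelihoods have already been evaluated against photon and neutron template libraries; here those libraries are replaced by synthetic 1-D Gaussians, and the function name and feature model are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def em_class_fractions(lik_photon, lik_neutron, tol=1e-8, max_iter=500):
    """Infer the photon mixing fraction and per-pulse confidence probabilities
    by EM on the mixing proportion, given fixed class-conditional likelihoods
    for each measured pulse (from photon/neutron template libraries)."""
    pi = 0.5
    for _ in range(max_iter):
        num = pi * lik_photon                         # E-step: photon confidences
        conf = num / (num + (1.0 - pi) * lik_neutron)
        pi_new = conf.mean()                          # M-step: update the fraction
        if abs(pi_new - pi) < tol:
            break
        pi = pi_new
    return pi, conf

# Synthetic stand-in: a 1-D pulse-shape feature, 900 photons + 100 neutrons
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.10, 0.02, 900), rng.normal(0.20, 0.03, 100)])
pi, conf = em_class_fractions(norm.pdf(x, 0.10, 0.02), norm.pdf(x, 0.20, 0.03))
print(f"photon fraction: {pi:.3f}; estimated neutron count: {(1 - conf).sum():.0f}")
```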
Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals
Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E
2018-01-01
Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587
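A schematic of one online-SMC generation under the standard growing-space weight update (new posterior over the extended tree, divided by the old posterior times the proposal density); the guided and heated proposals the authors develop would live inside `propose`. All names are placeholders, not the OPSMC API, and the toy demo uses scalar states in place of trees.

```python
import numpy as np
from scipy.stats import norm

def opsmc_generation(particles, log_w, propose, log_post_new, log_post_old, rng):
    """One online-SMC generation: when new data arrive, each particle is
    extended by a proposal, reweighted against the updated posterior, and the
    population is resampled when the effective sample size degenerates.
    `propose(x, rng)` returns (x_extended, log_proposal_density)."""
    ext, lw = [], []
    for x, w in zip(particles, log_w):
        x_new, log_q = propose(x, rng)
        ext.append(x_new)
        lw.append(w + log_post_new(x_new) - log_post_old(x) - log_q)
    lw = np.asarray(lw, dtype=float)
    probs = np.exp(lw - lw.max()); probs /= probs.sum()
    if 1.0 / np.sum(probs**2) < 0.5 * len(ext):     # ESS-triggered resampling
        idx = rng.choice(len(ext), size=len(ext), p=probs)
        ext, lw = [ext[i] for i in idx], np.zeros(len(ext))
    return ext, lw

# Toy demonstration: old posterior N(0,1), updated posterior N(0.5,1)
rng = np.random.default_rng(0)
parts, lw = list(rng.normal(size=200)), np.zeros(200)
lp_old = lambda x: -0.5 * x**2
lp_new = lambda x: -0.5 * (x - 0.5)**2
def propose(x, r):
    step = r.normal(0.0, 0.5)
    return x + step, norm.logpdf(step, 0.0, 0.5)   # log q(x'|x)
parts, lw = opsmc_generation(parts, lw, propose, lp_new, lp_old, rng)
w = np.exp(lw - lw.max()); w /= w.sum()
print(f"posterior mean estimate: {np.dot(w, parts):.2f}")   # roughly 0.5
```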
Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M
2015-01-01
Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%, p<0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% versus 67.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.
Cerling, Thure E.; Wittemyer, George; Ehleringer, James R.; Remien, Christopher H.; Douglas-Hamilton, Iain
2009-01-01
The dietary and movement history of individual animals can be studied using stable isotope records in animal tissues, providing insight into long-term ecological dynamics and a species' niche. We provide a 6-year history of elephant diet by examining tail hair collected from 4 elephants in the same social family unit in northern Kenya. Sequential measurements of carbon, nitrogen, and hydrogen isotope ratios in hair provide a weekly record of diet and water resources. Carbon isotope ratios were well correlated with satellite-based measurements of the normalized difference vegetation index (NDVI) of the region occupied by the elephants as recorded by the global positioning system (GPS) movement record; the absolute amount of C4 grass consumption is well correlated with the maximum value of NDVI during individual wet seasons. Changes in hydrogen isotope ratios coincided very closely in time with seasonal fluctuations in rainfall and NDVI, whereas diet shifts to relatively high proportions of grass lagged seasonal increases in NDVI by ≈2 weeks. The peak probability of conception in the population occurred ≈3 weeks after peak grazing. Spatial and temporal patterns of resource use show that the only period of pure browsing by the focal elephants was located in an over-grazed, communally managed region outside the protected area. The ability to extract time-specific longitudinal records on animal diets, and therefore the ecological history of an organism and its environment, provides an avenue for understanding the impact of climate dynamics and land-use change on animal foraging behavior and habitat relations. PMID:19365077
Density profiles of the exclusive queuing process
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Schadschneider, Andreas
2012-12-01
The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
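A minimal Monte Carlo sketch of the EQP with parallel updates, under one common boundary convention: site 0 is the service end, the customer there exits with probability β, bulk customers hop toward the head with probability p under exclusion, and an arrival joins immediately behind the last customer with probability α. As the abstract notes, the precise update rule matters for the phase behavior, so this is illustrative rather than definitive.

```python
import numpy as np

def eqp_parallel_step(lattice, alpha, beta, p, rng):
    """One parallel-update sweep of the exclusive queuing process (EQP).
    lattice: 1-D 0/1 array; index 0 is the service end of the queue."""
    old, new = lattice, lattice.copy()
    if old.size and old[0] == 1 and rng.random() < beta:
        new[0] = 0                                   # customer served, exits
    for i in range(1, old.size):                     # hops toward the head
        if old[i] == 1 and old[i - 1] == 0 and rng.random() < p:
            new[i], new[i - 1] = 0, 1
    if rng.random() < alpha:                         # arrival joins the rear
        occ = np.flatnonzero(old)
        tail = occ[-1] + 1 if occ.size else 0
        if tail >= new.size:
            new = np.append(new, 0)
        new[tail] = 1
    return new

rng = np.random.default_rng(1)
q = np.zeros(0, dtype=int)
for _ in range(5000):
    q = eqp_parallel_step(q, alpha=0.3, beta=0.6, p=0.8, rng=rng)
occ = np.flatnonzero(q)
print("customers:", q.sum(), " system length:", occ[-1] + 1 if occ.size else 0)
```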
Daikoku, Tatsuya
2018-01-01
Learning and knowledge of transitional probability in sequences like music, called statistical learning and knowledge, are considered implicit processes that occur without intention to learn and without awareness of what one knows. This implicit statistical knowledge can alternatively be expressed via an abstract medium such as musical melody, which suggests this knowledge is reflected in melodies written by a composer. This study investigates how statistics in music vary over a composer's lifetime. Transitional probabilities of highest-pitch sequences in Ludwig van Beethoven's Piano Sonatas were calculated based on different hierarchical Markov models. Each interval pattern was ordered based on the sonata opus number. The transitional probabilities of sequential patterns that are universal in music gradually decreased, suggesting that time-course variations of statistics in music reflect time-course variations of a composer's statistical knowledge. This study sheds new light on novel methodologies that may be able to evaluate the time-course variation of a composer's implicit knowledge using musical scores.
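The underlying statistic is a simple hierarchical Markov estimate; a sketch for first-order (and higher-order) transition probabilities over an arbitrary pitch alphabet. The melody shown is a hypothetical snippet, not taken from the study's corpus.

```python
from collections import Counter, defaultdict

def transition_probabilities(pitches, order=1):
    """Markov transition probabilities P(next | previous `order` pitches),
    estimated by counting over any list of pitch symbols."""
    counts = defaultdict(Counter)
    for i in range(len(pitches) - order):
        context = tuple(pitches[i:i + order])
        counts[context][pitches[i + order]] += 1
    return {ctx: {note: n / sum(c.values()) for note, n in c.items()}
            for ctx, c in counts.items()}

melody = ["E", "D#", "E", "D#", "E", "B", "D", "C", "A"]  # hypothetical snippet
probs = transition_probabilities(melody, order=1)
print(probs[("E",)])   # e.g. {'D#': 0.667, 'B': 0.333}
```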
Berlow, Noah; Pal, Ranadip
2011-01-01
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov Chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov Chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov Chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and by utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma.
Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie
2017-01-01
Osteosarcomas (OSs) represent a huge challenge for improving overall survival, especially in metastatic patients. Increasing evidence indicates that both tumor-associated and host-associated elements, particularly the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and neutrophil-to-lymphocyte ratio (NLR), a clinical predictive model involving circulating leukocytes was established using stepwise logistic regression to compute the estimated probabilities of metastases for OS patients. The model is described by the following equation: probability of developing metastases = e^x/(1 + e^x), where x = -2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), e is the base of the natural logarithm, and each of the 2 variables is assigned the value 1 if the corresponding ratio is >1 (otherwise 0). The calculated area under the receiver-operating characteristic curve (AUC) of 0.793 indicated good accuracy of this model (95% CI, 0.740-0.845). The predicted probabilities generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684-0.803). The present model, which accounts for the influence of circulating leukocytes, could be used to estimate the pretest probability of developing metastases in patients with OS and thereby improve the management of metastatic disease.
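The reported model is directly computable; a small sketch with the coefficients as given above (the function name is hypothetical):

```python
import math

def metastasis_probability(monocyte_ratio, nlr_ratio):
    """Pretest probability of metastasis from the reported logistic model.
    Each predictor enters as a binary indicator (1 if the ratio exceeds 1)."""
    x = -2.150 + 1.680 * (monocyte_ratio > 1) + 1.533 * (nlr_ratio > 1)
    return math.exp(x) / (1.0 + math.exp(x))

print(f"{metastasis_probability(1.2, 0.8):.3f}")  # one elevated marker: ~0.385
print(f"{metastasis_probability(1.2, 1.4):.3f}")  # both elevated: ~0.743
```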
Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies
2010-01-01
This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D have an impact on the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility, while in the patient groups with worse health conditions it was considerably larger when using the SF-6D. Much of the appeal of using QALYs as the measure of effectiveness in economic evaluations lies in their comparability across conditions and interventions. The incomparability of the results of cost-utility analyses using different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of the use of incremental cost-utility ratios for decision-making.
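For readers unfamiliar with the machinery, here is a generic sketch of how an ICER and one point on a cost-effectiveness acceptability curve can be computed from two trial arms by bootstrapping the incremental net benefit. This is illustrative (with synthetic data) and not the authors' exact procedure.

```python
import numpy as np

def icer_and_ceac_point(cost0, qaly0, cost1, qaly1, wtp, n_boot=2000, seed=0):
    """ICER of strategy 1 vs 0, plus the probability that strategy 1 is
    acceptable at willingness-to-pay `wtp` per QALY (incremental net
    benefit > 0), via a nonparametric bootstrap over both arms."""
    rng = np.random.default_rng(seed)
    icer = (cost1.mean() - cost0.mean()) / (qaly1.mean() - qaly0.mean())
    wins = 0
    for _ in range(n_boot):
        i0 = rng.integers(0, len(cost0), len(cost0))
        i1 = rng.integers(0, len(cost1), len(cost1))
        inb = wtp * (qaly1[i1].mean() - qaly0[i0].mean()) \
              - (cost1[i1].mean() - cost0[i0].mean())
        wins += inb > 0
    return icer, wins / n_boot

# Synthetic two-arm example
rng = np.random.default_rng(1)
c0, q0 = rng.normal(20000, 5000, 150), rng.normal(6.0, 1.0, 150)
c1, q1 = rng.normal(26000, 6000, 150), rng.normal(6.4, 1.1, 150)
icer, p_ok = icer_and_ceac_point(c0, q0, c1, q1, wtp=50000)
print(f"ICER: {icer:.0f} per QALY; P(acceptable at 50k): {p_ok:.2f}")
```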
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
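The stated relationship between the two summary statistics can be made explicit. If, as the paper shows, RMNE is a weighted average (an expectation) of inverse likelihood ratios, then the expected-LR bound follows from Jensen's inequality applied to the convex map x ↦ 1/x. This is a sketch of the argument, not the paper's full derivation:

```latex
\[
\mathrm{RMNE} \;=\; \mathbb{E}\!\left[\mathrm{LR}^{-1}\right]
\quad\Longrightarrow\quad
\mathbb{E}\!\left[\mathrm{LR}\right] \;\ge\;
\frac{1}{\mathbb{E}\!\left[\mathrm{LR}^{-1}\right]} \;=\; \frac{1}{\mathrm{RMNE}},
\]
\noindent by Jensen's inequality for the convex function $x \mapsto 1/x$; equality
would require the LR to be constant, which explains why individual cases with
smaller likelihood ratios remain possible.
```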
NASA Astrophysics Data System (ADS)
Zhang, H.; Guan, Z. W.; Wang, Q. Y.; Liu, Y. J.; Li, J. K.
2018-05-01
The effects of microstructure and stress ratio on the high-cycle fatigue of the nickel superalloy Nimonic 80A were investigated. Fatigue tests were performed at stress ratios of 0.1, 0.5, and 0.8 at a frequency of 110 Hz. Cleavage failure was observed, and three competing crack initiation modes were identified by scanning electron microscopy, classified as surface without facets, surface with facets, and subsurface with facets. As the stress ratio increased from 0.1 to 0.8, the occurrence probability of faceted surface and subsurface initiation increased, reaching its maximum at R = 0.5, while the probability of surface initiation without facets decreased. The effect of microstructure on the fatigue fracture behavior at different stress ratios was also observed and discussed. Based on the Goodman diagram, it was concluded that the fatigue strength at 50% probability of failure at R = 0.1, 0.5, and 0.8 lies below the modified Goodman line.
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Redner, S.; Vazquez, F.
2007-02-01
We study a stochastic process that mimics single-game elimination tournaments. In our model, the outcome of each match is stochastic: the weaker player wins with upset probability q ≤ 1/2, and the stronger player wins with probability 1 - q. The loser is eliminated. Extremal statistics of the initial distribution of player strengths governs the tournament outcome. For a uniform initial distribution of strengths, the rank of the winner, x*, decays algebraically with the number of players, N, as x* ~ N^(-β). Different decay exponents are found analytically for sequential dynamics, β_seq = 1 - 2q, and parallel dynamics, β_par = 1 + ln(1 - q)/ln 2. The distribution of player strengths becomes self-similar in the long time limit with an algebraic tail. Our theory successfully describes statistics of the US college basketball national championship tournament.
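A small simulation of the parallel (bracket) version of this model, with strengths uniform on [0, 1] so that a player's value is its normalized rank; the observed mean winner rank can be checked against the quoted exponent β_par. All parameter values are illustrative.

```python
import numpy as np

def winner_rank(n_rounds, q, rng):
    """One single-elimination tournament with 2**n_rounds players whose
    strengths are uniform on [0,1] (smaller = stronger). In each match the
    weaker player advances with upset probability q; returns the winner's
    normalized rank. This is the parallel (bracket) dynamics."""
    x = rng.random(2 ** n_rounds)
    while x.size > 1:
        a, b = x[0::2], x[1::2]
        upset = rng.random(a.size) < q
        x = np.where(upset, np.maximum(a, b), np.minimum(a, b))
    return x[0]

rng = np.random.default_rng(0)
q, n_rounds = 0.25, 10
mean_rank = np.mean([winner_rank(n_rounds, q, rng) for _ in range(2000)])
beta_par = 1 + np.log(1 - q) / np.log(2)
print(f"mean winner rank {mean_rank:.4f} vs N^-beta = {2**(-n_rounds*beta_par):.4f}")
```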
More heads choose better than one: Group decision making can eliminate probability matching.
Schulze, Christin; Newell, Ben R
2016-06-01
Probability matching is a robust and common failure to adhere to normative predictions in sequential decision making. We show that this choice anomaly is nearly eradicated by gathering individual decision makers into small groups and asking the groups to decide. The group choice advantage emerged both when participants generated responses for an entire sequence of choices without outcome feedback (Exp. 1a) and when participants made trial-by-trial predictions with outcome feedback after each decision (Exp. 1b). We show that the dramatic improvement observed in group settings stands in stark contrast to a complete lack of effective solitary deliberation. These findings suggest a crucial role of group discussion in alleviating the impact of hasty intuitive responses in tasks better suited to careful deliberation.
Development of a prognostic nomogram for cirrhotic patients with upper gastrointestinal bleeding.
Zhou, Yu-Jie; Zheng, Ji-Na; Zhou, Yi-Fan; Han, Yi-Jing; Zou, Tian-Tian; Liu, Wen-Yue; Braddock, Martin; Shi, Ke-Qing; Wang, Xiao-Dong; Zheng, Ming-Hua
2017-10-01
Upper gastrointestinal bleeding (UGIB) is a complication with a high mortality rate in critically ill patients presenting with cirrhosis. Today, there exist few accurate scoring models specifically designed for mortality risk assessment in critically ill cirrhotic patients with upper gastrointestinal bleeding (CICGIB). Our aim was to develop and evaluate a novel nomogram-based model specific for CICGIB. Overall, 540 consecutive CICGIB patients were enrolled. On the basis of Cox regression analyses, the nomogram was constructed to estimate the probability of 30-day, 90-day, 270-day, and 1-year survival. An upper gastrointestinal bleeding-chronic liver failure-sequential organ failure assessment (UGIB-CLIF-SOFA) score was derived from the nomogram. Performance assessment and internal validation of the model were performed using Harrell's concordance index (C-index), calibration plot, and bootstrap sample procedures. UGIB-CLIF-SOFA was also compared with other prognostic models, such as CLIF-SOFA and model for end-stage liver disease, using C-indices. Eight independent factors derived from Cox analysis (including bilirubin, creatinine, international normalized ratio, sodium, albumin, mean artery pressure, vasopressin used, and hematocrit decrease>10%) were assembled into the nomogram and the UGIB-CLIF-SOFA score. The calibration plots showed optimal agreement between nomogram prediction and actual observation. The C-index of the nomogram using bootstrap (0.729; 95% confidence interval: 0.689-0.766) was higher than that of the other models for predicting survival of CICGIB. We have developed and internally validated a novel nomogram and an easy-to-use scoring system that accurately predicts the mortality probability of CICGIB on the basis of eight easy-to-obtain parameters. External validation is now warranted in future clinical studies.
Sequential ethanol fermentation and anaerobic digestion increases bioenergy yields from duckweed.
Calicioglu, O; Brennan, R A
2018-06-01
The potential for improving bioenergy yields from duckweed, a fast-growing, simple, floating aquatic plant, was evaluated by subjecting the dried biomass directly to anaerobic digestion, or sequentially to ethanol fermentation and then anaerobic digestion, after evaporating ethanol from the fermentation broth. Bioethanol yields of 0.41 ± 0.03 g/g and 0.50 ± 0.01 g/g (glucose) were achieved for duckweed harvested from the Penn State Living-Filter (Lemna obscura) and Eco-Machine™ (Lemna minor/japonica and Wolffia columbiana), respectively. The highest biomethane yield, 390 ± 0.1 ml CH4/g volatile solids added, was achieved in a reactor containing fermented duckweed from the Living-Filter at a substrate-to-inoculum (S/I) ratio (i.e., duckweed to microorganism ratio) of 1.0. This value was 51.2% higher than the biomethane yield of a replicate reactor with raw (non-fermented) duckweed. The combined bioethanol-biomethane process yielded 70.4% more bioenergy from duckweed than if anaerobic digestion had been run alone. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sequentially evaporated thin Y-Ba-Cu-O superconductor films: Composition and processing effects
NASA Technical Reports Server (NTRS)
Valco, George J.; Rohrer, Norman J.; Warner, Joseph D.; Bhasin, Kul B.
1988-01-01
Thin films of YBa2Cu3O(7-beta) have been grown by sequential evaporation of Cu, Y, and BaF2 on SrTiO3 and MgO substrates. The onset temperatures were as high as 93 K, while Tc was 85 K. The Ba/Y ratio was varied from 1.9 to 4.0. The Cu/Y ratio was varied from 2.8 to 3.4. The films were then annealed at various times and temperatures. The times ranged from 15 min to 3 hr, while the annealing temperatures used ranged from 850 C to 900 C. A good correlation was found between transition temperature (Tc) and the annealing conditions; the films annealed at 900 C on SrTiO3 had the best Tc's. There was a weaker correlation between composition and Tc. Barium-poor films exhibited semiconducting normal-state resistance behavior, while barium-rich films were metallic. The films were analyzed by resistance versus temperature measurements and scanning electron microscopy. The analysis of the films and the correlations are reported.
NASA Astrophysics Data System (ADS)
Coman, Tudor; Timpu, Daniel; Nica, Valentin; Vitelaru, Catalin; Rambu, Alicia Petronela; Stoian, George; Olaru, Mihaela; Ursu, Cristian
2017-10-01
Highly conductive transparent Al-doped ZnO (AZO) thin films were obtained at room temperature through sequential PLD (SPLD) from Zn and Al metallic targets in an oxygen/argon gas mixture. We have investigated the structural, electrical and optical properties as a function of the oxygen/argon pressure ratio in the chamber. The measured Hall carrier concentration was found to increase with argon injection from 1.3 × 10^20 to 6.7 × 10^20 cm^-3, while the laser shot ratio for Al/Zn target ablation was kept constant. This increase was attributed to an enhancement of substitutional doping into the ZnO lattice. The argon injection also leads to an increase of the Hall mobility up to 20 cm^2 V^-1 s^-1, attributed to a reduction of interstitial-type defects. Thus, the approach of using an oxygen/argon gas mixture during SPLD from metallic targets allows obtaining AZO samples at room temperature with high optical transmittance (about 90%) and low electrical resistivity (down to 5.1 × 10^-4 Ω cm).
Wang, S Q; Zhang, H Y; Li, Z L
2016-10-01
Understanding the spatio-temporal distribution of pests in orchards can provide important information that could be used to design monitoring schemes and establish better means for pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated by using probability kriging. Adults of B. minax were captured over two successive occurrence periods in a small-scale citrus orchard by using food bait traps, which were placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the orchard-woods boundary. The sequential probability kriged maps showed that the adults were estimated in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots, and their distance to the forest edge, should be considered to enhance control of B. minax in small-scale orchards.
ERIC Educational Resources Information Center
Hong, Guanglei; Deutsch, Jonah; Hill, Heather D.
2015-01-01
Conventional methods for mediation analysis generate biased results when the mediator-outcome relationship depends on the treatment condition. This article shows how the ratio-of-mediator-probability weighting (RMPW) method can be used to decompose total effects into natural direct and indirect effects in the presence of treatment-by-mediator…
NASA Technical Reports Server (NTRS)
Carreno, Victor
2006-01-01
This document describes a method to demonstrate that a UAS, operating in the NAS, can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for a UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS will achieve an equivalent level of safety for collision risk if the Risk Ratio is less than or equal to one. Calculation of the probability of collision for UAS and manned aircraft is accomplished through event/fault trees.
Large-area copper indium diselenide (CIS) process, control and manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillespie, T.J.; Lanning, B.R.; Marshall, C.H.
1997-12-31
Lockheed Martin Astronautics (LMA) has developed a large-area (30x30 cm) sequential CIS manufacturing approach amenable to low-cost photovoltaics (PV) production. A prototype CIS manufacturing system has been designed and built with compositional uniformity (Cu/In ratio) verified within ±4 atomic percent over the 30x30 cm area. CIS device efficiencies have been measured by the National Renewable Energy Laboratory (NREL) at 7% on a flexible non-sodium-containing substrate and 10% on a soda-lime-silica (SLS) glass substrate. Critical elements of the manufacturing capability include the CIS sequential process selection, uniform large-area material deposition, and in-situ process control. Details of the process and large-area manufacturing approach are discussed and results presented.
NASA Astrophysics Data System (ADS)
Al-Mudhafar, W. J.
2013-12-01
Precise prediction of rock facies leads to adequate reservoir characterization by improving porosity-permeability relationships to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution, enabling an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation has been carried out through multinomial logistic regression (MLR) with respect to well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix; the observation is then added to the complete data matrix, and the algorithm continues with the next observation with missing values. The MLR was chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. The MLR predicts the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables using a dot product. A beta distribution of facies was considered as prior knowledge, and the predicted (posterior) probability was estimated from the MLR based on Bayes' theorem, which relates the posterior probability to the likelihood and the prior knowledge. To assess the statistical accuracy of the model, a bootstrap was carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets; the model is re-fitted accordingly to decrease the squared difference between the estimated and observed categorical variables (facies), thereby decreasing the degree of uncertainty.
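A compact sketch of the modeling chain described (imputation, multinomial logistic regression for facies probabilities, bootstrap error estimate) on synthetic stand-in data. Note that scikit-learn's IterativeImputer is used here as a stand-in for the robust sequential imputation algorithm, and all column choices are hypothetical.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
# Hypothetical well-log matrix: GR, RHOB, SW, VSH, log-phi, core-phi, core-k
X = rng.normal(size=(300, 7))
X[rng.random(X.shape) < 0.05] = np.nan            # scattered missing entries
y = rng.integers(0, 3, size=300)                  # three facies codes

X_imp = IterativeImputer(random_state=0).fit_transform(X)   # imputation step
clf = LogisticRegression(max_iter=1000).fit(X_imp, y)       # multinomial for >2 classes
posterior = clf.predict_proba(X_imp)              # per-sample facies probabilities

# Bootstrap estimate of extra-sample prediction error
errors = []
for b in range(100):
    Xb, yb = resample(X_imp, y, random_state=b)   # resample with replacement
    m = LogisticRegression(max_iter=1000).fit(Xb, yb)
    errors.append(1.0 - m.score(X_imp, y))
print(f"posterior for first sample: {posterior[0].round(2)}")
print(f"mean bootstrap misclassification: {np.mean(errors):.3f}")
```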
Random packing of regular polygons and star polygons on a flat two-dimensional surface.
Cieśla, Michał; Barbasz, Jakub
2014-08-01
Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using the random sequential adsorption algorithm. The obtained results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packing reproduce already known results for disks.
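A minimal RSA sketch, using equal disks on a periodic box in place of polygons and stars (for polygons, only the overlap test changes); the consecutive-failure cutoff is a practical stand-in for true saturation, and all parameter values are illustrative. For disks, the saturated coverage is known to be about 0.547, which this sketch approaches from below.

```python
import numpy as np

def rsa_disks(box, radius, max_failures=20000, rng=None):
    """Random sequential adsorption of equal disks on a periodic square box:
    draw trial centers uniformly, accept if there is no overlap with any
    previously placed disk, and stop after many consecutive rejections."""
    rng = rng or np.random.default_rng(0)
    centers, fails = [], 0
    while fails < max_failures:
        trial = rng.random(2) * box
        d = np.array(centers) - trial if centers else np.empty((0, 2))
        d -= box * np.round(d / box)               # periodic minimum image
        if centers and np.any(np.hypot(d[:, 0], d[:, 1]) < 2 * radius):
            fails += 1
        else:
            centers.append(trial)
            fails = 0
    return np.array(centers)

disks = rsa_disks(box=50.0, radius=1.0)
packing_ratio = len(disks) * np.pi / 50.0**2
print(f"{len(disks)} disks, packing ratio ~ {packing_ratio:.3f}")
```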
Pamnani, Shitaldas J.; Nyitray, Alan G.; Abrahamsen, Martha; Rollison, Dana E.; Villa, Luisa L.; Lazcano-Ponce, Eduardo; Huang, Yangxin; Borenstein, Amy; Giuliano, Anna R.
2016-01-01
Background. The purpose of this study was to assess the risk of sequential acquisition of anal human papillomavirus (HPV) infection following a type-specific genital HPV infection for the 9-valent vaccine HPV types and investigate factors associated with sequential infection among men who have sex with women (MSW). Methods. Genital and anal specimens were available for 1348 MSW participants, and HPV genotypes were detected using the Roche Linear Array assay. Sequential risk of anal HPV infection was assessed using hazard ratios (HRs) among men with prior genital infection, compared with men with no prior genital infection, in individual HPV type and grouped HPV analyses. Results. In individual analyses, men with prior HPV 16 genital infections had a significantly higher risk of subsequent anal HPV 16 infections (HR, 4.63; 95% confidence interval [CI], 1.41–15.23). In grouped analyses, a significantly higher risk of sequential type-specific anal HPV infections was observed for any of the 9 types (adjusted HR, 2.80; 95% CI, 1.32–5.99), high-risk types (adjusted HR, 2.65; 95% CI, 1.26, 5.55), and low-risk types (adjusted HR, 5.89; 95% CI, 1.29, 27.01). Conclusions. MSW with prior genital HPV infections had a higher risk of a subsequent type-specific anal infection. The higher risk was not explained by sexual intercourse with female partners. Autoinoculation is a possible mechanism for the observed association. PMID:27489298
Probable flood predictions in ungauged coastal basins of El Salvador
Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.
2008-01-01
A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated, revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
2010-12-01
In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.
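For concreteness, the Lüders-von Neumann state transition referred to above, written for the Hilbert space case with an event represented by a projection E and a state ρ (a standard formula, not specific to this paper's more abstract setting):

```latex
\[
\rho \;\longmapsto\; \rho_E \;=\; \frac{E\,\rho\,E}{\operatorname{tr}(\rho E)},
\qquad
\mathbb{P}(F \mid E) \;=\; \operatorname{tr}(\rho_E\, F),
\]
\noindent so that probability conditionalization and the measurement-induced
state change coincide, as stated for the projection lattices in von Neumann
algebras.
```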
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
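As an illustration, the PPCC for the uniform model reduces to correlating the ordered sighting times with uniform plotting positions. The sketch below uses hypothetical sighting years, and the plotting position i/(n+1) is one common convention among several.

```python
import numpy as np

def ppcc_uniform(sightings):
    """Probability-plot correlation coefficient for a uniform model of a
    sighting record: correlate ordered sighting times with the plotting
    positions i/(n+1). Values near 1 indicate good fit; the correlation is
    invariant to the location and scale of the fitted uniform distribution."""
    x = np.sort(np.asarray(sightings, dtype=float))
    n = x.size
    q = np.arange(1, n + 1) / (n + 1)      # uniform quantile positions
    return np.corrcoef(x, q)[0, 1]

# Hypothetical sighting years for an illustrative record
years = [1872, 1881, 1890, 1903, 1911, 1925, 1934, 1948, 1957, 1963]
print(f"PPCC (uniform): {ppcc_uniform(years):.3f}")
```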
Marino, P; Siani, C; Roché, H; Protière, C; Fumoleau, P; Spielmann, M; Martin, A-L; Viens, P; Le Corroller Soriano, A-G
2010-07-01
Using data from the PACS 01 randomized trial, we evaluated the cost-effectiveness of anthracyclines plus docetaxel (Taxotere; FEC-D) versus anthracyclines alone (FEC100) in patients with node-positive breast cancer. Costs and outcomes were assessed in 1996 patients and the incremental cost-effectiveness ratios (ICERs) were estimated, using quality-adjusted life years (QALYs) as outcome. To deal with uncertainty due to sampling fluctuations, confidence regions around the ICERs were calculated and cost-effectiveness acceptability curves were drawn up. Sensitivity analyses were also carried out to assess the robustness of conclusions. The mean cost of treatment was 33% higher with strategy FEC-D, but this difference decreased to 18% at a 5-year horizon. The ICER of FEC-D versus FEC100 was estimated to be €9665 per QALY gained (95% confidence interval €2372-€55,515). The estimated probability that FEC-D was cost-effective reached >96% for a threshold of €50,000 per QALY gained. If the price of taxane decreased slightly, the ICER would reach very reasonable levels and this strategy would therefore be much more cost-effective. The sequential use of FEC100 followed by docetaxel appears to be a cost-effective alternative, even when uncertainty is taken into account.
Application of data cubes for improving detection of water cycle extreme events
NASA Astrophysics Data System (ADS)
Teng, W. L.; Albayrak, A.
2015-12-01
As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series) for the hydrology and other point-time-series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived, rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access; the gain from such reorganization grows with the size of the data set. As a use case for our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme (WCE) events, a specific case of anomaly detection requiring time series data. We investigate the use of the sequential probability ratio test (SPRT) for anomaly detection and support vector machines (SVM) for anomaly classification. We show an example of detection of WCE events using the Global Land Data Assimilation System (GLDAS) data set.
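Given the document's recurring theme, a minimal SPRT sketch of the anomaly-detection step: accumulate per-sample log-likelihood ratios and compare against Wald's boundaries derived from target error rates. The Gaussian mean-shift likelihoods and all parameter values here are illustrative, not the project's actual detector.

```python
import numpy as np

def sprt(llr_increments, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on a stream of per-sample
    log-likelihood-ratio increments log[f1(x)/f0(x)]. Returns the decision
    ('H1' = anomaly, 'H0' = normal, None = undecided) and the sample index
    at which a boundary was crossed."""
    upper = np.log((1 - beta) / alpha)   # cross above: accept H1
    lower = np.log(beta / (1 - alpha))   # cross below: accept H0
    s = 0.0
    for i, inc in enumerate(llr_increments, 1):
        s += inc
        if s >= upper:
            return "H1", i
        if s <= lower:
            return "H0", i
    return None, len(llr_increments)

# Example: unit-variance Gaussian mean shift (hypothetical anomaly signal)
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, 200)               # data actually drawn under H1
mu0, mu1 = 0.0, 1.0
llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2)   # per-sample Gaussian LLR, sigma=1
print(sprt(llr))                            # typically decides 'H1' within ~10 samples
```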
Voron, T; Eveno, C; Jouvin, I; Beaugerie, A; Lo Dico, R; Dagois, S; Soyer, P; Pocard, M
2015-12-01
Cytoreductive surgery (CRS) with hyperthermic intraperitoneal chemotherapy (HIPEC), used to treat peritoneal surface malignancies (PSM), is a complex procedure with significant major morbidity (MM). The aim was to investigate the learning curve (LC) of CRS with HIPEC in a new specialized surgical unit with a fully trained senior surgeon, and to individualize the variables associated with morbidity and oncological results. A total of 290 consecutive patients with PSM were included. Complete CRS with HIPEC was performed in 204 patients. A risk-adjusted sequential probability ratio test (RA-SPRT) was used to assess the LC on the basis of rates of incomplete cytoreduction (IC) and MM. Complete CRS, MM, and mortality rates were 70.4%, 30.4%, and 2.5%, respectively. Tumor histotype, a high peritoneal cancer index (PCI), and the invaded region were the major independent risk factors for IC, whereas previous surgery, high PCI, stoma creation, and blood transfusion were predictors of MM. The RA-SPRT showed that 140 and 40 cases were needed to achieve the lowest risk of IC and MM, respectively. CRS with HIPEC to treat PSM has a steep LC. Drastic selection has to be made at the beginning, excluding high PCI, rare peritoneal disease, and patients previously operated on. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bayesian randomized clinical trials: From fixed to adaptive design.
Yin, Guosheng; Lam, Chi Kin; Shi, Haolun
2017-08-01
Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have been dominating phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complement instead of competitive approach to the frequentist methods. For the fixed Bayesian design, the hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to the decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which would lead to a more comprehensive utilization of information from both historical and longitudinal data. From fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
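A small sketch of the posterior-probability monitoring idea for a fixed Bayesian design with binary outcomes, under conjugate Beta priors; this quantity can be recomputed at any interim look as the data accrue. The arm names, prior, and interim counts are hypothetical.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def posterior_prob_better(x_t, n_t, x_c, n_c, a=1.0, b=1.0, n_mc=100000, seed=0):
    """Posterior probability that the treatment response rate exceeds the
    control rate, under independent Beta(a, b) priors and binomial data,
    estimated by Monte Carlo sampling from the two Beta posteriors."""
    rng = np.random.default_rng(seed)
    p_t = beta_dist.rvs(a + x_t, b + n_t - x_t, size=n_mc, random_state=rng)
    p_c = beta_dist.rvs(a + x_c, b + n_c - x_c, size=n_mc, random_state=rng)
    return np.mean(p_t > p_c)

# Interim look: 14/25 responders on treatment vs 9/25 on control
print(f"P(p_t > p_c | data) = {posterior_prob_better(14, 25, 9, 25):.3f}")
```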
NASA Astrophysics Data System (ADS)
Sugitate, Toshihiro; Fukatsu, Makoto; Ishimi, Katsuhiro; Kohno, Hideki; Wakayama, Tatsuki; Nakamura, Yoshihiro; Miyake, Jun; Asada, Yasuo
In order to establish sequential hydrogen production from waste starch using a hyperthermophile, Pyrococcus furiosus, and a photosynthetic bacterium, basic studies were carried out. P. furiosus produced hydrogen and acetate by anaerobic fermentation at 90°C. A photosynthetic bacterium, Rhodobacter sphaeroides RV, was able to produce hydrogen from acetate under anaerobic, illuminated conditions at 30°C. However, Rb. sphaeroides RV could not produce hydrogen from acetate in the presence of the sodium chloride that is essential for the growth and hydrogen production of P. furiosus, although it produced hydrogen from lactate at a reduced rate with 1% sodium chloride. A newly isolated strain, CST-8, from the natural environment was, however, able to produce hydrogen from acetate, especially with 3 mM L-alanine, in the presence of 1% sodium chloride. Sequential hydrogen production with P. furiosus and salt-tolerant photosynthetic bacteria thus appears feasible, at least at the laboratory scale.
Sequential megafaunal collapse in the North Pacific Ocean: An ongoing legacy of industrial whaling?
Springer, A.M.; Estes, J.A.; Van Vliet, Gus B.; Williams, T.M.; Doak, D.F.; Danner, E.M.; Forney, K.A.; Pfister, B.
2003-01-01
Populations of seals, sea lions, and sea otters have sequentially collapsed over large areas of the northern North Pacific Ocean and southern Bering Sea during the last several decades. A bottom-up nutritional limitation mechanism induced by physical oceanographic change or competition with fisheries was long thought to be largely responsible for these declines. The current weight of evidence is more consistent with top-down forcing. Increased predation by killer whales probably drove the sea otter collapse and may have been responsible for the earlier pinniped declines as well. We propose that decimation of the great whales by post-World War II industrial whaling caused the great whales' foremost natural predators, killer whales, to begin feeding more intensively on the smaller marine mammals, thus "fishing-down" this element of the marine food web. The timing of these events, information on the abundance, diet, and foraging behavior of both predators and prey, and feasibility analyses based on demographic and energetic modeling are all consistent with this hypothesis.
NASA Astrophysics Data System (ADS)
Li, Xiaokai; Wang, Chuncheng; Yuan, Zongqiang; Ye, Difa; Ma, Pan; Hu, Wenhui; Luo, Sizuo; Fu, Libin; Ding, Dajun
2017-09-01
By combining kinematically complete measurements and a semiclassical Monte Carlo simulation, we study the correlated-electron dynamics in the strong-field double ionization of Kr. Interestingly, we find that, as we step into the sequential-ionization regime, there are still signatures of correlation in the two-electron joint momentum spectrum and, more intriguingly, the scaling law of the high-energy tail is completely different from earlier predictions for the low-Z atom (He). These experimental observations are well reproduced by our generalized semiclassical model adopting a Green-Sellin-Zachor potential. It is revealed that the competition between the screening effect of inner-shell electrons and the Coulomb focusing of the nuclei leads to a non-inverse-square central force, which twists the returning electron trajectory in the vicinity of the parent core and thus significantly increases the probability of hard recollisions between the two electrons. Our results might have promising applications ranging from accurately retrieving atomic structures to simulating celestial phenomena in the laboratory.
Rochau, Ursula; Sroczynski, Gaby; Wolf, Dominik; Schmidt, Stefan; Jahn, Beate; Kluibenschaedl, Martina; Conrads-Frank, Annette; Stenehjem, David; Brixner, Diana; Radich, Jerald; Gastl, Günther; Siebert, Uwe
2015-01-01
Several tyrosine kinase inhibitors (TKIs) are approved for chronic myeloid leukemia (CML) therapy. We evaluated the long-term cost-effectiveness of seven sequential therapy regimens for CML in Austria. A cost-effectiveness analysis was performed using a state-transition Markov model. As model parameters, we used published trial data, clinical, epidemiological and economic data from the Austrian CML registry and national databases. We performed a cohort simulation over a life-long time-horizon from a societal perspective. Nilotinib without second-line TKI yielded an incremental cost-utility ratio of 121,400 €/quality-adjusted life year (QALY) compared to imatinib without second-line TKI after imatinib failure. Imatinib followed by nilotinib after failure resulted in 131,100 €/QALY compared to nilotinib without second-line TKI. Nilotinib followed by dasatinib yielded 152,400 €/QALY compared to imatinib followed by nilotinib after failure. Remaining strategies were dominated. The sequential application of TKIs is standard-of-care, and thus, our analysis points toward imatinib followed by nilotinib as the most cost-effective strategy.
Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.
2016-02-16
Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
Method of locating a leaking fuel element in a fast breeder power reactor
Honekamp, John R.; Fryer, Richard M.
1978-01-01
Leaking fuel elements in a fast reactor are identified by measuring the ratio of 134Xe to 133Xe in the reactor cover gas following detection of a fuel element leak, this ratio being indicative of the power and burnup of the failed fuel element. This procedure can be used to identify leaking fuel elements in a power breeder reactor while continuing operation of the reactor, since the ratio measured is that of the gases stored in the plenum of the failed fuel element. Thus, use of a cleanup system for the cover gas makes it possible to identify sequentially a multiplicity of leaking fuel elements without shutting the reactor down.
Sleep problems and internet addiction among children and adolescents: a longitudinal study.
Chen, Yi-Lung; Gau, Susan Shur-Fen
2016-08-01
Although the literature has documented associations between sleep problems and internet addiction, the temporal direction of these relationships has not been established. The purpose of this study is to evaluate the bidirectional relationships between sleep problems and internet addiction among children and adolescents longitudinally. A four-wave longitudinal study was conducted with 1253 children and adolescents in grades 3, 5 and 8 from March 2013 to January 2014. The sleep problems of the student participants were measured by parental reports on the Sleep Habit Questionnaire, which catalogues early insomnia, middle insomnia, disturbed circadian rhythm, periodic leg movements, sleep terrors, sleepwalking, sleep talking, nightmares, bruxism, snoring and sleep apnoea. The severity of internet addiction was measured by students' self-reports on the Chen Internet Addiction Scale. Based on the results of time-lag models, dyssomnias (odds ratio = 1.31), especially early and middle insomnias (odds ratios = 1.74 and 2.24, respectively), sequentially predicted internet addiction, and internet addiction sequentially predicted disturbed circadian rhythm (odds ratio = 2.40), regardless of adjustment for gender and age. This is the first study to demonstrate the temporal relationship of early and middle insomnia predicting internet addiction, which subsequently predicts disturbed circadian rhythm. These findings imply that treatment strategies for sleep problems and internet addiction should vary according to the order of their occurrence. © 2016 European Sleep Research Society.
Kong, Xiaohua; Narine, Suresh S
2008-05-01
Sequential interpenetrating polymer networks (IPNs) were prepared using polyurethane (PUR) synthesized from canola oil-based polyol with terminal primary functional groups and poly(methyl methacrylate) (PMMA). The properties of the material were evaluated by dynamic mechanical analysis (DMA), differential scanning calorimetry (DSC), and modulated differential scanning calorimetry (MDSC), as well as tensile properties measurements. The morphology of the IPNs was investigated using scanning electron microscopy (SEM) and MDSC. A five-phase morphology, that is, sol phase, PUR-rich phase, PUR-rich interphase, PMMA-rich interphase, and PMMA-rich phase, was observed for all the IPNs by applying a new quantitative method based on the measurement of the differential of reversing heat capacity versus temperature from MDSC, although not confirmed by SEM, most likely due to resolution restrictions. NCO/OH molar ratios (cross-linking density) and compositional variations of PUR/PMMA both affected the thermal properties and phase behaviors of the IPNs. Higher degrees of mixing occurred for the IPN with higher NCO/OH molar ratio (2.0/1.0) at PUR concentration of 25 wt %, whereas for the IPN with lower NCO/OH molar ratio (1.6/1.0), higher degrees of mixing occurred at PUR concentration of 35 wt %. The mechanical properties of the IPNs were superior to those of the constituent polymers due to the finely divided rubber and plastic combination structures in these IPNs.
Martens, Brian K; DiGennaro, Florence D; Reed, Derek D; Szczech, Frances M; Rosenthal, Blair D
2008-01-01
Descriptive assessment methods have been used in applied settings to identify consequences for problem behavior, thereby aiding in the design of effective treatment programs. Consensus has not been reached, however, regarding the types of data or analytic strategies that are most useful for describing behavior–consequence relations. One promising approach involves the analysis of conditional probabilities from sequential recordings of behavior and events that follow its occurrence. In this paper we review several strategies for identifying contingent relations from conditional probabilities, and propose an alternative strategy known as a contingency space analysis (CSA). Step-by-step procedures for conducting and interpreting a CSA using sample data are presented, followed by discussion of the potential use of a CSA for conducting descriptive assessments, informing intervention design, and evaluating changes in reinforcement contingencies following treatment. PMID:18468280
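To make the core quantities concrete, here is a minimal Python sketch of the two conditional probabilities a contingency space analysis plots; the toy record of (behavior, consequence) pairs is invented, and a real descriptive assessment would use interval-by-interval sequential recordings.

    # Each tuple is one observation interval: (behavior occurred, consequence followed).
    record = [(1, 1), (1, 0), (0, 0), (1, 1), (0, 1), (0, 0), (1, 1), (0, 0)]

    n_b  = sum(1 for b, c in record if b == 1)
    n_nb = sum(1 for b, c in record if b == 0)
    p_c_given_b  = sum(c for b, c in record if b == 1) / n_b
    p_c_given_nb = sum(c for b, c in record if b == 0) / n_nb

    # A CSA plots the point (P(C|not B), P(C|B)); points above the diagonal
    # suggest a positive behavior-consequence contingency, points near the
    # diagonal suggest no contingent relation.
    print(f"P(C|B)  = {p_c_given_b:.2f}")
    print(f"P(C|~B) = {p_c_given_nb:.2f}")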
Measurement Model Nonlinearity in Estimation of Dynamical Systems
NASA Astrophysics Data System (ADS)
Majji, Manoranjan; Junkins, J. L.; Turner, J. D.
2012-06-01
The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and the geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for the choice of coordinates for estimation of dynamic systems.
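The exact transformation of uncertainty through a smooth mapping can be verified numerically. A minimal sketch, assuming the monotone mapping y = exp(x) applied to a standard normal variable (an example chosen for illustration, not one of the paper's astrodynamics cases), compares the change-of-variables density p_Y(y) = p_X(ln y)/y with a Monte Carlo histogram:

    import numpy as np
    from scipy import stats

    # Change of variables for y = g(x) = exp(x):
    # p_Y(y) = p_X(g^{-1}(y)) * |d g^{-1}(y)/dy| = p_X(ln y) / y.
    x = np.random.default_rng(1).normal(size=200_000)
    y = np.exp(x)

    hist, edges = np.histogram(y, bins=600, range=(0, 60), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    for yg in [0.5, 1.0, 2.0, 4.0]:
        exact = stats.norm.pdf(np.log(yg)) / yg   # transformed density
        mc = np.interp(yg, centers, hist)         # Monte Carlo estimate
        print(f"y = {yg:4.1f}: exact = {exact:.4f}, MC = {mc:.4f}")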
Pretreatment Growth Rate Predicts Radiation Response in Vestibular Schwannomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Nina N.; Harvard Medical School, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts; Niemierko, Andrzej
Purpose: Vestibular schwannomas (VS) are often followed without initial therapeutic intervention because many tumors do not grow and radiation therapy is associated with potential adverse effects. In an effort to determine whether maximizing initial surveillance predicts for later treatment response, the predictive value of preirradiation growth rate of VS on response to radiation therapy was assessed. Methods and Materials: Sixty-four patients with 65 VS were treated with single-fraction stereotactic radiation surgery or fractionated stereotactic radiation therapy. Pre- and postirradiation linear expansion rates were estimated using volumetric measurements on sequential magnetic resonance images (MRIs). In addition, postirradiation tumor volume change was classified as demonstrating shrinkage (ratio of volume on last follow-up MRI to MRI immediately preceding irradiation <80%), stability (ratio 80%-120%), or expansion (ratio >120%). The median pre- and postirradiation follow-up was 20.0 and 27.5 months, respectively. Seven tumors from neurofibromatosis type 2 (NF2) patients were excluded from statistical analyses. Results: In the 58 non-NF2 patients, there was a trend of correlation between pre- and postirradiation volume change rates (slope on linear regression, 0.29; P=.06). Tumors demonstrating postirradiation expansion had a median preirradiation growth rate of 89%/year, and those without postirradiation expansion had a median preirradiation growth rate of 41%/year (P=.02). As the preirradiation growth rate increased, the probability of postirradiation expansion also increased. Overall, 24.1% of tumors were stable, 53.4% experienced shrinkage, and 22.5% experienced expansion. Predictors of no postirradiation tumor expansion included no prior surgery (P=.01) and slower tumor growth rate (P=.02). The control of tumors in NF2 patients was only 43%. Conclusions: Radiation therapy is an effective treatment for VS, but tumors that grow quickly preirradiation may be more likely to increase in size. Clinicians should take into account tumor growth rate when counseling patients about treatment options.
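The 80%/120% volume-ratio rule stated above is straightforward to encode; in the minimal Python helper below, the example volumes are invented:

    def classify_response(v_last, v_pre):
        # Ratio of tumor volume on the last follow-up MRI to the volume on
        # the MRI immediately preceding irradiation, classified per the
        # <80% / 80-120% / >120% rule described above.
        ratio = v_last / v_pre
        if ratio < 0.8:
            return "shrinkage"
        if ratio <= 1.2:
            return "stable"
        return "expansion"

    # Example: 1.30 cm^3 before irradiation, 0.95 cm^3 at last follow-up.
    print(classify_response(0.95, 1.30))   # -> shrinkage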
Speciation of 210Po and 210Pb in air particulates determined by sequential extraction.
Al-Masri, M S; Al-Karfan, K; Khalili, H; Hassan, M
2006-01-01
Speciation of 210Po and 210Pb in air particulates at two Syrian phosphate sites with different climate conditions has been studied. The sites are the mines and Tartous port at the Mediterranean Sea. Air filters were collected from September 2000 until February 2002 and extracted chemically using different selective fluids in an attempt to identify the different forms of these two radionuclides. The results have shown that the inorganic and insoluble 210Po and 210Pb portion (attached to silica and soluble in mineral acids) was high at both sites, reaching maximum values of 94% and 77% at the mine site and the Tartous port site, respectively. In addition, only 24% of 210Pb in air particulates was found to be associated with organic materials, probably produced from the incomplete burning of vehicle fuel and similar activities. Moreover, the 210Po/210Pb activity ratio in air particulates was high in all samples at both sites, varying between 3.85 (November 2000, Tartous port site) and 20 (April 2001, mine area). These activity ratios were also higher than the natural levels. The 210Po/210Pb activity ratio was also determined in each portion resulting from the selective extraction and was found to be elevated in most samples. The sources of the 210Po excess in these portions are discussed: soil suspension, which is common in the dry climate dominant in the area, sea water spray, and heating of phosphate ores were considered; polonium is more volatile than lead compounds at even moderate temperatures. Furthermore, variations in the chemical forms of 210Po and 210Pb during the year were also investigated. The results of this study can also be utilized for dose assessment to phosphate industry workers.
Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo
2015-10-06
We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycles of chemotherapy (sequential schedule B). The primary end point was the pathological complete tumor regression (TRG1) rate. Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 of 16 patients) was statistically inconsistent with the hypothesized activity (30%) to be tested. Conversely, the end point was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥ 3 toxicity with both schedules, but it was less pronounced with the sequential than with the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up the probability of PFS and OS was 80% (95% CI, 66%-89%) and 85% (95% CI, 69%-93%), respectively, for the sequential schedule. These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC.
Park, Henry S; Gross, Cary P; Makarov, Danil V; Yu, James B
2012-08-01
Our purpose was to evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis in accounting for this bias. First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias. Copyright © 2012 Elsevier Inc. All rights reserved.
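A toy simulation makes the bias and the landmark remedy concrete. In the sketch below (all parameters invented; no SEER data), survival carries no true treatment effect, yet classifying patients by treatment received creates immortal time bias that sequential landmarks progressively remove:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 20_000
    # Exponential survival in months; half the cohort is scheduled for PORT
    # at a random time in the first 3 months and receives it only if still
    # alive then. There is NO true effect of treatment on survival.
    surv = rng.exponential(24.0, n)
    scheduled = rng.random(n) < 0.5
    t_port = rng.uniform(0.0, 3.0, n)
    port = scheduled & (surv > t_port)   # classification by treatment received

    for landmark in [0, 1, 2, 3, 6]:
        alive = surv > landmark          # restrict cohort at the landmark
        resid, grp = surv[alive] - landmark, port[alive]
        diff = resid[grp].mean() - resid[~grp].mean()
        print(f"landmark {landmark} mo: apparent PORT benefit = {diff:5.2f} mo")

The spurious benefit is largest with no landmark and vanishes once the landmark exceeds the latest possible treatment time, mirroring the sequentially diminishing benefit reported above.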
Pérez, Omar D; Aitken, Michael R F; Zhukovsky, Peter; Soto, Fabián A; Urcelay, Gonzalo P; Dickinson, Anthony
2016-12-15
Associative learning theories regard the probability of reinforcement as the critical factor determining responding. However, the role of this factor in instrumental conditioning is not completely clear. In fact, free-operant experiments show that participants respond at a higher rate on variable ratio than on variable interval schedules even though the reinforcement probability is matched between the schedules. This difference has been attributed to the differential reinforcement of long inter-response times (IRTs) by interval schedules, which acts to slow responding. In the present study, we used a novel experimental design to investigate human responding under random ratio (RR) and regulated probability interval (RPI) schedules, a type of interval schedule that sets a reinforcement probability independently of the IRT duration. Participants responded on each type of schedule before a final choice test in which they distributed responding between two schedules similar to those experienced during training. Although response rates did not differ during training, the participants responded at a lower rate on the RPI schedule than on the matched RR schedule during the choice test. This preference cannot be attributed to a higher probability of reinforcement for long IRTs and questions the idea that similar associative processes underlie classical and instrumental conditioning.
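The IRT mechanism invoked here is easy to illustrate. Assuming, purely for illustration, that an interval schedule arms reinforcement via a Poisson process with rate lambda, the probability that the next response is reinforced grows with the preceding inter-response time, whereas RR and RPI schedules hold it constant:

    import math

    def vi_prob(irt, lam=0.2):
        # Interval schedule: probability that a reinforcer was armed during
        # the preceding IRT (Poisson arming assumption, for illustration).
        return 1.0 - math.exp(-lam * irt)

    RR_P = 0.10   # fixed per-response probability on RR (and, by design, RPI)
    for irt in [0.5, 1.0, 2.0, 4.0, 8.0]:
        print(f"IRT = {irt:4.1f} s: VI p = {vi_prob(irt):.3f}, RR/RPI p = {RR_P:.3f}")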
99mTc-fibrinogen scanning in adult respiratory distress syndrome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, D.A.; Carvalho, A.C.; Geller, E.
1987-01-01
Fibrin is often seen occluding the lung vessels of patients dying from ARDS and is surrounded by regions of lung necrosis. To learn if we could observe increased or focal fibrin deposition and assess the kinetics of plasma fibrinogen turnover during severe acute respiratory failure, we injected technetium-99m-labeled human purified fibrinogen (Tc-HF) and used gamma camera scanning for as long as 12 h in 13 sequential patients as soon as possible after ICU admission. The fibrinogen uptake rates were determined by calculating the lung:heart radioactivity ratios at each time point. Slopes of the lung:heart ratio versus time were compared between ARDS and mild acute respiratory failure (ARF). The slope of the lung:heart Tc-HF ratio of the 9 patients with ARDS (2.9 ± 0.4 units) was markedly higher (p < 0.02) than the slope of the 4 patients with mild ARF (1.1 ± 0.4) and the 3 patients studied 5 to 9 months after recovery from respiratory failure (0.7 ± 0.07). In the 1 patient with ARDS and the 2 patients with mild ARF studied both during acute lung injury and after recovery, the lung:heart Tc-HF ratio had decreased at recovery. To compare the pulmonary uptake of Tc-HF to 99mTc-labeled human serum albumin (Tc-HSA), 5 patients were injected with 10 mCi of Tc-HSA, and scanning of the thorax was performed with a similar sequential imaging protocol 24 h after conclusion of the Tc-HF study.
Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F
2013-11-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2) and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.
Qin, Hai-Bo; Zhu, Jian-Ming; Lin, Zhi-Qing; Xu, Wen-Po; Tan, De-Can; Zheng, Li-Rong; Takahashi, Yoshio
2017-06-01
Selenium (Se) speciation in soil is critically important for understanding the solubility, mobility, bioavailability, and toxicity of Se in the environment. In this study, Se fractionation and chemical speciation in agricultural soils from seleniferous areas were investigated using elaborate sequential extraction and X-ray absorption near-edge structure (XANES) spectroscopy. The speciation results quantified by the XANES technique generally agreed with those obtained by sequential extraction, and the combination of both approaches can reliably characterize Se speciation in soils. Results showed that dominant organic Se (56-81% of the total Se) and lesser Se(IV) (19-44%) were observed in seleniferous agricultural soils. A significant decrease in the proportion of organic Se to the total Se was found across different types of soil, i.e., paddy soil (81%) > uncultivated soil (69-73%) > upland soil (56-63%), while that of Se(IV) showed the inverse tendency. This suggests that Se speciation in agricultural soils can be significantly influenced by different cropping systems. Organic Se in seleniferous agricultural soils was probably derived from plant litter, which provides significant insight for phytoremediation in Se-laden ecosystems and biofortification in Se-deficient areas. Furthermore, elevated organic Se in soils could result in higher Se accumulation in crops and further potential chronic Se toxicity to local residents in seleniferous areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyun, Dongho; Cho, Sung Ki, E-mail: sungkismc.cho@samsung.com; Shin, Sung Wook
2016-07-15
Purpose: To evaluate the technical feasibility and treatment results of sequential transcatheter arterial chemoembolization (TACE) and cone-beam computed tomography-guided percutaneous radiofrequency ablation (CBCT-RFA) for small hepatocellular carcinoma (HCC) in the caudate lobe. Materials and Methods: The institutional review board approved this retrospective study. The radiologic database was searched for patients referred for TACE and CBCT-RFA for small caudate HCCs (≤2 cm) between February 2009 and February 2014. A total of 14 patients (12 men and 2 women; mean age, 61.3 years) were included. Percutaneous ultrasonography-guided RFA (pUS-RFA) and surgery were infeasible owing to poor or absent tumor conspicuity, lack of a safe electrode pathway, or poor hepatic reserve. Procedural success (completion of both TACE and CBCT-RFA), technique efficacy (absence of tumor enhancement at 1 month after treatment), and complications were evaluated. Treatment results including local tumor progression (LTP), intrahepatic distant recurrence (IDR), overall survival (OS), and progression-free survival (PFS) were analyzed. Results: Procedural success and technique efficacy rates were 78.6% (11/14) and 90.9% (10/11), respectively. The average follow-up period was 45.3 months (range, 13.4–64.6 months). The 1-, 3-, and 5-year LTP probabilities were 0%, 12.5%, and 12.5%, respectively. IDR occurred in seven patients (63.6%, 7/11). The 1-, 3-, and 5-year PFS probabilities were 81.8%, 51.9%, and 26%, respectively. The 1-, 3-, and 5-year OS probabilities were 100%, 80.8%, and 80.8%, respectively. Conclusion: Combination of TACE and CBCT-RFA seems feasible for small HCC in the caudate lobe not amenable to pUS-RFA and effective in local tumor control.
NASA Astrophysics Data System (ADS)
Owen, D. Des. R.; Pawlowsky-Glahn, V.; Egozcue, J. J.; Buccianti, A.; Bradd, J. M.
2016-08-01
Isometric log ratios of proportions of major ions, derived from intuitive sequential binary partitions, are used to characterize hydrochemical variability within and between coal seam gas (CSG) and surrounding aquifers in a number of sedimentary basins in the USA and Australia. These isometric log ratios are the coordinates corresponding to an orthonormal basis in the sample space (the simplex). The characteristic proportions of ions, as described by linear models of isometric log ratios, can be used for a mathematical-descriptive classification of water types. This is a more informative and robust method of describing water types than simply classifying a water type based on the dominant ions. The approach allows (a) compositional distinctions between very similar water types to be made and (b) large data sets with a high degree of variability to be rapidly assessed with respect to particular relationships/compositions that are of interest. A major advantage of these techniques is that major and minor ion components can be comprehensively assessed and subtle processes—which may be masked by conventional techniques such as Stiff diagrams, Piper plots, and classic ion ratios—can be highlighted. Results show that while all CSG groundwaters are dominated by Na, HCO3, and Cl ions, the proportions of other ions indicate they can evolve via different means and the particular proportions of ions within total or subcompositions can be unique to particular basins. Using isometric log ratios, subtle differences in the behavior of Na, K, and Cl between CSG water types and very similar Na-HCO3 water types in adjacent aquifers are also described. A complementary pair of isometric log ratios, derived from a geochemically-intuitive sequential binary partition that is designed to reflect compositional variability within and between CSG groundwater, is proposed. These isometric log ratios can be used to model a hydrochemical pathway associated with methanogenesis and/or to delineate groundwater associated with high gas concentrations.
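A short sketch of the balance computation may be useful. The function below implements the standard isometric log-ratio (balance) formula sqrt(rs/(r+s)) * ln(g(numerator)/g(denominator)) for one step of a sequential binary partition, with g the geometric mean; the four-ion composition and the partition used are invented for illustration and are not the partition proposed in the paper.

    import numpy as np

    def balance(comp, num_idx, den_idx):
        # One isometric log-ratio coordinate for a binary partition step:
        # sqrt(r*s/(r+s)) * ln(g(num)/g(den)), with r and s the group sizes.
        g = lambda idx: np.exp(np.mean(np.log(comp[list(idx)])))
        r, s = len(num_idx), len(den_idx)
        return np.sqrt(r * s / (r + s)) * np.log(g(num_idx) / g(den_idx))

    # Hypothetical major-ion proportions (Na, K, Cl, HCO3), closed to sum 1:
    x = np.array([8.0, 0.1, 1.5, 6.0])
    x = x / x.sum()

    print("b1 (Na,K | Cl,HCO3):", round(balance(x, (0, 1), (2, 3)), 3))
    print("b2 (Na | K)        :", round(balance(x, (0,), (1,)), 3))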
Multilevel Sequential Monte Carlo Samplers for Normalizing Constants
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.; ...
2017-08-24
This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
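As a single-level illustration of the estimator that the multilevel scheme accelerates, the sketch below runs a tempered SMC sampler (reweight, resample, then one Metropolis-Hastings move per temperature step) and accumulates the log ratio of normalizing constants between a Gaussian prior and the corresponding posterior; the one-dimensional toy target and all tuning constants are assumptions for illustration, and the multilevel variance-reduction layer is omitted.

    import numpy as np

    rng = np.random.default_rng(3)

    log_prior = lambda x: -0.5 * x**2            # N(0,1), up to a constant
    log_like  = lambda x: -0.5 * (x - 2.0)**2    # Gaussian likelihood

    # Tempering sequence beta_0 = 0 < ... < beta_K = 1; the SMC identity
    # Z_1/Z_0 = prod_k E[exp((beta_k - beta_{k-1}) * loglike)] is estimated
    # by particle averages.
    betas = np.linspace(0.0, 1.0, 11)
    N = 10_000
    x = rng.normal(size=N)                       # exact draws from the prior
    log_ratio = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        w = np.exp((b1 - b0) * log_like(x))
        log_ratio += np.log(w.mean())
        x = rng.choice(x, N, replace=True, p=w / w.sum())   # resample
        # One MH move invariant for the tempered target prior * like^b1:
        prop = x + 0.5 * rng.normal(size=N)
        log_acc = (log_prior(prop) + b1 * log_like(prop)
                   - log_prior(x) - b1 * log_like(x))
        x = np.where(np.log(rng.uniform(size=N)) < log_acc, prop, x)

    # Analytic check for this toy problem: Z_1/Z_0 = sqrt(1/2) * exp(-1).
    print("SMC estimate:", np.exp(log_ratio))
    print("exact value :", np.sqrt(0.5) * np.exp(-1.0))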
Dancing Twins: Stellar Hierarchies That Formed Sequentially?
NASA Astrophysics Data System (ADS)
Tokovinin, Andrei
2018-04-01
This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).
Penaloza, Andrea; Mélot, Christian; Dochy, Emmanuelle; Blocklet, Didier; Gevenois, Pierre Alain; Wautrecht, Jean-Claude; Lheureux, Philippe; Motte, Serge
2007-01-01
Assessment of pretest probability should be the initial step in the investigation of patients with suspected pulmonary embolism (PE). In teaching hospitals, physicians in training are often the first physicians to evaluate patients. Our aims were to evaluate the accuracy of pretest probability assessment of PE by physicians in training using the Wells clinical model, and to assess the safety of a diagnostic strategy that includes pretest probability assessment. A total of 291 consecutive outpatients with clinical suspicion of PE were categorized as having a low, moderate or high pretest probability of PE by physicians in training, who could take supervising physicians' advice when they deemed necessary. Patients were then managed according to a sequential diagnostic algorithm including D-dimer testing, lung scan, leg compression ultrasonography and helical computed tomography. Patients in whom PE was deemed absent were followed up for 3 months. Thirty-four patients (18%) had PE. Prevalence of PE in the low, moderate and high pretest probability groups categorized by physicians in training alone was 3% (95% confidence interval (CI): 1% to 9%), 31% (95% CI: 22% to 42%) and 100% (95% CI: 61% to 100%), respectively. One of the 152 untreated patients (0.7%, 95% CI: 0.1% to 3.6%) developed a thromboembolic event during the 3-month follow-up period. Physicians in training can use the Wells clinical model to determine the pretest probability of PE. A diagnostic strategy including the use of this model by physicians in training with access to supervising physicians' advice appears to be safe.
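For orientation, the Wells clinical model can be sketched as a simple additive scorer. The item weights and the low/moderate/high cut-points below are the commonly cited ones, written from general knowledge rather than taken from this study, and should be verified against the original publication before any use:

    # Commonly cited Wells items and weights (assumed; verify before use):
    WELLS_ITEMS = {
        "clinical_signs_of_dvt":     3.0,
        "pe_most_likely_diagnosis":  3.0,
        "heart_rate_over_100":       1.5,
        "immobilization_or_surgery": 1.5,
        "previous_dvt_or_pe":        1.5,
        "hemoptysis":                1.0,
        "malignancy":                1.0,
    }

    def pretest_probability(findings):
        score = sum(WELLS_ITEMS[item] for item in findings)
        if score < 2.0:
            return score, "low"
        if score <= 6.0:
            return score, "moderate"
        return score, "high"

    print(pretest_probability({"heart_rate_over_100", "hemoptysis"}))
    # -> (2.5, 'moderate')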
The FERRUM Project: Experimental Transition Probabilities of [Fe II] and Astrophysical Applications
NASA Technical Reports Server (NTRS)
Hartman, H.; Derkatch, A.; Donnelly, M. P.; Gull, T.; Hibbert, A.; Johannsson, S.; Lundberg, H.; Mannervik, S.; Norlin, L. -O.; Rostohar, D.
2002-01-01
We report on experimental transition probabilities for thirteen forbidden [Fe II] lines originating from three different metastable Fe II levels. Radiative lifetimes have been measured for two metastable states by applying a laser probing technique on a stored ion beam. Branching ratios for the radiative decay channels, i.e. M1 and E2 transitions, are derived from observed intensity ratios of forbidden lines in astrophysical spectra and compared with theoretical data. The lifetimes and branching ratios are combined to derive absolute transition probabilities, A-values. We present the first experimental lifetime values for the two Fe II levels a ⁴G₉/₂ and b ²H₁₁/₂ and A-values for 13 forbidden transitions from a ⁶S₅/₂, a ⁴G₉/₂ and b ⁴D₇/₂ in the optical region. A discrepancy between the measured and calculated values of the lifetime for the b ²H₁₁/₂ level is discussed in terms of level mixing. We have used the code CIV3 to calculate transition probabilities of the a ⁶D-a ⁶S transitions. We have also studied observational branching ratios for lines from 5 other metastable Fe II levels and compared them to calculated values. A consistency in the deviation between calibrated observational intensity ratios and theoretical branching ratios for lines in a wider wavelength region supports the use of [Fe II] lines for determination of reddening.
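The combination step is compact: because the total decay rate of a level is 1/tau, each absolute transition probability is the branching fraction of that channel divided by the lifetime. A minimal sketch with invented numbers (not the measured Fe II values):

    # A_i = BF_i / tau, since the branching fractions sum to 1 and
    # sum_i A_i = 1 / tau for a level with radiative lifetime tau.
    tau = 0.65                      # s, hypothetical metastable-level lifetime
    branching = {"line_a": 0.55, "line_b": 0.30, "line_c": 0.15}  # hypothetical

    for line, bf in branching.items():
        print(f"{line}: A = {bf / tau:.3f} s^-1")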
Chemical effects in ion mixing of a ternary system (metal-SiO2)
NASA Technical Reports Server (NTRS)
Banwell, T.; Nicolet, M.-A.; Sands, T.; Grunthaner, P. J.
1987-01-01
The mixing of Ti, Cr, and Ni thin films with SiO2 by low-temperature (−196 to 25 °C) irradiation with 290 keV Xe has been investigated. Comparison of the morphology of the intermixed region and the dose dependences of net metal transport into SiO2 reveals that long-range motion and phase formation probably occur as separate and sequential processes. Kinetic limitations suppress chemical effects in these systems during the initial transport process. Chemical interactions influence the subsequent phase formation.
Chen, Chunyi; Yang, Huamin
2016-08-22
The changes in the radial content of orbital-angular-momentum (OAM) photonic states described by Laguerre-Gaussian (LG) modes with a radial index of zero, suffering from turbulence-induced distortions, are explored by numerical simulations. For a single-photon field with a given LG mode propagating through weak-to-strong atmospheric turbulence, both the average LG and OAM mode densities depend only on two nondimensional parameters, the Fresnel ratio and the coherence-width-to-beam-radius (CWBR) ratio. It is found that atmospheric turbulence causes mixing between radially adjacent modes, in addition to mixing between azimuthally adjacent modes, in the propagated photonic states; the former is weaker than the latter. For a fixed Fresnel ratio, the probability that a photon is found in the zero-radial-index mode of the intended OAM state behaves very similarly as a function of relative turbulence strength; a smaller Fresnel ratio leads to a slower decrease in this probability as the relative turbulence strength increases. A photon can be found in various radial modes with approximately equal probability when the relative turbulence strength becomes great enough. The use of a single-mode fiber in OAM measurements can result in photon loss and hence alter the observed transition probability between various OAM states. The bit error probability in OAM-based free-space optical communication systems that transmit photonic modes belonging to the same orthogonal LG basis may depend on what digit is sent.
Pamnani, Shitaldas J; Nyitray, Alan G; Abrahamsen, Martha; Rollison, Dana E; Villa, Luisa L; Lazcano-Ponce, Eduardo; Huang, Yangxin; Borenstein, Amy; Giuliano, Anna R
2016-10-15
The purpose of this study was to assess the risk of sequential acquisition of anal human papillomavirus (HPV) infection following a type-specific genital HPV infection for the 9-valent vaccine HPV types and investigate factors associated with sequential infection among men who have sex with women (MSW). Genital and anal specimens were available for 1348 MSW participants, and HPV genotypes were detected using the Roche Linear Array assay. Sequential risk of anal HPV infection was assessed using hazard ratios (HRs) among men with prior genital infection, compared with men with no prior genital infection, in individual HPV type and grouped HPV analyses. In individual analyses, men with prior HPV 16 genital infections had a significantly higher risk of subsequent anal HPV 16 infections (HR, 4.63; 95% confidence interval [CI], 1.41-15.23). In grouped analyses, a significantly higher risk of sequential type-specific anal HPV infections was observed for any of the 9 types (adjusted HR, 2.80; 95% CI, 1.32-5.99), high-risk types (adjusted HR, 2.65; 95% CI, 1.26, 5.55), and low-risk types (adjusted HR, 5.89; 95% CI, 1.29, 27.01). MSW with prior genital HPV infections had a higher risk of a subsequent type-specific anal infection. The higher risk was not explained by sexual intercourse with female partners. Autoinoculation is a possible mechanism for the observed association. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
Chapman, Jocelyn S; Roddy, Erika; Panighetti, Anna; Hwang, Shelley; Crawford, Beth; Powell, Bethan; Chen, Lee-May
2016-12-01
Women with breast cancer who carry BRCA1 or BRCA2 mutations must also consider risk-reducing salpingo-oophorectomy (RRSO) and how to coordinate this procedure with their breast surgery. We report the factors associated with coordinated versus sequential surgery and compare the outcomes of each. Patients in our cancer risk database who had breast cancer and a known deleterious BRCA1/2 mutation before undergoing breast surgery were included. Women who chose concurrent RRSO at the time of breast surgery were compared to those who did not. Sixty-two patients knew their mutation carrier status before undergoing breast cancer surgery. Forty-three patients (69%) opted for coordinated surgeries, and 19 (31%) underwent sequential surgeries at a median follow-up of 4.4 years. Women who underwent coordinated surgery were significantly older than those who chose sequential surgery (median age of 45 vs. 39 years; P = .025). There were no differences in comorbidities between groups. Patients who received neoadjuvant chemotherapy were more likely to undergo coordinated surgery (65% vs. 37%; P = .038). Sequential surgery patients had longer hospital stays (4.79 vs. 3.44 days, P = .01) and longer operating times (8.25 vs. 6.38 hours, P = .006) than patients who elected combined surgery. Postoperative complications were minor and were no more likely in either group (odds ratio, 4.76; 95% confidence interval, 0.56-40.6). Coordinating RRSO with breast surgery is associated with receipt of neoadjuvant chemotherapy, longer operating times, and hospital stays without an observed increase in complications. In the absence of risk, surgical options can be personalized. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn
In combination with the theories of open systems and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of the ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements taking into account current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.
NASA Astrophysics Data System (ADS)
Abadi, P.; Otsuka, Y.; Shiokawa, K.; Yamamoto, M.; M Buhari, S.; Abdullah, M.
2017-12-01
We investigate the 3-m ionospheric irregularities and the height variation of the equatorial F-region observed by the Equatorial Atmosphere Radar (EAR) at Kototabang (100.3°E, 0.2°S, dip. Lat.: 10.1°S) in Indonesia and by ionosondes at Chumphon (99.3°E, 10.7°N, dip. Lat.: 3°N) in Thailand and at Bac Lieu (105.7°E, 9.3°N, dip. Lat.: 1.5°N) in Vietnam during March-April from 2011 to 2014. We aim to clarify the relation between the pre-reversal enhancement (PRE) of the evening eastward electric field and the sequential occurrence of equatorial plasma bubbles (EPBs) in the period of 19-22 LT. In summary, (i) we found that the zonal spacing between consecutive EPBs ranges from less than 100 km up to 800 km with a maximum occurrence around 100-300 km as shown in Figure 1(a), and this result is consistent with previous studies [e.g. Makela et al., 2010]; (ii) the probability of the sequential occurrence of the EPB increases with the PRE strength (see Figure 1(b)); and (iii) Figure 1(c) shows that the zonal spacing between consecutive EPBs is less than 300 km for the weaker PRE (<30 m/s), whereas the zonal spacing is more varied for the stronger PRE (≥30 m/s). Our results indicate that the PRE strength is a prominent factor in the sequential occurrence of the EPB. However, we also consider another factor, namely the zonal structure of seed perturbations modulated by gravity waves (GWs), and the zonal spacing between consecutive EPBs may fit the wavelength of the zonal structure of the seed perturbation. We particularly attribute result (iii) to the combined effects of the PRE and seed perturbations on the sequential occurrence of the EPB; that is, we suggest that a weaker PRE can cause the sequential occurrence of the EPB when the zonal structure of the seed perturbation has a shorter wavelength. A further investigation is needed to confirm the periodic seeding mechanism, and we will use a network of GPS receivers in the western part of Southeast Asia to analyze the zonal wavy structure in the TEC as a manifestation of the seed perturbations.
Multiple laser pulse ignition method and apparatus
Early, James W.
1998-01-01
Two or more laser light pulses with certain differing temporal lengths and peak pulse powers can be employed sequentially to regulate the rate and duration of laser energy delivery to fuel mixtures, thereby improving fuel ignition performance over a wide range of fuel parameters such as fuel/oxidizer ratios, fuel droplet size, number density and velocity within a fuel aerosol, and initial fuel temperatures.
Organic nanoparticle systems for spatiotemporal control of multimodal chemotherapy
Meng, Fanfei; Han, Ning; Yeo, Yoon
2017-01-01
Introduction: Chemotherapeutic drugs are used in combination to target multiple mechanisms involved in cancer cell survival and proliferation. Carriers are developed to deliver drug combinations to common target tissues in optimal ratios and desirable sequences. Nanoparticles (NP) have been a popular choice for this purpose due to their ability to increase the circulation half-life and tumor accumulation of a drug. Areas covered: We review organic NP carriers based on polymers, proteins, peptides, and lipids for simultaneous delivery of multiple anticancer drugs, drug/sensitizer combinations, drug/photodynamic- or photothermal-therapy combinations, and drug/gene therapeutics, with examples from the past three years. Sequential delivery of drug combinations, based on either sequential administration or built-in release control, is introduced with an emphasis on the mechanistic understanding of such control. Expert opinion: Recent studies demonstrate how a drug carrier can contribute to co-localizing drug combinations in optimal ratios and dosing sequences to maximize the synergistic effects. We identify several areas for improvement in future research, including the choice of drug combinations, circulation stability of carriers, spatiotemporal control of drug release, and the evaluation and clinical translation of combination delivery. PMID:27476442
Chrischilles, Elizabeth A; Gagne, Joshua J; Fireman, Bruce; Nelson, Jennifer; Toh, Sengwee; Shoaibi, Azadeh; Reichman, Marsha E; Wang, Shirley; Nguyen, Michael; Zhang, Rongmei; Izem, Rima; Goulding, Margie R; Southworth, Mary Ross; Graham, David J; Fuller, Candace; Katcoff, Hannah; Woodworth, Tiffany; Rogers, Catherine; Saliga, Ryan; Lin, Nancy D; McMahill-Walraven, Cheryl N; Nair, Vinit P; Haynes, Kevin; Carnahan, Ryan M
2018-03-01
The US Food and Drug Administration's Sentinel system developed tools for sequential surveillance. In patients with non-valvular atrial fibrillation, we sequentially compared outcomes for new users of rivaroxaban versus warfarin, employing propensity score matching and Cox regression. A total of 36 173 rivaroxaban and 79 520 warfarin initiators were variable-ratio matched within 2 monitoring periods. Statistically significant signals were observed for ischemic stroke (IS) (first period) and intracranial hemorrhage (ICH) (second period) favoring rivaroxaban, and gastrointestinal bleeding (GIB) (second period) favoring warfarin. In follow-up analyses using primary position diagnoses from inpatient encounters for increased definition specificity, the hazard ratios (HR) for rivaroxaban vs warfarin new users were 0.61 (0.47, 0.79) for IS, 1.47 (1.29, 1.67) for GIB, and 0.71 (0.50, 1.01) for ICH. For GIB, the HR varied by age: age <66 years, HR = 0.88 (0.60, 1.30); age ≥66 years, HR = 1.49 (1.30, 1.71). This study demonstrates the capability of Sentinel to conduct prospective safety monitoring and raises no new concerns about rivaroxaban safety. Copyright © 2018 John Wiley & Sons, Ltd.
Accurate masking technology for high-resolution powder blasting
NASA Astrophysics Data System (ADS)
Pawlowski, Anne-Gabrielle; Sayah, Abdeljalil; Gijs, Martin A. M.
2005-07-01
We have combined erosion by 10 µm diameter Al2O3 particles with a new masking technology to realize the smallest and most accurate structures possible by powder blasting. Our masking technology is based on the sequential combination of two polymers: (i) the brittle epoxy resin SU8, for its photosensitivity, and (ii) the elastic and thermocurable poly-dimethylsiloxane, for its large erosion resistance. We have micropatterned various types of structures with a minimum width of 20 µm for test structures with an aspect ratio of 1, and 50 µm for test structures with an aspect ratio of 2.
Remote measurement of ClO in the stratosphere
NASA Technical Reports Server (NTRS)
Menzies, R. T.
1979-01-01
ClO has been detected in the stratosphere from observations of the solar spectrum in the infrared, in a small spectral interval near 12 micrometers. The observations were made with a balloon-borne laser heterodyne radiometer, launched from Palestine, Texas on September 20. By comparing high sun spectra with a number of sequential spectra taken during sunset, an altitude profile has been calculated in the 29-38 km altitude range. The results show a peak mixing ratio in excess of one ppb above 34 km, and a rapid decrease in mixing ratio with decreasing altitude below 34 km.
Robust parameter design for automatically controlled systems and nanostructure synthesis
NASA Astrophysics Data System (ADS)
Dasgupta, Tirthankar
2007-12-01
This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, in large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design, called Sequential Minimum Energy Design (SMED), is developed for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
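To give the flavor of a minimum-energy design step, the sketch below treats existing design points as charged particles and selects the next run to minimize a total pairwise potential energy. The charge assignment (lower charge where the estimated probability of the desired morphology is higher) and the toy objective are schematic assumptions, not the exact SMED algorithm developed in this research:

    import numpy as np

    rng = np.random.default_rng(4)

    def added_energy(points, charges, cand, cand_charge):
        # Energy contributed by a candidate point: sum_i q_i * q_c / d_ic.
        d = np.linalg.norm(points - cand, axis=1)
        return np.sum(charges * cand_charge / np.maximum(d, 1e-9))

    # Toy stand-in for "probability of the desired morphology" on [0,1]^2:
    f = lambda p: np.exp(-8.0 * np.sum((p - 0.7) ** 2))

    pts = rng.uniform(size=(5, 2))                    # runs performed so far
    q = 1.0 - np.array([f(p) for p in pts])           # assumed charge rule

    cands = rng.uniform(size=(2000, 2))               # candidate next runs
    energies = [added_energy(pts, q, c, 1.0 - f(c)) for c in cands]
    print("next design point:", cands[int(np.argmin(energies))].round(3))

Low-charge (promising) regions repel new points only weakly, so the design sequentially concentrates runs near promising conditions while still spreading points apart, qualitatively matching the behavior described above.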
Goal-Directed Decision Making with Spiking Neurons.
Friedrich, Johannes; Lengyel, Máté
2016-02-03
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors.
NASA Astrophysics Data System (ADS)
Renner, Heather M.; Drummond, Brie A.; Benson, Anna-Marie; Paredes, Rosana
2014-11-01
Reproductive success is one of the most easily measured and widely studied demographic parameters of colonial nesting seabirds. Nevertheless, factors affecting the sequential stages (egg laying, incubation, chick-rearing) of reproductive success are less understood. We investigated the separate sequential stages of reproductive success in piscivorous black-legged kittiwakes (Rissa tridactyla) and thick-billed murres (Uria lomvia) using a 36-year dataset (1975-2010) on the major Pribilof Islands (St. Paul and St. George), which have recently had contrasting population trajectories. Our objectives were to evaluate how the proportion of successful nests varied among stages, and to quantify factors influencing the probability of nest success at each stage on each island. We modeled the probability of nest success at each stage using General Linear Mixed Models incorporating broad-scale and local climate variables and diet as covariates, as well as other measures of reproduction such as timing of breeding and reproductive output in the previous year and previous stage. For both species we found: (1) Success in previous stages of the breeding cycle and success in the prior year better explained overall success than any environmental variables. Phenology was also an important predictor of laying success for kittiwakes. (2) Fledging success was lower when chick diets contained oceanic fish found farther from the colonies and small invertebrates, rather than coastal fish species. (3) Differences in reproductive variables at St. Paul and St. George islands did not correspond to population trends between the two islands. Our results highlight the potential importance of adult condition and annual survival to kittiwake and murre productivity and, ultimately, populations. Adult condition carrying over from the previous year ultimately seems to drive annual breeding success in a cascade effect. Furthermore, condition and survival appear to be important contributors to population dynamics at each island. Therefore, adult condition and survival prior to breeding, and factors that influence these parameters such as foraging conditions in the non-breeding season, may be important datasets for understanding drivers of seabird demography at the Pribilof Islands.
Adjuvant tamoxifen and exemestane in early breast cancer (TEAM): a randomised phase 3 trial.
van de Velde, Cornelis J H; Rea, Daniel; Seynaeve, Caroline; Putter, Hein; Hasenburg, Annette; Vannetzel, Jean-Michel; Paridaens, Robert; Markopoulos, Christos; Hozumi, Yasuo; Hille, Elysee T M; Kieback, Dirk G; Asmar, Lina; Smeets, Jan; Nortier, Johan W R; Hadji, Peyman; Bartlett, John M S; Jones, Stephen E
2011-01-22
Aromatase inhibitors improved disease-free survival compared with tamoxifen when given as an initial adjuvant treatment or after 2-3 years of tamoxifen to postmenopausal women with hormone-receptor-positive breast cancer. We therefore compared the long-term effects of exemestane monotherapy with sequential treatment (tamoxifen followed by exemestane). The Tamoxifen Exemestane Adjuvant Multinational (TEAM) phase 3 trial was conducted in hospitals in nine countries. Postmenopausal women (median age 64 years, range 35-96) with hormone-receptor-positive breast cancer were randomly assigned in a 1:1 ratio to open-label exemestane (25 mg once a day, orally) alone or following tamoxifen (20 mg once a day, orally) for 5 years. Randomisation was by use of a computer-generated random permuted block method. The primary endpoint was disease-free survival (DFS) at 5 years. Main analyses were by intention to treat. The trial is registered with ClinicalTrials.gov, NCT00279448, NCT00032136, and NCT00036270; NTR 267; Ethics Commission Trial27/2001; and UMIN, C000000057. 9779 patients were assigned to sequential treatment (n=4875) or exemestane alone (n=4904), and 4868 and 4898 were analysed by intention to treat, respectively. 4154 (85%) patients in the sequential group and 4186 (86%) in the exemestane alone group were disease free at 5 years (hazard ratio 0·97, 95% CI 0·88-1·08; p=0·60). In the safety analysis, sequential treatment was associated with a higher incidence of gynaecological symptoms (942 [20%] of 4814 vs 523 [11%] of 4852), venous thrombosis (99 [2%] vs 47 [1%]), and endometrial abnormalities (191 [4%] vs 19 [<1%]) than was exemestane alone. Musculoskeletal adverse events (2448 [50%] vs 2133 [44%]), hypertension (303 [6%] vs 219 [5%]), and hyperlipidaemia (230 [5%] vs 136 [3%]) were reported more frequently with exemestane alone. Treatment regimens of exemestane alone or after tamoxifen might be judged to be appropriate options for postmenopausal women with hormone-receptor-positive early breast cancer. Pfizer. Copyright © 2011 Elsevier Ltd. All rights reserved.
Bonofiglio, Federico; Beyersmann, Jan; Schumacher, Martin; Koller, Michael; Schwarzer, Guido
2016-09-01
Meta-analysis of a survival endpoint is typically based on the pooling of hazard ratios (HRs). If competing risks occur, the HRs may lose translation into changes of survival probability. The cumulative incidence functions (CIFs), the expected proportion of cause-specific events over time, re-connect the cause-specific hazards (CSHs) to the probability of each event type. We use CIF ratios to measure treatment effect on each event type. To retrieve information on aggregated, typically poorly reported, competing risks data, we assume constant CSHs. Next, we develop methods to pool CIF ratios across studies. The procedure computes pooled HRs alongside and checks the influence of follow-up time on the analysis. We apply the method to a medical example, showing that follow-up duration is relevant both for pooled cause-specific HRs and CIF ratios. Moreover, if all-cause hazard and follow-up time are large enough, CIF ratios may reveal additional information about the effect of treatment on the cumulative probability of each event type. Finally, to improve the usefulness of such analysis, better reporting of competing risks data is needed. Copyright © 2015 John Wiley & Sons, Ltd.
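Under the constant-CSH assumption the CIF has the closed form CIF_k(t) = (λ_k/λ)(1 − e^(−λt)), where λ is the all-cause hazard, so the CIF ratio between arms is straightforward to compute. A minimal sketch; the hazards below are invented stand-ins for the aggregated trial data.

    import numpy as np

    def cif(lams, t):
        # Cumulative incidence functions under constant cause-specific hazards.
        total = lams.sum()
        return lams / total * (1.0 - np.exp(-total * t))

    lam_trt = np.array([0.05, 0.02])  # per-year CSHs, treatment arm (assumed)
    lam_ctl = np.array([0.08, 0.02])  # control arm (assumed)
    t = 3.0                           # follow-up time in years
    print(cif(lam_trt, t) / cif(lam_ctl, t))  # CIF ratio per event type at t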
Selection of a cardiac surgery provider in the managed care era.
Shahian, D M; Yip, W; Westcott, G; Jacobson, J
2000-11-01
Many health planners promote the use of competition to contain cost and improve quality of care. Using a standard econometric model, we examined the evidence for "value-based" cardiac surgery provider selection in eastern Massachusetts, where there is significant competition and managed care penetration. McFadden's conditional logit model was used to study cardiac surgery provider selection among 6952 patients and eight metropolitan Boston hospitals in 1997. Hospital predictor variables included beds, cardiac surgery case volume, objective clinical and financial performance, reputation (percent out-of-state referrals, cardiac residency program), distance from patient's home to hospital, and historical referral patterns. Subgroup analyses were performed for each major payer category. Distance from patient's home to hospital (odds ratio 0.90; P =.000) and the historical referral pattern from each patient's hometown (z = 45.305; P =.000) were important predictors in all models. A cardiac surgery residency enhanced the probability of selection (odds ratio 5.25; P =.000), as did percent out-of-state referrals (odds ratio 1.10; P =.001). Higher mortality rates were associated with decreased probability of selection (odds ratio 0.51; P =.027), but higher length of stay was paradoxically associated with greater probability (odds ratio 1.72; P =.000). Total hospital costs were irrelevant (odds ratio 1.00; P =.179). When analyzed by payer subgroup, Medicare patients appeared to select hospitals with both low mortality (odds ratio 0.43; P =.176) and short length of stay (odds ratio 0.76; P =.213), although the results did not achieve statistical significance. The commercial managed care subgroup exhibited the least "value-based" behavior. The odds ratio for length of stay was the highest of any group (odds ratio = 2.589; P =.000) and there was a subset of hospitals for which higher mortality was actually associated with greater likelihood of selection. The observable determinants of cardiac surgery provider selection are related to hospital reputation, historical referral patterns, and patient proximity, not objective clinical or cost performance. The paradoxic behavior of commercial managed care probably results from unobserved choice factors that are not primarily based on objective provider performance.
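McFadden's conditional logit turns hospital attributes into choice probabilities via a softmax over linear utilities. A minimal sketch: the distance coefficient is ln(0.90) ≈ −0.105, matching the reported odds ratio; every other number is invented for illustration.

    import numpy as np

    # Hypothetical attributes for 4 hospitals: [distance_km, mortality, volume_100s]
    X = np.array([[ 5.0, 0.030,  8.0],
                  [12.0, 0.020, 15.0],
                  [20.0, 0.040,  5.0],
                  [ 8.0, 0.025, 10.0]])
    beta = np.array([-0.105, -15.0, 0.02])  # illustrative coefficients only

    u = X @ beta                 # systematic utility of each hospital
    p = np.exp(u - u.max())      # subtract max for numerical stability
    p /= p.sum()                 # conditional logit choice probabilities
    print(p)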
Chirgwin, Jacquie H; Giobbie-Hurder, Anita; Coates, Alan S; Price, Karen N; Ejlertsen, Bent; Debled, Marc; Gelber, Richard D; Goldhirsch, Aron; Smith, Ian; Rabaglio, Manuela; Forbes, John F; Neven, Patrick; Láng, István; Colleoni, Marco; Thürlimann, Beat
2016-07-20
To investigate adherence to endocrine treatment and its relationship with disease-free survival (DFS) in the Breast International Group (BIG) 1-98 clinical trial. The BIG 1-98 trial is a double-blind trial that randomly assigned 6,193 postmenopausal women with hormone receptor-positive early breast cancer in the four-arm option to 5 years of tamoxifen (Tam), letrozole (Let), or the agents in sequence (Let-Tam, Tam-Let). This analysis included 6,144 women who received at least one dose of study treatment. Conditional landmark analyses and marginal structural Cox proportional hazards models were used to evaluate the relationship between DFS and treatment adherence (persistence [duration] and compliance with dosage). Competing risks regression was used to assess demographic, disease, and treatment characteristics of the women who stopped treatment early because of adverse events. Both aspects of low adherence (early cessation of letrozole and a compliance score of < 90%) were associated with reduced DFS (multivariable model hazard ratio, 1.45; 95% CI, 1.09 to 1.93; P = .01; and multivariable model hazard ratio, 1.61; 95% CI, 1.08 to 2.38; P = .02, respectively). Sequential treatments were associated with higher rates of nonpersistence (Tam-Let, 20.8%; Let-Tam, 20.3%; Tam 16.9%; Let 17.6%). Adverse events were the reason for most trial treatment early discontinuations (82.7%). Apart from sequential treatment assignment, reduced adherence was associated with older age, smoking, node negativity, or prior thromboembolic event. Both persistence and compliance are associated with DFS. Toxicity management and, for sequential treatments, patient and physician awareness, may improve adherence. © 2016 by American Society of Clinical Oncology.
Kawakami, Hiromasa; Mihara, Takahiro; Nakamura, Nobuhito; Ka, Koui; Goto, Takahisa
2018-01-01
Magnesium has been investigated as an adjuvant for neuraxial anesthesia, but the effect of caudal magnesium on postoperative pain is inconsistent. The aim of this systematic review and meta-analysis was to evaluate the analgesic effect of caudal magnesium. We searched six databases, including trial registration sites. Randomized clinical trials reporting the effect of caudal magnesium on postoperative pain after general anesthesia were eligible. The risk ratio for use of rescue analgesics after surgery was combined using a random-effects model. We also assessed adverse events. The I² statistic was used to assess heterogeneity. We assessed risk of bias with Cochrane domains. We controlled type I and II errors due to sparse data and repetitive testing with Trial Sequential Analysis. We assessed the quality of evidence with GRADE. Four randomized controlled trials (247 patients) evaluated the need for rescue analgesics. In all four trials, 50 mg of magnesium was administered with caudal ropivacaine. The results suggested that the need for rescue analgesia was reduced significantly by caudal magnesium administration (risk ratio 0.45; 95% confidence interval 0.24-0.86). There was considerable heterogeneity, as indicated by an I² value of 62.5%. The Trial Sequential Analysis-adjusted confidence interval was 0.04-5.55, indicating that further trials are required. The quality of evidence was very low. The rate of adverse events was comparable between treatment groups. Caudal magnesium may reduce the need for rescue analgesia after surgery, but further randomized clinical trials with a low risk of bias and a low risk of random errors are necessary to assess the effect of caudal magnesium on postoperative pain and adverse events. University Hospital Medical Information Network Clinical Trials Registry UMIN000025344.
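The pooled risk ratio in such a meta-analysis is conventionally obtained with a DerSimonian-Laird random-effects model. A sketch with invented 2x2 counts (the four trials' actual data are not reproduced here), omitting the Trial Sequential Analysis adjustment:

    import numpy as np

    # Hypothetical events/totals: rescue analgesia in magnesium vs control arms.
    e1 = np.array([5, 8, 4, 7]);    n1 = np.array([30, 32, 28, 33])
    e0 = np.array([12, 15, 9, 14]); n0 = np.array([30, 31, 29, 32])

    y = np.log((e1 / n1) / (e0 / n0))              # per-trial log risk ratios
    v = 1/e1 - 1/n1 + 1/e0 - 1/n0                  # approximate variances
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)        # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    ws = 1 / (v + tau2)                            # random-effects weights
    mu, se = np.sum(ws * y) / ws.sum(), np.sqrt(1 / ws.sum())
    print(np.exp([mu, mu - 1.96 * se, mu + 1.96 * se]))  # pooled RR and 95% CI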
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
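A simplified sequential allocation heuristic in the spirit of the framework described above (not the authors' exact asymptotic allocation): after an initial stage, each new batch is spread in proportion to (σ_i/δ_i)², where δ_i is the distance from design i's sample mean to the boundary between the m-th and (m+1)-th best. The test problem is invented.

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = [5.0, 4.5, 4.0, 3.0, 2.0]       # invented designs; top-2 sought
    m, n0, batch, rounds = 2, 10, 50, 20
    samples = [list(rng.normal(mu, 1.0, n0)) for mu in true_means]

    for _ in range(rounds):
        mean = np.array([np.mean(s) for s in samples])
        std = np.array([np.std(s, ddof=1) for s in samples])
        order = np.argsort(-mean)
        c = 0.5 * (mean[order[m - 1]] + mean[order[m]])   # top-m boundary
        # Noisy designs near the boundary are the hard calls: sample them more.
        score = (std / np.maximum(np.abs(mean - c), 1e-9)) ** 2
        for i, k in enumerate(rng.multinomial(batch, score / score.sum())):
            samples[i].extend(rng.normal(true_means[i], 1.0, k))

    mean = np.array([np.mean(s) for s in samples])
    print(sorted(np.argsort(-mean)[:m]))                  # estimated top-m set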
NASA Technical Reports Server (NTRS)
Howard, R. A.; North, D. W.; Pezier, J. P.
1975-01-01
A new methodology is proposed for integrating planetary quarantine objectives into space exploration planning. This methodology is designed to remedy the major weaknesses inherent in the current formulation of planetary quarantine requirements. Application of the methodology is illustrated by a tutorial analysis of a proposed Jupiter Orbiter mission. The proposed methodology reformulates planetary quarantine planning as a sequential decision problem. Rather than concentrating on a nominal plan, all decision alternatives and possible consequences are laid out in a decision tree. Probabilities and values are associated with the outcomes, including the outcome of contamination. The process of allocating probabilities, which could not be made perfectly unambiguous and systematic, is replaced by decomposition and optimization techniques based on principles of dynamic programming. Thus, the new methodology provides logical integration of all available information and allows selection of the best strategy consistent with quarantine and other space exploration goals.
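The heart of the reformulation, laying out alternatives and consequences in a decision tree and folding back expected values via dynamic programming, fits in a few lines. The probabilities and payoffs below are invented, not the study's.

    # Fold back a toy mission decision tree by "averaging out and folding back".
    def expected_value(node):
        kind = node[0]
        if kind == 'outcome':
            return node[1]                      # terminal value
        if kind == 'chance':
            return sum(p * expected_value(child) for p, child in node[1])
        return max(expected_value(child) for child in node[1])   # decision node

    tree = ('decision', [
        ('chance', [(0.0010, ('outcome', -1000.0)),   # contamination
                    (0.9990, ('outcome',   100.0))]), # fly as-is
        ('chance', [(0.0001, ('outcome', -1000.0)),
                    (0.9999, ('outcome',    80.0))]), # sterilize: safer, costlier
    ])
    print(expected_value(tree))   # value of the best strategy at the root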
Hold it! The influence of lingering rewards on choice diversification and persistence.
Schulze, Christin; van Ravenzwaaij, Don; Newell, Ben R
2017-11-01
Learning to choose adaptively when faced with uncertain and variable outcomes is a central challenge for decision makers. This study examines repeated choice in dynamic probability learning tasks in which outcome probabilities changed either as a function of the choices participants made or independently of those choices. This presence/absence of sequential choice-outcome dependencies was implemented by manipulating a single task aspect between conditions: the retention/withdrawal of reward across individual choice trials. The study addresses how people adapt to these learning environments and to what extent they engage in 2 choice strategies often contrasted as paradigmatic examples of striking violation of versus nominal adherence to rational choice: diversification and persistent probability maximizing, respectively. Results show that decisions approached adaptive choice diversification and persistence when sufficient feedback was provided on the dynamic rules of the probabilistic environments. The findings of divergent behavior in the 2 environments indicate that diversified choices represented a response to the reward retention manipulation rather than to the mere variability of outcome probabilities. Choice in both environments was well accounted for by the generalized matching law, and computational modeling-based strategy analyses indicated that adaptive choice arose mainly from reliance on reinforcement learning strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
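The generalized matching law used here is log(B1/B2) = s·log(R1/R2) + log b, with sensitivity s and bias b, so fitting it reduces to a regression on log ratios. A sketch on invented session data:

    import numpy as np

    # Hypothetical per-condition counts: choices of, and rewards from, two options.
    B = np.array([[120, 60], [90, 80], [150, 40], [70, 95]])
    R = np.array([[ 30, 15], [20, 18], [ 40, 10], [15, 22]])

    x = np.log(R[:, 0] / R[:, 1])        # log reinforcer ratios
    y = np.log(B[:, 0] / B[:, 1])        # log behavior ratios
    s, logb = np.polyfit(x, y, 1)        # slope = sensitivity, intercept = log bias
    print(s, np.exp(logb))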
van Stiphout, Ruud G P M; Valentini, Vincenzo; Buijsen, Jeroen; Lammering, Guido; Meldolesi, Elisa; van Soest, Johan; Leccisotti, Lucia; Giordano, Alessandro; Gambacorta, Maria A; Dekker, Andre; Lambin, Philippe
2014-11-01
To develop and externally validate a predictive model for pathologic complete response (pCR) in locally advanced rectal cancer (LARC) based on clinical features and early sequential 18F-FDG PET-CT imaging. Prospective data (inter alia, the THUNDER trial) were used to train (N=112, MAASTRO Clinic) and validate (N=78, Università Cattolica del S. Cuore) the model for pCR (ypT0N0). All patients received long-course chemoradiotherapy (CRT) and surgery. Clinical parameters were age, gender, clinical tumour (cT) stage and clinical nodal (cN) stage. PET parameters were SUVmax, SUVmean, metabolic tumour volume (MTV) and maximal tumour diameter, for which response indices between the pre-treatment and intermediate scans were calculated. Using multivariate logistic regression, three probability groups for pCR were defined. The pCR rates were 21.4% (training) and 23.1% (validation). The selected predictive features for pCR were cT stage, cN stage, and the response indices of SUVmean and maximal tumour diameter during treatment. The models' performances (AUC) were 0.78 (training) and 0.70 (validation). The high probability group for pCR resulted in 100% correct predictions for training and 67% for validation. The model is available on the website www.predictcancer.org. The developed predictive model for pCR is accurate and externally validated. This model may assist in treatment decisions during CRT to select complete responders for a wait-and-see policy, good responders for an extra RT boost and bad responders for additional chemotherapy. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
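The published coefficients live at www.predictcancer.org; the sketch below only shows how a logistic model of the four selected predictors yields a pCR probability and a risk group. All coefficients and cutoffs are invented.

    import numpy as np

    def pcr_probability(ct, cn, ri_suvmean, ri_diameter, beta):
        # beta = [intercept, b_cT, b_cN, b_RI_SUVmean, b_RI_diameter]
        z = (beta[0] + beta[1] * ct + beta[2] * cn
             + beta[3] * ri_suvmean + beta[4] * ri_diameter)
        return 1.0 / (1.0 + np.exp(-z))

    beta = np.array([-1.2, -0.4, -0.5, 2.0, 1.5])   # assumed for illustration
    p = pcr_probability(ct=3, cn=1, ri_suvmean=0.6, ri_diameter=0.3, beta=beta)
    group = 'high' if p > 0.6 else ('intermediate' if p > 0.3 else 'low')
    print(round(p, 3), group)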
Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L
2011-12-01
To develop a modelling framework that can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index, BASDAI) and functional status (Bath Ankylosing Spondylitis Functional Index, BASFI), which were linked to costs and health utility using statistical models fitted on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The model successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs, at time points with intervals of 1-3 months, over a simulated horizon of 70 years. The incremental cost per QALY gained with strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time in patients with AS. Results obtained from the simulation are plausible.
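As a toy illustration of the discrete event simulation paradigm the framework rests on, each simulated patient can be a sequence of drug spells terminated by exponentially distributed failure events. Everything below (utilities, dwell times, the post-failure baseline) is invented and far simpler than the published model.

    import numpy as np

    rng = np.random.default_rng(1)

    def lifetime_qalys(drug_sequence, horizon=70.0):
        # Stay on each drug until it fails, then switch to the next one.
        t, qaly = 0.0, 0.0
        for utility, mean_years in drug_sequence:
            dwell = min(rng.exponential(mean_years), horizon - t)
            qaly += utility * dwell
            t += dwell
            if t >= horizon:
                return qaly
        return qaly + 0.3 * (horizon - t)   # baseline utility after all drugs fail

    strategy1 = [(0.60, 2.0)] * 5                    # five NSAIDs
    strategy2 = strategy1 + [(0.75, 6.0)] * 2        # plus two TNF-alpha inhibitors
    q1 = np.mean([lifetime_qalys(strategy1) for _ in range(13000)])
    q2 = np.mean([lifetime_qalys(strategy2) for _ in range(13000)])
    print(q1, q2, q2 - q1)                           # incremental QALYs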
Yuan, Zihao; Huang, Wei; Liu, Shikai; Xu, Peng; Dunham, Rex; Liu, Zhanjiang
2018-04-01
The inference of the historical demography of a species is helpful for understanding species differentiation and population dynamics. However, such inference has previously been difficult due to the lack of suitable analytical methods and the limited availability of genetic data. A recently developed method, the Pairwise Sequentially Markovian Coalescent (PSMC), offers the capability to estimate the trajectories of historical populations over considerable time periods using genomic sequences. In this study, we applied this approach to infer the historical demography of the common carp using samples collected from Europe, Asia and the Americas. Comparison between the Asian and European common carp populations showed that the last glacial period starting 100 ka BP likely caused a significant decline in the population size of the wild common carp in Europe, while it did not have much of an impact on its counterparts in Asia. This was probably caused by differences in glacial activity between East Asia and Europe, suggesting a separation of the European and Asian clades before the last glacial maximum. The North American clade, which is an invasive population, shared a similar demographic history with the European populations, consistent with the idea that the North American common carp probably had European ancestral origins. Our analysis represents the first reconstruction of the historical population demography of the common carp, which is important for elucidating the separation of the European and Asian common carp clades during the Quaternary glaciation, as well as the dispersal of the common carp across the world.
Olsen, Morten; Hjortdal, Vibeke E; Mortensen, Laust H; Christensen, Thomas D; Sørensen, Henrik T; Pedersen, Lars
2011-04-01
Congenital heart defect patients may experience neurodevelopmental impairment. We investigated their educational attainments from basic schooling to higher education. Using administrative databases, we identified all Danish patients with a cardiac defect diagnosis born from 1 January, 1977 to 1 January, 1991 and alive at age 13 years. As a comparison cohort, we randomly sampled 10 persons per patient. We obtained information on educational attainment from Denmark's Database for Labour Market Research. The study population was followed until achievement of educational levels, death, emigration, or 1 January, 2006. We estimated the hazard ratio of attaining given educational levels, conditional on completing preceding levels, using discrete-time Cox regression and adjusting for socio-economic factors. Analyses were repeated for a sub-cohort of patients and controls born at term and without extracardiac defects or chromosomal anomalies. We identified 2986 patients. Their probability of completing compulsory basic schooling was approximately 10% lower than that of control individuals (adjusted hazard ratio = 0.79; 95% confidence interval: 0.75-0.82). Their subsequent probability of completing secondary school was lower than that of the controls, both for all patients (adjusted hazard ratio = 0.74; 95% confidence interval: 0.69-0.80) and for the sub-cohort (adjusted hazard ratio = 0.80; 95% confidence interval: 0.73-0.86). The probability of attaining a higher degree, conditional on completion of youth education, was affected both for all patients (adjusted hazard ratio = 0.88; 95% confidence interval: 0.76-1.01) and for the sub-cohort (adjusted hazard ratio = 0.92; 95% confidence interval: 0.79-1.07). The probability of educational attainment was reduced among long-term congenital heart defect survivors.
Jakobi, Annika; Stützer, Kristin; Bandurska-Luque, Anna; Löck, Steffen; Haase, Robert; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniel; Perez, Damien; Lühr, Armin; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Perrin, Rosalind; Richter, Christian
2015-01-01
To determine, by treatment plan comparison, differences in toxicity risk reduction for patients with head and neck squamous cell carcinoma (HNSCC) when proton therapy is used either for the complete treatment or for the sequential boost only. For 45 HNSCC patients, intensity-modulated photon (IMXT) and proton (IMPT) treatment plans were created, including a dose escalation via simultaneous integrated boost with a one-step adaptation strategy after 25 fractions for sequential boost treatment. Dose accumulation was performed for pure IMXT treatment, pure IMPT treatment and for a mixed modality treatment with IMXT for the elective target followed by a sequential boost with IMPT. Treatment plan evaluation was based on modern normal tissue complication probability (NTCP) models for mucositis, xerostomia, aspiration, dysphagia, larynx edema and trismus. Individual NTCP differences between IMXT and IMPT (ΔNTCP(IMXT-IMPT)) as well as between IMXT and the mixed modality treatment (ΔNTCP(IMXT-Mix)) were calculated. Target coverage was similar in all three scenarios. NTCP values could be reduced in all patients using IMPT treatment. However, ΔNTCP(IMXT-Mix) values were a factor of 2-10 smaller than ΔNTCP(IMXT-IMPT). Assuming a threshold of ≥ 10% NTCP reduction in xerostomia or dysphagia risk as a criterion for patient assignment to IMPT, less than 15% of the patients would be selected for a proton boost, while about 50% would be assigned to pure IMPT treatment. For mucositis and trismus, ΔNTCP ≥ 10% occurred in six and four patients, respectively, with pure IMPT treatment, while no such difference was identified with the proton boost. The use of IMPT generally reduces the expected toxicity risk while maintaining good tumor coverage in the examined HNSCC patients. A mixed modality treatment using IMPT solely for a sequential boost reduces the risk by 10% only in rare cases. In contrast, pure IMPT treatment may be reasonable for about half of the examined patient cohort considering the toxicities xerostomia and dysphagia, if a feasible strategy for patient anatomy changes is implemented.
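The assignment rule in the abstract reduces to computing ΔNTCP between plans and applying a 10% threshold. A minimal sketch using the Lyman-Kutcher-Burman sigmoid, one common NTCP form (the study draws on several published models); the EUD values and parameters are invented for illustration.

    import math

    def ntcp_lkb(eud, td50, m):
        # LKB model: NTCP = Phi((EUD - TD50) / (m * TD50))
        t = (eud - td50) / (m * td50)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    eud = {'IMXT': 32.0, 'Mix': 30.0, 'IMPT': 22.0}  # parotid EUDs in Gy (assumed)
    td50, m = 39.9, 0.40                             # example xerostomia parameters
    ntcp = {k: ntcp_lkb(v, td50, m) for k, v in eud.items()}
    d_full = ntcp['IMXT'] - ntcp['IMPT']   # DeltaNTCP(IMXT-IMPT)
    d_mix = ntcp['IMXT'] - ntcp['Mix']     # DeltaNTCP(IMXT-Mix), much smaller
    print(d_full, d_mix, d_full >= 0.10)   # 10% threshold for assignment to IMPT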
Rollero, Stephanie; Bloem, Audrey; Ortiz-Julien, Anne; Camarasa, Carole; Divol, Benoit
2018-01-01
The sequential inoculation of non-Saccharomyces yeasts and Saccharomyces cerevisiae in grape juice is becoming an increasingly popular practice to diversify wine styles and/or to obtain more complex wines with a peculiar microbial footprint. One of the main interactions is competition for nutrients, especially nitrogen sources, which directly impacts not only fermentation performance but also the production of aroma compounds. In order to better understand the interactions taking place between non-Saccharomyces yeasts and S. cerevisiae during alcoholic fermentation, sequential inoculations of three yeast species (Pichia burtonii, Kluyveromyces marxianus, Zygoascus meyerae) with S. cerevisiae were performed individually in a synthetic medium. Different species-dependent interactions were evidenced. Indeed, the three sequential inoculations resulted in three different behaviors in terms of growth. P. burtonii and Z. meyerae declined after the inoculation of S. cerevisiae, which promptly outcompeted the other two species. However, while the presence of P. burtonii did not impact the fermentation kinetics of S. cerevisiae, that of Z. meyerae rendered the overall kinetics very slow and with no clear exponential phase. K. marxianus and S. cerevisiae both declined and became undetectable before fermentation completion. The results also demonstrated that yeasts differed in their preference for nitrogen sources. Unlike Z. meyerae and P. burtonii, K. marxianus appeared to be a competitor for S. cerevisiae (as evidenced by the uptake of ammonium and amino acids), thereby explaining the resulting stuck fermentation. Nevertheless, the results suggested that competition for other nutrients (probably vitamins) occurred during the sequential inoculation of Z. meyerae with S. cerevisiae. The metabolic footprint of the non-Saccharomyces yeasts determined after 48 h of fermentation remained until the end of fermentation and combined with that of S. cerevisiae. For instance, fermentations performed with K. marxianus were characterized by the formation of phenylethanol and phenylethyl acetate, while those performed with P. burtonii or Z. meyerae displayed higher production of isoamyl alcohol and ethyl esters. When considering sequential inoculation of yeasts, the nutritional requirements of the yeasts used should be carefully considered and adjusted accordingly. Finally, our chemical data suggest that the organoleptic properties of the wine are altered in a species-specific manner.
Mills, K I; Guinn, B A; Walsh, V A; Burnett, A K
1996-09-01
In chronic myeloid leukaemia (CML), disease progression from the initial chronic phase to the acute phase or blast crisis has previously been shown to correlate with progressive increases in hyper-methylation of the calcitonin gene, located at chromosome 11p15. However, these investigations did not perform sequential studies of individual patients. We analysed 44 samples from nine patients with typical Philadelphia chromosome-positive CML throughout their disease progression to determine the methylation state of the calcitonin gene at these time points. Densitometry was used to quantify the intensity of the normal 2.0 kb Hpa II fragments, indicating normal methylation status of the gene, relative to the intensity of the abnormal, hyper-methylated 2.6-3.1 kb Hpa II fragments. We found a gradual increase in the ratio of methylated to unmethylated calcitonin gene during chronic phase, with a dramatic rise at blast crisis. Furthermore, the ratio of the abnormal hyper-methylated 3.1 kb fragments to the methylated 2.6 kb fragment allowed the identification of a clonal expansion of abnormally methylated cells. This expansion of cells with hyper-methylation of the calcitonin gene during chronic phase was shown to coincide with the presence of a mutation in the p53 gene. The data presented in this study suggest that an increased methylation status of the calcitonin gene during disease progression may indicate the expansion of abnormal blast cell populations and subsequent progression to blast crisis.
Lin, Lien-Chieh; Hsu, Tzu-Herng; Huang, Kuang-Wei; Tam, Ka-Wai
2016-01-01
AIM: To evaluate the applicability of nonbismuth concomitant quadruple therapy for Helicobacter pylori (H. pylori) eradication in Chinese regions. METHODS: A systematic review and meta-analysis of randomized controlled trials was performed to evaluate the efficacy of nonbismuth concomitant quadruple therapy versus sequential therapy or triple therapy for H. pylori eradication in Chinese regions. The defined Chinese regions include China, Hong Kong, Taiwan, and Singapore. The primary outcome was the H. pylori eradication rate; the secondary outcome was compliance with therapy. The PubMed, Embase, Scopus, and Cochrane databases were searched for studies published in the period up to March 2016 with no language restriction. RESULTS: We reviewed six randomized controlled trials and 1616 patients. In 3 trials comparing concomitant quadruple therapy with triple therapy, the H. pylori eradication rate was significantly higher for 7-d nonbismuth concomitant quadruple therapy than for 7-d triple therapy (91.2% vs 77.9%, risk ratio = 1.17, 95%CI: 1.09-1.25). In 3 trials comparing quadruple therapy with sequential therapy, the difference in eradication rates between groups was not significant (86.9% vs 86.0%). However, higher compliance was achieved with concomitant therapy than with sequential therapy. CONCLUSION: The H. pylori eradication rate was higher for nonbismuth concomitant quadruple therapy than for triple therapy. Moreover, higher compliance was achieved with nonbismuth concomitant quadruple therapy than with sequential therapy. Thus, nonbismuth concomitant quadruple therapy should be the first-line treatment in Chinese regions. PMID:27340362
Che, W W; Frey, H Christopher; Lau, Alexis K H
2016-08-16
A sequential measurement method is demonstrated for quantifying the variability in exposure concentration during public transportation. This method was applied in Hong Kong by measuring PM2.5 and CO concentrations along a route connecting 13 transportation-related microenvironments within 3-4 h. The study design takes into account ventilation, proximity to local sources, area-wide air quality, and meteorological conditions. Portable instruments were compacted into a backpack to facilitate measurement under crowded transportation conditions and to quantify personal exposure by sampling at nose level. The route included stops next to three roadside monitors to enable comparison of fixed site and exposure concentrations. PM2.5 exposure concentrations were correlated with the roadside monitors, despite differences in averaging time, detection method, and sampling location. Although highly correlated in temporal trend, PM2.5 concentrations varied significantly among microenvironments, with mean concentration ratios versus roadside monitor ranging from 0.5 for MTR train to 1.3 for bus terminal. Measured inter-run variability provides insight regarding the sample size needed to discriminate between microenvironments with increased statistical significance. The study results illustrate the utility of sequential measurement of microenvironments and policy-relevant insights for exposure mitigation and management.
Neely, J H; Keefe, D E; Ross, K L
1989-11-01
In semantic priming paradigms for lexical decisions, the probability that a word target is semantically related to its prime (the relatedness proportion) has been confounded with the probability that a target is a nonword, given that it is unrelated to its prime (the nonword ratio). This study unconfounded these two probabilities in a lexical decision task with category names as primes and with high- and low-dominance exemplars as targets. Semantic priming for high-dominance exemplars was modulated by the relatedness proportion and, to a lesser degree, by the nonword ratio. However, the nonword ratio exerted a stronger influence than did the relatedness proportion on semantic priming for low-dominance exemplars and on the nonword facilitation effect (i.e., the superiority in performance for nonword targets that follow a category name rather than a neutral XXX prime). These results suggest that semantic priming for lexical decisions is affected by both a prospective prime-generated expectancy, modulated by the relatedness proportion, and a retrospective target/prime semantic matching process, modulated by the nonword ratio.
The Importance of Practice in the Development of Statistics.
1983-01-01
NRC Technical Summary Report #2471. Topics covered include component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time series models, sequential tests, cumulative sum charts, data analysis plotting techniques, and a resolution of the Bayes-frequentist controversy.
Multiple laser pulse ignition method and apparatus
Early, J.W.
1998-05-26
Two or more laser light pulses with certain differing temporal lengths and peak pulse powers can be employed sequentially to regulate the rate and duration of laser energy delivery to fuel mixtures, thereby improving fuel ignition performance over a wide range of fuel parameters such as fuel/oxidizer ratios, fuel droplet size, number density and velocity within a fuel aerosol, and initial fuel temperatures. 18 figs.
Rysava, K; McGill, R A R; Matthiopoulos, J; Hopcraft, J G C
2016-07-15
Nutritional bottlenecks often limit the abundance of animal populations and alter individual behaviours; however, establishing animal condition over extended periods of time using non-invasive techniques has been a major limitation in population ecology. We test if the sequential measurement of δ15N values in a continually growing tissue, such as hair, can be used as a natural bio-logger akin to tree rings or ice cores to provide insights into nutritional stress. Nitrogen stable isotope ratios were measured by continuous-flow isotope-ratio mass spectrometry (IRMS) from 20 sequential segments along the tail hairs of 15 migratory wildebeest. Generalized Linear Models were used to test for variation between concurrent segments of hair from the same individual, and to compare the δ15N values of starved and non-starved animals. Correlations between δ15N values in the hair and periods of above-average energy demand during the annual cycle were tested using Generalized Additive Mixed Models. The time series of nitrogen isotope ratios in the tail hair are comparable between strands from the same individual. The most likely explanation for the pattern of 15N enrichment between individuals is determined by life phase, and especially the energetic demands associated with reproduction. The mean δ15N value of starved animals was greater than that of non-starved animals, suggesting that higher δ15N values correlate with periods of nutritional stress. High δ15N values in the tail hair of wildebeest are correlated with periods of negative energy balance, suggesting they may be used as a reliable indicator of the animal's nutritional history. This technique might be applicable to other obligate grazers. Most importantly, the sequential isotopic analysis of hair offers a continuous record of the chronic condition of wildebeest (effectively converting point data into time series) and allows researchers to establish the animal's nutritional diary. © 2016 The Authors. Rapid Communications in Mass Spectrometry Published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhont, J; Poels, K; Verellen, D
2015-06-15
Purpose: To evaluate the feasibility of markerless tumor tracking through the implementation of a novel dual-energy imaging approach into the clinical dynamic tracking (DT) workflow of the Vero SBRT system. Methods: Two sequential 20 s (11 Hz) fluoroscopy sequences were acquired at the start of one fraction for 7 patients treated for primary and metastatic lung cancer with DT on the Vero system. Sequences were acquired using 2 on-board kV imaging systems located at ±45° from the MV beam axis, at respectively 60 kVp (3.2 mAs) and 120 kVp (2.0 mAs). Offline, a normalized cross-correlation algorithm was applied to match the high (HE) and low energy (LE) images. Per breathing phase (inhale, exhale, maximum inhale and maximum exhale), the 5 best-matching HE and LE couples were extracted for DE subtraction. A contrast analysis according to gross tumor volume was conducted based on contrast-to-noise ratio (CNR). Improved tumor visibility was quantified using an improvement ratio. Results: Using the implanted fiducial as a benchmark, HE-LE sequence matching was effective for 13 out of 14 imaging angles. Overlying bony anatomy was removed on all DE images. With the exception of two imaging angles, the DE images showed no significantly improved tumor visibility compared to HE images, with an improvement ratio averaged over all patients of 1.46 ± 1.64. Qualitatively, it was observed that for those imaging angles that showed no significantly improved CNR, the tumor tissue could not be reliably visualized on either HE or DE images due to a total or partial overlap with other soft tissue. Conclusion: Dual-energy subtraction imaging by sequential orthogonal fluoroscopy was shown to be feasible by implementing an additional LE fluoroscopy sequence. However, for most imaging angles, DE images did not provide improved tumor visibility over single-energy images. Optimizing imaging angles is likely to improve tumor visibility and the efficacy of dual-energy imaging. This work was in part sponsored by corporate funding from BrainLAB AG (Feldkirchen, Germany).
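A sketch of the CNR and improvement-ratio computation on synthetic HE/LE frames (all intensities invented). The weighted subtraction cancels the rib, but it also adds noise; since the synthetic rib does not overlap the tumor, the improvement ratio here falls below 1, in line with the finding that DE helped at only a minority of angles.

    import numpy as np

    def cnr(img, tumor, bg):
        # Contrast-to-noise ratio between a tumor ROI and a background ROI.
        return abs(img[tumor].mean() - img[bg].mean()) / img[bg].std()

    rng = np.random.default_rng(0)
    tumor = np.zeros((64, 64), bool); tumor[20:30, 20:30] = True
    rib = np.zeros((64, 64), bool);   rib[:, 40:46] = True

    he = rng.normal(100, 3, (64, 64)) + 8 * tumor + 25 * rib   # high-energy frame
    le = rng.normal( 80, 3, (64, 64)) + 6 * tumor + 50 * rib   # low-energy frame
    de = he - 0.5 * le        # weight 0.5 cancels the rib: 25 = 0.5 * 50
    bg = ~(tumor | rib)
    print(cnr(de, tumor, bg) / cnr(he, tumor, bg))   # improvement ratio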
Psychopathology among New York city public school children 6 months after September 11.
Hoven, Christina W; Duarte, Cristiane S; Lucas, Christopher P; Wu, Ping; Mandell, Donald J; Goodwin, Renee D; Cohen, Michael; Balaban, Victor; Woodruff, Bradley A; Bin, Fan; Musa, George J; Mei, Lori; Cantor, Pamela A; Aber, J Lawrence; Cohen, Patricia; Susser, Ezra
2005-05-01
Children exposed to a traumatic event may be at higher risk for developing mental disorders. The prevalence of child psychopathology, however, has not been assessed in a population-based sample exposed to different levels of mass trauma or across a range of disorders. To determine prevalence and correlates of probable mental disorders among New York City, NY, public school students 6 months following the September 11, 2001, World Trade Center attack. Survey. New York City public schools. A citywide, random, representative sample of 8236 students in grades 4 through 12, including oversampling in closest proximity to the World Trade Center site (ground zero) and other high-risk areas. Children were screened for probable mental disorders with the Diagnostic Interview Schedule for Children Predictive Scales. One or more of 6 probable anxiety/depressive disorders were identified in 28.6% of all children. The most prevalent were probable agoraphobia (14.8%), probable separation anxiety (12.3%), and probable posttraumatic stress disorder (10.6%). Higher levels of exposure correspond to higher prevalence for all probable anxiety/depressive disorders. Girls and children in grades 4 and 5 were the most affected. In logistic regression analyses, child's exposure (adjusted odds ratio, 1.62), exposure of a child's family member (adjusted odds ratio, 1.80), and the child's prior trauma (adjusted odds ratio, 2.01) were related to increased likelihood of probable anxiety/depressive disorders. Results were adjusted for different types of exposure, sociodemographic characteristics, and child mental health service use. A high proportion of New York City public school children had a probable mental disorder 6 months after September 11, 2001. The data suggest that there is a relationship between level of exposure to trauma and likelihood of child anxiety/depressive disorders in the community. The results support the need to apply wide-area epidemiological approaches to mental health assessment after any large-scale disaster.
Liu, Sonia Y; Chrystal, Peter V; Cowieson, Aaron J; Truong, Ha H; Moss, Amy F; Selle, Peter H
2017-01-01
A total of 360 male Ross 308 broiler chickens were used in a feeding study to assess the influence of macronutrients and energy density on feed intakes from 10 to 31 days post-hatch. The study comprised ten dietary treatments from five dietary combinations and two feeding approaches: sequential and choice feeding. The study included eight experimental diets and each dietary combination was made from three experimental diets. Choice fed birds selected between three diets in separate feed trays at the same time, whereas the three diets were offered to sequentially fed birds on an alternate basis during the experimental period. There were no differences in starch and protein intakes between choice and sequentially fed birds (P > 0.05) when broiler chickens selected between diets with different starch, protein and lipid concentrations. When broiler chickens selected between diets with different starch and protein but similar lipid concentrations, both sequentially and choice fed birds selected similar ratios of starch and protein intake (P > 0.05). However, when broiler chickens selected from diets with different protein and lipid but similar starch concentrations, choice fed birds had higher lipid intake (129 versus 118 g/bird, P = 0.027) and selected diets with lower protein concentrations (258 versus 281 g/kg, P = 0.042) than birds offered sequential diet options. Choice fed birds had greater intakes of the high energy diet (1471 g/bird, P < 0.0001) than low energy (197 g/bird) or medium energy diets (663 g/bird) when broiler chickens were offered diets with different energy densities but high crude protein (300 g/kg) or digestible lysine (17.5 g/kg) concentrations. Choice fed birds had lower FCR (1.217 versus 1.327 g/g, P < 0.0001) and higher carcass yield (88.1 versus 87.3%, P = 0.012) than sequentially fed birds. This suggests that the dietary balance between protein and energy is essential for optimal feed conversion efficiency. The intake paths of macronutrients from 10-31 days in the choice and sequential feeding groups were plotted and compared with the null path that would result if broiler chickens selected equal amounts of the three diets in the combination. Regardless of feeding regimen, the intake paths of starch and protein are very close to the null path; however, the lipid and protein intake paths of choice fed birds are farther from the null path than those of sequentially fed birds.
Barling, Julian; Frone, Michael R
2017-08-01
The goal of this study was to develop and test a sequential mediational model explaining the negative relationship of passive leadership to employee well-being. Based on role stress theory, we posit that passive leadership will predict higher levels of role ambiguity, role conflict and role overload. Invoking Conservation of Resources theory, we further hypothesize that these role stressors will indirectly and negatively influence two aspects of employee well-being, namely overall mental health and overall work attitude, through psychological work fatigue. Using a probability sample of 2467 US workers, structural equation modelling supported the model by showing that role stressors and psychological work fatigue partially mediated the negative relationship between passive leadership and both aspects of employee well-being. The hypothesized, sequential indirect relationships explained 47.9% of the overall relationship between passive leadership and mental health and 26.6% of the overall relationship between passive leadership and overall work attitude. Copyright © 2016 John Wiley & Sons, Ltd.
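In such a sequential (three-path) mediation model the indirect effect is the product of the path coefficients along the chain, and the reported percentages are indirect-to-total ratios. A toy computation with invented standardized coefficients, not the paper's estimates:

    # Paths: passive leadership -> role stressor (a), role stressor ->
    # work fatigue (d), work fatigue -> mental health (b), direct effect c'.
    a, d, b, c_prime = 0.45, 0.40, -0.50, -0.10   # invented values

    indirect = a * d * b                # sequential indirect effect
    total = c_prime + indirect          # total effect of passive leadership
    print(indirect, total, indirect / total)   # share of the effect mediated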
Measurement of Two- and Three-Nucleon Short-Range Correlation Probabilities in Nuclei
NASA Astrophysics Data System (ADS)
Egiyan, K. S.; Dashyan, N. B.; Sargsian, M. M.; Strikman, M. I.; Weinstein, L. B.; Adams, G.; Ambrozewicz, P.; Anghinolfi, M.; Asavapibhop, B.; Asryan, G.; Avakian, H.; Baghdasaryan, H.; Baillie, N.; Ball, J. P.; Baltzell, N. A.; Batourine, V.; Battaglieri, M.; Bedlinskiy, I.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Biselli, A. S.; Bonner, B. E.; Bouchigny, S.; Boiarinov, S.; Bradford, R.; Branford, D.; Brooks, W. K.; Bültmann, S.; Burkert, V. D.; Bultuceanu, C.; Calarco, J. R.; Careccia, S. L.; Carman, D. S.; Carnahan, B.; Chen, S.; Cole, P. L.; Coltharp, P.; Corvisiero, P.; Crabb, D.; Crannell, H.; Cummings, J. P.; Sanctis, E. De; Devita, R.; Degtyarenko, P. V.; Denizli, H.; Dennis, L.; Dharmawardane, K. V.; Djalali, C.; Dodge, G. E.; Donnelly, J.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Elouadrhiri, L.; Empl, A.; Eugenio, P.; Fatemi, R.; Fedotov, G.; Feuerbach, R. J.; Forest, T. A.; Funsten, H.; Gavalian, G.; Gevorgyan, N. G.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hardie, J.; Hersman, F. W.; Hicks, K.; Hleiqawi, I.; Holtrop, M.; Hu, J.; Huertas, M.; Hyde-Wright, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Ito, M. M.; Jenkins, D.; Jo, H. S.; Joo, K.; Juengst, H. G.; Kellie, J. D.; Khandaker, M.; Kim, K. Y.; Kim, K.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A.; Klusman, M.; Kramer, L. H.; Kubarovsky, V.; Kuhn, J.; Kuhn, S. E.; Kuleshov, S.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Lee, T.; Livingston, K.; Maximon, L. C.; McAleer, S.; McKinnon, B.; McNabb, J. W.; Mecking, B. A.; Mestayer, M. D.; Meyer, C. A.; Mibe, T.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Mokeev, V.; Morrow, S. A.; Mueller, J.; Mutchler, G. S.; Nadel-Turonski, P.; Napolitano, J.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. B.; Niyazov, R. A.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Pasyuk, E.; Peterson, C.; Pierce, J.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Protopopescu, D.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Ronchetti, F.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Sabatié, F.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Shaw, J.; Smith, E. S.; Smith, L. C.; Sober, D. I.; Stavinsky, A.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strauch, S.; Suleiman, R.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thompson, R.; Tkabladze, A.; Tkachenko, S.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Weygand, D. P.; Williams, M.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zana, L.; Zhang, J.
2006-03-01
The ratios of inclusive electron scattering cross sections of 4He, 12C, and 56Fe to 3He have been measured at 1
NASA Astrophysics Data System (ADS)
Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik
2018-05-01
Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for the systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e., when there is very little or no measurement noise present relative to the amount of process noise. The particle filter will then struggle to estimate one of the basic components of probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or a very close approximation of it. The main component of our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC² method. Another natural link is made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
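A toy stand-in for the core difficulty: a bootstrap particle filter estimate of log p(y | θ) for the classic x²/20 observation model, evaluated while the artificial measurement noise is annealed downward. The full method embeds such estimates in an SMC sampler over parameters with an adaptive schedule; this sketch only illustrates the annealing idea.

    import numpy as np

    rng = np.random.default_rng(0)

    def pf_loglik(y, theta, sigma_e, n_part=500):
        # x[t+1] = theta * x[t] + v,  y[t] = x[t]^2 / 20 + e,  e ~ N(0, sigma_e^2)
        x = rng.normal(0.0, 1.0, n_part)
        ll = 0.0
        for yt in y:
            x = theta * x + rng.normal(0.0, 1.0, n_part)             # propagate
            w = np.exp(-0.5 * ((yt - x**2 / 20.0) / sigma_e) ** 2)   # weights
            w /= sigma_e * np.sqrt(2.0 * np.pi)
            ll += np.log(w.mean() + 1e-300)
            w += 1e-300
            x = x[rng.choice(n_part, n_part, p=w / w.sum())]         # resample
        return ll

    theta_true, xt, y = 0.7, 0.0, []        # simulate nearly noise-free data
    for _ in range(100):
        xt = theta_true * xt + rng.normal()
        y.append(xt**2 / 20.0 + rng.normal(0.0, 0.01))

    for sigma_e in [1.0, 0.3, 0.1, 0.03]:   # anneal toward the original problem
        print(sigma_e, pf_loglik(np.array(y), theta_true, sigma_e))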
Biochemical transport modeling, estimation, and detection in realistic environments
NASA Astrophysics Data System (ADS)
Ortner, Mathias; Nehorai, Arye
2006-05-01
Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity and location). We compute a bound on the expected delay before false detection in order to set the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
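For the known-parameter case, a standard sequential change detector of the kind being extended here is Page's CUSUM on the log-likelihood ratio; the threshold h trades off detection delay against the false-alarm rate. A sketch with invented Gaussian sensor readings (the paper's detector additionally handles the unknown release time, intensity and location):

    import numpy as np

    def cusum(obs, mu0, mu1, sigma, h):
        # Accumulate the release-vs-background log-likelihood ratio,
        # clipped at zero; alarm when the statistic crosses h.
        s, alarms = 0.0, []
        for t, y in enumerate(obs):
            llr = ((y - mu0)**2 - (y - mu1)**2) / (2.0 * sigma**2)
            s = max(0.0, s + llr)
            if s > h:
                alarms.append(t)
                s = 0.0
        return alarms

    rng = np.random.default_rng(2)
    background = rng.normal(0.0, 1.0, 200)
    release = rng.normal(0.8, 1.0, 100)     # contaminant raises the sensor mean
    print(cusum(np.concatenate([background, release]), 0.0, 0.8, 1.0, h=8.0))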
Zhou, Jinsong; Brum, Gustavo; González, Adom; Launikonis, Bradley S.; Stern, Michael D.; Ríos, Eduardo
2005-01-01
To signal cell responses, Ca²⁺ is released from storage through intracellular Ca²⁺ channels. Unlike most plasmalemmal channels, these are clustered in quasi-crystalline arrays, which should endow them with unique properties. Two distinct patterns of local activation of Ca²⁺ release were revealed in images of Ca²⁺ sparks in permeabilized cells of amphibian muscle. In the presence of sulfate, an anion that enters the SR and precipitates Ca²⁺, sparks became wider than in the conventional, glutamate-based solution. Some of these were “protoplatykurtic” (had a flat top from early on), suggesting an extensive array of channels that activate simultaneously. Under these conditions the rate of production of signal mass was roughly constant during the rise time of the spark and could be as high as 5 μm³ ms⁻¹, consistent with a release current >50 pA since the beginning of the event. This pattern, called “concerted activation,” was observed also in rat muscle fibers. When sulfate was combined with a reduced cytosolic [Ca²⁺] (50 nM) these sparks coexisted (and interfered) with a sequential progression of channel opening, probably mediated by Ca²⁺-induced Ca²⁺ release (CICR). Sequential propagation, observed only in frogs, may require parajunctional channels, of RyR isoform β, which are absent in the rat. Concerted opening instead appears to be a property of RyR α in the amphibian and the homologous isoform 1 in the mammal. PMID:16186560
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
Architectures of Kepler Planet Systems with Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Morehead, Robert C.; Ford, Eric B.
2015-12-01
The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
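In its simplest (rejection) form described above, ABC is only a few lines. The sketch below infers one parameter of a toy Gaussian "forward model", a stand-in for the physically motivated planet-population simulator, using a mean-based summary statistic; the prior, tolerance, and distributions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(2.0, 1.0, 50)   # stand-in for the real catalogue

def forward_model(theta, n=50):
    """Toy simulator; the paper uses a physically motivated one."""
    return rng.normal(theta, 1.0, n)

def abc_rejection(obs, n_draws=100_000, tol=0.1):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)          # draw from the prior
        sim = forward_model(theta, len(obs))
        # accept if the simulated summary statistic is close enough
        if abs(sim.mean() - obs.mean()) < tol:
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection(observed)
print(post.mean(), post.std())   # approximate posterior mean and spread
```

Sequential ABC variants like those the abstract alludes to reuse accepted samples as the proposal for the next, tighter tolerance, which greatly reduces the number of wasted simulations.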
Infrared observations of OB star formation in NGC 6334
NASA Technical Reports Server (NTRS)
Harvey, P. M.; Gatley, I.
1982-01-01
Infrared photometry and maps from 2 to 100 microns are presented for three of the principal far infrared sources in NGC 6334. Each region is powered by two or more very young stars. The distribution of dust and ionized gas is probably strongly affected by the presence of the embedded stars; one of the sources is a blister H II region, another has a bipolar structure, and the third exhibits asymmetric temperature structure. The presence of protostellar objects throughout the region suggests that star formation has occurred nearly simultaneously in the whole molecular cloud rather than having been triggered sequentially from within.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Vanderberg, J. D.; Woodbury, N. W.
1974-01-01
A method for rapidly examining the probable applicability of weight estimating formulae to a specific aerospace vehicle design is presented. The Multivariate Analysis Retrieval and Storage System (MARS) comprises three computer programs which sequentially operate on the weight and geometry characteristics of past aerospace vehicle designs. Weight and geometric characteristics are stored in a set of data bases which are fully computerized. Additional data bases are readily added to the MARS system and/or the existing data bases may be easily expanded to include additional vehicles or vehicle characteristics.
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss data by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
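The 90/95 criterion above has a standard binomial reading: a one-sided 95% lower confidence bound on POD, computed from hit/miss data at or above a flaw size, must reach 0.90. A minimal check using the generic Clopper-Pearson bound (DOEPOD's sequential procedure is more involved than this):

```python
from scipy.stats import beta

def pod_lower_bound(hits, n, conf=0.95):
    """One-sided Clopper-Pearson lower confidence bound on POD
    from hit/miss data."""
    if hits == 0:
        return 0.0
    return beta.ppf(1.0 - conf, hits, n - hits + 1)

# Classic result: 29 hits in 29 trials just demonstrates 90/95 POD.
print(pod_lower_bound(29, 29))   # ~0.902
# With one miss, 46 trials are needed to stay above 0.90.
print(pod_lower_bound(45, 46))   # ~0.901
```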
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
Christiansen, P; Schlosser, A; Henriksen, O
1995-01-01
The fully relaxed water signal was used as an internal standard in a STEAM experiment to calculate the concentrations of the metabolites N-acetylaspartate (NAA), creatine + phosphocreatine [Cr + PCr], and choline-containing metabolites (Cho) in the frontal part of the brain in 12 patients with probable Alzheimer's disease. Eight age-matched healthy volunteers served as controls. Furthermore, T1 and T2 relaxation times of the metabolites and the signal ratios NAA/Cho, NAA/[Cr + PCr], and [Cr + PCr]/Cho at four different echo times (TE) and two different repetition times (TR) were calculated. The experiments were carried out using a Siemens Helicon SP 63/84 whole-body MR scanner at 1.5 T. The concentration of NAA was significantly lower in the patients with probable Alzheimer's disease than in the healthy volunteers. No significant difference was found for any other metabolite concentration. For the signal ratios, the only statistically significant difference was that the NAA/Cho ratio at TE = 92 ms and TR = 1.6 s was lower in the patients with probable Alzheimer's disease compared with the control group. A trend towards a longer T2 relaxation time for NAA in the patients with probable Alzheimer's disease than among the healthy volunteers was found, but no significant difference was found concerning the T1 and T2 relaxation times.
Li, Xia; Kearney, Patricia M; Keane, Eimear; Harrington, Janas M; Fitzgerald, Anthony P
2017-06-01
The aim of this study was to explore levels and sociodemographic correlates of physical activity (PA) over 1 week using accelerometer data. Accelerometer data were collected over 1 week from 1075 8-11-year-old children in the cross-sectional Cork Children's Lifestyle Study. Threshold values were used to categorise activity intensity as sedentary, light, moderate or vigorous. Questionnaires collected data on demographic factors. Smoothed curves were used to display minute-by-minute variations. Binomial regression was used to identify factors correlated with the probability of meeting the WHO 60-min moderate-to-vigorous PA guidelines. Overall, 830 children (mean (SD) age: 9.9 (0.7) years, 56.3% boys) were included. From the binomial multiple regression analysis, boys were found more likely to meet guidelines (probability ratio 1.17, 95% CI 1.06 to 1.28) than girls. Older children were less likely to meet guidelines than younger children (probability ratio 0.91, CI 0.87 to 0.95). Normal-weight children were more likely than overweight and obese children to meet guidelines (probability ratio 1.25, CI 1.16 to 1.34). Children in urban areas were more likely to meet guidelines than those in rural areas (probability ratio 1.19, CI 1.07 to 1.33). Days with longer daylight were associated with a greater probability of meeting guidelines compared with days with shorter daylight. PA levels differed by individual factors including age, gender and weight status as well as by environmental factors including residence and daylight length. Fewer than one-quarter of children (26.8% of boys, 16.2% of girls) met guidelines. Effective intervention policies are urgently needed to increase PA.
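Probability ratios like those quoted above come from a binomial regression with a log (rather than logit) link, so exponentiated coefficients are ratios of probabilities, not odds. A sketch on synthetic data (the data-generating process and all coefficient values are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 830
boy = rng.integers(0, 2, n)
age = rng.uniform(8, 11, n)
# hypothetical true model on the log-probability scale
p = np.exp(-1.7 + 0.16 * boy - 0.09 * (age - 9.9))
meets = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([boy, age]))
# Binomial GLM with a log link: exp(coefficients) are probability ratios.
model = sm.GLM(meets, X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
res = model.fit()
print(np.exp(res.params))   # PR for boys and per extra year of age
```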
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cella, Laura; Department of Diagnostic Imaging and Radiation Oncology, Federico II University School of Medicine, Naples; Conson, Manuel
Purpose: Hypothyroidism (HT) is a frequent late side effect of Hodgkin's lymphoma (HL) therapy. The purpose of this study is to determine dose-volume constraints that correlate with functional impairment of the thyroid gland in HL patients treated with three-dimensional radiotherapy. Methods and Materials: A total of 61 consecutive patients undergoing antiblastic chemotherapy and involved-field radiation treatment (median dose, 32 Gy; range, 30-36 Gy) for HL were retrospectively considered. Their median age was 28 years (range, 14-70 years). Blood levels of thyroid-stimulating hormone (TSH), free triiodo-thyronine (FT3), free thyroxine (FT4), and thyroglobulin antibody (ATG) were recorded basally and at different times after the end of therapy. For the thyroid gland, normal tissue complication probability (NTCP), dosimetric parameters, and the percentage of thyroid volume exceeding 10, 20, and 30 Gy (V10, V20, and V30) were calculated in all patients. To evaluate clinical and dosimetric factors possibly associated with HT, univariate and multivariate logistic regression analyses were performed. Results: Eight of 61 (13.1%) patients had HT before treatment and were excluded from further evaluation. At a median follow-up of 32 months (range, 6-99 months), 41.5% (22/53) of patients developed HT after treatment. Univariate analyses showed that all dosimetric factors were associated with HT (p < 0.05). On multivariate analysis, the thyroid V30 value was the single independent predictor associated with HT (p = 0.001). This parameter divided the patients into low- vs. high-risk groups: if V30 was ≤62.5%, the risk of developing HT was 11.5%, and if V30 was >62.5%, the risk was 70.8% (p < 0.0001). A Cox regression curve stratified by two levels of V30 value was created (odds ratio, 12.6). Conclusions: The thyroid V30 predicts the risk of developing HT after sequential chemo-radiotherapy and defines a useful constraint to consider for more accurate HL treatment planning.
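The dose-volume metric used above is simple to compute from a dose grid: V30 is the fraction of thyroid voxels receiving at least 30 Gy. A sketch with made-up voxel doses, applying the study's 62.5% cutoff and quoted risk levels (equal-volume voxels are assumed):

```python
import numpy as np

def v_dose(voxel_doses_gy, threshold_gy):
    """Percentage of organ volume receiving >= threshold_gy,
    assuming equal-volume voxels."""
    d = np.asarray(voxel_doses_gy, float)
    return 100.0 * np.mean(d >= threshold_gy)

# hypothetical thyroid voxel doses for one patient
doses = np.random.default_rng(0).uniform(0, 36, 5000)
v30 = v_dose(doses, 30.0)
risk = "high (70.8%)" if v30 > 62.5 else "low (11.5%)"
print(f"V30 = {v30:.1f}% -> {risk} observed HT risk group")
```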
Combat Identification with Sequential Observations, Rejection Option, and Out-of-Library Targets
2005-09-01
nature of the entities sharing the battlespace is unknown. Here CID characterizes those entities using information from a variety of sources. The goal...producing high-resolution returns with significantly enhanced target-to-clutter (and noise) ratios through Doppler filtering and clutter...treat the subject from a natural science perspective. The following subsections on the various model selection techniques are derived from these
Solid State Television Camera (CID)
NASA Technical Reports Server (NTRS)
Steele, D. W.; Green, W. T.
1976-01-01
The design, development and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, geometric distortion, sequential scanning and AGC.
NASA Astrophysics Data System (ADS)
Ender, I. A.; Bakaleinikov, L. A.; Flegontova, E. Yu.; Gerasimenko, A. B.
2017-08-01
We have proposed an algorithm for the sequential construction of nonisotropic matrix elements of the collision integral, which are required to solve the nonlinear Boltzmann equation using the moments method. The starting elements of the matrix are isotropic and assumed to be known. The algorithm can be used for an arbitrary law of interactions for any ratio of the masses of colliding particles.
Application of truss analysis for the quantification of changes in fish condition
Fitzgerald, Dean G.; Nanson, Jeffrey W.; Todd, Thomas N.; Davis, Bruce M.
2002-01-01
Conservation of skeletal structure and unique body ratios in fishes facilitated the development of truss analysis as a taxonomic tool to separate physically-similar species. The methodology is predicated on the measurement of across-body distances from a sequential series of connected polygons. Changes in body shape or condition among members of the same species can be quantified with the same technique, and we conducted a feeding experiment using yellow perch (Perca flavescens) to examine the utility of this approach. Ration size was used as a surrogate for fish condition, with fish receiving either a high (3.0% body wt/d) or a low ration (0.5%). Sequentially over our 11-week experiment, replicate ration groups of fish were removed and photographed while control fish were repeatedly weighed and measured. Standard indices of condition (total lipids, weight-length ratios, Fulton's condition) were compared to truss measurements determined from digitized pictures of fish. Condition indices showed similarity between rations while truss measures from the caudal region were important for quantifying changing body shape. These findings identify truss analysis as having use beyond traditional applications. It can potentially be used as a cheap, accurate, and precise descriptor of fish condition in the lab as shown here, and we hypothesize that it would be applicable in field studies.
Stochastic mechanics of loose boundary particle transport in turbulent flow
NASA Astrophysics Data System (ADS)
Dey, Subhasish; Ali, Sk Zeeshan
2017-05-01
In a turbulent wall shear flow, we explore, for the first time, the stochastic mechanics of loose boundary particle transport, having variable particle protrusions due to various cohesionless particle packing densities. The mean transport probabilities in contact and detachment modes are obtained. The mean transport probabilities in these modes as a function of Shields number (nondimensional fluid induced shear stress at the boundary) for different relative particle sizes (ratio of boundary roughness height to target particle diameter) and shear Reynolds numbers (ratio of fluid inertia to viscous damping) are presented. The transport probability in contact mode increases with an increase in Shields number attaining a peak and then decreases, while that in detachment mode increases monotonically. For the hydraulically transitional and rough flow regimes, the transport probability curves in contact mode for a given relative particle size of greater than or equal to unity attain their peaks corresponding to the averaged critical Shields numbers, from where the transport probability curves in detachment mode initiate. At an inception of particle transport, the mean probabilities in both the modes increase feebly with an increase in shear Reynolds number. Further, for a given particle size, the mean probability in contact mode increases with a decrease in critical Shields number attaining a critical value and then increases. However, the mean probability in detachment mode increases with a decrease in critical Shields number.
Role of conviction in nonequilibrium models of opinion formation
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Anteneodo, Celia
2012-12-01
We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).
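A mean-field simulation of this kind of model fits in a few lines. The sketch below assumes a kinetic-exchange-style update o_i <- clip(lambda_i * o_i + mu * o_j) rounded to {-1, 0, +1}, with the coupling mu = -1 with probability p (annealed) and a binary quenched conviction lambda_i; the paper's exact update rule and parameter values may differ.

```python
import numpy as np

def order_parameter(p, n=1000, sweeps=300, lam=(1.0, 0.5), q=0.5, seed=0):
    """|<o>| after sequential updates of a mean-field population of n
    agents with opinions in {-1, 0, +1}."""
    rng = np.random.default_rng(seed)
    o = rng.choice([-1, 0, 1], size=n)
    conv = rng.choice(lam, size=n, p=[q, 1 - q])   # quenched convictions
    for _ in range(sweeps * n):
        i = rng.integers(n)
        j = (i + 1 + rng.integers(n - 1)) % n      # any other agent
        mu = -1.0 if rng.random() < p else 1.0     # annealed coupling disorder
        o[i] = int(np.clip(round(conv[i] * o[i] + mu * o[j]), -1, 1))
    return abs(o.mean())   # prevailing-opinion order parameter

for p in (0.1, 0.2, 0.3, 0.4):   # scan across the expected transition
    print(p, round(order_parameter(p), 3))
```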
Modeling haul-out behavior of walruses in Bering Sea ice
Udevitz, M.S.; Jay, C.V.; Fischbach, Anthony S.; Garlich-Miller, J. L.
2009-01-01
Understanding haul-out behavior of ice-associated pinnipeds is essential for designing and interpreting population surveys and for assessing effects of potential changes in their ice environments. We used satellite-linked transmitters to obtain sequential information about location and haul-out state for Pacific walruses, Odobenus rosmarus divergens (Illiger, 1815), in the Bering Sea during April of 2004, 2005, and 2006. We used these data in a generalized mixed model of haul-out bout durations and a hierarchical Bayesian model of haul-out probabilities to assess factors related to walrus haul-out behavior, and provide the first predictive model of walrus haul-out behavior in sea ice habitat. Average haul-out bout duration was 9 h, but durations of haul-out bouts tended to increase with durations of preceding in-water bouts. On average, tagged walruses spent only about 17% of their time hauled out on sea ice. Probability of being hauled out decreased with wind speed, increased with temperature, and followed a diurnal cycle with the highest values in the evening. Our haul-out probability model can be used to estimate the proportion of the population that is unavailable for detection in spring surveys of Pacific walruses on sea ice.
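The covariate effects reported above can be captured by a logistic model with a diurnal harmonic. The sketch below uses invented coefficients purely to show the structure (probability decreasing with wind, increasing with temperature, peaking in the evening); it is not the fitted hierarchical Bayesian model.

```python
import numpy as np

def haulout_probability(wind_ms, temp_c, hour, b=(-1.6, -0.08, 0.05, 0.6)):
    """Illustrative logistic haul-out model. Coefficients b are made-up
    placeholders, not the paper's posterior estimates."""
    b0, b_wind, b_temp, b_diurnal = b
    # diurnal (24 h) term assumed to peak near 20:00
    eta = (b0 + b_wind * wind_ms + b_temp * temp_c
           + b_diurnal * np.cos(2 * np.pi * (hour - 20) / 24))
    return 1.0 / (1.0 + np.exp(-eta))

print(haulout_probability(wind_ms=3.0, temp_c=-5.0, hour=20))   # calm evening
print(haulout_probability(wind_ms=12.0, temp_c=-15.0, hour=8))  # windy morning
```

In a survey application, the reciprocal of this probability acts as an availability correction for animals in the water at overflight time.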
Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R.; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo
2015-01-01
Background We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. Patients and methods This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycle of chemotherapy (sequential schedule B). The primary end point was the pathological complete tumor regression (TRG1) rate. Results Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 out of 16 patients) was statistically inconsistent with the hypothesized activity (30%) being tested. Conversely, the endpoint was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥3 toxicity with both schedules, but it was less pronounced with the sequential than the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up the probability of PFS and OS was 80% (95% CI 66%-89%) and 85% (95% CI 69%-93%), respectively, for the sequential schedule. Conclusions These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC. PMID:26320185
The Evolution of Gene Regulatory Networks that Define Arthropod Body Plans.
Auman, Tzach; Chipman, Ariel D
2017-09-01
Our understanding of the genetics of arthropod body plan development originally stems from work on Drosophila melanogaster from the late 1970s and onward. In Drosophila, there is a relatively detailed model for the network of gene interactions that proceeds in a sequential-hierarchical fashion to define the main features of the body plan. Over the years, we have gained a growing understanding of the networks involved in defining the body plan in an increasing number of arthropod species. It is now becoming possible to tease out the conserved aspects of these networks and to try to reconstruct their evolution. In this contribution, we focus on several key nodes of these networks, starting from early patterning, in which the main axes are determined and the broad morphological domains of the embryo are defined, and on to later stages, wherein the growth zone network is active in the sequential addition of posterior segments. The pattern of conservation of networks is very patchy, with some key aspects being highly conserved in all arthropods and others being very labile. Many aspects of early axis patterning are highly conserved, as are some aspects of sequential segment generation. In contrast, regional patterning varies among different taxa, and some networks, such as the terminal patterning network, are only found in a limited range of taxa. The growth zone segmentation network is ancient and is probably plesiomorphic to all arthropods. In some insects, it has undergone significant modification to give rise to a more hardwired network that generates individual segments separately. In other insects and in most arthropods, the sequential segmentation network has undergone a significant amount of systems drift, wherein many of the genes have changed. However, it maintains a conserved underlying logic and function.
Speciation and transformation of heavy metals during vermicomposting of animal manure.
Lv, Baoyi; Xing, Meiyan; Yang, Jian
2016-06-01
This work was conducted to evaluate the effects of vermicomposting on the speciation and mobility of heavy metals (Zn, Pb, Cr, and Cu) in cattle dung (CD) and pig manure (PM) using the Tessier sequential extraction method. Results showed that the pH, total organic carbon, and C/N ratio were reduced, while the electrical conductivity and humic acid increased after 90 days of vermicomposting. Moreover, the addition of earthworms accelerated organic stabilization in vermicomposting. The total heavy metals in the final vermicompost from CD and PM were higher than the initial values and than in the control without worms. Sequential extraction indicated that vermicomposting decreased the migration and availability of heavy metals, and that the earthworms reduced the mobile fraction while increasing the stable fraction of heavy metals. Furthermore, these results indicated that vermicomposting plays a positive role in stabilizing heavy metals in the treatment of animal manure.
Design and evaluation of a hybrid storage system in HEP environment
NASA Astrophysics Data System (ADS)
Xu, Qi; Cheng, Yaodong; Chen, Gang
2017-10-01
Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in mass storage systems, which need to balance cost, performance and manageability. In this paper, a hybrid storage system including SSDs (solid-state drives) and HDDs (hard disk drives) is designed to accelerate data analysis while maintaining a low cost. The performance of file access is a decisive factor for the HEP computing system. A new deployment model of a hybrid storage system in High Energy Physics is proposed and shown to have higher I/O performance. The detailed evaluation methods and the evaluations of the SSD/HDD ratio and the logic-block size are also given. In all evaluations, sequential read, sequential write, random read and random write are tested to obtain comprehensive results. The results show that the hybrid storage system performs well in areas such as accessing big files in HEP.
Advanced Turbo-Charging Research and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2008-02-27
The objective of this project is to conduct analysis, design, procurement and test of a high pressure ratio, wide flow range, and high EGR system with two stages of turbocharging. The system needs to meet the stringent 2010MY emissions regulations at 20%+ better fuel economy than its nearest gasoline competitor while allowing equivalent vehicle launch characteristics and higher torque capability than its nearest gasoline competitor. The system will also need to meet light truck/SUV life requirements, which will require validation or development of components traditionally used only in passenger car applications. The conceived system is termed a 'series-sequential turbocharger' because the turbocharger system operates in series at appropriate times and also sequentially when required. This is accomplished using intelligent design and control of flow passages and valves. Components of the series-sequential system will also be applicable to parallel-sequential systems, which are also expected to be in use for future light truck/SUV applications.
Should we use closed or open infusion containers for prevention of bloodstream infections?
Rangel-Frausto, Manuel S; Higuera-Ramirez, Francisco; Martinez-Soto, Jose; Rosenthal, Victor D
2010-02-02
Hospitalized patients in critical care settings are at risk for bloodstream infections (BSI). Most BSIs originate from a central line (CL), and they increase length of stay, cost, and mortality. Open infusion containers may increase the risk of contamination and administration-related bloodstream infections because they allow the entry of air into the system, thereby also providing an opportunity for microbial entry. Closed infusion containers were designed to overcome this flaw. However, open infusion containers are still widely used throughout the world. The objective of the study was to determine the effect of switching from open (glass, burettes, and semi-rigid) infusion containers to closed, fully collapsible, plastic infusion containers (Viaflex) on the rate and time to onset of central line-associated bloodstream infections (CLABs). An open-label, prospective cohort, active healthcare-associated infection surveillance, sequential study was conducted in four ICUs in Mexico. Centers for Disease Control National Nosocomial Infections Surveillance Systems definitions were used to define device-associated infections. A total of 1,096 adult patients who had a central line in place for >24 hours were enrolled. The CLAB rate was significantly higher during the open versus the closed container period (16.1 versus 3.2 CLABs/1000 central line-days; RR = 0.20, 95% CI = 0.11-0.36, P < 0.0001). The probability of developing CLAB remained relatively constant in the closed container period (1.4% Days 2-4 to 0.5% Days 8-10), but increased in the open container period (4.9% Days 2-4 to 5.4% Days 8-10). The chance of acquiring a CLAB was significantly decreased (81%) in the closed container period (Cox proportional hazard ratio 0.19, P < 0.0001). Mortality was statistically significantly lower during the closed versus the open container period (16.1% versus 23.4%; RR = 0.69, 95% CI = 0.54-0.88, P < 0.01). Closed infusion containers significantly reduced the CLAB rate, the probability of acquiring CLAB, and mortality.
Busse, Sebastian; Schwarting, Rainer K. W.
2016-01-01
The present study is part of a series of experiments in which we analyze why and how damage to the rat's dorsal hippocampus (dHC) can enhance performance in a sequential reaction time task (SRTT). In this task, sequences of distinct visual stimulus presentations are food-rewarded on a fixed-ratio-13 schedule. Our previous study (Busse and Schwarting, 2016) had shown that rats with lesions of the dHC show substantially shorter session times and post-reinforcement pauses (PRPs) than controls, which allows for more practice when daily training is kept constant. Since sequential behavior is based on instrumental performance, a sequential benefit might be secondary to that. In order to test this hypothesis, in the present study we performed two experiments where pseudorandom rather than sequential stimulus presentation was used in rats with excitotoxic dorsal hippocampal lesions. Again, we found enhanced performance in the lesion group in terms of shorter session times and PRPs. During the sessions we found that the lesion group spent less time on non-instrumental behavior (i.e., grooming, sniffing, and rearing) after prolonged instrumental training. Also, such rats showed moderate evidence of an extinction impairment under devalued food reward conditions and significant deficits in a response-outcome (R-O) discrimination task in comparison to a control group. These findings suggest that facilitatory effects on instrumental performance after dorsal hippocampal lesions may primarily be a result of complex behavioral changes, i.e., reductions of behavioral flexibility and/or alterations in motivation, which then result in enhanced instrumental learning. PMID:27375453
Efficacy of sequential three-step empirical therapy for chronic cough.
Yu, Li; Xu, Xianghuai; Hang, Jingqing; Cheng, Kewen; Jin, Xiaoyan; Chen, Qiang; Lv, Hanjing; Qiu, Zhongmin
2017-06-01
Empirical three-step therapy had previously been proven in only one hospital. This study aimed to demonstrate the applicability of sequential empirical three-step therapy for chronic cough in different clinical settings. Sequential empirical three-step therapy was given to patients with chronic cough in one tertiary and three secondary care respiratory clinics. Recruited patients were initially treated with methoxyphenamine compound as the first-step therapy, followed by corticosteroids as the second-step therapy and the combination of a proton-pump inhibitor and a prokinetic agent as the third-step therapy. The efficacy of the therapy was verified according to the changes in cough symptom score between pre- and post-treatment, and compared among the different clinics. In total, 155 patients in one tertiary clinic and 193 patients in secondary care clinics were recruited. The total dropout rate was significantly higher in the secondary care clinics than in the tertiary clinic (9.3% versus 3.2%, p = 0.023). The therapeutic success rate for cough was 38.7% at first-step therapy, 32.3% at second-step therapy and 20.0% at third-step therapy in the tertiary clinic, comparable to the corresponding 49.7%, 31.1% and 4.1% in the secondary care clinics. Furthermore, the overall cough resolution rate was not significantly different (91.0% versus 85.0%, p = 0.091). However, the efficacy of the third-step therapy was much higher (20.0% versus 4.1%, p = 0.001) in the tertiary clinic than in the secondary care clinics. Sequential empirical three-step therapy is universally efficacious and useful for the management of chronic cough in different clinical settings.
Kumar, Piyush; Bhattacharjee, Tanmoy; Ingle, Arvind; Maru, Girish; Krishna, C Murali
2016-10-01
Oral cancers suffer from poor 5-year survival rates, owing to late detection of the disease. Current diagnostic/screening tools need to be upgraded in view of disadvantages like invasiveness, tedious sample preparation, long output times, and interobserver variance. Raman spectroscopy has been shown to distinguish many disease conditions, including oral cancers, from healthy conditions. Further studies exploring sequential changes in oral carcinogenesis are warranted. In this Raman spectroscopy study, sequential progression in experimental oral carcinogenesis in the hamster buccal pouch model was investigated using three approaches: ex vivo, in vivo sequential, and in vivo follow-up. In all these studies, spectral changes show lipid dominance in early stages, while later stages and tumors showed an increased protein-to-lipid ratio and nucleic acids. On similar lines, early weeks of the 7,12-dimethylbenz(a)anthracene-treated and control groups showed higher overlap and low classification. The classification efficiency increased progressively, reached a plateau phase, and subsequently increased up to 100% by 14 weeks. The misclassifications between treated and control spectra suggested some changes in controls as well, which was confirmed by a careful reexamination of histopathological slides. These findings suggest Raman spectroscopy may be able to identify microheterogeneity, which may often go unnoticed in conventional biochemistry, wherein tissue extracts are employed, as well as in histopathology. In vivo findings, quite comparable to gold-standard-supported ex vivo findings, give further proof of Raman spectroscopy being a promising label-free, noninvasive diagnostic adjunct for future clinical applications.
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
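The special-case relationship, RMNE as the exceedance probability P(LR > 0) for a random non-donor in a no-dropout binary model, can be checked numerically at a single toy locus (the allele frequencies and mixture below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = {"a": 0.1, "b": 0.3, "c": 0.4, "d": 0.2}   # toy allele frequencies
mixture = {"a", "b", "c"}                           # alleles seen in the stain

# RMNE at one locus, no dropout: both alleles of a random person
# must lie inside the mixture.
p_inc = sum(freqs[x] for x in mixture)
rmne = p_inc ** 2
print("analytic RMNE:", rmne)                       # 0.64 here

# Monte Carlo check: in a binary model without dropout, LR > 0 exactly
# when the random man is not excluded, so P(LR > 0) should match RMNE.
alleles, probs = zip(*freqs.items())
men = rng.choice(alleles, size=(200_000, 2), p=probs)
included = np.all(np.isin(men, list(mixture)), axis=1)
print("simulated P(not excluded):", included.mean())
```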
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in a given number of draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of sample sizes. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
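The quantity being estimated is easy to pin down when the two distributions are known: the probability that one draw from p goes unseen in n draws from q is the sum over i of p_i (1 - q_i)^n. A Monte Carlo check of that formula follows; note the paper's contribution is an unbiased estimator built from samples alone, whereas this brute-force sketch assumes the true distributions are known.

```python
import numpy as np

rng = np.random.default_rng(0)

def dissimilarity(p, q, n, trials=100_000):
    """Monte Carlo estimate of the probability that one draw from p
    is not observed among n draws from q (both on {0, ..., K-1})."""
    k = len(p)
    x = rng.choice(k, size=trials, p=p)           # one draw from p per trial
    ys = rng.choice(k, size=(trials, n), p=q)     # n draws from q per trial
    return np.mean(~(ys == x[:, None]).any(axis=1))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.2, 0.6])
n = 5
print(sum(pi * (1 - qi) ** n for pi, qi in zip(p, q)))   # analytic value
print(dissimilarity(p, q, n))                            # simulated value
```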
A Search Model for Imperfectly Detected Targets
NASA Technical Reports Server (NTRS)
Ahumada, Albert
2012-01-01
Under the assumptions that 1) the search region can be divided up into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme-case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and the kurtosis of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
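The two-part structure of the model, a geometric number of completely failed sweeps plus a uniform position within the final sweep, makes its moments easy to verify by simulation (the N and P values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def searches(N, P, size=1_000_000):
    """Number of sub-region inspections until detection: N inspections
    per completely failed sweep (geometric count, success prob P),
    plus a uniform position 1..N within the successful sweep."""
    failed_sweeps = rng.geometric(P, size) - 1   # failures before first success
    within = rng.integers(1, N + 1, size)
    return N * failed_sweeps + within

s = searches(N=100, P=0.8)
mean, sd = s.mean(), s.std()
std_third = ((s - mean) ** 3).mean() / sd**3     # standardized third moment
print(mean, sd, std_third)
# analytic mean: N*(1-P)/P + (N+1)/2 = 25 + 50.5 = 75.5 for these values
print(100 * 0.2 / 0.8 + 101 / 2)
```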
Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.
Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C
2017-04-01
The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of squares attacked. A 5% security level was adopted for the elaboration of the sequential sampling plan. The type I and type II error rates were both set at 0.05, as recommended for studies with insects. The fitting of frequency distributions was divided into two phases: the model that best fit the data was the negative binomial distribution up to 85 DAE (Phase I), and from then on the best fit was the Poisson distribution (Phase II). The equations that define the decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II they are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicated that the maximum number of sample units expected for decision-making is ∼39 and 31 samples for Phases I and II, respectively.
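The published decision lines translate directly into a field rule: after N sampled plants, compare the cumulative count of infested squares against S0 and S1. A sketch using the Phase I and Phase II coefficients quoted above (the example counts are invented):

```python
def boll_weevil_decision(n_samples, n_infested, phase=1):
    """Sequential decision using the published lines: continue sampling
    while the cumulative infested count lies between S0 and S1."""
    a, b = (5.1743, 0.5730) if phase == 1 else (4.2479, 0.5771)
    s0 = -a + b * n_samples   # at or below: no treatment needed
    s1 = a + b * n_samples    # at or above: treat
    if n_infested <= s0:
        return "no treatment needed"
    if n_infested >= s1:
        return "treat (threshold exceeded)"
    return "keep sampling"

print(boll_weevil_decision(20, 4))    # well below S0 -> stop, no treatment
print(boll_weevil_decision(20, 18))   # above S1 -> treat
print(boll_weevil_decision(39, 22))   # between the lines -> keep sampling
```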
Nakashima, Kei; Aoshima, Masahiro; Ohfuji, Satoko; Yamawaki, Satoshi; Nemoto, Masahiro; Hasegawa, Shinya; Noma, Satoshi; Misawa, Masafumi; Hosokawa, Naoto; Yaegashi, Makito; Otsuka, Yoshihito
2018-03-21
It is unclear whether simultaneous administration of a 23-valent pneumococcal polysaccharide vaccine (PPSV23) and a quadrivalent influenza vaccine (QIV) produces adequate immunogenicity in older individuals. This study tested the hypothesis that the pneumococcal antibody response elicited by simultaneous administration of PPSV23 and QIV in older individuals is not inferior to that elicited by sequential administration of PPSV23 and QIV. We performed a single-center, randomized, open-label, non-inferiority trial comprising 162 adults aged ≥65 years randomly assigned to either the simultaneous group (simultaneous injections of PPSV23 and QIV) or the sequential group (control; PPSV23 injected 2 weeks after QIV vaccination). Pneumococcal immunoglobulin G (IgG) titers of serotypes 23F, 3, 4, 6B, 14, and 19A were assessed. The primary endpoint was the serotype 23F response rate (a ≥2-fold increase in IgG concentration 4-6 weeks after PPSV23 vaccination). With the non-inferiority margin set at 20 percentage points, the response rate for serotype 23F in the simultaneous group (77.8%) was not inferior to that of the sequential group (77.6%; difference, 0.1%; 90% confidence interval, -10.8% to 11.1%). None of the pneumococcal IgG serotype titers differed significantly between the groups 4-6 weeks after vaccination. Simultaneous administration did not show a significant decrease in seroprotection odds ratios for the H1N1, H3N2, or B/Phuket influenza strains, other than for B/Texas. Additionally, simultaneous administration did not increase adverse reactions. Hence, simultaneous administration of PPSV23 and QIV shows acceptable immunogenicity comparable to sequential administration, without an increase in adverse reactions. (This study was registered with ClinicalTrials.gov [NCT02592486].)
Diederich, Adele
2008-02-01
Recently, Diederich and Busemeyer (2006) evaluated three hypotheses formulated as particular versions of a sequential-sampling model to account for the effects of payoffs in a perceptual decision task with time constraints. The bound-change hypothesis states that payoffs affect the distance of the starting position of the decision process to each decision bound. The drift-rate-change hypothesis states that payoffs affect the drift rate of the decision process. The two-stage-processing hypothesis assumes two processes, one for processing payoffs and another for processing stimulus information, and that on a given trial, attention switches from one process to the other. The latter hypothesis gave the best account of their data. The present study investigated two questions. (1) Does the experimental setting influence decisions, and consequently affect the fits of the hypotheses? To answer this question, a task was conducted in two experimental settings: either the time limit or the payoff matrix was held constant within a given block of trials, using three different payoff matrices and four different time limits. (2) Could it be that participants neglect payoffs on some trials and stimulus information on others? To investigate this idea, a further hypothesis was considered, the mixture-of-processes hypothesis. Like the two-stage-processing hypothesis, it postulates two processes, one for payoffs and another for stimulus information. However, it differs from the previous hypothesis in assuming that on a given trial exactly one of the processes operates, never both. The present design had no effect on choice probability but may have affected choice response times (RTs). Overall, the two-stage-processing hypothesis gave the best account, with respect both to choice probabilities and to observed mean RTs and mean RT patterns within a choice pair.
Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
NASA Astrophysics Data System (ADS)
Gonderman, S.; Tripathi, J. K.; Sizyuk, T.; Hassanein, A.
2017-08-01
Tungsten (W) has been selected as the divertor material in ITER based on its promising thermal and mechanical properties. Despite these advantages, continued investigation has revealed W to undergo extreme surface morphology evolution in response to relevant fusion operating conditions. These complications spur the need for further exploration of W and other innovative plasma facing components (PFCs) for future fusion devices. Recent literature has shown that alloying of W with other refractory metals, such as tantalum (Ta), results in the enhancement of key PFC properties including, but not limited to, ductility, hydrogen isotope retention, and helium ion (He+) radiation tolerance. In the present study, pure W and W-Ta alloys are exposed to simultaneous and sequential low energy He+ and deuterium (D+) ion beam irradiations at high (1223 K) and low (523 K) temperatures. The goal of this study is to cultivate a complete understanding of the synergistic effects induced by dual and sequential ion irradiation on W and W-Ta alloy surface morphology evolution. For the dual ion beam experiments, W and W-Ta samples were subjected to four different He+:D+ ion ratios (100% He+, 60% D+ + 40% He+, 90% D+ + 10% He+, and 100% D+) with a total constant fluence of 6 × 10²⁴ ions m⁻². The W and W-Ta samples both exhibit the expected damaged surfaces under the 100% He+ irradiation, but as the ratio of D+/He+ ions increases there is a clear suppression of the surface morphology at high temperatures. This observation is supported by the sequential experiments, which show a similar suppression of surface morphology when W and W-Ta samples are first exposed to low energy He+ irradiation and then exposed to subsequent low energy D+ irradiation at high temperatures. Interestingly, this morphology suppression is not observed at low temperatures, implying there is a temperature-dependent D-W interaction mechanism driving the suppression of the microstructure evolution in both the pure W and the W-Ta alloys. Minor irradiation tolerance enhancement in the performance of the W-Ta samples is also observed.
Statistical Inference in Graphical Models
2008-06-17
fuse probability theory and graph theory in such a way as to permit efficient representation and computation with probability distributions. They...message passing. ... In approaching real-world problems, we often need to deal with uncertainty. Probability and statistics provide a...dynamic programming methods. However, for many sensors of interest, the signal-to-noise ratio does not allow such a treatment. Another source of
The Use of an Ultra-Compact Combustor as an Inter-Turbine Burner for Improved Engine Performance
2014-03-27
...the power generation, Vogeler proposed the Sequential Combustion Cycle (SCC) for use in aircraft engines [13]. For a conventional turbofan with a single combustor, thrust is a function of bypass ratio and maximum pressure and temperature in the cycle. Considering a twin spool turbofan engine as
Tracking Object Existence From an Autonomous Patrol Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael; Scharenbroich, Lucas
2011-01-01
An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
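A minimal sketch of the two probabilistic ingredients described above, the Bayesian existence update and the SPRT-style status logic, follows. The detection and false-alarm parameters are invented, and the thresholding shown here is a simple Wald-style rule on the posterior odds rather than the article's full derivation (which also handles a detection probability that varies as the object enters and leaves sensor range).

```python
def update_existence(p_exist, detected, p_d, p_fa):
    """Bayesian update of the probability that a hypothesized object
    exists. p_d: detection probability given existence; p_fa:
    probability of a spurious detection given non-existence."""
    if detected:
        num = p_d * p_exist
        den = num + p_fa * (1.0 - p_exist)
    else:
        num = (1.0 - p_d) * p_exist
        den = num + (1.0 - p_fa) * (1.0 - p_exist)
    return num / den

def sprt_status(p_exist, alpha=0.01, beta=0.01):
    """Confirm when posterior odds exceed Wald's A, delete when they
    fall below B, otherwise keep the object as 'suspected'."""
    odds = p_exist / (1.0 - p_exist)
    A = (1.0 - beta) / alpha
    B = beta / (1.0 - alpha)
    if odds >= A:
        return "confirmed"
    if odds <= B:
        return "deleted"
    return "suspected"

p = 0.5   # prior for a brand-new track
for det in [True, True, False, True, True, True]:
    p = update_existence(p, det, p_d=0.7, p_fa=0.1)
    print(round(p, 3), sprt_status(p))
```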
1981-06-01
for a detection probability of P_D and an associated false alarm probability P_FA (in dB). ... V. REFERENCE MODEL. A. INTRODUCTION. In order to ... the region of the observation space for which to choose H1. The false alarm probability is obtained by integrating the observation density under H0 over the region assigned to H1, reducing to a Q-function expression; similarly, the miss probability = 1 - detection probability is obtained by integrating ... The input signal-to-noise ratio ... The probability of false alarm reduces to P_FA = Q[...].
Lee, Hyeok-Won; Lee, Hee-Suk; Kim, Chun-Suk; Lee, Jin-Gyeom; Kim, Won-Kyo; Lee, Eun-Gyo; Lee, Hong-Weon
2018-02-28
Controlling the residual glucose concentration is important for improving productivity in L-threonine fermentation. In this study, we developed a procedure to automatically control the feeding quantity of glucose solution as a function of the ammonia-water consumption rate. The feeding ratio (R_C/N) of glucose and ammonia water was predetermined via a stoichiometric approach, on the basis of glucose and ammonia-water consumption rates. In a 5-L fermenter, 102 g/L L-threonine was obtained using our glucose-ammonia water combined feeding strategy, which was then successfully applied in a 500-L fermenter (89 g/L). Therefore, we conclude that an automatic combination feeding strategy is suitable for improving L-threonine production.
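To make the feeding rule concrete: the controller sets the glucose feed from the measured ammonia-water consumption rate through the predetermined ratio. A minimal sketch, with a hypothetical ratio value and measured rates (none taken from the paper):

```python
# Sketch: ratio-based combined feeding, with the glucose feed rate slaved
# to the observed ammonia-water consumption rate.
R_CN = 2.5  # hypothetical predetermined glucose / ammonia-water feeding ratio (g/g)

def glucose_feed_rate(ammonia_rate_g_per_h):
    """Glucose feed rate (g/h) proportional to ammonia-water consumption."""
    return R_CN * ammonia_rate_g_per_h

for nh3_rate in [4.0, 6.5, 5.2]:  # illustrative measured uptake rates, g/h
    print(f"NH3-water rate {nh3_rate} g/h -> glucose feed {glucose_feed_rate(nh3_rate)} g/h")
```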
A protein-dependent side-chain rotamer library.
Bhuyan, Md Shariful Islam; Gao, Xin
2011-12-14
The protein side-chain packing problem has remained one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function, and a search algorithm. Rotamer libraries summarize the existing knowledge of the experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent rotamer libraries and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since in the side-chain prediction problem the backbone structure is given, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call these protein-dependent rotamer libraries. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by inference algorithms, without doing global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of side-chain prediction accuracy and rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to that of global-search-based side-chain prediction methods.
Ibrahim, Mona; Ahmed, Azza; Mohamed, Warda Yousef; El-Sayed Abu Abduo, Somaya
2015-01-01
Trauma is the leading cause of death in Americans up to 44 years old each year. Deep vein thrombosis (DVT) is a significant condition occurring in trauma, and prophylaxis is essential to the appropriate management of trauma patients. The incidence of DVT varies in trauma patients, depending on patients' risk factors, modality of prophylaxis, and methods of detection. Compression devices and arteriovenous (A-V) foot pumps are recommended for prophylaxis in trauma patients, but their efficacy and optimal use are not well documented in the literature. The aim of this study was to review the literature on the effect of compression devices in preventing DVT among adult trauma patients. We searched through PubMed, CINAHL, and the Cochrane Central Register of Controlled Trials for eligible studies published from 1990 until June 2014. Reviewers identified all randomized controlled trials that satisfied the study criteria, and the quality of included studies was assessed by the Cochrane risk of bias tool. Five randomized controlled trials were included with a total of 1072 patients. Sequential compression devices significantly reduced the incidence of DVT in trauma patients. Also, foot pumps were more effective in reducing the incidence of DVT compared with sequential compression devices. Sequential compression devices and foot pumps reduced the incidence of DVT in trauma patients. However, the evidence is limited by small sample sizes and did not take into account other confounding variables that may affect the incidence of DVT in trauma patients. Future randomized controlled trials with larger probability samples are needed to investigate the optimal use of mechanical prophylaxis in trauma patients.
Mechanism of Tacrine Block at Adult Human Muscle Nicotinic Acetylcholine Receptors
Prince, Richard J.; Pennington, Richard A.; Sine, Steven M.
2002-01-01
We used single-channel kinetic analysis to study the inhibitory effects of tacrine on human adult nicotinic receptors (nAChRs) transiently expressed in HEK 293 cells. Single channel recording from cell-attached patches revealed concentration- and voltage-dependent decreases in mean channel open probability produced by tacrine (IC50 4.6 μM at −70 mV, 1.6 μM at −150 mV). Two main effects of tacrine were apparent in the open- and closed-time distributions. First, the mean channel open time decreased with increasing tacrine concentration in a voltage-dependent manner, strongly suggesting that tacrine acts as an open-channel blocker. Second, tacrine produced a new class of closings whose duration increased with increasing tacrine concentration. Concentration dependence of closed-times is not predicted by sequential models of channel block, suggesting that tacrine blocks the nAChR by an unusual mechanism. To probe tacrine's mechanism of action we fitted a series of kinetic models to our data using maximum likelihood techniques. Models incorporating two tacrine binding sites in the open receptor channel gave dramatically improved fits to our data compared with the classic sequential model, which contains one site. Improved fits relative to the sequential model were also obtained with schemes incorporating a binding site in the closed channel, but only if it is assumed that the channel cannot gate with tacrine bound. Overall, the best description of our data was obtained with a model that combined two binding sites in the open channel with a single site in the closed state of the receptor. PMID:12198092
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cikota, Aleksandar; Deustua, Susana; Marleau, Francine, E-mail: acikota@eso.org
We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R_V, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R_V = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R_25. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R_V using color excess probability functions and find values of R_V = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R_V = 1.70 ± 0.38 for 34 SNe Ia observed in Sbc–Scp galaxies.
Probability matching in risky choice: the interplay of feedback and strategy availability.
Newell, Ben R; Koehler, Derek J; James, Greta; Rakow, Tim; van Ravenzwaaij, Don
2013-04-01
Probability matching in sequential decision making is a striking violation of rational choice that has been observed in hundreds of experiments. Recent studies have demonstrated that matching persists even in described tasks in which all the information required for identifying a superior alternative strategy (maximizing) is present before the first choice is made. These studies have also indicated that maximizing increases when (1) the asymmetry in the availability of matching and maximizing strategies is reduced and (2) normatively irrelevant outcome feedback is provided. In the two experiments reported here, we examined the joint influences of these factors, revealing that strategy availability and outcome feedback operate on different time courses. Both behavioral and modeling results showed that while availability of the maximizing strategy increases the choice of maximizing early during the task, feedback appears to act more slowly to erode misconceptions about the task and to reinforce optimal responding. The results illuminate the interplay between "top-down" identification of choice strategies and "bottom-up" discovery of those strategies via feedback.
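The payoff gap the abstract alludes to is easy to work out: if the better option wins with probability p, maximizing is correct with probability p on each trial, while matching is correct with probability p^2 + (1 - p)^2. A quick check (values illustrative):

```python
# Expected accuracy of maximizing vs. probability matching when one of two
# options pays off with probability p (p > 0.5).
for p in [0.6, 0.7, 0.8]:
    maximizing = p                    # always pick the better option
    matching = p**2 + (1 - p)**2      # pick each option at its own outcome rate
    print(f"p={p}: maximizing={maximizing:.2f}, matching={matching:.2f}")
```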
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
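For readers unfamiliar with the statistic: the lod score is log10 of the likelihood of the data at a recombination fraction theta against the likelihood at theta = 0.5 (no linkage), and a maximum lod above 3 is the classical evidence threshold for linkage. A minimal sketch for phase-known meioses, with illustrative counts:

```python
import math

def lod(theta, recombinants, nonrecombinants):
    """Lod score for phase-known meioses: log10 of L(theta) / L(0.5)."""
    r, n = recombinants, nonrecombinants
    return math.log10((theta**r * (1 - theta)**n) / 0.5**(r + n))

# Illustrative data: 2 recombinants in 12 informative meioses.
for theta in [0.05, 0.10, 0.20, 0.30, 0.40]:
    print(theta, round(lod(theta, 2, 10), 3))
```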
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
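The multilevel estimator underlying both MLMC and MLSMC rests on the telescoping identity E[Q_L] = E[Q_0] + sum over l of E[Q_l - Q_(l-1)], sampled with many cheap coarse-level samples and few expensive fine-level ones. A generic sketch with a toy level hierarchy (the level definition and sample counts are placeholders, not the paper's):

```python
import random

# Sketch: multilevel Monte Carlo with a toy quantity of interest whose
# discretization bias shrinks with the level. A real application would
# replace q_level with a PDE solve at resolution l.
def q_level(level, omega):
    return omega + omega / 2 ** (level + 1)  # toy level-l approximation

def mlmc(levels, samples_per_level):
    estimate = 0.0
    for l, n in zip(range(levels + 1), samples_per_level):
        acc = 0.0
        for _ in range(n):
            omega = random.gauss(0.0, 1.0)  # common random input per sample
            if l == 0:
                acc += q_level(0, omega)
            else:
                acc += q_level(l, omega) - q_level(l - 1, omega)  # coupled pair
        estimate += acc / n
    return estimate

random.seed(0)
print(mlmc(levels=3, samples_per_level=[4000, 1000, 250, 60]))
```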
Huang, Biao; Zhao, Yongcun
2014-01-01
Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km² area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364
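The probability map itself is simple once the realizations exist: for each pixel it is the fraction of simulated (Cd, pH) realizations in which Cd exceeds the threshold the composite standard assigns to that pH and land use. A numpy sketch with invented realizations and an invented two-level threshold rule:

```python
import numpy as np

# Sketch: per-pixel standard-exceeding probability from simulated realizations.
rng = np.random.default_rng(1)
n_real, ny, nx = 200, 50, 50
cd = rng.lognormal(mean=-1.2, sigma=0.6, size=(n_real, ny, nx))  # mg/kg, invented
ph = rng.normal(loc=6.5, scale=0.7, size=(n_real, ny, nx))       # invented

# Invented composite standard: stricter Cd threshold in more acid soil.
threshold = np.where(ph < 6.5, 0.30, 0.60)  # mg/kg

exceed_prob = (cd > threshold).mean(axis=0)  # fraction of realizations, per pixel
print(exceed_prob.shape, float(exceed_prob.mean()))
```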
Metal fractionation in marine sediments acidified by enrichment of CO2: A risk assessment.
de Orte, Manoela Romanó; Bonnail, Estefanía; Sarmiento, Aguasanta M; Bautista-Chamizo, Esther; Basallote, M Dolores; Riba, Inmaculada; DelValls, Ángel; Nieto, José Miguel
2018-06-01
Carbon-capture and storage is considered to be a potential mitigation option for climate change. However, accidental leaks of CO2 can occur, resulting in changes in ocean chemistry such as acidification and metal mobilization. Laboratory experiments were performed to provide data on the effects of CO2-related acidification on the chemical fractionation of metal(loid)s in marine-contaminated sediments using sequential extraction procedures. The results showed that sediments from the Huelva estuary registered concentrations of arsenic, copper, lead, and zinc that surpass the probable biological effect level established by international protocols. Zinc had the greatest proportion in the most mobile fraction of the sediment. Metals in this fraction represent an environmental risk because they are weakly bound to sediment, and therefore more likely to migrate to the water column. Indeed, the concentration of this metal was lower in the most acidified scenarios when compared to control pH, indicating probable zinc mobilization from the sediment to the seawater. Copyright © 2018 Elsevier Ltd. All rights reserved.
Xie, Weizhen; Zhang, Weiwei
2017-09-01
Negative emotion sometimes enhances memory (higher accuracy and/or vividness, e.g., flashbulb memories). The present study investigates whether it is the qualitative (precision) or quantitative (the probability of successful retrieval) aspect of memory that drives these effects. In a visual long-term memory task, observers memorized colors (Experiment 1a) or orientations (Experiment 1b) of sequentially presented everyday objects under negative, neutral, or positive emotions induced with International Affective Picture System images. In a subsequent test phase, observers reconstructed objects' colors or orientations using the method of adjustment. We found that mnemonic precision was enhanced under the negative condition relative to the neutral and positive conditions. In contrast, the probability of successful retrieval was comparable across the emotion conditions. Furthermore, the boost in memory precision was associated with elevated subjective feelings of remembering (vividness and confidence) and metacognitive sensitivity in Experiment 2. Altogether, these findings suggest a novel precision-based account for emotional memories. Copyright © 2017 Elsevier B.V. All rights reserved.
Change-in-ratio estimators for populations with more than two subclasses
Udevitz, Mark S.; Pollock, Kenneth H.
1991-01-01
Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
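For orientation, the classical two-sample, two-subclass change-in-ratio estimator (the case these models generalize) has the closed form N1 = (Rx - R*p2) / (p1 - p2), where p1 and p2 are the observed subclass-x proportions before and after known removals Rx (subclass x) out of R total. A sketch with invented counts:

```python
# Classical two-sample, two-subclass change-in-ratio estimate of initial
# population size. All counts below are invented for illustration.
def cir_estimate(x1, n1, x2, n2, removed_x, removed_total):
    p1 = x1 / n1  # subclass-x proportion in the first sample
    p2 = x2 / n2  # subclass-x proportion in the second sample
    return (removed_x - removed_total * p2) / (p1 - p2)

# 60 of 100 animals were subclass x before removals; 100 of the 150 animals
# removed were subclass x; 45 of 120 were subclass x afterwards.
print(cir_estimate(x1=60, n1=100, x2=45, n2=120, removed_x=100, removed_total=150))
```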
Optimum detection of tones transmitted by a spacecraft
NASA Technical Reports Server (NTRS)
Simon, M. K.; Shihabi, M. M.; Moon, T.
1995-01-01
The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.
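For the unknown-phase case mentioned here, the standard noncoherent (envelope) detector has a closed-form ROC: the test statistic is chi-square with 2 degrees of freedom under noise only and noncentral chi-square under signal plus noise. A sketch of detection probability versus false-alarm probability under that textbook normalization (the SNR values are illustrative, and this is standard detection theory rather than the paper's exact receiver):

```python
from scipy.stats import chi2, ncx2

# Sketch: ROC of a noncoherent detector for a tone of unknown phase in
# white Gaussian noise; statistic ~ chi2(2) under H0, ncx2(2, 2*SNR) under H1.
def detection_probability(pfa, snr_linear):
    eta = chi2.ppf(1.0 - pfa, df=2)          # threshold fixed by the false-alarm rate
    return ncx2.sf(eta, df=2, nc=2.0 * snr_linear)

for pfa in [1e-2, 1e-4, 1e-6]:
    print(pfa, round(float(detection_probability(pfa, snr_linear=10.0)), 4))
```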
Rollero, Stephanie; Bloem, Audrey; Ortiz-Julien, Anne; Camarasa, Carole; Divol, Benoit
2018-01-01
The sequential inoculation of non-Saccharomyces yeasts and Saccharomyces cerevisiae in grape juice is becoming an increasingly popular practice to diversify wine styles and/or to obtain more complex wines with a peculiar microbial footprint. One of the main interactions is competition for nutrients, especially nitrogen sources, that directly impacts not only fermentation performance but also the production of aroma compounds. In order to better understand the interactions taking place between non-Saccharomyces yeasts and S. cerevisiae during alcoholic fermentation, sequential inoculations of three yeast species (Pichia burtonii, Kluyveromyces marxianus, Zygoascus meyerae) with S. cerevisiae were performed individually in a synthetic medium. Different species-dependent interactions were evidenced. Indeed, the three sequential inoculations resulted in three different behaviors in terms of growth. P. burtonii and Z. meyerae declined after the inoculation of S. cerevisiae which promptly outcompeted the other two species. However, while the presence of P. burtonii did not impact the fermentation kinetics of S. cerevisiae, that of Z. meyerae rendered the overall kinetics very slow and with no clear exponential phase. K. marxianus and S. cerevisiae both declined and became undetectable before fermentation completion. The results also demonstrated that yeasts differed in their preference for nitrogen sources. Unlike Z. meyerae and P. burtonii, K. marxianus appeared to be a competitor for S. cerevisiae (as evidenced by the uptake of ammonium and amino acids), thereby explaining the resulting stuck fermentation. Nevertheless, the results suggested that competition for other nutrients (probably vitamins) occurred during the sequential inoculation of Z. meyerae with S. cerevisiae. The metabolic footprint of the non-Saccharomyces yeasts determined after 48 h of fermentation remained until the end of fermentation and combined with that of S. cerevisiae. For instance, fermentations performed with K. marxianus were characterized by the formation of phenylethanol and phenylethyl acetate, while those performed with P. burtonii or Z. meyerae displayed higher production of isoamyl alcohol and ethyl esters. When considering sequential inoculation of yeasts, the nutritional requirements of the yeasts used should be carefully considered and adjusted accordingly. Finally, our chemical data suggests that the organoleptic properties of the wine are altered in a species specific manner. PMID:29487584
75 FR 66271 - Assessment Dividends, Assessment Rates and Designated Reserve Ratio
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... has recovered to pre-crisis levels, and the long term, when the reserve ratio is sufficiently large... detail below, concludes that a moderate, long-term average industry assessment rate, combined with an... and earnings of IDIs. Long Term To increase the probability that the fund reserve ratio will reach a...
High spatial resolution Mg/Al maps of the western Crisium and Sulpicius Gallus regions
NASA Technical Reports Server (NTRS)
Schonfeld, E.
1982-01-01
High spatial resolution Mg/Al ratio maps of the western Crisium and Sulpicius Gallus regions of the moon are presented. The data are from the X-ray fluorescence experiment, enhanced using a Laplacian-subtraction technique with a special least-squares version of the Laplacian to reduce noise amplification. In the highlands region west of Mare Crisium, several relatively small patches of smooth material have high local Mg/Al ratios similar to values found in mare sites, suggesting volcanism in the highlands. In the same highland region there were other smooth areas with no high local Mg/Al values; these are probably Cayley Formation material produced by impact mass wasting. The Sulpicius Gallus region has variable Mg/Al ratios. In this region there are several high Mg/Al ratio spots, two of which occur at the highland-mare interface. Another high Mg/Al ratio area corresponds to the Sulpicius Gallus Rima I region. The high Mg/Al ratio material in the Sulpicius Gallus region is probably pyroclastic.
Flavor-changing Z decays: A window to ultraheavy quarks?
NASA Astrophysics Data System (ADS)
Ganapathi, V.; Weiler, T.; Laermann, E.; Schmitt, I.; Zerwas, P. M.
1983-02-01
We study flavor-changing Z decays into quarks, Z-->Q+q¯, in the standard SU(2)×U(1) theory with sequential generations. Such decays occur in higher-order electroweak interactions, with a probability growing as the fourth power of the mass of the heaviest (virtual) quark mediating the transition. With the possible exception of Z-->bs¯, these decay modes are generally very rare in the three-generation scheme. However, with four generations Z-->b'b¯ is observable if the t' mass is a few hundred GeV. Such decay modes could thus provide a glimpse of the ultraheavy-quark spectrum.
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
Sequential decision tree using the analytic hierarchy process for decision support in rectal cancer.
Suner, Aslı; Çelikoğlu, Can Cengiz; Dicle, Oğuz; Sökmen, Selman
2012-09-01
The aim of the study is to determine the most appropriate method for construction of a sequential decision tree in the management of rectal cancer, using various patient-specific criteria and treatments such as surgery, chemotherapy, and radiotherapy. An analytic hierarchy process (AHP) was used to determine the priorities of variables. Relevant criteria used in two decision steps and their relative priorities were established by a panel of five general surgeons. Data were collected via a web-based application and analyzed using the "Expert Choice" software specifically developed for the AHP. Consistency ratios in the AHP method were calculated for each set of judgments, and the priorities of sub-criteria were determined. A sequential decision tree was constructed for the best treatment decision process, using priorities determined by the AHP method. Consistency ratios in the AHP method were calculated for each decision step, and the judgments were considered consistent. The tumor-related criterion "presence of perforation" (0.331) and the patient-surgeon-related criterion "surgeon's experience" (0.630) had the highest priority in the first decision step. In the second decision step, the tumor-related criterion "the stage of the disease" (0.230) and the patient-surgeon-related criterion "surgeon's experience" (0.281) were the paramount criteria. The results showed some variation in the ranking of criteria between the decision steps; in the second decision step, for instance, the tumor-related criterion "presence of perforation" ranked only fifth. The consistency of decision support systems largely depends on the quality of the underlying decision tree. When several choices and variables have to be considered in a decision, it is very important to determine priorities. The AHP method seems to be effective for this purpose. The decision algorithm developed by this method is more realistic and will improve the quality of the decision tree. Copyright © 2012 Elsevier B.V. All rights reserved.
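The AHP computation behind these priorities is compact: the priority vector is the normalized principal eigenvector of the pairwise-comparison matrix, and the consistency ratio follows Saaty's formula CR = ((lambda_max - n)/(n - 1)) / RI. A sketch with an illustrative 3x3 judgment matrix (not the panel's actual judgments); a CR below 0.1 is conventionally taken as consistent:

```python
import numpy as np

# Sketch: AHP priorities and consistency ratio from a pairwise comparison
# matrix. The judgments below are illustrative only.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()                      # normalized principal eigenvector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("priorities:", priorities.round(3), "CR:", round(ci / ri, 3))
```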
GilPavas, Edison; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Ángel
2017-04-15
In this study, industrial textile wastewater was treated using a chemical-based technique (coagulation-flocculation, C-F) sequentially with an advanced oxidation process (AOP: Fenton or Photo-Fenton). During the C-F, Al2(SO4)3 was used as coagulant and its optimal dose was determined using the jar test. The following operational conditions of C-F, maximizing the organic matter removal, were determined: 700 mg/L of Al2(SO4)3 at pH = 9.96. Thus, the C-F removed 98% of turbidity and 48% of Chemical Oxygen Demand (COD), and increased the BOD5/COD ratio from 0.137 to 0.212. Subsequently, the C-F effluent was treated using each of the AOPs. Their performances were optimized by Response Surface Methodology (RSM) coupled with a Box-Behnken experimental design (BBD). The following optimal conditions of both Fenton (Fe2+/H2O2) and Photo-Fenton (Fe2+/H2O2/UV) processes were found: Fe2+ concentration = 1 mM, H2O2 dose = 2 mL/L (19.6 mM), and pH = 3. The combination of C-F pre-treatment with the Fenton reagent, at optimized conditions, removed 74% of COD during 90 min of the process. The C-F sequential with the Photo-Fenton process reached 87% of COD removal in the same time. Moreover, the BOD5/COD ratio increased from 0.212 to 0.68 and from 0.212 to 0.74 using the Fenton and Photo-Fenton processes, respectively. Thus, the enhancement of biodegradability with the physico-chemical treatment was proved. The depletion of H2O2 was monitored during the kinetic study. Strategies for improving the reaction efficiency, based on the H2O2 evolution, were also tested. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mihara, Takahiro; Nakamura, Nobuhito; Ka, Koui; Goto, Takahisa
2018-01-01
Background Magnesium has been investigated as an adjuvant for neuraxial anesthesia, but the effect of caudal magnesium on postoperative pain is inconsistent. The aim of this systematic review and meta-analysis was to evaluate the analgesic effect of caudal magnesium. Methods We searched six databases, including trial registration sites. Randomized clinical trials reporting the effect of caudal magnesium on postoperative pain after general anesthesia were eligible. The risk ratio for use of rescue analgesics after surgery was combined using a random-effects model. We also assessed adverse events. The I2 statistic was used to assess heterogeneity. We assessed risk of bias with Cochrane domains. We controlled type I and II errors due to sparse data and repetitive testing with Trial Sequential Analysis. We assessed the quality of evidence with GRADE. Results Four randomized controlled trials (247 patients) evaluated the need for rescue analgesics. In all four trials, 50 mg of magnesium was administered with caudal ropivacaine. The results suggested that the need for rescue analgesia was reduced significantly by caudal magnesium administration (risk ratio 0.45; 95% confidence interval 0.24–0.86). There was considerable heterogeneity as indicated by an I2 value of 62.5%. The Trial Sequential Analysis-adjusted confidence interval was 0.04–5.55, indicating that further trials are required. The quality of evidence was very low. The rate of adverse events was comparable between treatment groups. Conclusion Caudal magnesium may reduce the need for rescue analgesia after surgery, but further randomized clinical trials with a low risk of bias and a low risk of random errors are necessary to assess the effect of caudal magnesium on postoperative pain and adverse events. Trial registration University Hospital Medical Information Network Clinical Trials Registry UMIN000025344. PMID:29293586
Seaton, Sarah E; Manktelow, Bradley N
2012-07-16
Emphasis is increasingly being placed on the monitoring of clinical outcomes for health care providers. Funnel plots have become an increasingly popular graphical methodology used to identify potential outliers. It is assumed that a provider only displaying expected random variation (i.e. 'in-control') will fall outside a control limit with a known probability. In reality, the discrete count nature of these data, and the differing methods, can lead to true probabilities quite different from the nominal value. This paper investigates the true probability of an 'in control' provider falling outside control limits for the Standardised Mortality Ratio (SMR). The true probabilities of an 'in control' provider falling outside control limits for the SMR were calculated and compared for three commonly used limits: Wald confidence interval; 'exact' confidence interval; probability-based prediction interval. The probability of falling above the upper limit, or below the lower limit, often varied greatly from the nominal value. This was particularly apparent when there were a small number of expected events: for expected events ≤ 50 the median probability of an 'in-control' provider falling above the upper 95% limit was 0.0301 (Wald), 0.0121 ('exact'), 0.0201 (prediction). It is important to understand the properties and probability of being identified as an outlier by each of these different methods to aid the correct identification of poorly performing health care providers. The limits obtained using probability-based prediction limits have the most intuitive interpretation and their properties can be defined a priori. Funnel plot control limits for the SMR should not be based on confidence intervals.
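The paper's central quantity, the true probability that an in-control provider lands above a nominal limit, is an exact Poisson tail once the expected count E is fixed. A sketch using a simplified Wald-style upper limit on the SMR (the limit formula is a simplification for illustration, not the paper's exact construction):

```python
import numpy as np
from scipy.stats import poisson

# Sketch: true exceedance probability for an 'in control' provider with
# observed deaths O ~ Poisson(E), so SMR = O/E has mean 1.
for expected in [5, 20, 50, 200]:
    upper_smr = 1.0 + 1.96 * np.sqrt(1.0 / expected)                  # approximate Wald limit
    true_prob = poisson.sf(np.floor(upper_smr * expected), expected)  # P(O > limit * E)
    print(expected, round(upper_smr, 3), round(float(true_prob), 4))
```

For small E the computed tail probabilities stray well away from the nominal 0.025, which is the behavior the paper quantifies across limit constructions.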
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
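As a reminder of the simplest estimator in the comparison, the ratio estimator with unit area as the auxiliary variable scales animals-per-area in the sample up to the full study area. A sketch with invented counts and areas:

```python
# Sketch: ratio estimator of a population total with unit area as auxiliary.
counts = [12, 0, 45, 3, 7]             # pronghorn counted in sampled units (invented)
areas = [10.0, 8.0, 14.0, 9.0, 11.0]   # areas of those units, km^2 (invented)
total_area = 520.0                     # total study area, km^2 (invented)

density = sum(counts) / sum(areas)     # animals per km^2
print("ratio-estimated total:", round(density * total_area, 1))
```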
NASA Astrophysics Data System (ADS)
Booth, Colin J.; Vagt, Peter J.
1990-05-01
The Blackwell site in northeastern Illinois was a classic sequential-use project combining land reclamation, a sanitary landfill, and a recreational park. This paper adds a recent assessment of leachate generation and groundwater contamination to the site's unfinished record. Hydrogeological studies show that (1) the landfill sits astride an outwash aquifer and a till mound, which are separated from an underlying dolomite aquifer by a thin, silty till; (2) leachate leaks from the landfill at an estimated average rate between 48 and 78 m³/d; (3) the resultant contaminant plume is virtually stagnant in the till but rapidly diluted in the outwash aquifer, so that no off-site contamination is detected; (4) trace VOC levels in the dolomite probably indicate that contaminants have migrated there from the landfill-derived plume in the outwash. Deviations from the original landfill concepts included elimination of a leachate collection system, increased landfill size, local absence of a clay liner, and partial use of nonclay cover. The hydrogeological setting was unsuitable for the landfill as constructed, indicating the importance of detailed geological consideration in landfill and land-use planning.
Source and migration of dissolved manganese in the Central Nile Delta Aquifer, Egypt
NASA Astrophysics Data System (ADS)
Bennett, P. C.; El Shishtawy, A. M.; Sharp, J. M.; Atwia, M. G.
2014-08-01
Dissolved metals in waters in shallow deltaic sediments are one of the world's major health problems, and a prime example is arsenic contamination in Bangladesh. The Central Nile Delta Aquifer, a drinking water source for more than 6 million people, can have high concentrations of dissolved manganese (Mn). Standard hydrochemical analyses coupled with sequential chemical extraction is used to identify the source of the Mn and to identify the probable cause of the contamination. Fifty-nine municipal supply wells were sampled and the results compared with published data for groundwaters and surface waters. Drill cuttings from 4 wells were collected and analyzed by sequential chemical extraction to test the hypothesized Mn-generating processes. The data from this research show that the Mn source is not deep saline water, microbial reduction of Mn oxides at the production depth, or leakage from irrigation drainage ditches. Instead, Mn associated with carbonate minerals in the surficial confining layer and transported down along the disturbed well annulus of the municipal supply wells is the likely source. This analysis provides a basis for future hydrogeological and contaminant transport modeling as well as remediation-modification of well completion practices and pumping schedules to mitigate the problem.
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decisionmaking under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of an indicator function, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constraint optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
Phase transition to a two-peak phase in an information-cascade voting experiment
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato; Takahashi, Taiki
2012-08-01
Observational learning is an important information aggregation mechanism. However, it occasionally leads to a state in which an entire population chooses a suboptimal option. When this occurs and whether it is a phase transition remain unanswered. To address these questions we perform a voting experiment in which subjects answer a two-choice quiz sequentially with and without information about the prior subjects' choices. The subjects who could copy others are called herders. We obtain a microscopic rule regarding how herders copy others. Varying the ratio of herders leads to qualitative changes in the macroscopic behavior of about 50 subjects in the experiment. If the ratio is small, the sequence of choices rapidly converges to the correct one. As the ratio approaches 100%, convergence becomes extremely slow and information aggregation almost terminates. A simulation study of a stochastic model for 10⁶ subjects based on the herder's microscopic rule shows a phase transition to the two-peak phase, where the convergence completely terminates as the ratio exceeds some critical value.
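A stochastic model in the spirit described, independent answerers correct with probability q and herders copying the running majority, can be simulated in a few lines; the microscopic rule and parameters here are illustrative stand-ins for the one estimated in the experiment:

```python
import random

# Sketch: sequential two-choice voting where a fraction r of subjects herd
# (copy the current majority) and the rest answer correctly with probability q.
def run(n_subjects, r, q, rng):
    correct = 0
    for i in range(n_subjects):
        if i > 0 and rng.random() < r:
            vote = 1 if correct > i / 2 else 0   # herder: copy the majority so far
        else:
            vote = 1 if rng.random() < q else 0  # independent subject
        correct += vote
    return correct / n_subjects

rng = random.Random(7)
for r in [0.2, 0.6, 0.95]:
    final = [run(1000, r, q=0.6, rng=rng) for _ in range(200)]
    print(r, round(sum(final) / len(final), 3))
```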
Park, Jong-Ho; Choi, Eun-Ju
2016-11-01
A method to determine the quantity and isotopic ratios of uranium in individual micro-particles simultaneously by isotope dilution thermal ionization mass spectrometry (ID-TIMS) has been developed. This method consists of sequential sample and spike loading, ID-TIMS for isotopic measurement, and application of a series of mathematical procedures to remove the contribution of uranium in the spike. The homogeneity of evaporation and ionization of uranium content was confirmed by the consistent ratio of n(²³³U)/n(²³⁸U) determined by TIMS measurements. Verification of the method was performed using U030 solution droplets and U030 particles. Good agreement of the resulting uranium quantity, n(²³⁵U)/n(²³⁸U), and n(²³⁶U)/n(²³⁸U) with the estimated or certified values showed the validity of this newly developed method for particle analysis when simultaneous determination of the quantity and isotopic ratios of uranium is required. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements, either to update the likelihood ratio tracker (for an undetected target) or to update a position probability (for a previously detected target). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks of varying complexity and using UAVs at various altitudes.
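The node-level recursion is a standard log-likelihood-ratio update: each look multiplies a node's ratio by p(z | target) / p(z | clutter) until a detection criterion is crossed. A minimal sketch with assumed sensor characteristics and an illustrative threshold (all names hypothetical):

```python
import math

# Sketch: likelihood-ratio tracker on road-network nodes with assumed
# per-look detection and false-alarm probabilities.
P_D, P_FA = 0.7, 0.1
DECLARE = math.log(20.0)               # illustrative detection criterion on the LLR

llr = {"node_a": 0.0, "node_b": 0.0}   # hypothetical road-graph nodes

def observe(node, detected):
    """Fold one look at a node into its log-likelihood ratio."""
    step = math.log(P_D / P_FA) if detected else math.log((1 - P_D) / (1 - P_FA))
    llr[node] += step
    return llr[node] >= DECLARE

for z in [True, True, False, True]:
    if observe("node_a", z):
        print("target declared at node_a, LLR =", round(llr["node_a"], 2))
```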
NASA Astrophysics Data System (ADS)
Xu, X.; Williams, C.; Plass-Dülmer, H.; Berresheim, H.; Salisbury, G.; Lange, L.; Lelieveld, J.
2003-09-01
During the Mediterranean Intensive Oxidant Study (MINOS) campaign in August 2001 gas-phase organic compounds were measured using comprehensive two-dimensional gas chromatography (GCxGC) at the Finokalia ground station, Crete. In this paper, C7-C11 aromatic and n-alkane measurements are presented and interpreted. The mean mixing ratios of the hydrocarbons varied from 1±1 pptv (i-propylbenzene) to 43±36 pptv (toluene). The observed mixing ratios showed strong day-to-day variations and generally higher levels during the first half of the campaign. Mean diel profiles showed maxima at local midnight and late morning, and minima in the early morning and evening. Results from analysis using a simplified box model suggest that both the chemical sink (i.e. reaction with OH) and the variability of source strengths were the causes of the observed variations in hydrocarbon mixing ratios. The logarithms of hydrocarbon concentrations were negatively correlated with the OH concentrations integral over a day prior to the hydrocarbon measurements. Slopes of the regression lines derived from these correlations for different compounds are compared with literature rate constants for their reactions with OH. The slopes for most compounds agree reasonably well with the literature rate constants. A sequential reaction model has been applied to the interpretation of the relationship between ethylbenzene and two of its potential products, i.e. acetophenone and benzeneacetaldehyde. The model can explain the good correlation observed between [acetophenone]/[ethylbenzene] and [benzeneacetaldehyde]/[ethylbenzene]. The model results and field measurements suggest that the reactivity of benzeneacetaldehyde may lie between those of acetophenone and ethylbenzene and that the ratio between yields of acetophenone and benzeneacetaldehyde may be up to 28:1. Photochemical ages of trace gases sampled at Finokalia during the campaign are estimated using the sequential reaction model and related data. They lie in the range of about 0.5-2.5 days.
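The photochemical-age estimate mentioned at the end has a simple closed form when two hydrocarbons are removed only by OH: the age is ln(R_source / R_observed) divided by (k_A - k_B) times the mean OH concentration. A sketch with rate constants of the right order for light aromatics and invented ratios (none are the paper's values):

```python
import math

# Sketch: photochemical age from the decaying ratio of two OH-reactive species.
k_a = 5.6e-12    # cm^3 molecule^-1 s^-1, faster-reacting species (illustrative)
k_b = 1.2e-12    # cm^3 molecule^-1 s^-1, slower-reacting species (illustrative)
oh = 5.0e6       # assumed campaign-mean OH, molecule cm^-3

ratio_source = 2.0     # A/B at emission (assumed)
ratio_observed = 0.8   # A/B at the receptor (assumed)

age_seconds = math.log(ratio_source / ratio_observed) / ((k_a - k_b) * oh)
print(round(age_seconds / 86400, 2), "days")
```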
2014-01-01
Background End-to-side anastomoses to connect the distal end of the great saphenous vein (GSV) to small target coronary arteries are commonly performed in sequential coronary artery bypass grafting (CABG). However, the oversize diameter ratio between the GSV and small target vessels at end-to-side anastomoses might induce adverse hemodynamic condition. The purpose of this study was to describe a distal end side-to-side anastomosis technique and retrospectively compare the effect of distal end side-to-side versus end-to-side anastomosis on graft flow characteristics. Methods We performed side-to-side anastomoses to connect the distal end of the GSV to small target vessels on 30 patients undergoing off-pump sequential CABG in our hospital between October 2012 and July 2013. Among the 30 patients, end-to-side anastomoses at the distal end of the GSV were initially performed on 14 patients; however, due to poor graft flow, those anastomoses were revised into side-to-side anastomoses. We retrospectively compared the intraoperative graft flow characteristics of the end-to-side versus side-to-side anastomoses in the 14 patients. The patient outcomes were also evaluated. Results We found that the side-to-side anastomosis reconstruction improved intraoperative flow and reduced pulsatility index in all the 14 patients significantly. The 16 patients who had the distal end side-to-side anastomoses performed directly also exhibited satisfactory intraoperative graft flow. Three-month postoperative outcomes for all the patients were satisfactory. Conclusions Side-to-side anastomosis at the distal end of sequential vein grafts might be a promising strategy to connect small target coronary arteries to the GSV. PMID:24884776
The PMHT: solutions for some of its problems
NASA Astrophysics Data System (ADS)
Wieneke, Monika; Koch, Wolfgang
2007-09-01
Tracking multiple targets in a cluttered environment is a challenging task. Probabilistic Multiple Hypothesis Tracking (PMHT) is an efficient approach for dealing with it. Essentially, PMHT is based on the method of Expectation-Maximization for handling association conflicts. Linearity in the number of targets and measurements is the main motivation for a further development and extension of this methodology. Unfortunately, compared with the Probabilistic Data Association Filter (PDAF), PMHT has not yet shown its superiority in terms of track-lost statistics. Furthermore, the problem of track extraction and deletion is apparently not yet satisfactorily solved within this framework. Four properties of PMHT are responsible for its problems in track maintenance: Non-Adaptivity, Hospitality, Narcissism, and Local Maxima [1, 2]. In this work we present a solution for each of them and derive an improved PMHT by integrating the solutions into the PMHT formalism. The new PMHT is evaluated by Monte-Carlo simulations. A sequential likelihood-ratio (LR) test for track extraction has been developed and already integrated into the framework of traditional Bayesian Multiple Hypothesis Tracking [3]. As a multi-scan approach, the PMHT methodology also has the potential for track extraction. In this paper an analogous integration of a sequential LR test into the PMHT framework is proposed. We present an LR formula for track extraction and deletion using the PMHT update formulae. As PMHT provides all required ingredients for a sequential LR calculation, the LR is thus a by-product of the PMHT iteration process. Therefore the resulting update formula for the sequential LR test affords the development of Track-Before-Detect algorithms for PMHT. The approach is illustrated by a simple example.
Wong, Andrew T; Shao, Meng; Rineer, Justin; Lee, Anna; Schwartz, David; Schreiber, David
2017-06-01
The objective of this study was to analyze the impact on overall survival (OS) from the addition of postoperative radiation with or without chemotherapy after esophagectomy, using a large, hospital-based dataset. Previous retrospective studies have suggested an OS advantage for postoperative chemoradiation over surgery alone, although prospective data are lacking. The National Cancer Data Base was queried to select patients diagnosed with stage pT3-4Nx-0M0 or pT1-4N1-3M0 esophageal carcinoma (squamous cell or adenocarcinoma) from 1998 to 2011 treated with definitive esophagectomy ± postoperative radiation and/or chemotherapy. OS was analyzed using the Kaplan-Meier method and compared using the log-rank test. Multivariate Cox regression analysis was used to identify covariates associated with OS. There were 4893 patients selected, of whom 1153 (23.6%) received postoperative radiation. Most patients receiving radiation also received sequential/concomitant chemotherapy (89.9%). For the entire cohort, postoperative radiation was associated with a statistically significant but modest absolute improvement in survival (hazard ratio 0.77; 95% CI, 0.71-0.83; P < 0.001). On subgroup analysis, postoperative radiation was associated with improved OS for patients with node-positive disease (3-yr OS 34.3 % vs 27.8%, P < 0.001) or positive margins (3-yr OS 36.4% vs 18.0%, P < 0.001). When chemotherapy usage was incorporated, sequential chemotherapy was associated with the best survival (P < 0.001). Multivariate analysis revealed that the addition of chemotherapy to radiation therapy, whether sequentially or concurrently, was a strong prognostic factor for OS. In this hospital-based study, the addition of postoperative chemoradiation (either sequentially or concomitantly) after esophagectomy was associated with improved OS for patients with node-positive disease or positive margins.
Spark ablation-inductively coupled plasma spectrometry for analysis of geologic materials
Golightly, D.W.; Montaser, A.; Smith, B.L.; Dorrzapf, A.F.
1989-01-01
Spark ablation-inductively coupled plasma (SA-ICP) spectrometry is applied to the measurement of hafnium-zirconium ratios in zircons and to the determination of cerium, cobalt, iron, lead, nickel and phosphorus in ferromanganese nodules. Six operating parameters used for the high-voltage spark and argon-ICP combination are established by sequential simplex optimization of both signal-to-background ratio and signal-to-noise ratio. The time-dependences of the atomic emission signals of analytes and matrix elements ablated from a finely pulverized sample embedded in a pressed disk of copper demonstrate selective sampling by the spark. Concentration ratios of hafnium to zirconium in zircons are measured with a precision of 4% (relative standard deviation, RSD). For ferromanganese nodules, spectral measurements based on intensity ratios of analyte line to the Mn(II) 257.610 nm line provide precisions of analysis in the range from 7 to 14% RSD. The accuracy of analysis depends on use of standard additions of the reference material USGS Nod P-1, and an independent measurement of the Mn concentration. © 1989.
A stochastic method for computing hadronic matrix elements
Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...
2014-01-24
In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.
Evaluation of Phosphate Fertilizers for the Immobilization of Cd in Contaminated Soils
Yan, Yin; Zhou, Yi Qun; Liang, Cheng Hua
2015-01-01
A laboratory investigation was conducted to evaluate the efficiency of four phosphate fertilizers, including diammonium phosphate (DAP), potassium phosphate monobasic (MPP), calcium superphosphate (SSP), and calcium phosphate tribasic (TCP), in terms of the toxicity and bioavailability of Cd in contaminated soils. The efficiency of immobilization was evaluated on the basis of two criteria: (a) the reduction of extractable Cd concentration below the TCLP regulatory level and (b) the Cd changes associated with specific operational soil fractions on the basis of sequential extraction data. Results showed that after 50 d of immobilization, the extractable concentrations of Cd in DAP, MPP, SSP, and TCP treated soils decreased from 42.64 mg/kg (in the control) to 23.86, 21.86, 33.89, and 35.59 mg/kg, respectively, with immobilization efficiency in the order of MPP > DAP > SSP > TCP. Results from the assessment of Cd speciation via the sequential extraction procedure revealed that the soluble exchangeable fraction of Cd in soils treated with phosphate fertilizers, especially TCP, was considerably reduced. In addition, the reduction was correspondingly related to the increase in the more stable forms of Cd, that is, the metal bound to manganese oxides and the metal bound to crystalline iron oxides. Treatment efficiency increased as the phosphate dose (according to the molar ratio of PO4/Cd) increased. Immobilization was most effective at a PO4/Cd molar ratio of 4:1. PMID:25915051
Li, Yanjiao; Zhang, Sen; Yin, Yixin; Xiao, Wendong; Zhang, Jie
2017-08-10
Gas utilization ratio (GUR) is an important indicator used to measure the operating status and energy consumption of blast furnaces (BFs). In this paper, we present a soft-sensor approach, i.e., a novel online sequential extreme learning machine (OS-ELM) named DU-OS-ELM, to establish a data-driven model for GUR prediction. In DU-OS-ELM, the old collected data are discarded gradually and the newly acquired data are given more attention through a novel dynamic forgetting factor (DFF), which depends on the estimation errors, to enhance the dynamic tracking ability. Furthermore, we develop an updated selection strategy (USS) to judge whether the model needs to be updated with the newly arriving data, so that the proposed approach is more in line with the actual production situation. Then, a convergence analysis of the proposed DU-OS-ELM is presented to ensure that the estimated output weights converge to the true values as new data arrive. Meanwhile, the proposed DU-OS-ELM is applied to build a soft-sensor model to predict GUR. Experimental results demonstrate that the proposed DU-OS-ELM obtains better generalization performance and higher prediction accuracy compared with a number of existing related approaches using real production data from a BF, and the created GUR prediction model can provide effective guidance for further optimization of operation.
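At its core, any OS-ELM update, including forgetting-factor variants like the one proposed here, is a recursive least-squares step on the hidden-layer outputs. A generic sketch with a constant forgetting factor standing in for the paper's dynamic DFF (the network setup and factor value are assumptions):

```python
import numpy as np

# Sketch: online sequential ELM update with a forgetting factor. A dynamic,
# error-driven factor (as in DU-OS-ELM) would replace the constant `lam`.
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 30
W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
b = rng.normal(size=n_hidden)           # fixed random biases

def hidden(x):
    return np.tanh(x @ W + b)           # hidden-layer output vector

P = np.eye(n_hidden) * 1e3              # inverse correlation matrix
beta = np.zeros(n_hidden)               # output weights
lam = 0.98                              # constant forgetting factor (assumption)

def update(x, t):
    global P, beta
    h = hidden(x)
    Ph = P @ h
    g = Ph / (lam + h @ Ph)             # gain vector
    beta = beta + g * (t - h @ beta)    # correct with the prediction error
    P = (P - np.outer(g, Ph)) / lam     # discount old information

for _ in range(200):                    # toy stream: target is the sum of inputs
    x = rng.normal(size=n_in)
    update(x, x.sum())
x = rng.normal(size=n_in)
print(float(hidden(x) @ beta), float(x.sum()))
```

In DU-OS-ELM the constant `lam` would follow an error-driven schedule, and the update would be applied only when the selection strategy flags the new sample as informative.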
Pfleger, C C H; Flachs, E M; Koch-Henriksen, Nils
2010-07-01
There is a need for follow-up studies of the familial situation of multiple sclerosis (MS) patients. Our aim was to evaluate the probability that MS patients remain in a marriage or relationship with the same partner after onset of MS, in comparison with the general population. All 2538 Danes with onset of MS in 1980-1989, retrieved from the Danish MS-Registry, and 50,760 matched and randomly drawn control persons were included. Information on family status was retrieved from Statistics Denmark. Cox analyses were used with onset as the starting point. Five years after onset, the cumulative probability of remaining in the same relationship was 86% in patients vs. 89% in controls. The probabilities continued to diverge, and at 24 years the probability was 33% in patients vs. 53% in controls (p < 0.001). Among patients with young onset (< 36 years of age), those with no children had a higher risk of divorce than those with children younger than 7 years (hazard ratio 1.51; p < 0.0001), and men had a higher risk of divorce than women (hazard ratio 1.33; p < 0.01). MS significantly reduces the probability of remaining in the same relationship compared with the background population.
A historical analysis of Plinian unrest and the key promoters of explosive activity.
NASA Astrophysics Data System (ADS)
Winson, A. E. G.; Newhall, C. G.; Costa, F.
2015-12-01
Plinian eruptions are the largest historically recorded volcanic phenomena and have the potential to be widely destructive. Yet when a volcano becomes newly restless, we are unable to anticipate whether or not a large eruption is imminent. We present the findings from a multi-parametric study of 42 large explosive eruptions (29 Plinian and 13 sub-Plinian) that form the basis for a new Bayesian belief network addressing this question. We combine the eruptive history of the volcanoes that produced these large eruptions with petrological studies and reported unrest phenomena to assess the probability of an eruption being Plinian. We find that the 'Plinian probability' is increased most strongly by the presence of an exsolved volatile phase in the reservoir prior to an eruption. In our survey, 60% of the Plinian eruptions had an excess SO2 gas phase more than double that calculated from petrologic studies alone. The probability is also increased by three related and more easily observable parameters: a high Plinian Ratio (the ratio of VEI≥4 eruptions in a volcano's history to the number of all VEI≥2 eruptions in that history), a repose time of more than 1000 years, and a Repose Ratio (the ratio of the average return period of VEI≥4 eruptions in the volcanic record to the repose time since the last VEI≥4) of greater than 0.7. We looked for unrest signals that are potentially indicative of future Plinian activity and report a few observations from case studies, but cannot say whether these will generally appear. Finally, we present a retrospective analysis of the probabilities of eruptions in our study becoming Plinian, using our Bayesian belief network. We find that these probabilities are up to about 4 times greater than those calculated from an a priori assessment of the global eruptive catalogue.
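The two catalogue-derived ratios are simple to compute; the sketch below encodes them exactly as defined in the abstract (the function names and example numbers are ours).

```python
def plinian_ratio(n_vei_ge4, n_vei_ge2):
    """Ratio of VEI>=4 eruptions to all VEI>=2 eruptions in a volcano's record."""
    return n_vei_ge4 / n_vei_ge2

def repose_ratio(mean_return_vei_ge4_yr, repose_since_last_vei_ge4_yr):
    """Average return period of VEI>=4 eruptions divided by the repose time
    since the last VEI>=4; values > 0.7 raise the Plinian probability in
    the study's Bayesian belief network."""
    return mean_return_vei_ge4_yr / repose_since_last_vei_ge4_yr

# Hypothetical catalogue: 3 VEI>=4 events among 20 VEI>=2 events,
# ~1500-yr average return, 1200 yr since the last VEI>=4 eruption.
print(plinian_ratio(3, 20))      # 0.15
print(repose_ratio(1500, 1200))  # 1.25 (> 0.7)
```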
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value-based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and in susceptibility to denominator neglect under different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format, and subsequent analyses of the parameters of cumulative prospect theory, showed that the manipulation of the probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
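The cumulative-prospect-theory analysis mentioned here fits, among other parameters, a probability-weighting function. A common one-parameter form (Tversky and Kahneman, 1992) is sketched below as a hedged illustration; the abstract does not specify which parameterization the authors used.

```python
def tk_weight(p, gamma):
    """Tversky-Kahneman (1992) probability weighting function:
    w(p) = p^g / (p^g + (1 - p)^g)^(1/g).
    gamma < 1 overweights small and underweights large probabilities."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"w({p}) = {tk_weight(p, 0.61):.3f}")  # gamma = 0.61 is a classic estimate
```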
Wu, Chia-Ching; Lin, Hsiang-Chin; Chang, Yuan-Bin; Tsai, Po-Yu; Yeh, Yu-Ying; Fan, He; Lin, King-Chuen; Francisco, J S
2011-12-21
A primary dissociation channel of Br2 elimination is detected following single-photon absorption of (COBr)2 at 248 nm using cavity ring-down absorption spectroscopy. The technique employs two laser beams propagating in a perpendicular configuration. The tunable laser beam along the axis of the ring-down cell probes the Br2 fragment in the B3Π(0u+)-X1Σ(g+) transition. Measurements of laser-energy and pressure dependence, together with the addition of a Br scavenger, rule out the possibility that Br2 arises from a secondary reaction. By means of spectral simulation, the ratio of nascent vibrational populations for the v = 0, 1, and 2 levels is evaluated to be 1:(0.65 ± 0.09):(0.34 ± 0.07), corresponding to a Boltzmann vibrational temperature of 893 ± 31 K. The quantum yield of the ground-state Br2 elimination reaction is determined to be 0.11 ± 0.06. With the aid of ab initio potential energy calculations, a pathway for molecular elimination via internal conversion on the ground state of (COBr)2 is proposed. A four-center dissociation mechanism then follows, synchronously or sequentially, yielding three fragments: Br2 + 2CO. The resulting Br2 is anticipated to be vibrationally hot, and the measurement of a positive temperature effect supports the proposed mechanism.
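The vibrational temperature follows from a Boltzmann fit to the level populations. The sketch below reproduces that arithmetic assuming a harmonic Br2 spacing of about 325 cm⁻¹ (a standard literature value; our assumption). The paper's 893 K came from full spectral simulation, so only rough agreement is expected.

```python
import numpy as np

pops = np.array([1.0, 0.65, 0.34])    # relative populations of v = 0, 1, 2
omega_e = 325.0                       # cm^-1, approximate Br2 spacing (assumed)
cm1_to_K = 1.4388                     # hc/k_B in kelvin per cm^-1
E = omega_e * np.arange(3) * cm1_to_K # harmonic level energies in kelvin

# Boltzmann: ln N_v = const - E_v / T, so the slope of ln(N) vs E is -1/T
slope, _ = np.polyfit(E, np.log(pops), 1)
print(f"T_vib ~ {-1.0 / slope:.0f} K")  # ~870 K, near the reported 893 +/- 31 K
```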
Basanta, María F; de Escalada Plá, Marina F; Stortz, Carlos A; Rojas, Ana M
2013-01-30
The cell wall polysaccharides of Regina and Sunburst cherry varieties at two developmental stages were extracted sequentially, and changes in their monosaccharide composition and functional properties were studied. The loosely attached pectins presented a lower D-galacturonic acid/rhamnose ratio than the ionically bound pectins, as well as weaker thickening effects in their respective 2% aqueous solutions: the lowest Newtonian viscosity and shear-rate dependence during the pseudoplastic phase. The main constituents of the cell wall matrix were covalently bound pectins (probably through diferulate cross-linkages), with long arabinan side chains at the RG-I cores. This pectin domain was also anchored into the XG-cellulose elastic network. Ripening occurred with a decrease in the proportion of HGs, water-extractable GGM, and xylogalacturonan, and with a concomitant increase in neutral sugars. Ripening was also associated with higher viscosities and thickening effects, and with a broader distribution of molecular weights. The greater firmness and compactness of the Regina cherry may be associated with its higher proportion of calcium-bound HGs localized in the middle lamellae of cell walls, as well as with a somewhat higher molar proportion of neutral sugars (Rha and Ara) in covalently bound pectins. These pectins showed significantly better hydration properties than the hemicellulose-cellulose network. The chemical composition and functional properties of the cell wall polymers depended on cherry variety and ripening stage, and helped explain the contrasting firmness of the Regina and Sunburst varieties. Copyright © 2012 Elsevier Ltd. All rights reserved.
Greene, Sharon K.; Kulldorff, Martin; Lewis, Edwin M.; Li, Rong; Yin, Ruihua; Weintraub, Eric S.; Fireman, Bruce H.; Lieu, Tracy A.; Nordin, James D.; Glanz, Jason M.; Baxter, Roger; Jacobsen, Steven J.; Broder, Karen R.; Lee, Grace M.
2010-01-01
The emergence of pandemic H1N1 influenza in 2009 prompted public health responses, including the production and licensure of new influenza A (H1N1) 2009 monovalent vaccines. Safety monitoring is a critical component of vaccination programs. As a proof of concept, the authors mimicked near real-time prospective surveillance for prespecified neurologic and allergic adverse events among enrollees in 8 medical care organizations (the Vaccine Safety Datalink Project) who received seasonal trivalent inactivated influenza vaccine during the 2005/06-2007/08 influenza seasons. In self-controlled case series analysis, the risk of adverse events in a prespecified exposure period following vaccination was compared with the risk in a control period for the same individual either before or after vaccination. In difference-in-difference analysis, the relative risk in exposed versus control periods each season was compared with the relative risk in previous seasons since 2000/01. The authors used Poisson-based analysis to compare the risk of Guillain-Barré syndrome following vaccination in each season with that in previous seasons. Maximized sequential probability ratio tests were used to adjust for repeated analyses of weekly data. With administration of 1,195,552 doses to children under age 18 years and 4,773,956 doses to adults, no elevated risk of adverse events was identified. Near real-time surveillance for selected adverse events can be implemented prospectively to rapidly assess seasonal and pandemic influenza vaccine safety. PMID:19965887
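The maximized SPRT used here (Kulldorff's MaxSPRT) monitors a cumulative log-likelihood ratio against a critical value calibrated for the full surveillance horizon. A minimal Poisson-variant sketch follows; the critical value and weekly counts below are illustrative only, since real critical values come from exact tabulations for the chosen alpha and surveillance length.

```python
import math

def poisson_maxsprt_llr(c, mu):
    """Kulldorff's MaxSPRT log-likelihood ratio for Poisson counts:
    LLR = c*ln(c/mu) - (c - mu) when c > mu, else 0 (one-sided test
    for elevated risk)."""
    if c <= mu:
        return 0.0
    return c * math.log(c / mu) - (c - mu)

# Weekly surveillance sketch: cumulative observed events vs expected
# events from historical rates. CV = 3.0 is illustrative, not an exact value.
CV = 3.0
c = mu = 0.0
for week, (obs, exp) in enumerate([(1, 0.5), (2, 0.6), (4, 0.7), (3, 0.6)], 1):
    c, mu = c + obs, mu + exp
    llr = poisson_maxsprt_llr(c, mu)
    flag = "SIGNAL" if llr >= CV else ""
    print(f"week {week}: c={c:.0f}, mu={mu:.1f}, LLR={llr:.2f} {flag}")
```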
NASA Astrophysics Data System (ADS)
Qiao, C. Y.; Wei, H. L.; Ma, C. W.; Zhang, Y. L.; Wang, S. S.
2015-07-01
Background: The isobaric yield ratio difference (IBD) method is found to be sensitive to the density difference in neutron-rich-nucleus-induced reactions around the Fermi energy. Purpose: An investigation is performed to study the IBD results in a transport model. Methods: The antisymmetrized molecular dynamics (AMD) model plus the sequential decay model GEMINI are adopted to simulate the 140A MeV 58,64Ni + 9Be reactions. A relatively small coalescence radius Rc = 2.5 fm is used for the phase space at t = 500 fm/c to form the hot fragments. Two limitations on the impact parameter (b1 = 0-2 fm and b2 = 0-9 fm) are used to study the effect of central collisions on the IBD. Results: The isobaric yield ratios (IYRs) for the large-A fragments are found to be suppressed in the symmetric reaction. The IBD results for fragments with neutron excess I = 0 and 1 are obtained. A small difference is found between the IBDs with the b1 and b2 limitations in the AMD-simulated reactions, and the IBDs with b1 and b2 are quite similar in the AMD + GEMINI simulated reactions. Conclusions: The IBDs for the I = 0 and 1 chains are mainly determined by the central collisions, which reflect the nuclear density in the core region of the reaction system. The increasing part of the IBD distribution is attributed to the difference between the densities in the peripheral collisions of the reactions. The sequential decay process influences the IBD results. The AMD + GEMINI simulation reproduces the experimental IBDs better than the AMD simulation alone.
Aragón-Sánchez, J; Lipsky, Benjamin A; Lázaro-Martínez, J L
2011-02-01
To investigate the accuracy of the sequential combination of the probe-to-bone test and plain X-rays for diagnosing osteomyelitis in the foot of patients with diabetes, we prospectively compiled data on a series of 338 patients with diabetes with 356 episodes of foot infection who were hospitalized in the Diabetic Foot Unit of La Paloma Hospital from 1 October 2002 to 30 April 2010. For each patient we performed a probe-to-bone test at the time of the initial evaluation and then obtained plain X-rays of the involved foot. All patients with positive results on either the probe-to-bone test or plain X-rays underwent an appropriate surgical procedure, which included obtaining a bone specimen that was processed for histology and culture. We calculated the sensitivity, specificity, predictive values, and likelihood ratios of the procedures, using the histopathological diagnosis of osteomyelitis as the criterion standard. Overall, 72.4% of patients had histologically proven osteomyelitis, 85.2% of whom had a positive bone culture. The performance characteristics of both the probe-to-bone test and plain X-rays were excellent. The sequential diagnostic approach had a sensitivity of 0.97, specificity of 0.92, positive predictive value of 0.97, negative predictive value of 0.93, positive likelihood ratio of 12.8, and negative likelihood ratio of 0.02. Only 6.6% of patients with negative results on both diagnostic studies had osteomyelitis. Clinicians seeing patients in a setting similar to ours (a specialized diabetic foot unit with a high prevalence of osteomyelitis) can confidently diagnose diabetic foot osteomyelitis when either the probe-to-bone test or a plain X-ray, or especially both, are positive. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
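The reported likelihood ratios convert pre-test to post-test probability through Bayes' rule on the odds scale; the sketch below reproduces the study's predictive values from its prevalence (72.4%) and likelihood ratios, up to rounding.

```python
def post_test_probability(pre_prob, lr):
    """Bayes on the odds scale: post-odds = pre-odds * likelihood ratio."""
    pre_odds = pre_prob / (1.0 - pre_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

prevalence = 0.724  # osteomyelitis prevalence in this cohort
print(f"after a positive test: {post_test_probability(prevalence, 12.8):.2f}")  # ~0.97 (PPV)
print(f"after a negative test: {post_test_probability(prevalence, 0.02):.2f}")  # ~0.05
```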
Gas selectivity of SILAR grown CdS nano-bulk junction
NASA Astrophysics Data System (ADS)
Jayakrishnan, R.; Nair, Varun G.; Anand, Akhil M.; Venugopal, Meera
2018-03-01
Nano-particles of cadmium sulphide were deposited on cleaned copper substrates by an automated sequential ionic layer adsorption and reaction (SILAR) system. The grown nano-bulk junction exhibits Schottky diode behavior. The response of the nano-bulk junction was investigated under oxygen and hydrogen atmospheres. The gas response ratio was found to be 198% for oxygen and 34% for hydrogen at room temperature. An increase in the operating temperature of the nano-bulk junction resulted in a decrease in the gas response ratio. The junction response showed a logarithmic dependence on the oxygen partial pressure, indicating Temkin isothermal behavior. Work function measurements using a Kelvin probe demonstrate that exposure to an oxygen atmosphere fails to effectively separate the charges due to the built-in electric field at the interface. Given benefits such as its simple structure, ease of fabrication, and response ratio, the studied device is a promising candidate for gas detection applications.
Laser Ignition Microthruster Experiments on KKS-1
NASA Astrophysics Data System (ADS)
Nakano, Masakatsu; Koizumi, Hiroyuki; Watanabe, Masashi; Arakawa, Yoshihiro
A laser ignition microthruster has been developed for microsatellites. Thruster performance metrics such as impulse and ignition probability were measured using boron potassium nitrate (B/KNO3) solid propellant ignited by a 1 W CW laser diode. The measured impulses were 60 ± 15 mNs with almost 100% ignition probability. The effect of the B/KNO3 mixture ratio on thruster performance was also investigated, and it was shown that mixture ratios between B/KNO3/binder = 28/70/2 and 38/60/2 exhibited both high ignition probability and high impulse. Laser ignition thrusters designed and fabricated based on these data became the first non-conventional microthrusters on the Kouku Kousen Satellite No. 1 (KKS-1) microsatellite, launched on an H2A rocket as one of six piggyback satellites in January 2009.
Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko
2012-01-01
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluated 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability were selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, the performances of the 14 randomization designs are located in a closed region, with the upper boundary (worst case) given by Efron's biased coin design (EBCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide smaller imbalance and higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on a quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
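Soares and Wu's big stick design is easy to simulate: toss a fair coin unless the group-size imbalance has reached a tolerance, in which case the assignment is forced to the lagging arm. The sketch below estimates the two benchmark measures (maximum absolute imbalance, and correct-guess probability under a convergence guessing strategy); the tolerance, trial size, and replication count are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def big_stick_trial(n, tolerance=3):
    """One simulated trial of the big stick design."""
    imbalance = 0   # n_A - n_B
    max_imb = 0
    correct = 0
    for _ in range(n):
        if imbalance >= tolerance:
            arm = -1                    # forced to arm B
        elif imbalance <= -tolerance:
            arm = 1                     # forced to arm A
        else:
            arm = rng.choice((1, -1))   # fair coin
        # Convergence strategy: guess the currently lagging arm
        guess = -1 if imbalance > 0 else (1 if imbalance < 0 else rng.choice((1, -1)))
        correct += (guess == arm)
        imbalance += arm
        max_imb = max(max_imb, abs(imbalance))
    return max_imb, correct / n

runs = [big_stick_trial(100) for _ in range(2000)]
print("mean max |imbalance|:", np.mean([r[0] for r in runs]))
print("mean correct-guess probability:", np.mean([r[1] for r in runs]))
```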
HIV Infection and Survival Among Women With Cervical Cancer
Bvochora-Nsingo, Memory; Suneja, Gita; Efstathiou, Jason A.; Grover, Surbhi; Chiyapo, Sebathu; Ramogola-Masire, Doreen; Kebabonye-Pusoentsi, Malebogo; Clayman, Rebecca; Mapes, Abigail C.; Tapela, Neo; Asmelash, Aida; Medhin, Heluf; Viswanathan, Akila N.; Russell, Anthony H.; Lin, Lilie L.; Kayembe, Mukendi K.A.; Mmalane, Mompati; Randall, Thomas C.; Chabner, Bruce; Lockman, Shahin
2016-01-01
Purpose: Cervical cancer is the leading cause of cancer death among the 20 million women with HIV worldwide. We sought to determine whether HIV infection affected survival in women with invasive cervical cancer. Patients and Methods: We enrolled sequential patients with cervical cancer in Botswana from 2010 to 2015. Standard treatment included external beam radiation and brachytherapy with concurrent cisplatin chemotherapy. The effect of HIV on survival was estimated by using an inverse probability weighted marginal Cox model. Results: A total of 348 women with cervical cancer were enrolled, including 231 (66.4%) with HIV and 96 (27.6%) without HIV. The majority (189 [81.8%]) of women with HIV had received antiretroviral therapy before their cancer diagnosis. The median CD4 cell count for women with HIV was 397 (interquartile range, 264 to 555). After a median follow-up of 19.7 months, 117 (50.7%) women with HIV and 40 (41.7%) without HIV had died. One death was attributed to HIV and the remainder to cancer. Three-year survival was 35% (95% CI, 27% to 44%) for the women with HIV and 48% (95% CI, 35% to 60%) for those without HIV. In an adjusted analysis, HIV infection significantly increased the risk for death among all women (hazard ratio, 1.95; 95% CI, 1.20 to 3.17) and in the subset that received guideline-concordant curative treatment (hazard ratio, 2.63; 95% CI, 1.05 to 6.55). The adverse effect of HIV on survival was greater for women with more limited-stage cancer (P = .035), those treated with curative intent (P = .003), and those with a lower CD4 cell count (P = .036). Advanced stage and poor treatment completion contributed to high mortality overall. Conclusion: In the context of good access to and use of antiretroviral treatment in Botswana, HIV infection significantly decreases cervical cancer survival. PMID:27573661
Significance of stress transfer in time-dependent earthquake probability calculations
Parsons, T.
2005-01-01
A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant change in earthquake probability? To answer that question, the effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many pairs that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. The consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating the probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. The disparity resulting from the choice of interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum ratio of stress change to stressing rate of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus, a revision of earthquake probability is justified only when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
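The point-process renewal forecasts referred to here are conditional probabilities computed from a recurrence distribution parameterized by mean recurrence and aperiodicity. A minimal sketch using the Brownian passage time (BPT) model follows; the BPT is the inverse-Gaussian distribution, and the scipy parameter mapping shown (mu = alpha², scale = mean/alpha²) is our own derivation, with illustrative numbers.

```python
from scipy.stats import invgauss

def bpt_conditional_probability(t_elapsed, dt, mean_rec, alpha):
    """P(event in (t, t+dt] | no event by t) under a Brownian passage
    time renewal model. With scipy's parameterization, mu = alpha**2 and
    scale = mean_rec/alpha**2 give mean = mean_rec and std = alpha*mean_rec."""
    dist = invgauss(mu=alpha**2, scale=mean_rec / alpha**2)
    f_t = dist.cdf(t_elapsed)
    return (dist.cdf(t_elapsed + dt) - f_t) / (1.0 - f_t)

# Example: 200-yr mean recurrence, aperiodicity 0.5, 150 yr elapsed;
# probability of rupture in the next 30 yr:
print(f"{bpt_conditional_probability(150.0, 30.0, 200.0, 0.5):.3f}")
```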
Ermertcan, Aylin Türel; Oztürk, Ferdi; Gençoğlan, Gülsüm; Eskiizmir, Görkem; Temiz, Peyker; Horasan, Gönül Dinç
2011-03-01
The precision of clinical diagnosis of skin tumors is not commonly measured and, therefore, very little is known about the diagnostic ability of clinicians. This study aimed to compare clinical and histopathologic diagnoses of nonmelanoma skin cancers with regard to sensitivity, predictive values, pretest-posttest probabilities, and likelihood ratios. Two hundred nineteen patients with 241 nonmelanoma skin cancers were enrolled in this study. Of these patients, 49.4% were female and 50.6% were male. The mean age ± standard deviation (SD) was 63.66 ± 16.44 years for the female patients and 64.77 ± 14.88 years for the male patients. The mean duration of the lesions was 20.90 ± 32.95 months. One hundred forty-eight (61.5%) of the lesions were diagnosed as basal cell carcinoma (BCC) and 93 (38.5%) were diagnosed as squamous cell carcinoma (SCC) histopathologically. Sensitivity, positive predictive value, and posttest probability were calculated as 75.96%, 87.77%, and 87.78% for BCC and 70.37%, 37.25%, and 37.20% for SCC, respectively. The correlation between clinical and histopathologic diagnoses was found to be higher in BCC. Knowledge of sensitivity, predictive values, likelihood ratios, and posttest probabilities may have implications for the management of skin cancers. To prevent unnecessary surgeries and achieve high diagnostic accuracies, multidisciplinary approaches are recommended.
The effect of code expanding optimizations on instruction cache design
NASA Technical Reports Server (NTRS)
Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.
1991-01-01
It is shown that code expanding optimizations have strong and non-intuitive implications for instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves performance for small cache sizes but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.
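Miss ratios like those studied here are typically measured by replaying an instruction-address trace through a cache model. Below is a minimal direct-mapped instruction-cache simulator (the parameters and toy trace are ours); a perfectly sequential trace shows why high access sequentiality bounds the miss ratio at roughly one miss per cache line regardless of cache size.

```python
def miss_ratio(trace, cache_bytes=1024, line_bytes=16):
    """Miss ratio of a direct-mapped instruction cache on an address trace."""
    n_lines = cache_bytes // line_bytes
    tags = [None] * n_lines
    misses = 0
    for addr in trace:
        block = addr // line_bytes   # cache line (block) number
        index = block % n_lines      # direct-mapped set index
        if tags[index] != block:     # tag mismatch -> miss, then fill
            tags[index] = block
            misses += 1
    return misses / len(trace)

# A sequential 4-byte-instruction stream touches each 16-byte line
# 4 times in a row, so the miss ratio is ~1/4 whatever the cache size.
sequential = list(range(0, 4096, 4))
print("sequential trace miss ratio:", miss_ratio(sequential))  # 0.25
```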
NASA Astrophysics Data System (ADS)
Takemura, Shunsuke; Saito, Tatsuhiko; Shiomi, Katsuhiko
2017-03-01
An M 6.8 (Mw 6.5) deep-focus earthquake occurred beneath the Bonin Islands at 21:18 (JST) on June 23, 2015. Observed high-frequency (>1 Hz) seismograms across Japan, which contain several sets of P- and S-wave arrivals in the 10 min after the origin time, indicate that moderate-to-large earthquakes occurred sequentially around Japan. Snapshots of the seismic energy propagation illustrate that after one deep-focus earthquake occurred beneath the Sea of Japan, two deep-focus earthquakes occurred sequentially beneath the Bonin Islands within 4 min of the first (Mw 6.5) event. The United States Geological Survey catalog includes three Bonin deep-focus earthquakes with similar hypocenter locations, but their estimated magnitudes are inconsistent with seismograms from across Japan. The maximum-amplitude patterns of the latter two earthquakes were similar to that of the first Bonin earthquake, which indicates similar locations and mechanisms. Furthermore, based on the ratios of their S-wave amplitudes to that of the first event, the magnitudes of the latter events are estimated as M 6.5 ± 0.02 and M 5.8 ± 0.02, respectively. Three magnitude-6-class earthquakes thus occurred sequentially within 4 min in the Pacific slab at 480 km depth, where complex heterogeneities exist within the slab.
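Estimating the magnitude of a co-located event from amplitude ratios relies on the standard relation ΔM ≈ log10(A1/A2) for events with similar paths and mechanisms; a one-line sketch with our own illustrative numbers follows.

```python
import math

def magnitude_from_amplitude_ratio(m_ref, amp, amp_ref):
    """Magnitude of a co-located event from the ratio of its S-wave
    amplitude to that of a reference event with known magnitude,
    assuming standard log10 amplitude-magnitude scaling."""
    return m_ref + math.log10(amp / amp_ref)

# An event whose S waves are ~1/5 the amplitude of the Mw 6.5 reference:
print(round(magnitude_from_amplitude_ratio(6.5, 1.0, 5.0), 1))  # ~5.8
```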
Galach, Magda; Antosiewicz, Stefan; Baczynski, Daniel; Wankowicz, Zofia; Waniewski, Jacek
2013-02-01
Despite the many peritoneal tests that have been proposed, there is still a need for a simple and reliable approach for deriving detailed information about peritoneal membrane characteristics, especially those related to fluid transport. The sequential peritoneal equilibration test (sPET), which comprises a PET (glucose 2.27%, 4 h) followed by a miniPET (glucose 3.86%, 1 h), was performed in 27 stable continuous ambulatory peritoneal dialysis patients. Ultrafiltration volumes, glucose absorption, the dialysate-to-plasma concentration ratio (D/P), the sodium dip (Dip D/P Sodium), the free water fraction (FWF60), and the ultrafiltration passing through small pores at 60 min (UFSP60) were calculated from clinical data. Peritoneal transport parameters were estimated using the three-pore model (3p model) and clinical data. Osmotic conductance for glucose was calculated from the parameters of the model. D/P creatinine correlated with the diffusive mass transport parameters for all solutes considered, but not with the fluid transport characteristics. Hydraulic permeability (LpS) correlated with net ultrafiltration from the miniPET, UFSP60, FWF60, and the sodium dip. The fraction of ultrasmall pores correlated with FWF60 and the sodium dip. The sequential PET thus described and interpreted the mechanisms of ultrafiltration and solute transport. Fluid transport parameters from the 3p model were independent of the PET D/P creatinine but correlated with the fluid transport characteristics from the PET and miniPET.
Ganesh, J S; Rogers, C A; Bonser, R S; Banner, N R
2005-06-01
Cystic fibrosis (CF) patients requiring transplantation for respiratory failure may undergo either heart-lung (HLT) or bilateral sequential lung (BSLT) transplantation. The choice of operation varies between surgeons, centres and countries. The current authors investigated whether operation type influenced outcome in adult CF patients transplanted in the UK between July 1995 and June 2002. Propensity scores for receipt of BSLT versus HLT were derived using logistic regression. Cox regression was used to compare survival. In total, 88 BSLTs and 93 HLTs were identified. Patient characteristics were similar overall, but HLT recipients were more likely to be on long-term oxygen therapy and to have had prior resuscitation. There were 72 deaths (29 BSLT and 43 HLT) within 4 yrs. There was a trend towards higher unadjusted survival following BSLT, but, after adjustment, no difference was found (hazard ratio = 0.77; 95% confidence interval 0.29-2.06). Time to the first rejection episode and infection rates were also similar. A total of 82% of hearts from HLT recipients were used as domino heart transplants. In conclusion, after adjusting for comorbidity, donor factors and ischaemia time, it was found that heart-lung and bilateral sequential lung transplantation achieved a similar outcome. The use of domino heart transplantation ameliorated the impact of heart-lung transplantation on total organ availability.
Accuracy of diagnostic tests to detect asymptomatic bacteriuria during pregnancy.
Mignini, Luciano; Carroli, Guillermo; Abalos, Edgardo; Widmer, Mariana; Amigot, Susana; Nardin, Juan Manuel; Giordano, Daniel; Merialdi, Mario; Arciero, Graciela; Del Carmen Hourquescos, Maria
2009-02-01
A dipslide is a plastic paddle coated with agar that is attached to a plastic cap that screws onto a sterile plastic vial. Our objective was to estimate the diagnostic accuracy of the dipslide culture technique for detecting asymptomatic bacteriuria during pregnancy and to evaluate the accuracy of nitrite and leucocyte esterase dipsticks for screening. This was an ancillary study within a trial comparing single-day with 7-day therapy for treating asymptomatic bacteriuria. Clean-catch midstream samples were collected from pregnant women seeking routine care. Positive and negative likelihood ratios, sensitivity, and specificity were estimated for the culture-based dipslide (detection) and for chemical dipsticks measuring nitrites, leukocyte esterase, or both (screening), using traditional urine culture as the gold standard. A total of 3,048 eligible pregnant women were screened. The prevalence of asymptomatic bacteriuria was 15%, with Escherichia coli the most prevalent organism. The likelihood ratio for detecting asymptomatic bacteriuria with a positive dipslide test was 225 (95% confidence interval [CI] 113-449), increasing the probability of asymptomatic bacteriuria to 98%; the likelihood ratio for a negative dipslide test was 0.02 (95% CI 0.01-0.05), reducing the probability of bacteriuria to less than 1%. The positive likelihood ratio of the leukocyte esterase and nitrite dipsticks (when both or either one was positive) was 6.95 (95% CI 5.80-8.33), increasing the probability of bacteriuria to only 54%; the negative likelihood ratio was 0.50 (95% CI 0.45-0.57), reducing the probability to 8%. A pregnant woman with a positive dipslide test is very likely to have a definitive diagnosis of asymptomatic bacteriuria, whereas a negative result effectively rules out the presence of bacteriuria. Dipsticks that measure nitrites and leukocyte esterase have low sensitivity for screening for asymptomatic bacteriuria during gestation. ISRCTN, isrctn.org, 1196608 II.