Sample records for probability measures fpm

  1. Modeling highway travel time distribution with conditional probability models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling

    ABSTRACT Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. The major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
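    To make the convolution-plus-conditional-probability idea concrete, the sketch below combines a discretized travel-time distribution for one link with a link-to-link conditional distribution for the next link to obtain the route distribution. It is a minimal Python illustration, not the authors' implementation; the bin structure and the function name route_time_pmf are assumptions of this sketch.

    ```python
    import numpy as np

    def route_time_pmf(p_a, p_b_given_a):
        """Route travel-time PMF for two contiguous links.

        p_a         : 1-D array, PMF of link-A travel time over bins 0..n-1
        p_b_given_a : 2-D array, p_b_given_a[i, j] = P(T_B = j | T_A = i)
        Returns the PMF of T_A + T_B over bins 0..2n-2.
        """
        n = len(p_a)
        out = np.zeros(2 * n - 1)
        for i in range(n):                     # condition on link-A time bin i
            out[i:i + n] += p_a[i] * p_b_given_a[i]
        return out

    # Toy example with 3 time bins per link:
    p_a = np.array([0.2, 0.5, 0.3])
    p_b_given_a = np.array([[0.6, 0.3, 0.1],   # fast upstream -> likely fast downstream
                            [0.3, 0.5, 0.2],
                            [0.1, 0.3, 0.6]])  # slow upstream -> likely slow downstream
    print(route_time_pmf(p_a, p_b_given_a))
    ```

    If successive links were independent, p_b_given_a would have identical rows and the same result would reduce to a plain convolution, np.convolve(p_a, p_b).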

  2. The Anatomy of American Football: Evidence from 7 Years of NFL Game Data

    PubMed Central

    Papalexakis, Evangelos

    2016-01-01

    How much does a fumble affect the probability of winning an American football game? How balanced should your offense be in order to increase the probability of winning by 10%? These are questions for which the coaching staff of National Football League teams have a clear qualitative answer. Turnovers are costly; turn the ball over several times and you will certainly lose. Nevertheless, what does "several" mean? How "certain" is certainly? In this study, we collected play-by-play data from the past 7 NFL seasons, i.e., 2009–2015, and we built a descriptive model for the probability of winning a game. Despite the fact that our model incorporates simple box score statistics, such as total offensive yards, number of turnovers etc., its overall cross-validation accuracy is 84%. Furthermore, we combine this descriptive model with a statistical bootstrap module to build FPM (short for Football Prediction Matchup) for predicting future match-ups. The contribution of FPM lies in its simplicity and transparency, which nevertheless do not sacrifice the system's performance. In particular, our evaluations indicate that our prediction engine performs on par with the current state-of-the-art systems (e.g., ESPN's FPI and Microsoft's Cortana). The latter are typically proprietary, but based on their publicly described components they are significantly more complicated than FPM. Moreover, their proprietary nature does not allow for a head-to-head comparison in terms of the core elements of the systems, but it should be evident that the features incorporated in FPM are able to capture a large percentage of the observed variance in NFL games. PMID:28005971
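    As a rough illustration of a descriptive win-probability model built from box-score statistics and evaluated by cross-validation, the sketch below fits a logistic regression on synthetic stand-in data. It is not the authors' model; the feature set, the synthetic data, and the use of scikit-learn are assumptions of this sketch.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Stand-in per-team-game features: offensive yards, turnovers, pass/run
    # balance, penalty yards (standardized); real data would come from box scores.
    X = rng.normal(size=(500, 4))
    # Synthetic labels: more yards and fewer turnovers make a win more likely.
    y = (X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression()
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"10-fold cross-validation accuracy: {scores.mean():.2f}")
    ```

    A bootstrap match-up module, as described in the abstract, would then resample each team's box-score statistics and push the resampled values through the fitted model to obtain a win-probability distribution for a future game.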

  3. High-speed Fourier ptychographic microscopy based on programmable annular illuminations.

    PubMed

    Sun, Jiasong; Zuo, Chao; Zhang, Jialin; Fan, Yao; Chen, Qian

    2018-05-16

    High-throughput quantitative phase imaging (QPI) is essential to cellular phenotype characterization as it allows high-content cell analysis and avoids the adverse effects of staining reagents on cellular viability and cell signaling. Among different approaches, Fourier ptychographic microscopy (FPM) is probably the most promising technique to realize high-throughput QPI by synthesizing a wide-field, high-resolution complex image from multiple angle-variably illuminated, low-resolution images. However, the large dataset requirement in conventional FPM significantly limits its imaging speed, resulting in low temporal throughput. Moreover, the underlying theoretical mechanism as well as the optimum illumination scheme for high-accuracy phase imaging in FPM remains unclear. Herein, we report a high-speed FPM technique based on programmable annular illuminations (AIFPM). The optical-transfer-function (OTF) analysis of FPM reveals that the low-frequency phase information can only be correctly recovered if the LEDs are precisely located at the edge of the objective numerical aperture (NA) in frequency space. By using only 4 low-resolution images corresponding to 4 tilted illuminations matching a 10×, 0.4 NA objective, we present the high-speed imaging results of in vitro HeLa cell mitosis and apoptosis at a frame rate of 25 Hz with a full-pitch resolution of 655 nm at a wavelength of 525 nm (effective NA = 0.8) across a wide field-of-view (FOV) of 1.77 mm², corresponding to a space-bandwidth-time product of 411 megapixels per second. Our work reveals an important capability of FPM towards high-speed, high-throughput imaging of in vitro live cells, achieving video-rate QPI performance across a wide range of scales, both spatial and temporal.
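    Several entries in this list rely on the basic FPM recovery loop: each LED illumination yields a low-resolution intensity image whose amplitude constrains one sub-band of the high-resolution spectrum. The sketch below shows a minimal alternating-projection version of that loop (no pupil/aberration update, binary pupil, valid sub-band indices assumed); it is a generic illustration of conventional FPM, not the AIFPM algorithm of this paper.

    ```python
    import numpy as np

    def fpm_reconstruct(low_res_imgs, centers, pupil, hr_shape, n_iter=20):
        """Minimal Fourier-ptychography recovery loop (alternating projections).

        low_res_imgs : list of measured low-resolution intensity images (m x m)
        centers      : list of (row, col) pupil centres in the HR spectrum, one per LED
        pupil        : m x m binary pupil support of the objective
        hr_shape     : shape of the high-resolution spectrum to recover
        """
        m = pupil.shape[0]
        # Crude initial guess: zero-padded spectrum of the mean low-res amplitude.
        spectrum = np.fft.fftshift(np.fft.fft2(np.sqrt(np.mean(low_res_imgs, axis=0)),
                                               s=hr_shape))
        for _ in range(n_iter):
            for img, (cy, cx) in zip(low_res_imgs, centers):
                sl = (slice(cy - m // 2, cy + m // 2), slice(cx - m // 2, cx + m // 2))
                sub = spectrum[sl] * pupil                      # clip one sub-aperture
                lr_field = np.fft.ifft2(np.fft.ifftshift(sub))  # simulated low-res field
                # Keep the phase, replace the modulus with the measurement.
                lr_field = np.sqrt(img) * np.exp(1j * np.angle(lr_field))
                new_sub = np.fft.fftshift(np.fft.fft2(lr_field))
                spectrum[sl] = spectrum[sl] * (1 - pupil) + new_sub * pupil
        return np.fft.ifft2(np.fft.ifftshift(spectrum))         # HR complex image
    ```

    Annular-illumination FPM as described above restricts the LED positions to the edge of the objective NA so that the low-frequency phase transfer survives with only a handful of such sub-band updates.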

  4. Ventolin Diskus and Inspyril Turbuhaler: an in vitro comparison.

    PubMed

    Broeders, M E A C; Molema, J; Burnell, P K P; Folgering, H T M

    2005-01-01

    Dose delivery (total emitted dose, or TED) from dry powder inhalers (DPIs), pulmonary deposition, and the biological effects depend on drug formulation, device, and patient characteristics. The aim of this study was to measure, in vitro, the relationship between parameters of inhalation profiles recorded from patients and the TED and fine particle mass (FPM) of the Diskus and Turbuhaler inhalers. Inhalation profiles (IPs) of 25 patients, a representative sample of a wide range of 1500 IPs generated by 10 stable asthmatics, 3 x 16 (mild/moderate/severe) COPD patients, and 15 hospitalized patients with an exacerbation of asthma or COPD, were selected for each device. These 25 IPs were input IPs for the Electronic Lung (a computer-driven inhalation simulator) to determine the particle size distribution from Ventolin Diskus and Inspyril Turbuhaler. The TED and FPM of the Diskus and the FPM of the Turbuhaler were affected by the peak inspiratory flow (PIF) and not by the slope of the pressure-time curve, inhaled volume, or inhalation time. This flow-dependency was more marked at lower flows (PIF < 40 L/min). Both the TED and FPM of the Diskus were significantly higher than those of the Turbuhaler [mean (SD) TED_Diskus (% label claim) 83.5 (13.9) vs. TED_Turbuhaler 72.5 (11.1), p = 0.004; FPM_Diskus (% label claim) 36.8 (9.8) vs. FPM_Turbuhaler 28.7 (7.7), p < 0.05]. The TED and FPM of the Diskus and the FPM of the Turbuhaler were affected by PIF, the flow-dependency being greater at PIF values below 40 L/min. Lower PIFs occurred more often when using the Turbuhaler than the Diskus, since the Turbuhaler has a higher resistance and requires substantially higher pressure to generate the same flow as the Diskus. TED, dose consistency, and the FPM were higher for the Diskus than for the Turbuhaler. The flow dependency of TED and FPM was substantially influenced by inhalation profiles when not only profiles of the usual outpatient population were included but also the real outliers from exacerbated patients.

  5. Three methods of presenting flight vector information in a head-up display during simulated STOL approaches

    NASA Technical Reports Server (NTRS)

    Dwyer, J. H., III; Palmer, E. A., III

    1975-01-01

    A simulator study was conducted to determine the usefulness of adding flight path vector symbology to a head-up display designed to improve glide-slope tracking performance during steep 7.5 deg visual approaches in STOL aircraft. All displays included a fixed attitude symbol, a pitch- and roll-stabilized horizon bar, and a glide-slope reference bar parallel to and 7.5 deg below the horizon bar. The displays differed with respect to the flight-path marker (FPM) symbol: display 1 had no FPM symbol; display 2 had an air-referenced FPM, and display 3 had a ground-referenced FPM. No differences between displays 1 and 2 were found on any of the performance measures. Display 3 was found to decrease height error in the early part of the approach and to reduce descent rate variation over the entire approach. Two measures of workload did not indicate any differences between the displays.

  6. Cardiovascular Effects of Nickel in Ambient Air

    PubMed Central

    Lippmann, Morton; Ito, Kazuhiko; Hwang, Jing-Shiang; Maciejczyk, Polina; Chen, Lung-Chi

    2006-01-01

    Background Fine particulate matter (FPM) in ambient air causes premature mortality due to cardiac disease in susceptible populations. Objective Our objective in this study was to determine the most influential FPM components. Methods A mouse model of atherosclerosis (ApoE−/−) was exposed to either filtered air or concentrated FPM (CAPs) in Tuxedo, New York (85 μg/m³ average, 6 hr/day, 5 days/week, for 6 months), and the FPM elemental composition was determined for each day. We also examined associations between PM components and mortality for two population studies: the National Mortality and Morbidity Air Pollution Study (NMMAPS) and Hong Kong. Results For the CAPs-exposed mice, the average Ni concentration was 43 ng/m³, but on 14 days there were Ni peaks at ~175 ng/m³ and unusually low FPM and vanadium. For those days, back-trajectory analyses identified a remote Ni point source. Electrocardiographic measurements on CAPs-exposed and sham-exposed mice showed Ni to be significantly associated with acute changes in heart rate and its variability. In NMMAPS, daily mortality rates in the 60 cities with recent speciation data were significantly associated with average Ni and V, but not with other measured species. Also, the Hong Kong sulfur intervention produced sharp drops in sulfur dioxide, Ni, and V, but not in other components, corresponding to the intervention-related reduction in cardiovascular and pulmonary mortality. Conclusions Known biological mechanisms cannot account for the significant associations of Ni with the acute cardiac function changes in the mice or with cardiovascular mortality in people at low ambient air concentrations; therefore, further research is needed. PMID:17107850

  7. Effects of illumination on image reconstruction via Fourier ptychography

    NASA Astrophysics Data System (ADS)

    Cao, Xinrui; Sinzinger, Stefan

    2017-12-01

    The Fourier ptychographic microscopy (FPM) technique provides high-resolution images by combining a traditional imaging system, e.g. a microscope or a 4f-imaging system, with a multiplexing illumination system, e.g. an LED array, and numerical image processing for enhanced image reconstruction. In order to numerically combine images that are captured under varying illumination angles, an iterative phase-retrieval algorithm is often applied. However, in practice, the performance of the FPM algorithm degrades due to imperfections of the optical system, image noise caused by the camera, etc. To eliminate the influence of aberrations of the imaging system, an embedded pupil function recovery (EPRY)-FPM algorithm has been proposed [Opt. Express 22, 4960-4972 (2014)]. In this paper, we study how the performance of the FPM and EPRY-FPM algorithms is affected by imperfections of the illumination system, using both numerical simulations and experiments. The investigated imperfections include varying and non-uniform intensities and wavefront aberrations. Our study shows that aberrations of the illumination system significantly affect the performance of both the FPM and EPRY-FPM algorithms. Hence, in practice, aberrations in the illumination system have a significant influence on the resulting image quality.

  8. Flexible Pre-Majors: Final Report of the Flexible Pre-Majors Working Group

    ERIC Educational Resources Information Center

    FitzGibbon, John; Orum, Jennifer

    2011-01-01

    This report provides advice for program areas contemplating the development of a Flexible Pre-Major (FPM) in their discipline. The FPM is another means of aiding student transfer in a system that expects and encourages significant student mobility. The FPM addresses a problematic area for academic students: that of completing the lower level major…

  9. Feasible pickup from intact ossicular chain with floating piezoelectric microphone.

    PubMed

    Kang, Hou-Yong; Na, Gao; Chi, Fang-Lu; Jin, Kai; Pan, Tie-Zheng; Gao, Zhen

    2012-02-22

    Many microphones have been developed to meet the implantation requirements of a totally implantable cochlear implant (TICI). However, a biocompatible microphone that does not destroy the intactness of the ossicular chain is still under investigation. Such an implantable floating piezoelectric microphone (FPM) has been manufactured and showed efficient electroacoustic performance in an in vitro test at our lab. We examined whether it could pick up sound sensitively from the intact ossicular chain and whether it might be an optimal implantable microphone. In a controlled animal experiment, five adult cats (eight ears) were sacrificed to serve as the model for testing the electroacoustic performance of the FPM. Three groups were studied: (1) the experiment group (on malleus): the FPM glued onto the handle of the malleus of the intact ossicular chain; (2) negative control group (in vivo): the FPM only hung in the tympanic cavity; (3) positive control group (Hy-M30): a HiFi commercial microphone placed close to the site of the experiment ear. The testing speaker played pure tones ranging in order from 0.25 to 8.0 kHz. The FPM inside the ear and the HiFi microphone simultaneously picked up the acoustic vibration, which was recorded as .wav files for analysis. The FPM transformed acoustic vibration sensitively and flatly, as in the in vitro test, at frequencies above 2.0 kHz, but inefficiently below 1.0 kHz because of its overloading mass. Although the HiFi microphone performed more efficiently than the FPM, there was no significant difference at 3.0 kHz and 8.0 kHz. It is feasible to develop such an implantable FPM for future TICI and TIHA systems, provided that improvements in microelectromechanical systems and piezoelectric ceramic materials are applied to reduce its weight and minimize its size.

  10. Fluorinated phenmetrazine "legal highs" act as substrates for high-affinity monoamine transporters of the SLC6 family.

    PubMed

    Mayer, Felix P; Burchardt, Nadine V; Decker, Ann M; Partilla, John S; Li, Yang; McLaughlin, Gavin; Kavanagh, Pierce V; Sandtner, Walter; Blough, Bruce E; Brandt, Simon D; Baumann, Michael H; Sitte, Harald H

    2018-05-15

    A variety of new psychoactive substances (NPS) are appearing in recreational drug markets worldwide. NPS are compounds that target various receptors and transporters in the central nervous system to achieve their psychoactive effects. Chemical modifications of existing drugs can generate NPS that are not controlled by current legislation, thereby providing legal alternatives to controlled substances such as cocaine or amphetamine. Recently, 3-fluorophenmetrazine (3-FPM), a derivative of the anorectic compound phenmetrazine, appeared on the recreational drug market and adverse clinical effects have been reported. Phenmetrazine is known to elevate extracellular monoamine concentrations by an amphetamine-like mechanism. Here we tested 3-FPM and its positional isomers, 2-FPM and 4-FPM, for their abilities to interact with plasma membrane monoamine transporters for dopamine (DAT), norepinephrine (NET) and serotonin (SERT). We found that 2-, 3- and 4-FPM inhibit uptake mediated by DAT and NET in HEK293 cells with potencies comparable to cocaine (IC50 values < 2.5 μM), but display less potent effects at SERT (IC50 values > 80 μM). Experiments directed at identifying transporter-mediated reverse transport revealed that FPM isomers induce efflux via DAT, NET and SERT in HEK293 cells, and this effect is augmented by the Na+/H+ ionophore monensin. Each FPM evoked concentration-dependent release of monoamines from rat brain synaptosomes. Hence, this study reports for the first time the mode of action for 2-, 3- and 4-FPM and identifies these NPS as monoamine releasers with marked potency at catecholamine transporters implicated in abuse and addiction. This article is part of the Special Issue entitled 'Designer Drugs and Legal Highs.' Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Flux estimation of fugitive particulate matter emissions from loose Calcisols at construction sites

    NASA Astrophysics Data System (ADS)

    Hassan, Hala A.; Kumar, Prashant; Kakosimos, Konstantinos E.

    2016-09-01

    A major source of airborne pollution in arid and semi-arid environments (i.e. North Africa, the Middle East, Central Asia, and Australia) is fugitive particulate matter (fPM), which is a frequent product of wind erosion. However, accurate determination of fPM is an ongoing scientific challenge. The objective of this study is to examine fPM emissions from loose Calcisols (i.e. soils with a substantial accumulation of secondary carbonates) owing to construction activities, which can frequently be seen nowadays in arid urbanizing regions such as the Middle East. A two-month field campaign was conducted at a construction site at rest within the city of Doha (Qatar) to measure number concentrations of PM over a size range of 0.25-32 μm using light-scattering-based monitoring stations. The fPM emission fluxes were calculated using the Fugitive Dust Model (FDM) in an iterative manner and were fitted to a power function, which expresses the wind velocity dependence. The power factors were estimated as 1.87, 1.65, 2.70 and 2.06 for the four different size classes of particles ≤2.5, 2.5-6, 6-10 and ≤10 μm, respectively. The fitted power function was considered acceptable given that adjusted R2 values varied from 0.13 for the smaller particles up to 0.69 for the larger ones. These power factors are in the same range as those reported in the literature for similar sources. The outcome of this study is expected to contribute to the improvement of PM emission inventories by focusing on a significant but often overlooked pollution source, especially in dry and arid regions, one that is often located very close to residential areas and sensitive population groups. Further campaigns are recommended to reduce the uncertainty and to include more fPM sources (e.g. earthworks) and other types of soil.
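    The wind-velocity dependence described above is a simple power-law fit of the inverted emission fluxes; the hedged sketch below shows such a fit with scipy's curve_fit on made-up numbers (the arrays u and flux are purely illustrative, not data from the study).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical paired observations: mean wind speed (m/s) and the fPM emission
    # flux obtained from the dispersion-model inversion (arbitrary flux units).
    u = np.array([2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.0])
    flux = np.array([0.04, 0.11, 0.16, 0.33, 0.55, 0.78, 1.05])

    def power_law(u, a, b):
        return a * u ** b            # F = a * u^b, where b is the reported power factor

    (a, b), _ = curve_fit(power_law, u, flux, p0=(0.01, 2.0))
    print(f"fitted power factor b = {b:.2f}")   # the study reports b in the 1.65-2.70 range
    ```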

  12. Imaging photonic crystals using hemispherical digital condensers and phase-recovery techniques.

    PubMed

    Alotaibi, Maged; Skinner-Ramos, Sueli; Farooq, Hira; Alharbi, Nouf; Alghasham, Hawra; de Peralta, Luis Grave

    2018-05-10

    We describe experiments where Fourier ptychographic microscopy (FPM) and dual-space microscopy (DSM) are implemented for imaging photonic crystals using a hemispherical digital condenser (HDC). Phase-recovery imaging simulations show that both techniques should be able to image photonic crystals with a period below the Rayleigh resolution limit. However, after processing the experimental images using both phase-recovery algorithms, we found that DSM can, but FPM cannot, image periodic structures with a period below the diffraction limit. We studied the origin of this apparent contradiction between simulations and experiments, and we concluded that the occurrence of unwanted reflections in the HDC is the source of the apparent failure of FPM. We thereafter solved the problem of reflections by using a single-directional illumination source and showed that FPM can image photonic crystals with a period below the Rayleigh resolution limit.

  13. Use of environmental tobacco smoke constituents as markers for exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaKind, J.S.; Jenkins, R.A.; Naiman, D.Q.

    1999-06-01

    The 16-City Study analyzed for gas-phase environmental tobacco smoke (ETS) constituents (nicotine, 3-ethenyl pyridine [3-EP], and myosmine) and for particulate-phase constituents (respirable particulate matter [RSP], ultraviolet-absorbing particulate matter [UVPM], fluorescing particulate matter [FPM], scopoletin, and solanesol). In this second of three articles, the authors discuss the merits of each constituent as a marker for ETS and report pair-wise comparisons of the markers. Neither nicotine nor UVPM was a good predictor of RSP. However, nicotine and UVPM were good qualitative predictors of each other. Nicotine was correlated with other gas-phase constituents. Comparisons between UVPM and other particulate-phase constituents were performed. Its relation with FPM was excellent, with UVPM approximately 1 1/2 times FPM. The correlation between UVPM and solanesol was good, but the relationship between the two was not linear. The relation between UVPM and scopoletin was not good, largely because of noise in the scopoletin measures around its limit of detection. The authors considered the relation between nicotine and saliva cotinine, a metabolite of nicotine. The two were highly correlated on the group level.

  14. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high-resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it would be unlikely to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any preknowledge, which makes the FPM more pragmatic.

  15. Disentangling multiple pressures on fish assemblages in large rivers.

    PubMed

    Zajicek, Petr; Radinger, Johannes; Wolter, Christian

    2018-06-15

    European large rivers are exposed to multiple human pressures and maintained as waterways for inland navigation. However, little is known about the dominance and interactions of multiple pressures in large rivers, and inland navigation in particular has been ignored in multi-pressure analyses so far. We determined the response of ten fish population metrics (FPM, related to densities of diagnostic guilds and biodiversity) to 11 prevailing pressures, including navigation intensity, at 76 sites in eight European large rivers. Thereby, we aimed to derive indicative FPM for the most influential pressures that can serve for fish-based assessments. Pressures' influences, impacts and interactions were determined for each FPM using bootstrapped regression tree models. Increased flow velocity, navigation intensity and the loss of floodplains had the highest influences on guild densities and biodiversity. Interactions between navigation intensity and loss of floodplains and between navigation intensity and increased flow velocity were most frequent, each affecting 80% of the FPM. Further, increased sedimentation, channelization, organic siltation, the presence of artificial embankments and the presence of barriers had strong influences on at least one FPM. Thereby, each FPM was influenced by up to five pressures. However, some diagnostic FPM could be derived: species richness, the Shannon and Simpson indices, the Fish Region Index and the lithophilic and psammophilic guilds specifically indicate rhithralisation of the potamal region of large rivers. Lithophilic, phytophilic and psammophilic guilds indicate disturbance of shoreline habitats through both (i) wave action induced by passing vessels and (ii) hydromorphological degradation of the river channel that comes along with inland navigation. In European large rivers, inland navigation constitutes a highly influential pressure that adds on top of the prevailing hydromorphological degradation. Therefore, river management has to consider river hydromorphology and inland navigation to efficiently rehabilitate the potamal region of large rivers. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. PAHs concentration and toxicity in organic solvent extracts of atmospheric particulate matter and sea sediments.

    PubMed

    Ozaki, Noriatsu; Takeuchi, Shin-ya; Kojima, Keisuke; Kindaichi, Tomonori; Komatsu, Toshiko; Fukushima, Takehiko

    2012-01-01

    The concentration of polycyclic aromatic hydrocarbons (PAHs) and the toxicity to marine bacteria (Vibrio fischeri) were measured for the organic solvent extracts of sea sediments collected from an urban watershed area (Hiroshima Bay) of Japan and compared with the concentrations and toxicity of atmospheric particulate matter (PM). In atmospheric PM, the PAHs concentration was highest in fine particulate matter (FPM) collected during cold seasons. The concentrations of sea sediments were 0.01-0.001 times those of atmospheric PM. 1/EC50 was 1-10 L g(-1) PM for atmospheric PM and 0.1-1 L g(-1) dry solids for sea sediments. These results imply that toxic substances from atmospheric PM are diluted several tens or hundreds of times in sea sediments. The ratio of the 1/EC50 to PAHs concentration ((1/EC50)/16PAHs) was stable for all sea sediments (0.1-1 L μg(-1) 16PAHs) and was the same order of magnitude as that of FPM and coarse particulate matter (CPM). The ratio of sediments collected from the west was more similar to that of CPM while that from the east was more similar to FPM, possibly because of hydraulic differences among water bodies. The PAHs concentration pattern analyses (principal component analysis and isomer ratio analysis) were conducted and the results showed that the PAHs pattern in sea sediments was quite different to that of FPM and CPM. Comparison with previously conducted PAHs analyses suggested that biomass burning residues comprised a major portion of these other sources.

  17. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high-resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it would be unlikely to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any preknowledge, which makes the FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  18. Developing technology -- a forest health partnership

    Treesearch

    John W. Barry; Harold W. Thistle

    1995-01-01

    Since the early 1960's Missoula Technology and Development Center (MTDC) and Forest Pest Management (FPM) have worked in partnership developing technology to support forest health and silviculture. Traditionally this partnership has included cooperators from other agencies, States, foreign governments, academia, industry, and individual landowners. The FPM...

  19. Liquid and Frozen Storage of Agouti (Dasyprocta leporina) Semen Extended with UHT Milk, Unpasteurized Coconut Water, and Pasteurized Coconut Water

    PubMed Central

    Mollineau, W. M.; Adogwa, A. O.; Garcia, G. W.

    2011-01-01

    This study evaluated the effects of semen extension and storage on forward progressive motility % (FPM%) in agouti semen. Three extenders were used: sterilized whole cow's milk (UHT milk), unpasteurized coconut water (CW), and pasteurized coconut water (PCW), each diluted to 50, 100, 150, and 200 × 10⁶ spermatozoa/ml. Experiment 1: 200 ejaculates were extended for liquid storage at 5°C and evaluated every day for 5 days to determine FPM% and its rate of deterioration. Experiment 2: 150 ejaculates were extended for storage as frozen pellets in liquid nitrogen at −195°C, thawed at 30° to 70°C for 20 to 50 seconds after 5 days, and evaluated for FPM% and its rate of deterioration. Samples extended with UHT milk and stored at a concentration of 100 × 10⁶ spermatozoa/ml produced the highest mean FPM% and the slowest rates of deterioration during Experiment 1. During Experiment 2, samples thawed at 30°C for 20 seconds exhibited the highest mean FPM% (12.18 ± 1.33%) and an 85% rate of deterioration. However, samples were incompletely thawed. This was attributed to the diameter of the frozen pellets, which was 1 cm. It was concluded that the liquid storage method was better for short-term storage. PMID:20871831

  20. The first permanent molar: spontaneous eruption after a five-year failure.

    PubMed

    Mistry, Vinay N; Barker, Christopher S; James Spencer, R

    2017-09-01

    It is rare for a first permanent molar (FPM) to temporarily exhibit clinical features of failure of eruption, followed by regeneration of full eruptive capacity 5 years later. Indeterminate failure of eruption (IFE) is a diagnosis of exclusion where the distinction between primary failure of eruption (PFE) and mechanical failure of eruption (MFE) is unclear, including patients too young to specify. An 11-year-old girl attended the orthodontic clinic at Mid Yorkshire Hospitals NHS Trust regarding an unerupted lower right FPM. Her medical and dental trauma history was unremarkable. She presented with a Class II division 2 malocclusion in the mixed dentition, with all other FPMs fully erupted. This report documents that an unerupted FPM in an 11-year-old patient may still have the eruptive potential to become functional within the dentition. The period spent monitoring the FPM's outcome prior to surgical intervention has avoided an operation under general anaesthetic and potentially unnecessary orthodontic treatment, as the tooth subsequently erupted without treatment. © 2017 BSPD, IAPD and John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Digital micromirror device-based laser-illumination Fourier ptychographic microscopy

    PubMed Central

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Lee, Justin; Barbastathis, George; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2015-01-01

    We report a novel approach to Fourier ptychographic microscopy (FPM) by using a digital micromirror device (DMD) and a coherent laser source (532 nm) for generating spatially modulated sample illumination. Previously demonstrated FPM systems are all based on partially-coherent illumination, which offers limited throughput due to insufficient brightness. Our FPM employs a high power coherent laser source to enable shot-noise limited high-speed imaging. For the first time, a digital micromirror device (DMD), imaged onto the back focal plane of the illumination objective, is used to generate spatially modulated sample illumination field for ptychography. By coding the on/off states of the micromirrors, the illumination plane wave angle can be varied at speeds more than 4 kHz. A set of intensity images, resulting from different oblique illuminations, are used to numerically reconstruct one high-resolution image without obvious laser speckle. Experiments were conducted using a USAF resolution target and a fiber sample, demonstrating high-resolution imaging capability of our system. We envision that our approach, if combined with a coded-aperture compressive-sensing algorithm, will further improve the imaging speed in DMD-based FPM systems. PMID:26480361

  2. Digital micromirror device-based laser-illumination Fourier ptychographic microscopy.

    PubMed

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Lee, Justin; Barbastathis, George; Dasari, Ramachandra R; Yaqoob, Zahid; So, Peter T C

    2015-10-19

    We report a novel approach to Fourier ptychographic microscopy (FPM) by using a digital micromirror device (DMD) and a coherent laser source (532 nm) for generating spatially modulated sample illumination. Previously demonstrated FPM systems are all based on partially-coherent illumination, which offers limited throughput due to insufficient brightness. Our FPM employs a high power coherent laser source to enable shot-noise limited high-speed imaging. For the first time, a digital micromirror device (DMD), imaged onto the back focal plane of the illumination objective, is used to generate spatially modulated sample illumination field for ptychography. By coding the on/off states of the micromirrors, the illumination plane wave angle can be varied at speeds more than 4 kHz. A set of intensity images, resulting from different oblique illuminations, are used to numerically reconstruct one high-resolution image without obvious laser speckle. Experiments were conducted using a USAF resolution target and a fiber sample, demonstrating high-resolution imaging capability of our system. We envision that our approach, if combined with a coded-aperture compressive-sensing algorithm, will further improve the imaging speed in DMD-based FPM systems.

  3. A positional misalignment correction method for Fourier ptychographic microscopy based on simulated annealing

    NASA Astrophysics Data System (ADS)

    Sun, Jiasong; Zhang, Yuzhen; Chen, Qian; Zuo, Chao

    2017-02-01

    Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique, which employs angularly varying illuminations and a phase retrieval algorithm to surpass the diffraction limit of a low numerical aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix's position is critical to achieving good recovery quality. Furthermore, considering the wide field-of-view (FOV) in FPM, different regions in the FOV have different sensitivities to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of this correction process, a large number of iterations for several images with low illumination NAs are first implemented to estimate the initial values of the global positional misalignment model through non-linear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, and it is demonstrated that this method can both improve the quality of the recovered object image and relax the LED elements' position accuracy requirement when aligning FPM imaging platforms.
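    The core of such a correction is a simulated-annealing search over a small set of global misalignment parameters (e.g. lateral shift and rotation of the LED matrix), with the FPM reconstruction residual as the cost. The sketch below is a generic SA loop in that spirit; the parameterization, the toy cost surface and the function name anneal_led_positions are assumptions of this sketch, not the paper's code.

    ```python
    import numpy as np

    def anneal_led_positions(error_metric, x0, sigma=0.1, t0=1.0, cooling=0.95, n_iter=300):
        """Generic simulated annealing over a global LED-misalignment model.

        error_metric : callable mapping parameters (e.g. dx, dy, theta) to a scalar
                       cost; in FPM this would be the reconstruction residual.
        x0           : initial parameter vector.
        """
        rng = np.random.default_rng(0)
        x = np.asarray(x0, dtype=float)
        e = error_metric(x)
        t = t0
        for _ in range(n_iter):
            cand = x + rng.normal(scale=sigma, size=x.shape)    # perturb the parameters
            e_cand = error_metric(cand)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if e_cand < e or rng.random() < np.exp(-(e_cand - e) / t):
                x, e = cand, e_cand
            t *= cooling                                         # cool the temperature
        return x, e

    # Toy quadratic cost standing in for the FPM residual; the "true" misalignment
    # here is (dx, dy, theta) = (0.3, -0.2, 0.01).
    best, err = anneal_led_positions(
        lambda p: float(np.sum((p - np.array([0.3, -0.2, 0.01])) ** 2)),
        x0=[0.0, 0.0, 0.0])
    print(best, err)
    ```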

  4. Fourier ptychographic microscopy at telecommunication wavelengths using a femtosecond laser

    NASA Astrophysics Data System (ADS)

    Ahmed, Ishtiaque; Alotaibi, Maged; Skinner-Ramos, Sueli; Dominguez, Daniel; Bernussi, Ayrton A.; de Peralta, Luis Grave

    2017-12-01

    We report the implementation of the Fourier Ptychographic Microscopy (FPM) technique, a phase retrieval technique, at telecommunication wavelengths using a low-coherence ultrafast pulsed laser source. High-quality, nearly speckle-free images were obtained with the proposed approach. We demonstrate that FPM can also be used to image periodic features through a silicon wafer.

  5. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets take the form of a stainless steel tube containing superimposed layers of high-enriched uranium, and the tube is irradiated to obtain fission products, which are widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with core performance; one such disturbance comes from changes in flux or reactivity. It is therefore necessary to study a method for calculating safety margins as the core configuration changes during the life of the reactor, which makes faster code an absolute necessity. The neutron safety margin for the research reactor can be reused without modification in the reactivity calculation, which is an advantage of using the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model involves complex computation. Several parallel algorithms with iterative methods have been developed for the solution of large, sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficient. In this research, a code for reactivity calculation was developed as part of a safety analysis with parallel processing. The calculation can be done more quickly and efficiently by utilizing parallel processing on a multicore computer. This code was applied to the safety limits calculation of irradiated FPM targets with increasing uranium content.
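    The two iteration schemes named in the abstract are standard building blocks, shown below in a minimal, generic form: a red-black Gauss-Seidel sweep (the two colours are independent, which is what makes the sweep parallelizable) for a diffusion-type linear system, and a power iteration for the dominant eigenvalue (the criticality-style quantity). This is a hedged sketch of the general methods on a toy 2-D Poisson problem, not the reactor physics code described in the paper.

    ```python
    import numpy as np

    def red_black_gauss_seidel(b, h, n_sweeps=200):
        """Red-black Gauss-Seidel for -laplacian(phi) = b on a 2-D grid, phi = 0 on
        the boundary. Points of one colour depend only on the other colour, so each
        colour sweep can be updated in parallel (vectorized here)."""
        phi = np.zeros_like(b)
        ii, jj = np.meshgrid(np.arange(b.shape[0]), np.arange(b.shape[1]), indexing="ij")
        for _ in range(n_sweeps):
            for colour in (0, 1):                       # red points, then black points
                mask = ((ii + jj) % 2 == colour)
                mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
                nbr_sum = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                           np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
                phi[mask] = 0.25 * (nbr_sum + h * h * b)[mask]
        return phi

    def power_iteration(apply_op, x0, n_iter=100):
        """Power iteration for the dominant eigenvalue of a linear operator
        (analogous to the k-effective source iteration in reactor calculations)."""
        x = x0 / np.linalg.norm(x0)
        lam = 0.0
        for _ in range(n_iter):
            y = apply_op(x)
            lam = x @ y                    # Rayleigh quotient estimate (||x|| = 1)
            x = y / np.linalg.norm(y)
        return lam, x

    # Toy usage: a small Poisson solve and the dominant eigenvalue of a symmetric matrix.
    phi = red_black_gauss_seidel(np.ones((17, 17)), h=1.0 / 16)
    A = np.random.default_rng(1).normal(size=(50, 50)); A = A + A.T
    lam, _ = power_iteration(lambda v: A @ v, np.ones(50))
    ```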

  6. Alternative models in genetic analyses of carcass traits measured by ultrasonography in Guzerá cattle: A Bayesian approach

    USDA-ARS?s Scientific Manuscript database

    The objective was to study alternative models for genetic analyses of carcass traits assessed by ultrasonography in Guzerá cattle. Data from 947 measurements (655 animals) of Rib-eye area (REA), rump fat thickness (RFT) and backfat thickness (BFT) were used. Finite polygenic models (FPM), infinitesi...

  7. Quantity of 135I released from the AGR-1, AGR-2, and AGR-3/4 experiments and discovery of 131I at the FPMS traps during the AGR-3/4 experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scates, Dawn M.

    2014-09-01

    A series of three Advanced Gas Reactor (AGR) experiments have been conducted in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL). From 2006 through 2014, these experiments supported the development and qualification of the new U.S. tristructural isotropic (TRISO) particle fuel for Very High Temperature Reactors (VHTR). Each AGR experiment consisted of multiple fueled capsules, each plumbed for independent temperature control using a mix of helium and neon gases. The gas leaving a capsule was routed to individual Fission Product Monitor (FPM) detectors. For intact fuel particles, the TRISO particle coatings provide a substantial barrier to fission product release. However, particles with failed coatings, whether because of a minute percentage of initially defective particles, those which fail during irradiation, or those designed to fail (DTF) particles, can release fission products to the flowing gas stream. Because reactive fission product elements like iodine and cesium quickly deposit on cooler capsule components and piping structures as the effluent gas leaves the reactor core, only the noble fission gas isotopes of Kr and Xe tend to reach the FPM detectors. The FPM system utilizes High Purity Germanium (HPGe) detectors coupled with a thallium-activated sodium iodide [NaI(Tl)] scintillator. The HPGe detector provides individual isotopic information, while the NaI(Tl) scintillator is used as a gross count rate meter. During irradiation, the 135mXe concentration reaching the FPM detectors comes both from direct fission and from decay of the accumulated 135I. About 2.5 hours after irradiation (ten 15.3-minute 135mXe half-lives), the directly produced 135mXe has decayed and only the longer-lived 135I remains as a source. Decay systematics dictate that 135mXe will be in secular equilibrium with its 135I parent, such that its production rate very nearly equals the decay rate of the parent, and its concentration in the flowing gas stream will appear to decay with the parent half-life. This equilibrium condition enables the determination of the amount of 135I released from the fuel particles by measurement of the 135mXe at the FPM following reactor shutdown. In this paper, the 135I released is reported and compared to similar releases for noble gases, along with the unexpected finding of 131I deposition from intentional impure gas injection into capsule 11 of experiment AGR-3/4.
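    A back-of-the-envelope version of the secular-equilibrium argument above: once the directly produced 135mXe has decayed away, the 135mXe activity seen at the FPM is approximately the 135I activity times the fraction of 135I decays that feed 135mXe. The sketch below turns a measured 135mXe activity into an estimate of the 135I inventory; the branching fraction and the example activity are assumptions of this sketch, not values from the paper.

    ```python
    import numpy as np

    T_I135 = 6.57 * 3600          # 135I half-life in seconds (assumed nominal value)
    T_XE135M = 15.3 * 60          # 135mXe half-life in seconds (value quoted above)
    BR_I_TO_XE135M = 0.165        # assumed branching fraction of 135I decays to 135mXe

    lam_I = np.log(2) / T_I135

    # Hypothetical 135mXe activity measured at the FPM a few hours after shutdown (Bq),
    # when only decay of the accumulated 135I feeds 135mXe:
    A_xe135m = 1.0e4

    # Secular equilibrium: daughter activity ~ branching fraction * parent activity
    # (ignoring the small transient-equilibrium correction of a few percent).
    A_i135 = A_xe135m / BR_I_TO_XE135M        # 135I activity (Bq)
    N_i135 = A_i135 / lam_I                   # number of 135I atoms released
    print(f"estimated 135I inventory ~ {N_i135:.2e} atoms")
    ```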

  8. Optimization of FPM system in Barsukovskoye deposit with hydrodynamic modeling and analysis of inter-well interaction

    NASA Astrophysics Data System (ADS)

    Almukhametova, E. M.; Gizetdinov, I. A.

    2018-05-01

    Development of most deposits in Russia is accompanied by a high level of crude water cut. More than 70% of the operating well count of the Barsukovskoye deposit operates with water; about 12% of the wells are characterized by a saturated water cut; many wells with high water cut are idling. To optimize the current FPM system of the Barsukovskoye deposit, a calculation method based on a hydrodynamic model was applied, with further analysis of the hydrodynamic connectivity between the wells. A plot was selected containing several wells whose water cut is running ahead of the reserve recovery rate; the injection wells exerting the most influence on the selected producer wells were determined. Then, several variants were considered for transformation of the FPM system of this plot. The possible cases were analyzed with the hydrodynamic model with further determination of the economic effect of each of them.

  9. Finite state model and compatibility theory - New analysis tools for permutation networks

    NASA Technical Reports Server (NTRS)

    Huang, S.-T.; Tripathi, S. K.

    1986-01-01

    A simple model to describe the fundamental operation theory of shuffle-exchange-type permutation networks, the finite permutation machine (FPM), is described, and theorems which transform the control matrix result to a continuous compatible vector result are developed. It is found that only 2n-1 shuffle exchange passes are necessary, and that 3n-3 passes are sufficient, to realize all permutations, reducing the sufficient number of passes by two from previous results. The flexibility of the approach is demonstrated by the description of a stack permutation machine (SPM) which can realize all permutations, and by showing that the FPM corresponding to the Benes (1965) network belongs to the SPM. The FPM corresponding to the network with two cascaded reverse-exchange networks is found to realize all permutations, and a simple mechanism to verify several equivalence relationships of various permutation networks is discussed.
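    To make the shuffle-exchange terminology concrete, the sketch below simulates a single pass of a shuffle-exchange network on 2^n lines: a perfect shuffle (cyclic left rotation of the binary address) followed by optional exchanges of adjacent pairs driven by a control vector. This is a generic illustration of the network primitive, not the paper's FPM formalism; the function names are assumptions of this sketch.

    ```python
    def perfect_shuffle(lines):
        """Perfect shuffle on 2^n lines: line i moves to the position obtained by
        rotating the n-bit address of i one bit to the left."""
        size = len(lines)
        n = size.bit_length() - 1
        out = [None] * size
        for i, v in enumerate(lines):
            j = ((i << 1) | (i >> (n - 1))) & (size - 1)
            out[j] = v
        return out

    def exchange(lines, control):
        """Optionally swap each adjacent pair (2k, 2k+1) according to control bits."""
        out = lines[:]
        for k, c in enumerate(control):
            if c:
                out[2 * k], out[2 * k + 1] = out[2 * k + 1], out[2 * k]
        return out

    # One shuffle-exchange pass on 8 lines; a sequence of such control vectors forms a
    # control matrix, and the FPM question is how many passes realize any permutation.
    data = list(range(8))
    data = exchange(perfect_shuffle(data), control=[1, 0, 1, 0])
    print(data)
    ```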

  10. Comparison of Measured and Predicted Bioconcentration Estimates of Pharmaceuticals in Fish Plasma and Prediction of Chronic Risk.

    PubMed

    Nallani, Gopinath; Venables, Barney; Constantine, Lisa; Huggett, Duane

    2016-05-01

    Evaluation of the environmental risk of human pharmaceuticals is now a mandatory component in all new drug applications submitted for approval in the EU. With >3000 drugs currently in use, it is not feasible to test each active ingredient, so prioritization is key. A recent review has listed nine prioritization approaches, including the fish plasma model (FPM). The present paper focuses on comparison of measured and predicted fish plasma bioconcentration factors (BCFs) of four common over-the-counter/prescribed pharmaceuticals: norethindrone (NET), ibuprofen (IBU), verapamil (VER) and clozapine (CLZ). The measured data were obtained from earlier published fish BCF studies. The measured BCF estimates of NET, IBU, VER and CLZ were 13.4, 1.4, 0.7 and 31.2, while the corresponding predicted BCFs (based on log Kow at pH 7) were 19, 1.0, 7.6 and 30, respectively. These results indicate that the predicted BCFs matched the measured values well. The BCF estimates were used to calculate the human:fish plasma concentration ratios of each drug to predict potential risk to fish. The plasma ratio results show the following order of risk potential for fish: NET > CLZ > VER > IBU. The FPM has value in prioritizing pharmaceutical products for ecotoxicological assessments.
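    For readers unfamiliar with the fish plasma model, the calculation chain is short: predict a blood:water partition coefficient from log Kow, multiply by the water concentration to get a predicted fish plasma concentration, and compare it with the human therapeutic plasma concentration. The sketch below uses the regression commonly cited in the FPM literature (log P_blood:water = 0.73 · log Kow − 0.88); treat the coefficients and the example numbers as assumptions of this sketch rather than the exact values used in the paper.

    ```python
    def predicted_plasma_bcf(log_kow):
        """Predicted fish blood:water partition coefficient from log Kow
        (commonly cited FPM regression; coefficients assumed here)."""
        return 10 ** (0.73 * log_kow - 0.88)

    def fpm_effect_ratio(human_therapeutic_plasma, water_conc, log_kow):
        """Human therapeutic plasma conc / predicted fish plasma conc.
        Both concentrations must be in the same units (e.g. ng/mL)."""
        fish_plasma = water_conc * predicted_plasma_bcf(log_kow)
        return human_therapeutic_plasma / fish_plasma

    # Purely illustrative numbers: a drug with log Kow = 3.0 at 0.01 ng/mL in water
    # and a human therapeutic plasma concentration of 50 ng/mL.
    print(fpm_effect_ratio(human_therapeutic_plasma=50.0, water_conc=0.01, log_kow=3.0))
    ```

    A low ratio flags a compound whose predicted fish plasma level approaches the human therapeutic level, i.e. a higher priority for chronic-risk testing.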

  11. Performance evaluation of mobile downflow booths for reducing airborne particles in the workplace.

    PubMed

    Lo, Li-Ming; Hocker, Braden; Steltz, Austin E; Kremer, John; Feng, H Amy

    2017-11-01

    Compared to other common control measures, the downflow booth is a costly engineering control used to contain airborne dust or particles. The downflow booth provides unidirectional filtered airflow from the ceiling, entraining released particles away from the workers' breathing zone, and delivers contained airflow to a lower level exhaust for removing particulates by filtering media. In this study, we designed and built a mobile downflow booth that is capable of quick assembly and easy size change to provide greater flexibility and particle control for various manufacturing processes or tasks. An experimental study was conducted to thoroughly evaluate the control performance of downflow booths used for removing airborne particles generated by the transfer of powdered lactose between two containers. Statistical analysis compared particle reduction ratios obtained from various test conditions including booth size (short, regular, or extended), supply air velocity (0.41 and 0.51 m/s or 80 and 100 feet per minute, fpm), powder transfer location (near or far from the booth exhaust), and inclusion or exclusion of curtains at the booth entrance. Our study results show that only short-depth downflow booths failed to protect the worker performing powder transfer far from the booth exhausts. Statistical analysis shows that better control performance can be obtained with supply air velocity of 0.51 m/s (100 fpm) than with 0.41 m/s (80 fpm) and that use of curtains for downflow booths did not improve their control performance.

  12. Documentation for the USAF School of Aerospace Medicine Altitude Decompression Sickness Research Database

    DTIC Science & Technology

    2010-05-01

    following investigators have been listed on protocols where DCS and/or VGE were the primary data gathered (omits PRK and LASIK): Jimmy D. Adams, PhD...there was a difference in the level of upper- vs. lower-body joint pain which was evident statistically when many non-ambulatory vs. ambulatory studies...5,000 fpm vs. 80,000 fpm ascents to 40,000 ft (90-min prebreathe, 90-min exposure), there were a few more neurologic and respiratory symptoms

  13. Effect of grinding conditions on the fatigue life of titanium 5Al-2.5Sn alloy

    NASA Technical Reports Server (NTRS)

    Rangaswamy, P.; Terutung, H.; Jeelani, S.

    1991-01-01

    An investigation into the effect of grinding conditions on the fatigue life of titanium 5Al-2.5Sn is presented. Damage to surface integrity and changes in the residual stresses distribution are studied to assess changes in fatigue life. A surface grinding machine, operating at speeds ranging from 2000 to 6000 fpm and using SiC wheels of grit sizes 60 and 120, was used to grind flat subsize specimens of 0.1-in. thickness. After grinding, the specimens were fatigued at a chosen stress and compared with the unadulterated material. A standard profilometer, a microhardness tester, and a scanning electron microscope were utilized to examine surface characteristics and measure roughness and hardness. Increased grinding speed in both wet and dry applications tended to decrease the fatigue life of the specimens. Fatigue life increased markedly at 2000 fpm under wet conditions, but then decreased at higher speeds. Grit size had no effect on the fatigue life.

  14. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.
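    The optimization view in this abstract reduces, for a single sub-band measurement, to descending the Poisson negative log-likelihood with a Wirtinger (complex) gradient, plus a truncation rule against outlier pixels. The sketch below shows one such gradient evaluation for a generic FPM-style forward model (masked sub-band, inverse FFT, intensity detection); the clipping used here is a simplified stand-in for the paper's truncation rule, and the overall scale of the gradient is left to the step size.

    ```python
    import numpy as np

    def poisson_wirtinger_grad(spectrum, counts, mask, trunc=10.0):
        """(Truncated) Wirtinger gradient of the Poisson negative log-likelihood
        sum(I - counts*log I) for one low-resolution measurement.

        spectrum : current high-resolution spectrum estimate (2-D complex array)
        counts   : measured low-resolution photon counts for this illumination
        mask     : binary sub-band/pupil support selecting this LED's pass-band
        """
        psi = np.fft.ifft2(spectrum * mask)            # simulated low-res complex field
        intensity = np.abs(psi) ** 2 + 1e-12
        weight = 1.0 - counts / intensity              # d/dI of (I - counts*log I)
        weight = np.clip(weight, -trunc, trunc)        # crude truncation against outliers
        return np.fft.fft2(weight * psi) * mask        # gradient w.r.t. conj(spectrum)

    # A plain gradient-descent pass over all measurements would then be:
    #   for counts, mask in measurements:
    #       spectrum -= step * poisson_wirtinger_grad(spectrum, counts, mask)
    ```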

  15. A phase space model of Fourier ptychographic microscopy

    PubMed Central

    Horstmeyer, Roarke; Yang, Changhuei

    2014-01-01

    A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography’s and FPM’s captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment. PMID:24514995

  16. Effect of ventilation velocity on hexavalent chromium and isocyanate exposures in aircraft paint spraying.

    PubMed

    Bennett, James; Marlow, David; Nourian, Fariba; Breay, James; Feng, Amy; Methner, Mark

    2018-03-01

    Exposure control system performance was evaluated during aircraft paint spraying at a military facility. Computational fluid dynamics (CFD) modeling, tracer gas testing, and exposure monitoring examined contaminant exposure vs. crossflow ventilation velocity. CFD modeling using the RNG k-ϵ turbulence model showed exposures to simulated methyl isobutyl ketone of 294 and 83.6 ppm, as a spatial average of five worker locations, for velocities of 0.508 and 0.381 m/s (100 and 75 fpm), respectively. In tracer gas experiments, observed supply/exhaust velocities of 0.706/0.503 m/s (136/99 fpm) were termed full-flow, and reduced velocities were termed 3/4-flow and half-flow. Half-flow showed higher tracer gas concentrations than 3/4-flow, which had the lowest time-averaged concentration, with the difference in log means significant at the 95% confidence level. Half-flow compared to full-flow and 3/4-flow compared to full-flow showed no statistically significant difference. CFD modeling using these ventilation conditions agreed closely with the tracer results for the full-flow and 3/4-flow comparison, yet not for the 3/4-flow and half-flow comparison. Full-flow conditions at the painting facility produced a velocity of 0.528 m/s (104 fpm) midway between supply and exhaust locations, with the supply rate of 94.4 m³/s (200,000 cfm) exceeding the exhaust rate of 68.7 m³/s (146,000 cfm). Ventilation modifications to correct this imbalance created a midhangar velocity of 0.406 m/s (80.0 fpm). Personal exposure monitoring for two worker groups, sprayers and sprayer helpers ("hosemen"), compared process duration means for the two velocities. Hexavalent chromium (Cr[VI]) exposures were 500 vs. 360 µg/m³ for sprayers and 120 vs. 170 µg/m³ for hosemen, for 0.528 m/s (104 fpm) and 0.406 m/s (80.0 fpm), respectively. Hexamethylene diisocyanate (HDI) monomer means were 32.2 vs. 13.3 µg/m³ for sprayers and 3.99 vs. 8.42 µg/m³ for hosemen. Crossflow velocities affected exposures inconsistently, and local work zone velocities were much lower. Aircraft painting contaminant control is accomplished better with the unidirectional crossflow ventilation presented here than with other observed configurations. Exposure limit exceedances for this ideal condition reinforce continued use of personal protective equipment.

  17. [Measurement and estimation of grassland evapotranspiration in a mountainous region at the upper reach of Heihe River basin, China].

    PubMed

    Yang, Yong; Chen, Ren-sheng; Song, Yao-xuan; Liu, Jun-feng; Han, Chun-tan; Liu, Zhang-wen

    2013-04-01

    Evapotranspiration (ET) is an important component of the water cycle, but its measurement in high-altitude mountainous regions is quite difficult, resulting in insufficient understanding of the actual ET in such regions and of its effects on the regional water cycle. In this paper, two small weighing mini-lysimeters were used to measure the daily ET of a piece of grassland in a high-altitude mountainous region of the Heihe River basin from July 1st, 2009 to June 30th, 2010. Based on the measured data, the FAO-56 Penman-Monteith (F-P-M), Priestley-Taylor (P-T), and Hargreaves-Samani (H-S) methods were employed to estimate the ET and to analyze the applicability of the three methods for the mountainous region, and the pan coefficient at the measurement spots was discussed. During the measurement period, the total annual ET at the measurement spots was 439.9 mm, accounting for 96.5% of the precipitation in the same period, and the ET showed an obvious seasonal distribution, being 389.3 mm in May-October, which accounted for 88.5% of the annual value. All three methods could be well applied to estimate the summer ET but not the winter ET, and their applicability followed the sequence P-T > F-P-M > H-S. At the measurement spots, the daily pan coefficient in summer was 0.7-0.8, while that in winter was quite variable.
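    For reference, the two simpler estimators named above have compact closed forms; the sketch below implements the standard textbook versions (Priestley-Taylor with α = 1.26 and the usual 0.0023-coefficient Hargreaves-Samani form). These are the generic formulas, shown under the assumption that the paper used their standard parameterizations; variable names and example values are illustrative.

    ```python
    import numpy as np

    LAMBDA = 2.45  # latent heat of vaporization, MJ/kg (approximate)

    def priestley_taylor(rn, g, delta, gamma, alpha=1.26):
        """Daily ET (mm/day). rn, g: net radiation and soil heat flux (MJ m-2 day-1);
        delta: slope of the saturation vapour pressure curve (kPa/degC);
        gamma: psychrometric constant (kPa/degC)."""
        return alpha * (delta / (delta + gamma)) * (rn - g) / LAMBDA

    def hargreaves_samani(tmean, tmax, tmin, ra_mm):
        """Daily reference ET (mm/day); ra_mm is extraterrestrial radiation expressed
        as an equivalent depth of evaporation (mm/day)."""
        return 0.0023 * ra_mm * (tmean + 17.8) * np.sqrt(tmax - tmin)

    def pan_coefficient(et_measured, e_pan):
        """Ratio of (lysimeter-measured) ET to pan evaporation."""
        return et_measured / e_pan

    # Illustrative mid-summer day:
    print(priestley_taylor(rn=15.0, g=1.0, delta=0.18, gamma=0.055))
    print(hargreaves_samani(tmean=12.0, tmax=20.0, tmin=4.0, ra_mm=16.5))
    ```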

  18. A note of effects of kiln stick thickness and air velocity on drying time of southern pine 2 by 4 and 2 by 6 lumber

    Treesearch

    E.W. Price; P. Koch

    1982-01-01

    To dry to 10% moisture content, 4- and 6-inch-wide lumber 1.75 inches thick required about 13.7 h (including a 4 3/4-h kiln warmup) in 5-ft-wide loads at 260°F (wet-bulb temperature of 180°F) on 1.00-inch-thick sticks with air cross-circulated at 1,000 fpm. If air velocity is increased to 1,400 fpm or stick thickness is increased to 1.5 inches, kiln time required to...

  19. Effects Of Local Oscillator Errors On Digital Beamforming

    DTIC Science & Technology

    2016-03-01

    processor EF element factor EW electronic warfare FFM flicker frequency modulation FOV field-of-view FPGA field-programmable gate array FPM flicker...frequencies and also more difficult to measure [15]. 2. Flicker frequency modulation The source for flicker frequency modulation (FFM) is attributed to...a physical resonance mechanism of an oscillator or issues controlling electronic components. Some oscillators might not show FFM noise, which might

  20. Fractal propagation method enables realistic optical microscopy simulations in biological tissues

    PubMed Central

    Glaser, Adam K.; Chen, Ye; Liu, Jonathan T.C.

    2017-01-01

    Current simulation methods for light transport in biological media have limited efficiency and realism when applied to three-dimensional microscopic light transport in biological tissues with refractive heterogeneities. We describe here a technique which combines a beam propagation method valid for modeling light transport in media with weak variations in refractive index, with a fractal model of refractive index turbulence. In contrast to standard simulation methods, this fractal propagation method (FPM) is able to accurately and efficiently simulate the diffraction effects of focused beams, as well as the microscopic heterogeneities present in tissue that result in scattering, refractive beam steering, and the aberration of beam foci. We validate the technique and the relationship between the FPM model parameters and conventional optical parameters used to describe tissues, and also demonstrate the method’s flexibility and robustness by examining the steering and distortion of Gaussian and Bessel beams in tissue with comparison to experimental data. We show that the FPM has utility for the accurate investigation and optimization of optical microscopy methods such as light-sheet, confocal, and nonlinear microscopy. PMID:28983499
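
    The FPM described above builds on a beam propagation method with a fractal model of refractive-index variation. A generic split-step (angular-spectrum) propagation sketch is shown below for orientation; it is not the authors' implementation, and the fractal index generation is omitted (the index deviation `dn` is simply supplied by the caller).

```python
import numpy as np

def split_step_bpm(field0, dn, wavelength, dx, dz, n0=1.33):
    """Propagate a 2-D complex field through slabs of weakly varying refractive index
    using a split-step (angular-spectrum) beam propagation method.
    field0 : (N, N) complex input field
    dn     : iterable of (N, N) refractive-index deviations from n0, one per slab
    dx, dz : transverse grid spacing and slab thickness (same units as wavelength)."""
    n = field0.shape[0]
    k0 = 2.0 * np.pi / wavelength            # vacuum wavenumber
    k = n0 * k0                              # background wavenumber in the medium
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx, indexing="ij")
    kz2 = k**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    prop = np.exp(1j * kz * dz) * (kz2 > 0)  # evanescent components are suppressed

    field = field0.astype(complex)
    for slab in dn:
        field = np.fft.ifft2(np.fft.fft2(field) * prop)  # diffraction over dz
        field = field * np.exp(1j * k0 * slab * dz)      # phase screen for index deviation
    return field
```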

  1. Impacts of environmental pressures on the reproductive physiology of subpopulations of black rhinoceros (Diceros bicornis bicornis) in Addo Elephant National Park, South Africa

    PubMed Central

    Freeman, Elizabeth W.; Meyer, Jordana M.; Bird, Jed; Adendorff, John; Schulte, Bruce A.; Santymire, Rachel M.

    2014-01-01

    Black rhinoceros are an icon for international conservation, yet little is known about their physiology due to their secretive nature. To overcome these challenges, non-invasive methods were used to monitor rhinoceros in two sections of Addo Elephant National Park, South Africa, namely Addo and Nyathi. These sections were separated by a public road, and the numbers of elephants, predators and tourists were higher in Addo. Faecal samples (n = 231) were collected (from July 2007 to November 2010) from known individuals and analysed for progestagen and androgen metabolite (FPM and FAM, respectively) concentrations. As biotic factors could impact reproduction, we predicted that demographics, FPM and FAM would vary between sections and with respect to season (calendar and wet/dry), climate and age of the rhinoceros. Mean FPM concentrations from pregnant females were seven times higher (P < 0.05) than those in samples from non-pregnant rhinoceros. Positive relationships were found between monthly temperatures and FPM from non-pregnant females (r2 = 0.25, P = 0.03) and the percentage of calves born (r = 0.609, P = 0.04). Although FAM peaked in the spring, when the majority of calves (40%) were conceived, no seasonal patterns in male androgen concentrations were found with respect to month of conception and parturition. Females in Addo had a longer inter-calving interval and were less likely to be pregnant (P < 0.05) compared with those in Nyathi. The biotic stressors (e.g. predators and more competitors) within the Addo section could be affecting the reproductive physiology of the rhinoceros negatively. Enhanced knowledge about how black rhinoceros populations respond to environmental stressors could guide management strategies for improving reproduction. PMID:27293618

  2. Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.

    PubMed

    Besio, W; Aakula, R; Dai, W

    2004-01-01

    Potentials on the body surface arising from the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information due to the smoothing effect caused by the volume conductor. The smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical Laplacian with an array of monopolar electrodes; generalizing the FPM leads to a bipolar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much-improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration will allow better localization of the electrical activity of the heart than bi-electrode configurations.
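
    As a point of reference for the FPM and NPM discussed above, the standard five-point and compact nine-point finite-difference Laplacian stencils can be sketched as follows. The mesh setup mirrors the 400 x 400 grid with 1/400 spacing mentioned in the abstract; variable names and the test function are illustrative only.

```python
import numpy as np

def laplacian_five_point(u, h):
    """Five-point method (FPM): central-difference estimate of the Laplacian on interior nodes."""
    return (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            - 4.0 * u[1:-1, 1:-1]) / h**2

def laplacian_nine_point(u, h):
    """Compact nine-point method (NPM): adds the diagonal neighbours for improved accuracy."""
    cross = u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
    diag = u[2:, 2:] + u[2:, :-2] + u[:-2, 2:] + u[:-2, :-2]
    return (4.0 * cross + diag - 20.0 * u[1:-1, 1:-1]) / (6.0 * h**2)

if __name__ == "__main__":
    # Compare both stencils against the analytical Laplacian of u = x^2 + y^2 (which is 4).
    x = np.linspace(0.0, 1.0, 401)      # 1/400 spacing
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = X**2 + Y**2
    print(np.abs(laplacian_five_point(u, h) - 4).max())
    print(np.abs(laplacian_nine_point(u, h) - 4).max())
```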

  3. Fat-plug myringoplasty of ear lobule vs abdominal donor sites.

    PubMed

    Acar, Mustafa; Yazıcı, Demet; San, Turhan; Muluk, Nuray Bayar; Cingi, Cemal

    2015-04-01

    The purpose of this study is to compare the success rates of fat-graft myringoplasties using adipose grafts harvested from different donor sites (ear lobule vs abdomen). The clinical records of 61 patients (24 males and 37 females) who underwent fat-plug myringoplasty (FPM) were reviewed retrospectively. Fat from the ear lobule (FEL) and abdominal fat were used as graft materials. The impact of age, gender, systemic diseases, topography of the perforation, and utilization of fat graft materials of different origin on the tympanic membrane closure rate, as well as the effect of FPM on hearing gain, was analyzed. Our tympanic membrane (TM) closure rate was 82%. No statistically significant difference was observed regarding age, gender, comorbidities (septal deviation, hypertension and diabetes mellitus) or habits (smoking). Posterior TM perforations had a significantly lower healing rate. The change in TM closure rate considering different adipose tissue donor sites was not statistically significant. The hearing gain of the patients was mostly below 20 dB. Fat-plug myringoplasty (FPM) is a safe, cost-effective and easy operation for selected patients. Abdominal fat graft is as effective as ear lobe fat graft for tympanic membrane healing, has cosmetic advantages and should be taken into consideration when planning fat as the graft source.

  4. Ethanol oxidation and the inhibition by drugs in human liver, stomach and small intestine: Quantitative assessment with numerical organ modeling of alcohol dehydrogenase isozymes.

    PubMed

    Chi, Yu-Chou; Lee, Shou-Lun; Lai, Ching-Long; Lee, Yung-Pin; Lee, Shiao-Pieng; Chiang, Chien-Ping; Yin, Shih-Jiun

    2016-10-25

    Alcohol dehydrogenase (ADH) is the principal enzyme responsible for metabolism of ethanol. Human ADH constitutes a complex isozyme family with striking variations in kinetic function and tissue distribution. The liver and gastrointestinal tract are the major sites for first-pass metabolism (FPM). Their relative contributions to alcohol FPM, and the degrees of inhibition by aspirin and its metabolite salicylate, acetaminophen and cimetidine, remain controversial. To address this issue, mathematical organ models of ethanol-oxidizing activities in target tissues and of the ethanol-drug interactions were constructed by linear combination of the corresponding numerical rate equations of tissue constituent ADH isozymes, using the documented isozyme protein contents, kinetic parameters for ethanol oxidation and drug inhibitions of ADH isozymes/allozymes that were determined in 0.1 M sodium phosphate at pH 7.5 and 25 °C containing 0.5 mM NAD(+). The organ simulations reveal that the ADH activities in mucosae of the stomach, duodenum and jejunum with the ADH1C*1/*1 genotype are each less than 1% of that of the ADH1B*1/*1-ADH1C*1/*1 liver at 1-200 mM ethanol, indicating that the liver is the major site of FPM. The apparent hepatic KM and Vmax for ethanol oxidation are simulated to be 0.093 ± 0.019 mM and 4.0 ± 0.1 mmol/min, respectively. At 95% clearance in liver, the logarithmic average sinusoidal ethanol concentration is determined to be 0.80 mM in accordance with the flow-limited gradient perfusion model. The organ simulations indicate that higher therapeutic acetaminophen (0.5 mM) inhibits 16% of ADH1B*1/*1 hepatic ADH activity at 2-20 mM ethanol and that therapeutic salicylate (1.5 mM) inhibits 30-31% of the ADH1B*2/*2 activity, suggesting potentially significant inhibitions of ethanol FPM in these allelotypes. The results provide systematic evaluations and predictions by computer simulation of potential ethanol FPM in target tissues and hepatic ethanol-drug interactions in the context of tissue ADH isozymes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
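
    The organ model described above sums Michaelis-Menten rate equations over the ADH isozymes present in a tissue. A minimal sketch of that linear combination is given below; the kinetic constants shown are hypothetical placeholders, not the published isozyme parameters.

```python
def mm_rate(s_mM, vmax, km_mM):
    """Michaelis-Menten rate for a single ADH isozyme at substrate concentration s."""
    return vmax * s_mM / (km_mM + s_mM)

def tissue_activity(s_mM, isozymes):
    """Total ethanol-oxidising activity of a tissue as the linear combination of its
    constituent isozymes; each entry is (content-scaled Vmax, Km).  Values are hypothetical."""
    return sum(mm_rate(s_mM, vmax, km) for vmax, km in isozymes)

# Hypothetical example: two isozyme classes with very different Km values.
liver_like = [(3.5, 0.05), (0.5, 10.0)]   # (Vmax in mmol/min, Km in mM), illustrative only
for s in (1, 10, 50, 200):                # ethanol concentrations, mM
    print(s, round(tissue_activity(s, liver_like), 2))
```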

  5. Frequency position modulation using multi-spectral projections

    NASA Astrophysics Data System (ADS)

    Goodman, Joel; Bertoncini, Crystal; Moore, Michael; Nousain, Bryan; Cowart, Gregory

    2012-10-01

    In this paper we present an approach to harness multi-spectral projections (MSPs) to carefully shape and locate tones in the spectrum, enabling a new and robust modulation in which a signal's discrete frequency support is used to represent symbols. This method, called Frequency Position Modulation (FPM), is an innovative extension to MT-FSK and OFDM and can be non-uniformly spread over many GHz of instantaneous bandwidth (IBW), resulting in a communications system that is difficult to intercept and jam. The FPM symbols are recovered using adaptive projections that in part employ an analog polynomial nonlinearity paired with an analog-to-digital converter (ADC) sampling at a rate that is only a fraction of the IBW of the signal. MSPs also facilitate using commercial off-the-shelf (COTS) ADCs with uniform sampling, standing in sharp contrast to random linear projections by random sampling, which requires a full Nyquist rate sample-and-hold. Our novel communication system concept provides an order of magnitude improvement in processing gain over conventional LPI/LPD communications (e.g., FH- or DS-CDMA) and facilitates the ability to operate in interference-laden environments where conventional compressed sensing receivers would fail. We quantitatively analyze the bit error rate (BER) and processing gain (PG) for a maximum-likelihood-based FPM demodulator and demonstrate its performance in interference-laden conditions.
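
    A toy baseband illustration of the core FPM idea, encoding a symbol in which frequency bins carry tones, is sketched below. It is not the authors' system: it omits the multi-spectral projections, the analog polynomial nonlinearity and the sub-Nyquist sampling, and the bin-selection scheme is an arbitrary choice made only for illustration.

```python
import numpy as np

def fpm_modulate(symbol, n_bins=64, tones_per_symbol=3, n_samples=1024, seed=0):
    """Toy frequency-position modulation: a symbol selects which FFT bins carry tones.
    Real FPM spreads tones non-uniformly over GHz of bandwidth; this baseband sketch
    only illustrates the 'frequency support = symbol' idea."""
    rng = np.random.default_rng(seed + symbol)        # deterministic bin pattern per symbol
    bins = rng.choice(np.arange(1, n_bins), size=tones_per_symbol, replace=False)
    t = np.arange(n_samples)
    signal = sum(np.cos(2 * np.pi * b * t / n_samples) for b in bins)
    return signal, set(int(b) for b in bins)

def fpm_demodulate(signal, n_bins=64, tones_per_symbol=3):
    """Recover the occupied bins from the magnitude spectrum (noise-free toy case)."""
    spec = np.abs(np.fft.rfft(signal))[:n_bins]
    return set(int(b) for b in np.argsort(spec)[-tones_per_symbol:])

tx, bins = fpm_modulate(symbol=5)
print(bins == fpm_demodulate(tx))   # True in this idealised, interference-free example
```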

  6. Risk of respiratory and cardiovascular hospitalisation with exposure to bushfire particulates: new evidence from Darwin, Australia.

    PubMed

    Crabbe, Helen

    2012-12-01

    The risk of hospitalisation from bushfire exposure events in Darwin, Australia, is examined. Several local studies have found evidence for the effects of exposure to bushfire particulates on respiratory and cardiovascular hospital admissions. They have characterised the risk of admission from seasonal exposures to biomass air pollution. A new, unanalysed data set presented an additional chance to examine unique exposure effects, as there are no anthropogenic sources of particulates in the vicinity of the exposure monitor. The incidence of daily counts of hospital admissions for respiratory and cardiovascular diagnoses was calculated with respect to exposures to particulate matter (PM(10)), coarse particulate matter, fine particulate matter (FPM) and black carbon composition. A Poisson model was used to calculate unadjusted (crude) measures of effect and then adjusted for known risk factors and confounders. The final model adjusted for the effects of minimum temperature, relative humidity, a smoothed spline for seasonal effects, 'date' for a linear effect over time, day of the week and public and school holidays. A subset analysis adjusted for an influenza epidemic in a particular year. The main findings suggest that respiratory admissions were associated with exposure to PM(10) with a lag of 1 day when adjusted for flu and other confounders (RR = 1.025, 95 % CI 1.000-1.051, p < 0.05). This effect is strongest for exposure to FPM concentrations (RR = 1.091, 95 % CI 1.023-1.163, p < 0.01) when adjusted for flu. Respiratory admissions were also associated with black carbon concentrations recorded the previous day (RR = 1.0004, 95 % CI 1.000-1.0008, p < 0.05), which did not change strength when adjusted for flu. Cardiovascular admissions had the strongest association with exposure to same-day PM and highest RR for exposure to FPM when adjusted for confounders (RR = 1.044, 95 % CI 0.989-1.102). Consistent risks were also found with exposure to black carbon with lags of 0-3 days.
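
    A minimal sketch of the kind of Poisson regression described above is given below using statsmodels. The data frame and variable names are hypothetical, and the smoothed seasonal spline, linear time trend, day-of-week and holiday terms used in the study are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)
# Hypothetical daily time series: admission counts, lagged PM10, and confounders.
df = pd.DataFrame({
    "admissions": np.random.poisson(5, 365),
    "pm10_lag1":  np.random.gamma(2.0, 10.0, 365),   # ug/m3, lagged by one day
    "tmin":       np.random.normal(22, 3, 365),       # minimum temperature, deg C
    "rh":         np.random.uniform(40, 95, 365),     # relative humidity, %
})

X = sm.add_constant(df[["pm10_lag1", "tmin", "rh"]])
res = sm.GLM(df["admissions"], X, family=sm.families.Poisson()).fit()

# exp(beta) is the rate ratio (RR) per unit increase in each covariate.
print(np.exp(res.params))       # point estimates on the RR scale
print(np.exp(res.conf_int()))   # Wald 95% confidence intervals on the RR scale
```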

  7. A data fusion framework for floodplain analysis using GIS and remotely sensed data

    NASA Astrophysics Data System (ADS)

    Necsoiu, Dorel Marius

    Throughout history floods have been part of the human experience. They are recurring phenomena that form a necessary and enduring feature of all river basin and lowland coastal systems. In an average year, they benefit millions of people who depend on them. In the more developed countries, major floods can be the largest cause of economic losses from natural disasters, and are also a major cause of disaster-related deaths in the less developed countries. Flood disaster mitigation research was conducted to determine how remotely sensed data can effectively be used to produce accurate flood plain maps (FPMs), and to identify/quantify the sources of error associated with such data. Differences were analyzed between flood maps produced by an automated remote sensing analysis tailored to the available satellite remote sensing datasets (rFPM), the 100-year flooded areas "predicted" by the Flood Insurance Rate Maps, and FPMs based on DEM and hydrological data (aFPM). Landuse/landcover was also examined to determine its influence on rFPM errors. These errors were identified and the results were integrated in a GIS to minimize landuse/landcover effects. Two substantial flood events were analyzed. These events were selected because of their similar characteristics (i.e., the existence of FIRM or Q3 data; flood data which included flood peaks, rating curves, and flood profiles; and DEM and remote sensing imagery). Automatic feature extraction was determined to be an important component for successful flood analysis. A process network, in conjunction with domain specific information, was used to map raw remotely sensed data onto a representation that is more compatible with a GIS data model. From a practical point of view, rFPM provides a way to automatically match existing data models to the type of remote sensing data available for each event under investigation. Overall, results showed how remote sensing could contribute to the complex problem of flood management by providing an efficient way to revise the National Flood Insurance Program maps.

  8. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets take the form of a stainless steel tube containing nuclear-grade high-enrichment uranium, and the irradiated tube is used to obtain fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. Mo isotopes have relatively long half-lives, about 3 days (66 hours), so delivery of the radioisotope to consumer centers and storage is possible, though still limited, and production of this isotope potentially gives significant economic value. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system, and several parallel algorithms have been developed for solving such systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel; previous works performed the reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing for the reactivity calculations used in safety analysis; parallel processing on a multicore computer system allows the calculation to be performed more quickly. This code was applied to the safety limits calculation for irradiated FPM targets containing highly enriched uranium. The neutronic calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 106 cm-1) in a tube, the delta reactivities are still within safety limits; however, for 7.9542 g and 8.838 g (× 106 cm-1) the limits were exceeded.
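
    For orientation, the serial SOR iteration that the paper parallelises can be sketched as follows. This is the textbook update, not the authors' parallel multigroup-diffusion code, and the test matrix is an arbitrary diagonally dominant example.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A with nonzero diagonal).
    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 typically accelerates convergence
    for the diagonally dominant systems arising from diffusion-equation discretisations."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Small diagonally dominant test system.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(sor_solve(A, b), np.linalg.solve(A, b))
```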

  9. Comparison of anti-EGFR-Fab’ conjugated immunoliposomes modified with two different conjugation linkers for siRNa delivery in SMMC-7721 cells

    PubMed Central

    Deng, Li; Zhang, Yingying; Ma, Lulu; Jing, Xiaolong; Ke, Xingfa; Lian, Jianhao; Zhao, Qiang; Yan, Bo; Zhang, Jinfeng; Yao, Jianzhong; Chen, Jianming

    2013-01-01

    Background: Targeted liposome-polycation-DNA complex (LPD), mainly conjugated with antibodies using functionalized PEG derivatives, is an effective nanovector for systemic delivery of small interfering RNA (siRNA). However, there are few studies reporting the effect of different conjugation linkers on LPD for gene silencing. To clarify the influence of antibody conjugation linkers on LPD, we prepared two different immunoliposomes to deliver siRNA in which DSPE-PEG-COOH and DSPE-PEG-MAL, the commonly used PEG derivative linkers, were used to conjugate anti-EGFR Fab’ with the liposome. Methods: First, 600 μg of anti-EGFR Fab’ was conjugated with 28.35 μL of a micelle solution containing DSPE-PEG-MAL or DSPE-PEG-COOH, and then post-inserted into the prepared LPD. Various liposome parameters, including particle size, zeta potential, stability, and encapsulation efficiency were evaluated, and the targeting ability and gene silencing activity of TLPD-FPC (DSPE-PEG-COOH conjugated with Fab’) was compared with that of TLPD-FPM (DSPE-PEG-MAL conjugated with Fab’) in SMMC-7721 hepatocellular carcinoma cells. Results: There was no significant difference in particle size between the two TLPDs, but the zeta potential was significantly different. Further, although there was no significant difference in siRNA encapsulation efficiency, cell viability, or serum stability between TLPD-FPM and TLPD-FPC, cellular uptake of TLPD-FPM was significantly greater than that of TLPD-FPC in EGFR-overexpressing SMMC-7721 cells. The luciferase gene silencing efficiency of TLPD-FPM was approximately three-fold higher than that of TLPD-FPC. Conclusion: Different conjugation linkers whereby antibodies are conjugated with LPD can affect the physicochemical properties of LPD and antibody conjugation efficiency, thus directly affecting the gene silencing effect of TLPD. Immunoliposomes prepared by DSPE-PEG-MAL conjugation with anti-EGFR Fab’ are more effective than TLPD containing DSPE-PEG-COOH in targeting hepatocellular carcinoma cells for siRNA delivery. PMID:24023515

  10. Meshfree simulation of avalanches with the Finite Pointset Method (FPM)

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios

    2017-04-01

    Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
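
    Of the two material models mentioned above, the Drucker-Prager yield criterion is simple enough to sketch; the snippet below evaluates one common form of the yield function for a given stress state. Sign conventions and the mapping of the two constants to friction angle and cohesion differ between codes, so the values shown are purely illustrative and this is not the FPM implementation.

```python
import numpy as np

def drucker_prager_yield(sigma, alpha, k):
    """Drucker-Prager yield function f = alpha*I1 + sqrt(J2) - k for a 3x3 stress tensor
    (f < 0: elastic, f >= 0: plastic flow).  I1 is the first stress invariant and J2 the
    second invariant of the deviatoric stress; alpha and k are material constants."""
    i1 = np.trace(sigma)
    s = sigma - i1 / 3.0 * np.eye(3)        # deviatoric stress
    j2 = 0.5 * np.tensordot(s, s)           # double contraction s_ij * s_ij / 2
    return alpha * i1 + np.sqrt(j2) - k

# Hypothetical stress state (compression negative), kPa.
sigma = np.diag([-120.0, -80.0, -60.0])
print(drucker_prager_yield(sigma, alpha=0.2, k=15.0))   # negative => elastic here
```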

  11. Validation of early GOES-16 ABI on-orbit geometrical calibration accuracy using SNO method

    NASA Astrophysics Data System (ADS)

    Yu, Fangfang; Shao, Xi; Wu, Xiangqian; Kondratovich, Vladimir; Li, Zhengping

    2017-09-01

    The Advanced Baseline Imager (ABI) onboard the GOES-16 satellite, which was launched on 19 November 2016, is the first next-generation geostationary weather instrument in the western hemisphere. It has 16 spectral solar reflective and emissive bands located in three focal plane modules (FPM): one visible and near infrared (VNIR) FPM, one midwave infrared (MWIR) FPM, and one longwave infrared (LWIR) FPM. All the ABI bands are geometrically calibrated with new techniques of Kalman filtering and the Global Positioning System (GPS) to determine the accurate spacecraft attitude and orbit configuration needed to meet the challenging image navigation and registration (INR) requirements of ABI data. This study validates the ABI navigation and band-to-band registration (BBR) accuracies using spectrally matched pixels of Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) M-band data and ABI images from Simultaneous Nadir Observation (SNO) scenes. The preliminary results showed that during the ABI post-launch product test (PLPT) period, the ABI BBR errors in the y-direction (along the VIIRS track direction) are smaller than in the x-direction (along the VIIRS scan direction). Variations in the ABI BBR calibration residuals and in the navigation difference relative to VIIRS can be observed. Note that ABI is not yet operational and the data are experimental and still under testing. Effort is ongoing to improve the ABI data quality.

  12. Volatile organic compound and semivolatile organic compound outgassing rates for ethylene propylene diene monomer and fluoropolymer seals

    NASA Astrophysics Data System (ADS)

    Pecault, Isabelle Tovena

    2017-11-01

    High-power laser facilities, such as Laser MegaJoule, are currently being operated for inertial confinement fusion experiments. Emission of volatile organic compounds (VOCs) and, even more so, semivolatile organic compounds (SVOCs) from seals in a laser environment is of tremendous importance for optics lifetime and laser performance. That is why all the seals were screened under the same conditions: 48 h at 30°C and three successive cycles of 1.5 h at 50°C. This paper focuses on the qualification tests performed on three seals: two ethylene propylene diene monomer (EPDM) seals and one fluoropolymer (FPM) seal. It is shown that the molded and the extruded EPDM outgas neither the same amount nor the same molecules, whereas EPDM and FPM outgas nearly the same level of phthalates.

  13. Effect of sampling volume on dry powder inhaler (DPI)-emitted aerosol aerodynamic particle size distributions (APSDs) measured by the Next-Generation Pharmaceutical Impactor (NGI) and the Andersen eight-stage cascade impactor (ACI).

    PubMed

    Mohammed, Hlack; Roberts, Daryl L; Copley, Mark; Hammond, Mark; Nichols, Steven C; Mitchell, Jolyon P

    2012-09-01

    Current pharmacopeial methods for testing dry powder inhalers (DPIs) require that 4.0 L be drawn through the inhaler to quantify the aerodynamic particle size distribution of "inhaled" particles. This volume comfortably exceeds the internal dead volume of the Andersen eight-stage cascade impactor (ACI) and Next-Generation Pharmaceutical Impactor (NGI) as designated multistage cascade impactors. Two DPIs, the second (DPI-B) having similar resistance to the first (DPI-A), were used to evaluate ACI and NGI performance at 60 L/min following the methodology described in the European and United States Pharmacopeias. At sampling times ≥2 s (equivalent to volumes ≥2.0 L), both impactors provided consistent measures of therapeutically important fine particle mass (FPM) from both DPIs, independent of sample duration. At shorter sample times, FPM decreased substantially with the NGI, indicative of incomplete aerosol bolus transfer through the system, whose dead space was 2.025 L. However, the ACI provided consistent measures of both variables across the range of sampled volumes evaluated, even when this volume was less than 50% of its internal dead space of 1.155 L. Such behavior may be indicative of maldistribution of the flow profile from the relatively narrow exit of the induction port to the uppermost stage of the impactor at start-up. An explanation of the ACI anomalous behavior from first principles requires resolution of the rapidly changing unsteady flow and pressure conditions at start-up, and is the subject of ongoing research by the European Pharmaceutical Aerosol Group. Meanwhile, these experimental findings are provided to advocate a prudent approach by retaining the current pharmacopeial methodology.

  14. Characterization of Particulate Matter from a Heavily Industrial Environment

    NASA Astrophysics Data System (ADS)

    Valarini, Simone; Ynoue, Rita Yuri

    2011-01-01

    A characterization of PM aerosols collected in Cubatão, Brazil is presented. Throughout 2009, 5 sampling campaigns were carried out at CEPEMA (Centro de Capacitação e Pesquisa em Meio Ambiente da Universidade de São Paulo), in the vicinity of the PETROBRAS oil refinery. A Mini-Vol portable air sampler was deployed to collect coarse and fine particles. Size-fractionated particle samples were collected by a Micro-Orifice Uniform Deposition Impactor (MOUDI) device. Gravimetric analysis showed three peaks in the mass size distributions: the After-Filter stage (cut-point diameter of less than 0.1 μm), stage 7A (d = 0.32 μm) and stage 3A (d = 3.2 μm). Fine particulate matter (FPM) concentrations were almost always lower than coarse particulate matter (CPM) concentrations. Comparison between the PM2.5 (particulate matter smaller than 2.5 μm) measurements by the MOUDI and the Mini-Vol sampler reveals good agreement. However, the MOUDI underestimates CPM. Reflectance analysis showed that almost all the black carbon is found in the Mini-Vol FPM and the lower stages of the MOUDI, with higher concentrations at the After-Filter. The atmospheric loading of PM2.5 was elevated at night, mainly due to more stable atmospheric conditions. Aerosol samples were analyzed for water-soluble ions, black carbon (BC), and trace elements using a number of analytical techniques.

  15. Hexavalent chromium and isocyanate exposures during military aircraft painting under crossflow ventilation.

    PubMed

    Bennett, James S; Marlow, David A; Nourian, Fariba; Breay, James; Hammond, Duane

    2016-01-01

    Exposure control systems performance was investigated in an aircraft painting hangar. The ability of the ventilation system and respiratory protection program to limit worker exposures was examined through air sampling during painting of F/A-18C/D strike fighter aircraft, in four field surveys. Air velocities were measured across the supply filter, exhaust filter, and hangar midplane under crossflow ventilation. Air sampling conducted during painting process phases (wipe-down, primer spraying, and topcoat spraying) encompassed volatile organic compounds, total particulate matter, Cr[VI], metals, nitroethane, and hexamethylene diisocyanate, for two worker groups: sprayers and sprayer helpers ("hosemen"). One of six methyl ethyl ketone and two of six methyl isobutyl ketone samples exceeded the short term exposure limits of 300 and 75 ppm, with means 57 ppm and 63 ppm, respectively. All 12 Cr[VI] 8-hr time-weighted averages exceeded the recommended exposure limit of 1 µg/m3, 11 out of 12 exceeded the permissible exposure limit of 5 µg/m3, and 7 out of 12 exceeded the threshold limit value of 10 µg/m3, with means 38 µg/m3 for sprayers and 8.3 µg/m3 for hosemen. Hexamethylene diisocyanate means were 5.95 µg/m3 for sprayers and 0.645 µg/m3 for hosemen. Total reactive isocyanate group--the total of monomer and oligomer as NCO group mass--showed 6 of 15 personal samples exceeded the United Kingdom Health and Safety Executive workplace exposure limit of 20 µg/m3, with means 50.9 µg/m3 for sprayers and 7.29 µg/m3 for hosemen. Several exposure limits were exceeded, reinforcing continued use of personal protective equipment. The supply rate, 94.4 m3/s (200,000 cfm), produced a velocity of 8.58 m/s (157 fpm) at the supply filter, while the exhaust rate, 68.7 m3/s (146,000 cfm), drew 1.34 m/s (264 fpm) at the exhaust filter. Midway between supply and exhaust locations, the velocity was 0.528 m/s (104 fpm). Supply rate exceeding exhaust rate created re-circulations, turbulence, and fugitive emissions, while wasting energy. Smoke releases showing more effective ventilation here than in other aircraft painting facilities carries technical feasibility relevance.

  16. Hexavalent Chromium and Isocyanate Exposures during Military Aircraft Painting under Crossflow Ventilation

    PubMed Central

    Bennett, James S.; Marlow, David A.; Nourian, Fariba; Breay, James; Hammond, Duane

    2016-01-01

    Exposure control systems performance was investigated in an aircraft painting hangar. The ability of the ventilation system and respiratory protection program to limit worker exposures was examined through air sampling during painting of F/A-18C/D strike fighter aircraft, in four field surveys. Air velocities were measured across the supply filter, exhaust filter, and hangar midplane under crossflow ventilation. Air sampling conducted during painting process phases (wipe-down, primer spraying, and topcoat spraying) encompassed volatile organic compounds, total particulate matter, Cr[VI], metals, nitroethane, and hexamethylene diisocyanate, for two worker groups: sprayers and sprayer helpers (“hosemen”). One of six methyl ethyl ketone and two of six methyl isobutyl ketone samples exceeded the short term exposure limits of 300 and 75 ppm, with means 57 ppm and 63 ppm, respectively. All 12 Cr[VI] 8-hr time-weighted averages exceeded the recommended exposure limit of 1 µg/m3, 11 out of 12 exceeded the permissible exposure limit of 5 µg/m3, and 7 out of 12 exceeded the threshold limit value of 10 µg/m3, with means 38 µg/m3 for sprayers and 8.3 µg/m3 for hosemen. Hexamethylene diisocyanate means were 5.95 µg/m3 for sprayers and 0.645 µg/m3 for hosemen. Total reactive isocyanate group—the total of monomer and oligomer as NCO group mass—showed six of 15 personal samples exceeded the United Kingdom Health and Safety Executive workplace exposure limit of 20 µg/m3, with means 50.9 µg/m3 for sprayers and 7.29 µg/m3 for hosemen. Several exposure limits were exceeded, reinforcing continued use of personal protective equipment. The supply rate, 94.4 m3/s (200,000 cfm), produced a velocity of 8.58 m/s (157 fpm) at the supply filter, while the exhaust rate, 68.7 m3/s (146,000 cfm), drew 1.34 m/s (264 fpm) at the exhaust filter. Midway between supply and exhaust locations, the velocity was 0.528 m/s (104 fpm). Supply rate exceeding exhaust rate created re-circulations, turbulence, and fugitive emissions, while wasting energy. Smoke releases showing more effective ventilation here than in other aircraft painting facilities carries technical feasibility relevance. PMID:26698920

  17. Encystment of parasitic freshwater pearl mussel (Margaritifera margaritifera) larvae coincides with increased metabolic rate and haematocrit in juvenile brown trout (Salmo trutta).

    PubMed

    Filipsson, Karl; Brijs, Jeroen; Näslund, Joacim; Wengström, Niklas; Adamsson, Marie; Závorka, Libor; Österling, E Martin; Höjesjö, Johan

    2017-04-01

    Gill parasites on fish are likely to negatively influence their host by inhibiting respiration, oxygen transport capacity and overall fitness. The glochidia larvae of the endangered freshwater pearl mussel (FPM, Margaritifera margaritifera (Linnaeus, 1758)) are obligate parasites on the gills of juvenile salmonid fish. We investigated the effects of FPM glochidia encystment on the metabolism and haematology of brown trout (Salmo trutta Linnaeus, 1758). Specifically, we measured whole-animal oxygen uptake rates at rest and following an exhaustive exercise protocol using intermittent flow-through respirometry, as well as haematocrit, in infested and uninfested trout. Glochidia encystment significantly affected whole-animal metabolic rate, as infested trout exhibited higher standard and maximum metabolic rates. Furthermore, glochidia-infested trout also had elevated levels of haematocrit. The combination of an increased metabolism and haematocrit in infested fish indicates that glochidia encystment has a physiological effect on the trout, perhaps as a compensatory response to the potential respiratory stress caused by the glochidia. When relating glochidia load to metabolism and haematocrit, fish with low numbers of encysted glochidia were the ones with particularly elevated metabolism and haematocrit. Standard metabolic rate decreased with substantial glochidia loads towards levels similar to those of uninfested fish. This suggests that initial effects visible at low levels of encystment may be countered by additional physiological effects at high loads, e.g. potential changes in energy utilization, and also that high numbers of glochidia may restrict oxygen uptake by the gills.
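
    The oxygen uptake rates reported above come from intermittent-flow respirometry, where each closed measurement phase yields a rate from the slope of the oxygen decline scaled by the effective water volume and body mass. A minimal sketch of that calculation follows; the data, volumes and the omission of background (blank) respiration are simplifications for illustration, not the authors' exact protocol.

```python
import numpy as np

def oxygen_uptake_rate(time_h, o2_mg_per_l, resp_volume_l, fish_mass_kg, fish_volume_l=None):
    """Whole-animal oxygen uptake (MO2, mg O2 kg^-1 h^-1) from one closed measurement phase:
    slope of the O2 decline times the effective water volume, divided by body mass.
    Background (blank) respiration is ignored in this sketch."""
    if fish_volume_l is None:
        fish_volume_l = fish_mass_kg  # rough approximation: 1 kg of fish displaces ~1 L
    slope = np.polyfit(time_h, o2_mg_per_l, 1)[0]          # mg O2 L^-1 h^-1 (negative)
    return -slope * (resp_volume_l - fish_volume_l) / fish_mass_kg

# Hypothetical 20-minute measurement phase for a 50 g trout in a 2 L respirometer.
t = np.linspace(0, 1 / 3, 21)                              # hours
o2 = 9.0 - 3.0 * t + np.random.normal(0, 0.01, t.size)     # dissolved O2, mg/L
print(round(oxygen_uptake_rate(t, o2, resp_volume_l=2.0, fish_mass_kg=0.05), 1))
```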

  18. High-speed and high-resolution quantitative phase imaging with digital-micromirror device-based illumination (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie; Jin, Di; Yaqoob, Zahid; So, Peter T. C.

    2017-02-01

    Due to the large number of available mirrors, the patterning speed, low cost, and compactness, digital-micromirror devices (DMDs) have been extensively used in biomedical imaging systems. Recently, DMDs have been brought to the quantitative phase microscopy (QPM) field to achieve synthetic-aperture imaging and tomographic imaging. Last year, our group demonstrated the use of a DMD for QPM, where phase retrieval is based on a recently developed Fourier ptychography algorithm. In our previous system, the illumination angle was varied by coding the aperture plane of the illumination system, which made inefficient use of the laser power. In our new DMD-based QPM system, we use Lee holograms, conjugated to the sample plane, to change the illumination angle with much higher power efficiency. Multiple-angle illumination can also be achieved with this method. With this versatile system, we can achieve FPM-based high-resolution phase imaging with 250 nm lateral resolution by the Rayleigh criterion. Due to the use of a powerful laser, the imaging speed would only be limited by the camera acquisition speed. With a fast camera, we expect to achieve a phase imaging speed close to 100 fps, which has not been achieved in current FPM imaging systems. By adding a reference beam, we also expect to achieve synthetic-aperture imaging while directly measuring the phase of the sample fields. This would reduce the phase-retrieval processing time to allow for real-time imaging applications in the future.

  19. Indoor Measurements of Environmental Tobacco Smoke Final Report to the Tobacco Related Disease Research Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, Michael G.; Gundel, Lara A.; Dod, Raymond L.

    2004-03-02

    The objective of this research project was to improve the basis for estimating environmental tobacco smoke (ETS) exposures in a variety of indoor environments. The research utilized experiments conducted in both laboratory and "real-world" buildings to (1) study the transport of ETS species from room to room, (2) examine the viability of using various chemical markers as tracers for ETS, and (3) evaluate to what extent re-emission of ETS components from indoor surfaces might add to the ETS exposure estimates. A three-room environmental chamber was used to examine multi-zone transport and behavior of ETS and its tracers. One room (simulating a smoker's living room) was extensively conditioned with ETS, while a corridor and a second room (simulating a child's bedroom) remained smoking-free. A series of 5 sets of replicate experiments were conducted under different door opening and flow configurations: sealed, leaky, slightly ajar, wide open, and under forced air-flow conditions. When the doors between the rooms were slightly ajar the particles dispersed into the other rooms, eventually reaching the same concentration. The particle size distribution took the same form in each room, although the total numbers of particles in each room depended on the door configurations. The particle number size distribution moved towards somewhat larger particles as the ETS aged. We also successfully modeled the inter-room transport of ETS particles from first principles, using size-fractionated particle emission factors, predicted deposition rates, and temperature-gradient-driven inter-room flows. This validation improved our understanding of bulk inter-room ETS particle transport. Four chemical tracers were examined: ultraviolet-absorbing particulate matter (UVPM), fluorescent particulate matter (FPM), nicotine and solanesol. Both UVPM and FPM traced the transport of ETS particles into the non-smoking areas. Nicotine, on the other hand, quickly adsorbed on unconditioned surfaces so that nicotine concentrations in these rooms remained very low, even during smoking episodes. These findings suggest that using nicotine as a tracer of ETS particle concentrations may yield misleading concentration and/or exposure estimates. The results of the solanesol analyses were compromised, apparently by exposure to light during collection (lights in the chambers were always on during the experiments). This may mean that the use of solanesol as a tracer is impractical in "real-world" conditions. In the final phase of the project we conducted measurements of ETS particles and tracers in three residences occupied by smokers who had joined a smoking cessation program. As a pilot study, its objective was to improve our understanding of how ETS aerosols are transported in a small number of homes (and thus, whether limiting smoking to certain areas has an effect on ETS exposures in other parts of the building). As with the chamber studies, we examined whether measurements of various chemical tracers, such as nicotine, solanesol, FPM and UVPM, could be used to accurately predict ETS concentrations and potential exposures in "real-world" settings, as has been suggested by several authors. The ultimate goal of these efforts, and a future larger multiple-house study, is to improve the basis for estimating ETS exposures to the general public. Because we only studied three houses, no firm conclusions can be developed from our data.
However, the results for the ETS tracers are essentially the same as those for the chamber experiments. The use of nicotine was problematic as a marker for ETS exposure. In the smoking areas of the homes, nicotine appeared to be a suitable indicator; however, in the non-smoking regions, nicotine behavior was very inconsistent. The other tracers, UVPM and FPM, provided a better basis for estimating ETS exposures in the "real world". The use of solanesol was compromised, as it had been in the chamber experiments.

  20. Development, validation and application of specific primers for analyzing the clostridial diversity in dark fermentation pit mud by PCR-DGGE.

    PubMed

    Hu, Xiao-Long; Wang, Hai-Yan; Wu, Qun; Xu, Yan

    2014-07-01

    In this study, a Clostridia-specific primer set, SJ-F and SJ-R, based on the 16S rRNA gene sequences available in databases, was designed and validated by theoretical and experimental evaluations. It targeted 19 clostridial families and unclassified_Clostridia with different coverage rates. The specificity and universality of the novel primer set were further tested using dark fermentation pit mud (FPM). A total of 13 closest relatives, representing 12 species, were found to be affiliated with 7 clostridial genera. Compared with the well-accepted universal bacterial primer pair P2/P3, five additional clostridial genera present in the FPM, namely Roseburia, Tissierella, Sporanaerobacter, Alkalibacter and Halothermothrix, were also revealed. Therefore, this study could provide a good alternative for investigating clostridial diversity and monitoring population dynamics rapidly and efficiently in various anaerobic environments and dark fermentation systems in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. HEAT TRANSFER EVALUATION OF HFC-236FA IN CONDENSATION AND EVAPORATION

    EPA Science Inventory

    The report gives results of an evaluation of the shell-side heat transfer performance of hydrofluorocarbon (HFC)-236fa, which is considered to be a potential substitute for chlorofluorocarbon (CFC)-114 in Navy shipboard chillers, for both conventional finned [1024- and 1575-fpm (...

  2. Non-invasive assessment of the reproductive cycle in free-ranging female African elephants (Loxodonta africana) treated with a gonadotropin-releasing hormone (GnRH) vaccine for inducing anoestrus.

    PubMed

    Benavides Valades, Gabriela; Ganswindt, Andre; Annandale, Henry; Schulman, Martin L; Bertschinger, Henk J

    2012-08-25

    In southern Africa, various options to manage elephant populations are being considered. Immunocontraception is considered to be the most ethically acceptable and logistically feasible method for control of smaller and confined populations. In this regard, the use of gonadotropin-releasing hormone (GnRH) vaccine has not been investigated in female elephants, although it has been reported to be safe and effective in several domestic and wildlife species. The aims of this study were to monitor the oestrous cycles of free-ranging African elephant cows using faecal progestagen metabolites and to evaluate the efficacy of a GnRH vaccine to induce anoestrus in treated cows. Between May 2009 and June 2010, luteal activity of 12 elephant cows was monitored non-invasively using an enzyme immunoassay detecting faecal 5alpha-reduced pregnanes (faecal progestagen metabolites, FPM) on a private game reserve in South Africa. No bulls of breeding age were present on the reserve prior to and for the duration of the study. After a 3-month control period, 8 randomly-selected females were treated twice with 600 micrograms of GnRH vaccine (Improvac®, Pfizer Animal Health, Sandton, South Africa) 5-7 weeks apart. Four of these females had been treated previously with the porcine zona pellucida (pZP) vaccine for four years (2004-2007). All 12 monitored females (8 treated and 4 controls) showed signs of luteal activity as evidenced by FPM concentrations exceeding individual baseline values more than once. A total of 16 oestrous cycles could be identified in 8 cows, with four of these within the 13- to 17-week range previously reported for captive African elephants. According to the FPM concentrations the GnRH vaccine was unable to induce anoestrus in the treated cows. Overall FPM levels in samples collected during the wet season (mean 4.03 micrograms/gram dry faeces) were significantly higher (P<0.002) than in the dry season (mean 2.59 micrograms/gram dry faeces). The GnRH vaccination protocol failed to induce anoestrus in the treated female elephants. These results indicate that irregular oestrous cycles occur amongst free-ranging elephants and are not restricted to elephants in captivity. The relationship between ecological conditions and endocrine activity was confirmed. Free-ranging female elephants were observed to not cycle continuously throughout the year in the absence of adult bulls.

  3. Mast cells contribute to alterations in vascular reactivity and exacerbation of ischemia reperfusion injury following ultrafine PM exposure

    EPA Science Inventory

    Increased ambient fine particulate matter (FPM) concentrations are associated with increased risk for short-term and long-term adverse cardiovascular events. Ultrafine PM (UFPM) due to its size and increased surface area might be particularly toxic. Mast cells are well recognized...

  4. One-day kilning dries pine studs to 10% moisture

    Treesearch

    P. Koch

    1971-01-01

    The USDA Southern Forest Experiment Station tested drying southern pine studs in air cross-circulated at a velocity of 930 fpm and a temperature of 240°F; the wet-bulb depression was 80°F. Under the same regime, 1" boards dried in 10 hours, plus time to condition.

  5. Indoor air quality in homes, offices and restaurants in Korean urban areas—indoor/outdoor relationships

    NASA Astrophysics Data System (ADS)

    Baek, Sung-Ok; Kim, Yoon-Shin; Perry, Roger

    Air quality monitoring was carried out to collect data on the levels of various indoor and ambient air constituents in two cities in Korea (Seoul and Taegu). Sampling was conducted simultaneously indoors and outdoors at six residences, six offices and six restaurants in each city during summer 1994 and winter 1994-1995. Measured pollutants were respirable suspended particulate matter (RSP), carbon monoxide (CO), carbon dioxide (CO2), nitrogen dioxide (NO2), and a range of volatile organic compounds (VOCs). In addition, in order to evaluate the effect of smoking on indoor air quality, analyses of parameters associated with environmental tobacco smoke (ETS) were undertaken, namely nicotine, ultraviolet-absorbing particulate matter (UVPM), fluorescent particulate matter (FPM) and solanesol particulate matter (SolPM). The results of this study have confirmed the importance of ambient air in determining the quality of air indoors in two major Korean cities. The majority of VOCs measured in both indoor and outdoor environments were derived from outdoor sources, probably motor vehicles. Benzene and other VOC concentrations were much higher during the winter months than the summer months and were not significantly greater in the smoking sites examined. Heating and cooking practices, coupled with generally inadequate ventilation, also were shown to influence indoor air quality. In smoking sites, ETS appears to be a minor contributor to VOC levels as no statistically significant relationships were identified between ETS components and VOCs, whereas very strong correlations were found between indoor and outdoor levels of vehicle-related pollutants. The average contribution of ETS to total RSP concentrations was estimated to range from 10 to 20%.

  6. Variation in wing pattern and palatability in a female-limited polymorphic mimicry system

    PubMed Central

    Long, Elizabeth C; Hahn, Thomas P; Shapiro, Arthur M

    2014-01-01

    Checkerspot butterflies in the genera Euphydryas and Chlosyne exhibit phenotypic polymorphisms along a well-defined latitudinal and elevational gradient in California. The patterns of phenotypic variation in Euphydryas chalcedona, Chlosyne palla, and Chlosyne hoffmanni suggest a mimetic relationship; in addition, the specific patterns of variation in C. palla suggest a female-limited polymorphic mimicry system (FPM). However, the existence of polymorphic models runs counter to predictions of mimicry theory. Palatability trials were undertaken to assess whether or not the different color morphs of each species were distasteful or toxic to a generalized avian predator, the European starling (Sturnus vulgaris). Results indicate that the black morph of E. chalcedona is distasteful, but not toxic, to predators, while the red morph is palatable. C. hoffmanni and both color morphs of C. palla are palatable to predators. Predators that learn to reject black E. chalcedona also reject black C. palla, suggesting that the latter is an FPM of the former. C. hoffmanni does not appear to be involved in this mimetic relationship. PMID:25512850

  7. The Effectiveness Evaluation among Different Player-Matching Mechanisms in a Multi-Player Quiz Game

    ERIC Educational Resources Information Center

    Tsai, Fu-Hsing

    2016-01-01

    This study aims to investigate whether different player-matching mechanisms in educational multi-player online games (MOGs) can affect students' learning performance, enjoyment perception and gaming behaviors. Based on the multi-player quiz game, TRIS-Q, developed by Tsai, Tsai and Lin (2015) using a free player-matching (FPM) mechanism, the same…

  8. Adapting the Abbreviated Impactor Measurement (AIM) concept to make appropriate inhaler aerosol measurements to compare with clinical data: a scoping study with the "Alberta" idealized throat (AIT) inlet.

    PubMed

    Mitchell, Jolyon; Copley, Mark; Sizer, Yvonne; Russell, Theresa; Solomon, Derek

    2012-08-01

    The Abbreviated Impactor Measurement (AIM) concept simplifies determination of aerodynamic size metrics for inhaler quality control testing. A similar approach is needed to compare in vitro particle size distribution metrics with human respiratory tract (HRT) deposition. An abbreviated impactor based on the Andersen eight-stage cascade impactor (ACI) was developed having two size-fractionating stages with cut-points at 4.7 and 1.1 μm aerodynamic diameter at 28.3 L/min, to distinguish between coarse (CPM), fine (FPM), and extra-fine (EPM) mass fractions likely to deposit in the oropharynx, airways of the lungs, or be exhaled, respectively. In vitro data were determined for pressurized metered dose inhaler (pMDI)-delivered salbutamol (100 μg/actuation ex valve) with an "Alberta" idealized adult upper airway (throat) inlet (AIM-pHRT). Corresponding benchmark data for a full resolution Andersen eight-stage cascade impactor with "Alberta" idealized throat (ACI-AIT) and ACI-Ph.Eur./USP inlet were obtained with the same product. Mass recoveries (μg/actuation; mean ± SD) were equivalent at 100.5 ± 0.7; 97.2 ± 4.9 and 101.5 ± 9.5 for the AIM-pHRT, ACI-AIT, and ACI-Ph.Eur./USP induction port, respectively [one-way analysis of variance (ANOVA), p=0.64]. Corresponding values of CPM were 59.2 ± 4.2; 58.4 ± 2.4, and 65.6 ± 5.8; the AIT captured larger particles more efficiently than the Ph.Eur./USP induction port, so that less large particle mass was apparent in the upper stages of the ACI-AIT (p ≤ 0.037). Equivalent values of FPM were similar regardless of inlet/abbreviation at 41.3 ± 4.2; 38.7 ± 3.0, and 35.9 ± 3.8 (p=0.054), and EPM measures (1.7 ± 0.3; 2.0 ± 0.5; 2.1 ± 0.3) were also comparable (p=0.32). The AIT inlet significantly increased the capture of the coarse fraction compared with that collected by the Ph.Eur./USP induction port. Measures obtained using the AIM-pHRT apparatus were comparable with those obtained with the ACI-AIT.
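
    The coarse/fine/extra-fine split described above can be illustrated by assigning each impactor stage's deposit to a size fraction according to its cut-point diameter. The sketch below is a simplification (real analyses typically interpolate the cumulative mass distribution at the exact 4.7 and 1.1 μm boundaries), and the stage masses and cut-points shown are hypothetical.

```python
def size_fractions(stage_masses, stage_cutpoints, coarse_cut=4.7, fine_cut=1.1):
    """Partition impactor stage deposits (ug/actuation) into coarse (CPM), fine (FPM)
    and extra-fine (EPM) mass by stage cut-point diameter (um).  A stage with cut-point d
    is assumed to collect particles larger than d, so its deposit is assigned to the
    fraction containing d; the back-up filter is given a cut-point of 0."""
    cpm = sum(m for m, d in zip(stage_masses, stage_cutpoints) if d >= coarse_cut)
    fpm = sum(m for m, d in zip(stage_masses, stage_cutpoints) if fine_cut <= d < coarse_cut)
    epm = sum(m for m, d in zip(stage_masses, stage_cutpoints) if d < fine_cut)
    return cpm, fpm, epm

# Hypothetical stage deposits for an eight-stage impactor plus back-up filter.
masses = [20, 18, 15, 12, 14, 10, 6, 3, 2]                 # ug/actuation
cuts   = [9.0, 5.8, 4.7, 3.3, 2.1, 1.1, 0.7, 0.4, 0.0]     # um
print(size_fractions(masses, cuts))                        # (CPM, FPM, EPM)
```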

  9. Drying hardwoods with impinging jets.

    Treesearch

    Howard N. Rosen

    1980-01-01

    Silver maple, yellow poplar, and black walnut lumber was dried in a prototype jet dryer over a range of temperatures from 120 degrees to 400 degrees Fahrenheit and air velocities from 1,000 to 9,000 fpm. Different drying schedules were developed for each type of wood. The quality of the jet-dried lumber was good and compared favorably with kiln-dried lumber.

  10. O-Ring Installation for Underwater Components and Applications

    DTIC Science & Technology

    1982-04-15

    cure is effected and the heat source removed. AGING -- To undergo changes in physical properties with age or lapse of time. AIR CHECKS -- Surface...the use of heat and pressure, resulting in greatly increased strength and elasticity of rubber-like materials. VULCANIZING AGENT -- A material that...Cross Section Dia -- Diameter EP, EPM, EPDM -- Ethylene-Propylene Rubber F or °F -- Degrees Fahrenheit FED -- Federal Specification FPM -- Fluorocarbon

  11. A Summer Math and Physics Program for High School Students: Student Performance and Lessons Learned in the Second Year

    ERIC Educational Resources Information Center

    Timme, Nicholas; Baird, Michael; Bennett, Jake; Fry, Jason; Garrison, Lance; Maltese, Adam

    2013-01-01

    For the past two years, the Foundations in Physics and Mathematics (FPM) summer program has been held at Indiana University in order to fulfill two goals: provide additional physics and mathematics instruction at the high school level, and provide physics graduate students with experience and autonomy in designing curricula and teaching courses.…

  12. Composition and diurnal variability of the natural Amazonian aerosol

    NASA Astrophysics Data System (ADS)

    Graham, Bim; Guyon, Pascal; Maenhaut, Willy; Taylor, Philip E.; Ebert, Martin; Matthias-Maser, Sabine; Mayol-Bracero, Olga L.; Godoi, Ricardo H. M.; Artaxo, Paulo; Meixner, Franz X.; Moura, Marcos A. Lima; Rocha, Carlos H. EçA. D'almeida; Grieken, Rene Van; Glovsky, M. Michael; Flagan, Richard C.; Andreae, Meinrat O.

    2003-12-01

    As part of the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA)-Cooperative LBA Airborne Regional Experiment (CLAIRE) 2001 campaign, separate day and nighttime aerosol samples were collected in July 2001 at a ground-based site in Amazonia, Brazil, in order to examine the composition and temporal variability of the natural "background" aerosol. A combination of analytical techniques was used to characterize the elemental and ionic composition of the aerosol. Major particle types larger than ˜0.5 μm were identified by electron and light microscopy. Both the coarse and fine aerosol were found to consist primarily of organic matter (˜70 and 80% by mass, respectively), with the coarse fraction containing small amounts of soil dust and sea-salt particles and the fine fraction containing some non-sea-salt sulfate. Coarse particulate mass concentrations (CPM ≈ PM10 - PM2) were found to be highest at night (average = 3.9 ± 1.4 μg m-3, mean night-to-day ratio = 1.9 ± 0.4), while fine particulate mass concentrations (FPM ≈ PM2) increased during the daytime (average = 2.6 ± 0.8 μg m-3, mean night-to-day ratio = 0.7 ± 0.1). The nocturnal increase in CPM coincided with an increase in primary biological particles in this size range (predominantly yeasts and other fungal spores), resulting from the trapping of surface-derived forest aerosol under a shallow nocturnal boundary layer and a lake-land breeze effect at the site, although active nocturnal sporulation may have also contributed. Associated with this, we observed elevated nighttime concentrations of biogenic elements and ions (P, S, K, Cu, Zn, NH4+) in the CPM fraction. For the FPM fraction a persistently higher daytime concentration of organic carbon was found, which indicates that photochemical production of secondary organic aerosol from biogenic volatile organic compounds may have made a significant contribution to the fine aerosol. Dust and sea-salt-associated elements/ions in the CPM fraction, and non-sea-salt sulfate in the FPM fraction, showed higher daytime concentrations, most likely due to enhanced convective downward mixing of long-range transported aerosol.

  13. A crisis of visibility: The psychological consequences of false-positive screening mammograms, an interview study.

    PubMed

    Bond, Mary; Garside, Ruth; Hyde, Christopher

    2015-11-01

    To understand the meaning of having a false-positive screening mammogram. Qualitative interview study. Twenty-one women, who had experienced false-positive screening mammograms, took part in semi-structured interviews that were analysed with Interpretive Phenomenological Analysis. This research took place in the United Kingdom. The analysis revealed a wide range of response to having a false-positive mammogram, from nonchalance to extreme fear. These reactions come from the potential for the belief that one is healthy to be challenged by being recalled, as the worst is frequently assumed. For most, the image of the lesion on the X-ray brought the reality of this challenge into sharp focus, as they might soon discover they had breast cancer. Waiting, whether for the appointment, at the clinic or for biopsy results was considered the worst aspect of being recalled. Generally, the uncertainty was quickly resolved with the pronouncement of the 'all-clear', which brought considerable relief and the restoration of belief in the healthy self. However, for some, lack of information, contradictory information, or poor interpersonal communication meant that uncertainty about their health status lingered at least until their next normal screening mammogram. Mammography screening related anxiety lasted for up to 12 years. Breast cancer screening produces a 'crisis of visibility'. Accepting the screening invitation is taking a risk that you may experience unnecessary stress, uncertainty, fear, anxiety, and physical pain. Not accepting the invitation is taking a risk that malignant disease will remain invisible. Statement of contribution What is already known on this subject? More than 50,000 women a year in England have a false-positive mammogram (FPM). Having an FPM can cause anxiety compared with a normal mammogram. The anxiety can last up to 35 months. What does this study add? Refocuses attention from the average response found in quantitative studies to the wide range of individual response. Gives insight into the nature of the anxiety of having FPMs. Highlights the role of uncertainty in provoking distress from an FPM. © 2015 The British Psychological Society.

  14. Drying southern pine at 240°F. -- effects of air velocity and humidity, board thickness and density

    Treesearch

    Peter Koch

    1972-01-01

    Kiln time to reach 10 percent moisture content was shortened by circulating air at high velocity, but was little affected by board specific gravity. A wet-bulb depression of 80°F. provided faster drying than depressions of 40 or 115°F. At 80° depression and with air circulated at 930 f.p.m., kiln time was directly...

  15. Process for straightening and drying southern pine 2 by 4's in 24 hours

    Treesearch

    Peter Koch

    1971-01-01

    In 21 hours under mechanical restraint and in a kiln providing a cross-circulation velocity of 1,000 f.p.m. at dry- and wet-bulb temperatures of 240° and 160°F., followed by 3 hours at 195° and 185°F., southern pine 2 by 4 studs cut from steamed veneer cores or small logs were dried to 9-percent moisture content (Standard...

  16. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder), the UAS must rely on its radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position error below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54,000 pairwise encounters between a UAS and a non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well-clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54,000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted horizontal maneuvers only. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers are suppressed, for all vertical error rate thresholds examined. However, results also show that in roughly 35% of the encounters where a vertical maneuver was selected, forcing the UAS to do a horizontal maneuver instead increased the severity of the loss of well-clear for that encounter. Finally, results showed a small reduction in the number of severe losses of well-clear when the high-performance UAS (2,000 fpm climb and descent rate) was allowed to maneuver vertically and the vertical rate error was below 500 fpm. Overall, the results show that using a single vertical rate threshold is not advisable, and that limiting a UAS to horizontal maneuvers when vertical rate errors are above 175 fpm can make a UAS less safe about a third of the time. It is suggested that the hard limit be removed and that system manufacturers be instructed to account for their own UAS performance, as well as vertical rate error and encounter geometry, when determining whether or not to provide vertical guidance to regain well-clear.
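
    A minimal sketch of the guidance-suppression rule this record describes, using the 95th-percentile limits quoted above from the RTCA SC-228 proposal; the function and variable names are illustrative, not from the study itself:

        # Hedged sketch: offer vertical well-clear recovery guidance against a
        # non-cooperative intruder only if the radar's 95th-percentile errors
        # are within the limits quoted in the abstract. Names are illustrative.
        VERT_POS_ERR_LIMIT_FT = 175.0    # 95th-percentile vertical position error
        VERT_RATE_ERR_LIMIT_FPM = 200.0  # 95th-percentile vertical rate error

        def allow_vertical_guidance(pos_err_95_ft: float, rate_err_95_fpm: float) -> bool:
            """Return True if vertical recovery maneuvers may be offered."""
            return (pos_err_95_ft < VERT_POS_ERR_LIMIT_FT and
                    rate_err_95_fpm < VERT_RATE_ERR_LIMIT_FPM)

        # Example: a radar with 150 ft / 500 fpm errors would be limited to
        # horizontal-only recovery guidance under this rule.
        print(allow_vertical_guidance(150.0, 500.0))  # False

    The study's conclusion argues against exactly this kind of single hard threshold, suggesting instead that manufacturers weigh UAS performance, rate error, and encounter geometry case by case.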

  17. Aircraft Survivability: An Overview of Aircraft Fire Protection, Spring 2008

    DTIC Science & Technology

    2008-01-01

    function, liquid spray ignition, fire initiation, and fire growth and sustainment. For impacts with the tank ullage, the FPM describes the... spray fires are simulated using a two-phase flow model that describes both the droplet lifetime history and the gas phase. In addition to...vulnerability in a fuel tank is through the injection of nitrogen by Onboard Inerting Gas Generation Systems (OBIGGS). Recent research by the FAA has shown

  18. Recurring Reports of Civilian Employment and Payrolls

    DTIC Science & Technology

    1989-09-11

    Department of Defense INSTRUCTION, September 11, 1989 (AD-A272 768). SUBJECT: Recurring Reports of Civilian Employment and Payrolls... Positions and Modification of TAPER Authority to Permit Promotion and Reassignment of these Employees," July 30, 1979; (m) Federal Personnel Manual (FPM... L. DD32 On-Site Inspection Agency (OSIA) M. DDOI: 1. Office of the Secretary of Defense (OSD) 2. Joint Chiefs of Staff (JCS)* 3. Defense Security

  19. Telegenetics: application of a tele-education program in genetic syndromes for Brazilian students

    PubMed Central

    MAXIMINO, Luciana Paula; PICOLINI-PEREIRA, Mirela Machado; CARVALHO, José Luiz Brito

    2014-01-01

    With the high occurrence of genetic anomalies in Brazil and the manifestations of communication disorders associated with these conditions, the development of educative actions that comprise these illnesses can bring unique benefits in the identification and appropriate treatment of these clinical pictures. Objective The aim of this study was to develop and analyze an educational program in genetic syndromes for elementary students applied in two Brazilian states, using an Interactive Tele-education model. Material and Methods The study was carried out in 4 schools: two in the state of São Paulo, Southeast Region, Brazil, and two in the state of Amazonas, North Region, Brazil. Forty-five students, both genders, aged between 13 and 14 years, of the 9th grade of the basic education of both public and private system, were divided into two groups: 21 of São Paulo Group (SPG) and 24 of Amazonas Group (AMG). The educational program lasted about 3 months and was divided into two stages including both classroom and distance activities on genetic syndromes. The classroom activity was carried out separately in each school, with expository lessons, graphs and audiovisual contents. In the activity at a distance the educational content was presented to students by means of the Interactive Tele-education model. In this stage, the students had access a Cybertutor, using the Young Doctor Project methodology. In order to measure the effectiveness of the educational program, the Problem Situation Questionnaire (PSQ) and the Web Site Motivational Analysis Checklist adapted (FPM) were used. Results The program developed was effective for knowledge acquisition in 80% of the groups. FPM showed a high satisfaction index from the participants in relation to the Interactive Tele-education, evaluating the program as "awesome course". No statistically significant differences between the groups regarding type of school or state were observed. Conclusion Thus, the Tele-Education Program can be used as a tool for educational purposes in genetic syndromes of other populations, in several regions of Brazil. PMID:25591016

  20. Drying southern pine at 240°F-- effects of air velocity and humidity, board thickness and density

    Treesearch

    P. Koch

    1972-01-01

    Kiln time to reach 10 percent moisture content was shortened by circulating air at high velocity, but was little affected by board specific gravity. A wet-bulb depression of 80°F. provided faster drying than depressions of 40 or 115°F. At 80° depression and with air circulated at 930 f.p.m., kiln time was directly proportional to board thickness. Under these optimum...

  1. Installation of a flow control device in an inclined air-curtain fume hood to control wake-induced exposure.

    PubMed

    Chen, Jia-Kun

    2016-08-01

    An inclined plate for flow control was installed at the lower edge of the sash of an inclined air-curtain fume hood to reduce the effects of the wake around a worker standing in front of the fume hood. Flow inside the fume hood is controlled by the inclined air-curtain and deflection plates, thereby forming a quad-vortex flow structure. Controlling the face velocity of the fume hood resulted in convex, straight, concave, and attachment flow profiles in the inclined air-curtain. We used flow visualization and conducted a tracer gas test with a mannequin to determine the performance of two sash geometries, namely, the half-cylinder and inclined plate designs. When the half-cylinder design was used, the tracer gas test registered a high leakage concentration at Vf ≦ 57.1 fpm. This concentration occurred at the top of the sash opening, which was close to the breathing zone of the mannequin placed in front of the fume hood. When the inclined plate design was used, the containment was good, with concentrations of 0.002-0.004 ppm, at Vf ≦ 63.0 fpm. Results indicate that an inclined plate effectively reduces the leakage concentration induced by recirculation flow structures that form in the wake of a worker standing in front of an inclined air-curtain fume hood.

  2. Ultrawide Shipboard Electrooptic Electromagnetic Environment Monitoring

    DTIC Science & Technology

    1994-05-01

    ridge-waveguide modulator has a device length of 300 fpm, a waveguide thickness of 0.4 μm, a device capacitance of 0.2 pF, and a r x- 0.7. For digital ...important noise sources identified. Particular attention will be paid to the performance characteristics of the optical modulator. For digital ...1.32 μm for digital as well as analog optical link applications. The operation of the FKE modulator was discussed in Section 2.1.2 of this report. At

  3. Using Animated Computer Simulation to Determine the Optimal Resource Support for the Endodontic Specialty Practice at Fort Lewis.

    DTIC Science & Technology

    1998-03-01

    [Snippet of the simulation model's entity and arrival tables: patient entity types (Pt Endo Ex, Pt Endo Tx, Pt Perio Ex, Pt Perio Tx, Pt Perio Sx, Pt Perio Pot, Pt Exam, Pt Other), each with a speed of 114 fpm, and scheduled entrance logic.] ..."prevention, diagnosis, and treatment of diseases and injuries that affect the dental pulp, tooth root, and periapical tissue" (Jablonski, 1982)...

  4. Time to dry 2-, 3-, and 4-inch S4S southern pine at 240°F as related to board width

    Treesearch

    P. Koch

    1974-01-01

    With 80°F wet-bulb depression and air cross-circulated at 1,000 fpm, southern pine in 2-, 3-, and 4-inch thicknesses attained 10 percent moisture content in 22.4, 35.6, and 45.3 hours. In 3- and 4-inch thicknesses, 4-inch-wide lumber required less time to dry than that 8 or 12 inches wide. Surface checks were absent or moderate in all thicknesses and widths. End-...

  5. Centripetal Acceleration Reaction: An Effective and Robust Mechanism for Flapping Flight in Insects

    PubMed Central

    Zhang, Chao; Hedrick, Tyson L.; Mittal, Rajat

    2015-01-01

    Despite intense study by physicists and biologists, we do not fully understand the unsteady aerodynamics that relate insect wing morphology and kinematics to lift generation. Here, we formulate a force partitioning method (FPM) and implement it within a computational fluid dynamic model to provide an unambiguous and physically insightful division of aerodynamic force into components associated with wing kinematics, vorticity, and viscosity. Application of the FPM to hawkmoth and fruit fly flight shows that the leading-edge vortex is the dominant mechanism for lift generation for both these insects and contributes between 72–85% of the net lift. However, there is another, previously unidentified mechanism, the centripetal acceleration reaction, which generates up to 17% of the net lift. The centripetal acceleration reaction is similar to the classical inviscid added-mass in that it depends only on the kinematics (i.e. accelerations) of the body, but is different in that it requires the satisfaction of the no-slip condition, and a combination of tangential motion and rotation of the wing surface. Furthermore, the classical added-mass force is identically zero for cyclic motion but this is not true of the centripetal acceleration reaction. Furthermore, unlike the lift due to vorticity, centripetal acceleration reaction lift is insensitive to Reynolds number and to environmental flow perturbations, making it an important contributor to insect flight stability and miniaturization. This force mechanism also has broad implications for flow-induced deformation and vibration, underwater locomotion and flows involving bubbles and droplets. PMID:26252016
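
    In schematic form (an illustrative summary only, not the authors' exact derivation), the force partitioning described in this record splits the total aerodynamic force on the wing into kinematic (acceleration-reaction), vortex-induced, and viscous contributions:

        $F_{\mathrm{total}} = F_{\kappa} + F_{\omega} + F_{\sigma}$

    where $F_{\omega}$ collects the vorticity-induced force (dominated by the leading-edge vortex, roughly 72-85% of net lift per the abstract) and $F_{\kappa}$ contains the centripetal acceleration reaction (up to 17% of net lift), with $F_{\sigma}$ the viscous contribution.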

  6. Centripetal Acceleration Reaction: An Effective and Robust Mechanism for Flapping Flight in Insects.

    PubMed

    Zhang, Chao; Hedrick, Tyson L; Mittal, Rajat

    2015-01-01

    Despite intense study by physicists and biologists, we do not fully understand the unsteady aerodynamics that relate insect wing morphology and kinematics to lift generation. Here, we formulate a force partitioning method (FPM) and implement it within a computational fluid dynamic model to provide an unambiguous and physically insightful division of aerodynamic force into components associated with wing kinematics, vorticity, and viscosity. Application of the FPM to hawkmoth and fruit fly flight shows that the leading-edge vortex is the dominant mechanism for lift generation for both these insects and contributes between 72-85% of the net lift. However, there is another, previously unidentified mechanism, the centripetal acceleration reaction, which generates up to 17% of the net lift. The centripetal acceleration reaction is similar to the classical inviscid added-mass in that it depends only on the kinematics (i.e. accelerations) of the body, but is different in that it requires the satisfaction of the no-slip condition, and a combination of tangential motion and rotation of the wing surface. Furthermore, the classical added-mass force is identically zero for cyclic motion but this is not true of the centripetal acceleration reaction. Furthermore, unlike the lift due to vorticity, centripetal acceleration reaction lift is insensitive to Reynolds number and to environmental flow perturbations, making it an important contributor to insect flight stability and miniaturization. This force mechanism also has broad implications for flow-induced deformation and vibration, underwater locomotion and flows involving bubbles and droplets.

  7. Optomechanical design of TMT NFIRAOS Subsystems at INO

    NASA Astrophysics Data System (ADS)

    Lamontagne, Frédéric; Desnoyers, Nichola; Grenier, Martin; Cottin, Pierre; Leclerc, Mélanie; Martin, Olivier; Buteau-Vaillancourt, Louis; Boucher, Marc-André; Nash, Reston; Lardière, Olivier; Andersen, David; Atwood, Jenny; Hill, Alexis; Byrnes, Peter W. G.; Herriot, Glen; Fitzsimmons, Joeleff; Véran, Jean-Pierre

    2017-08-01

    The adaptive optics system for the Thirty Meter Telescope (TMT) is the Narrow-Field InfraRed Adaptive Optics System (NFIRAOS). Recently, INO has been involved in the optomechanical design of several subsystems of NFIRAOS, including the Instrument Selection Mirror (ISM), the NFIRAOS Beamsplitters (NBS), and the NFIRAOS Source Simulator system (NSS) comprising the Focal Plane Mask (FPM), the Laser Guide Star (LGS) sources, and the Natural Guide Star (NGS) sources. This paper presents an overview of these subsystems and the optomechanical design approaches used to meet the optical performance requirements under environmental constraints.

  8. Observations of Total Lightning Associated with Severe Convection During the Wet Season in Central Florida

    NASA Technical Reports Server (NTRS)

    Sharp, D.; Williams, E.; Weber, M.; Goodman, Steven J.; Raghavan, R.; Matlin, A.; Boldi, B.

    1998-01-01

    This paper will discuss findings of a collaborative lightning research project between the National Aeronautics and Space Administration, the Massachusetts Institute of Technology and the National Weather Service office in Melbourne, Florida. In August 1996, NWS/MLB received a workstation which incorporates data from the KMLB WSR-88D, cloud-to-ground (CG) stroke data from the National Lightning Detection Network (NLDN), and 3D volumetric lightning data collected from the Kennedy Space Center's Lightning Detection And Ranging (LDAR) system. The two primary objectives of this lightning workstation, called the Lightning Imaging Sensor Data Applications Display (LISDAD), are to: observe how total lightning relates to severe convective storm morphology over central Florida, and compare ground-based total lightning data (LDAR) to a satellite-based lightning detection system. This presentation will focus on objective #1. The LISDAD system continuously displays CG and total lightning activity overlaid on top of the KMLB composite reflectivity product. This allows forecasters to monitor total lightning activity associated with convective cells occurring over the central Florida peninsula and adjacent coastal waters. The LISDAD system also keeps track of the amount of total lightning, and the associated KMLB radar products, for individual convective cells occurring over the region. By clicking on an individual cell, a history table displays flash rate information (CG and total lightning) in one-minute increments, along with radar parameter trends (echo tops, maximum dBZ and height of maximum dBZ) every 5 minutes. This history table is updated continuously, without user intervention, as long as the cell is identified. Reviewing data collected during the 1997 wet season (21 cases) revealed that storms which produced severe weather (hail ≥ 0.75 in. or wind damage) typically showed a rapid rise in total lightning prior to the onset of severe weather. On average, flash rate increases of 25 FPM per minute over a time scale of approximately 5 minutes were common. These pulse severe storms typically reached values of 150 to 200 FPM, with some cells exceeding 400 FPM. One finding which could have a direct application to the warning process is that the rapid increase in lightning typically occurred in advance of the warning issuance time. Comparisons between the ending time of the rapid rate increase and the time when the warning was issued by NWS/MLB meteorologists exhibited a lead time of 8 minutes. It is conceivable that if close monitoring of the LISDAD system by operational meteorologists is routinely performed, warnings for pulse severe storms could be issued up to 4 to 6 minutes earlier than they are currently.

  9. Dynamics and control of infections on social networks of population types.

    PubMed

    Williams, Brian G; Dye, Christopher

    2018-06-01

    Random mixing in host populations has been a convenient simplifying assumption in the study of epidemics, but neglects important differences in contact rates within and between population groups. For HIV/AIDS, the assumption of random mixing is inappropriate for epidemics that are concentrated in groups of people at high risk, including female sex workers (FSW) and their male clients (MCF), injecting drug users (IDU) and men who have sex with men (MSM). To find out who transmits infection to whom and how that affects the spread and containment of infection remains a major empirical challenge in the epidemiology of HIV/AIDS. Here we develop a technique, based on the routine sampling of infection in linked population groups (a social network of population types), which shows how an HIV/AIDS epidemic in Can Tho Province of Vietnam began in FSW, was propagated mainly by IDU, and ultimately generated most cases among the female partners of MCF (FPM). Calculation of the case reproduction numbers within and between groups, and for the whole network, provides insights into control that cannot be deduced simply from observations on the prevalence of infection. Specifically, the per capita rate of HIV transmission was highest from FSW to MCF, and most HIV infections occurred in FPM, but the number of infections in the whole network is best reduced by interrupting transmission to and from IDU. This analysis can be used to guide HIV/AIDS interventions using needle and syringe exchange, condom distribution and antiretroviral therapy. The method requires only routine data and could be applied to infections in other populations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
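
    A minimal sketch, with purely illustrative numbers, of the kind of calculation this record describes: case reproduction numbers within and between population groups arranged as a next-generation matrix, with the whole-network reproduction number taken as its spectral radius, and an intervention modeled by scaling one group's transmission:

        # Hedged sketch: K[i][j] = expected new infections in group i caused by one
        # infected person in group j (groups: FSW, MCF, IDU, MSM, FPM).
        # All entries are made up for illustration, not data from the study.
        import numpy as np

        groups = ["FSW", "MCF", "IDU", "MSM", "FPM"]
        K = np.array([
            [0.2, 0.9, 0.1, 0.0, 0.0],   # infections caused in FSW
            [1.4, 0.0, 0.0, 0.0, 0.0],   # infections caused in MCF
            [0.1, 0.0, 1.1, 0.0, 0.0],   # infections caused in IDU
            [0.0, 0.0, 0.1, 0.6, 0.0],   # infections caused in MSM
            [0.0, 0.8, 0.2, 0.0, 0.0],   # infections caused in FPM
        ])

        # Whole-network case reproduction number = spectral radius of K.
        print(f"R_network = {np.abs(np.linalg.eigvals(K)).max():.2f}")

        # Intervention targeting IDU: halve transmission to and from that group.
        K_idu = K.copy()
        K_idu[2, :] *= 0.5
        K_idu[:, 2] *= 0.5
        print(f"R_network after halving IDU transmission = "
              f"{np.abs(np.linalg.eigvals(K_idu)).max():.2f}")

    The point mirrored from the abstract is that the group with the largest per capita transmission rate or the most infections is not necessarily the group whose interruption most reduces the network-wide reproduction number.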

  10. High Performance Auxiliary Power Unit Technology Demonstrator.

    DTIC Science & Technology

    1980-12-01

    aft bearings 1.13 P3 - Power producer CDP 1.14 DPHE - Lube pressure drop at heat exchanger 1.15 POFP - Load airflow orifice pressure 1.16 DPOFP - Load... [Snippet of test-data tables: pressure channels (PG1, PEBL, P2, P3, DPHE, POFP, DPOFP) in PSIG/PSID, with shaft speeds N1 and N2 in RPM and NSATM in FPM; the tabulated readings are not legible in this extract.]

  11. The Mechanical Design of a Kinematic Mount for the Mid Infrared Instrument Focal Plane Module on the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Thelen, Michael P.; Moore, Donald M.

    2009-01-01

    The detector assembly for the Mid Infrared Instrument (MIRI) of the James Webb Space Telescope (JWST) is mechanically supported in the Focal Plane Module (FPM) Assembly with an efficient hexapod design. The kinematic mount design allows for precision adjustment of the detector boresight to assembly alignment fiducials and maintains optical alignment requirements during flight conditions of launch and cryogenic operations below 7 Kelvin. This kinematic mounting technique is able to be implemented in a variety of optical-mechanical designs and is capable of micron level adjustment control and stability over wide dynamic and temperature ranges.

  12. Escalator design features evaluation

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Deshpande, G. K.

    1982-01-01

    Escalators are available with design features such as dual speed (90 and 120 fpm), mat operation and flat steps. These design features were evaluated based on the impact of each on capital and operating costs, traffic flow, and safety. A human factors engineering model was developed to analyze the need for flat steps at various speeds. Mat operation of escalators was found to be cost effective in terms of energy savings. Dual speed operation of escalators with the higher speed used during peak hours allows for efficient operation. A minimum number of flat steps required as a function of escalator speed was developed to ensure safety for the elderly.

  13. Indoor air quality (IAQ) evaluation of a Novel Tobacco Vapor (NTV) product.

    PubMed

    Ichitsubo, Hirokazu; Kotaki, Misato

    2018-02-01

    The impact of using a Novel Tobacco Vapor (NTV) product on indoor air quality (IAQ) was simulated using an environmentally-controlled chamber. Three environmental simulations were examined: two non-smoking areas (conference room and dining room) and one ventilated smoking area (smoking lounge). IAQ was evaluated by (i) measuring constituents in the mainstream NTV product emissions and (ii) determining classical environmental tobacco smoke (ETS) and representative air quality markers. Analysis of the mainstream emissions revealed that vapor from the NTV product is chemically simpler than cigarette smoke. ETS markers (RSP, UVPM, FPM, solanesol, nicotine, 3-ethenylpyridine), volatile organic compound (toluene), carbon monoxide, propylene glycol, glycerol, and triacetin were below the limit of detection or the limit of quantification in both the non-smoking and smoking environments after using the NTV product. The concentrations of ammonia, carbonyls (formaldehyde, acetaldehyde, and acetone), and total volatile organic compounds were at the same levels as found in the chamber without NTV use. There was no significant increase in the levels of formaldehyde, acetone or ammonia in exhaled breath following NTV use. In summary, under the simulations tested, the NTV product had no measurable effect on IAQ, in either non-smoking or smoking areas. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. The generic gradient-like structure of certain asymptotically autonomous semilinear parabolic equations

    NASA Astrophysics Data System (ADS)

    Jänig, A.

    2018-05-01

    We consider asymptotically autonomous semilinear parabolic equations $u_t + Au = f(t,u)$. Suppose that $f(t,\cdot)\to f^\pm$ as $t\to\pm\infty$, where the semiflows induced by $u_t + Au = f^\pm(u)$ (*) are gradient-like. Under certain assumptions, it is shown that, generically with respect to a perturbation $g$ with $g(t)\to 0$ as $|t|\to\infty$, every solution $u$ of $u_t + Au = f(t,u) + g(t)$ is a connection between equilibria $e^\pm$ of (*) with $m(e^-)\geq m(e^+)$. Moreover, if the Morse indices satisfy $m(e^-) = m(e^+)$, then $u$ is isolated by linearization.

  15. Theoretical study of nanoparticle formation in thermal plasma processing: Nucleation, coagulation and aggregation

    NASA Astrophysics Data System (ADS)

    Mendoza Gonzalez, Norma Yadira

    This work presents a mathematical modeling study of the synthesis of nanoparticles in radio frequency (RF) inductively coupled plasma (ICP) reactors. The purpose is to further investigate the influence of process parameters on the final size and morphology of the produced particles. The proposed model involves the calculation of the flow and temperature fields of the plasma gas. Evaporation of the raw particles is also accounted for, with the particle trajectory and temperature history calculated with a Lagrangian approach. Nanoparticle formation is modeled by homogeneous nucleation, and growth is caused by condensation and Brownian coagulation. The growth of fractal aggregates is considered by introducing a power-law exponent Df. Transport of nanoparticles occurs by convection, thermophoresis and Brownian diffusion. The method of moments is used to solve the particle dynamics equation. The model is validated using experimental results from plasma reactors at laboratory scale. The results are presented in the following manner. First, use is made of the computational fluid dynamics (CFD) software Fluent 6.1 with a commercial companion package specifically developed for aerosols, the Fine Particle Model (FPM). This package is used to study the relationship between the operating parameters and the properties of the end products at the laboratory scale. Secondly, a coupled hybrid model for the synthesis of spherical particles and fractal aggregates is developed in place of the FPM package. Results obtained from this model make it possible to identify the importance of each parameter in defining the morphology of spherical primary particles and fractal aggregates of nanoparticles. The model was solved using the geometries and operating conditions of existing reactors at the Centre de Recherche en Energie, Plasma et Electrochimie (CREPE) of the Universite de Sherbrooke, for which experimental results were available. Additionally, this study demonstrates the importance of the flow and temperature fields for the growth of fractal particles, namely the aggregates.
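
    A toy sketch of the moment-type particle balance mentioned here, assuming a constant nucleation rate and a size-independent coagulation kernel (the actual model couples this balance to the plasma flow and temperature fields and tracks higher moments as well); the numerical values are illustrative only:

        # Hedged toy sketch of a method-of-moments number balance: the particle
        # number concentration N grows by nucleation at rate J and decays by
        # Brownian coagulation with a size-independent kernel beta.
        J = 1.0e16      # nucleation rate, particles / (m^3 s)   (assumed)
        beta = 1.0e-15  # coagulation kernel, m^3 / s            (assumed)

        def step(N: float, dt: float) -> float:
            """Explicit Euler update of dN/dt = J - 0.5 * beta * N**2."""
            return N + dt * (J - 0.5 * beta * N**2)

        N, dt = 0.0, 1.0e-4
        for _ in range(200000):
            N = step(N, dt)
        print(f"quasi-steady N ~ {N:.3e} particles/m^3")
        # Analytic steady state: N = sqrt(2*J/beta) ~ 4.5e15 particles/m^3 here.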

  16. Inhibition of human alcohol and aldehyde dehydrogenases by aspirin and salicylate: assessment of the effects on first-pass metabolism of ethanol.

    PubMed

    Lee, Shou-Lun; Lee, Yung-Pin; Wu, Min-Li; Chi, Yu-Chou; Liu, Chiu-Ming; Lai, Ching-Long; Yin, Shih-Jiun

    2015-05-01

    Previous studies have reported that aspirin significantly reduced the first-pass metabolism (FPM) of ethanol in humans, thereby increasing the adverse effects of alcohol. The underlying causes, however, remain poorly understood. Alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH), the principal enzymes responsible for metabolism of ethanol, are complex enzyme families that exhibit functional polymorphisms among ethnic groups and distinct tissue distributions. We investigated the inhibition profiles of aspirin and its major metabolite salicylate for ethanol oxidation by recombinant human ADH1A, ADH1B1, ADH1B2, ADH1B3, ADH1C1, ADH1C2, ADH2, and ADH4, and acetaldehyde oxidation by ALDH1A1 and ALDH2, at pH 7.5 and 0.5 mM NAD(+). Competitive inhibition was found to be the predominant pattern among the ADHs and ALDHs studied, although noncompetitive and uncompetitive inhibitions were also detected in a few cases. The inhibition constants of salicylate for the ADHs and ALDHs were considerably lower than those of aspirin, with the exception of ADH1A; this can be ascribed to a substitution of Ala-93 at the bottom of the substrate pocket, as revealed by molecular docking experiments. Simulations based on the kinetic inhibition equations show that, at higher therapeutic levels of blood plasma salicylate (1.5 mM), the decreases in activity at 2-10 mM ethanol for ADH1A/ADH2 and ADH1B2/ADH1B3 are predicted to be 75-86% and 31-52%, respectively, and that the activity decline for ALDH1A1 and ALDH2 at 10-50 μM acetaldehyde is predicted to be 62-73%. Our findings suggest that salicylate may substantially inhibit hepatic FPM of alcohol at both the ADH and ALDH steps when aspirin is taken concurrently. Copyright © 2015 Elsevier Inc. All rights reserved.
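
    For reference, the competitive-inhibition rate law underlying this kind of kinetic simulation (the standard textbook form, not necessarily the authors' exact parameterization) is

        $v = \dfrac{V_{\max}[S]}{K_m\left(1 + [I]/K_i\right) + [S]}$

    so a salicylate concentration $[I]$ of 1.5 mM acting on an enzyme with a small $K_i$ raises the apparent $K_m$ and sharply lowers the rate at the 2-10 mM ethanol and 10-50 μM acetaldehyde concentrations quoted above.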

  17. Evaluating the physiological and behavioral response of a male and female gorilla (Gorilla gorilla gorilla) during an introduction.

    PubMed

    Jacobs, Raechel M; Ross, Stephen R; Wagner, Katherine E; Leahy, Maureen; Meiers, Susan T; Santymire, Rachel M

    2014-01-01

    Prolonged stress responses can lead to infertility and death; therefore, monitoring indicators such as stress-related hormones and behaviors is an important tool in ensuring the health and well-being of zoo-housed animal populations. Changes in social structure, such as the introduction of a new conspecific, can be a source of stress. In April 2010, a sexually mature female western lowland gorilla (Gorilla gorilla gorilla) was brought to Lincoln Park Zoo (LPZ; Chicago, IL) from the Chicago Zoological Park (Brookfield, IL) for a breeding recommendation from the Gorilla Species Survival Plan. Fecal glucocorticoid metabolites (FGMs) were monitored in two gorillas prior to, during, and immediately following the social introduction. Reproductive events, such as ovarian cyclicity and pregnancy, were monitored using behavior and fecal progestagen metabolite (FPM; female) and fecal androgen metabolite (FAM; male) analyses. Mean (± standard error) FGM concentrations for the male were elevated (P = 0.002) during the introduction (20.61 ± 0.83 ng/g) compared to the pre- and post-introduction phases (11.31 ± 0.48 ng/g and 12.42 ± 0.65 ng/g, respectively). For the female, mean FGM concentrations were lower (P < 0.001) during the post-introduction phase (17.91 ± 1.07 ng/g) than during the pre- and introduction phases (30.50 ± 3.42 and 27.38 ± 1.51 ng/g, respectively). The female maintained normal FPM cyclicity throughout the study and became pregnant in the post-introduction phase. These results suggest the importance of both behavioral and physiological monitoring of zoo animals and demonstrate the potential stress that can occur during social introductions. Zoo Biol. 33:394-402, 2014. © 2014 Wiley Periodicals, Inc.

  18. Supplemental Citrulline Is More Efficient Than Arginine in Increasing Systemic Arginine Availability in Mice.

    PubMed

    Agarwal, Umang; Didelija, Inka C; Yuan, Yang; Wang, Xiaoying; Marini, Juan C

    2017-04-01

    Background: Arginine is considered to be an essential amino acid in various (patho)physiologic conditions of high demand. However, dietary arginine supplementation suffers from various drawbacks, including extensive first-pass extraction. Citrulline supplementation may be a better alternative than arginine, because its only fate in vivo is conversion into arginine. Objective: The goal of the present research was to determine the relative efficiency of arginine and citrulline supplementation to improve arginine availability. Methods: Six-week-old C57BL/6J male mice fitted with gastric catheters were adapted to 1 of 7 experimental diets for 2 wk. The basal diet contained 2.5 g l-arginine/kg, whereas the supplemented diets contained an additional 2.5, 7.5, and 12.5 g/kg diet of either l-arginine or l-citrulline. On the final day, after a 3-h food deprivation, mice were continuously infused intragastrically with an elemental diet similar to the dietary treatment, along with l-[ 13 C 6 ]arginine, to determine the splanchnic first-pass metabolism (FPM) of arginine. In addition, tracers were continuously infused intravenously to determine the fluxes and interconversions between citrulline and arginine. Linear regression slopes were compared to determine the relative efficiency of each supplement. Results: Whereas all the supplemented citrulline (105% ± 7% SEM) appeared in plasma and resulted in a marginal increase of 86% in arginine flux, supplemental arginine underwent an ∼70% FPM, indicating that only 30% of the supplemental arginine entered the peripheral circulation. However, supplemental arginine did not increase arginine flux. Both supplements linearly increased ( P < 0.01) plasma arginine concentration from 109 μmol/L for the basal diet to 159 and 214 μmol/L for the highest arginine and citrulline supplementation levels, respectively. However, supplemental citrulline increased arginine concentrations to a greater extent (35%, P < 0.01). Conclusions: Citrulline supplementation is more efficient at increasing arginine availability than is arginine supplementation itself in mice. © 2017 American Society for Nutrition.
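
    One common way to express splanchnic first-pass metabolism from a dual-tracer design of this kind (a generic relation, not necessarily the exact equation used in this study) is

        $\mathrm{FPM} = 1 - \dfrac{\text{systemic appearance rate of the intragastric tracer}}{\text{intragastric tracer infusion rate}}$

    so an FPM of roughly 0.70 corresponds to the ~30% of supplemental arginine reaching the peripheral circulation reported above, whereas citrulline, with essentially no first-pass extraction, appeared in plasma almost completely.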

  19. Supplemental Citrulline Is More Efficient Than Arginine in Increasing Systemic Arginine Availability in Mice123

    PubMed Central

    Agarwal, Umang; Didelija, Inka C; Yuan, Yang; Wang, Xiaoying; Marini, Juan C

    2017-01-01

    Background: Arginine is considered to be an essential amino acid in various (patho)physiologic conditions of high demand. However, dietary arginine supplementation suffers from various drawbacks, including extensive first-pass extraction. Citrulline supplementation may be a better alternative than arginine, because its only fate in vivo is conversion into arginine. Objective: The goal of the present research was to determine the relative efficiency of arginine and citrulline supplementation to improve arginine availability. Methods: Six-week-old C57BL/6J male mice fitted with gastric catheters were adapted to 1 of 7 experimental diets for 2 wk. The basal diet contained 2.5 g l-arginine/kg, whereas the supplemented diets contained an additional 2.5, 7.5, and 12.5 g/kg diet of either l-arginine or l-citrulline. On the final day, after a 3-h food deprivation, mice were continuously infused intragastrically with an elemental diet similar to the dietary treatment, along with l-[13C6]arginine, to determine the splanchnic first-pass metabolism (FPM) of arginine. In addition, tracers were continuously infused intravenously to determine the fluxes and interconversions between citrulline and arginine. Linear regression slopes were compared to determine the relative efficiency of each supplement. Results: Whereas all the supplemented citrulline (105% ± 7% SEM) appeared in plasma and resulted in a marginal increase of 86% in arginine flux, supplemental arginine underwent an ∼70% FPM, indicating that only 30% of the supplemental arginine entered the peripheral circulation. However, supplemental arginine did not increase arginine flux. Both supplements linearly increased (P < 0.01) plasma arginine concentration from 109 μmol/L for the basal diet to 159 and 214 μmol/L for the highest arginine and citrulline supplementation levels, respectively. However, supplemental citrulline increased arginine concentrations to a greater extent (35%, P < 0.01). Conclusions: Citrulline supplementation is more efficient at increasing arginine availability than is arginine supplementation itself in mice. PMID:28179487

  20. Experimental study on foam coverage on simulated longwall roof.

    PubMed

    Reed, W R; Zheng, Y; Klima, S; Shahan, M R; Beck, T W

    2017-01-01

    Testing was conducted to determine the ability of foam to maintain roof coverage in a simulated longwall mining environment. Approximately 27 percent of respirable coal mine dust can be attributed to longwall shield movement, and developing controls for this dust source has been difficult. The application of foam is a possible dust control method for this source. Laboratory testing of two foam agents was conducted to determine the ability of the foam to adhere to a simulated longwall face roof surface. Two different foam generation methods were used: compressed air and blower air. Using a new imaging technology, image processing and analysis utilizing ImageJ software produced quantifiable results of foam roof coverage. For compressed air foam in 3.3 m/s (650 fpm) ventilation, 98 percent of agent A was intact while 95 percent of agent B was intact on the roof at three minutes after application. At 30 minutes after application, 94 percent of agent A was intact while only 20 percent of agent B remained. For blower air in 3.3 m/s (650 fpm) ventilation, the results were dependent upon nozzle type. Three different nozzles were tested. At 30 min after application, 74 to 92 percent of foam agent A remained, while 3 to 50 percent of foam agent B remained. Compressed air foam seems to remain intact for longer durations and is easier to apply than blower air foam. However, more water drained from the foam when using compressed air foam, which demonstrates that blower air foam retains more water at the roof surface. Agent A seemed to be the better performer as far as roof application is concerned. This testing demonstrates that roof application of foam is feasible and is able to withstand a typical face ventilation velocity, establishing this technique's potential for longwall shield dust control.
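
    A minimal sketch of the coverage calculation that the ImageJ workflow in this record performs, assuming a grayscale image in which foam pixels are brighter than the bare roof; the threshold value and the synthetic test image are illustrative only:

        # Hedged sketch: percent of a simulated roof image covered by foam,
        # computed by thresholding a grayscale image (the kind of measurement
        # the ImageJ analysis described above produces).
        import numpy as np

        def foam_coverage_percent(gray_image: np.ndarray, threshold: int = 128) -> float:
            """Fraction of pixels classified as foam, in percent."""
            foam_mask = gray_image >= threshold
            return 100.0 * foam_mask.mean()

        # Example with a synthetic 100x100 image: top 30 rows "foam", rest "roof".
        img = np.zeros((100, 100), dtype=np.uint8)
        img[:30, :] = 200
        print(f"coverage = {foam_coverage_percent(img):.1f}%")  # 30.0%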

  1. Experimental study on foam coverage on simulated longwall roof

    PubMed Central

    Reed, W.R.; Zheng, Y.; Klima, S.; Shahan, M.R.; Beck, T.W.

    2018-01-01

    Testing was conducted to determine the ability of foam to maintain roof coverage in a simulated longwall mining environment. Approximately 27 percent of respirable coal mine dust can be attributed to longwall shield movement, and developing controls for this dust source has been difficult. The application of foam is a possible dust control method for this source. Laboratory testing of two foam agents was conducted to determine the ability of the foam to adhere to a simulated longwall face roof surface. Two different foam generation methods were used: compressed air and blower air. Using a new imaging technology, image processing and analysis utilizing ImageJ software produced quantifiable results of foam roof coverage. For compressed air foam in 3.3 m/s (650 fpm) ventilation, 98 percent of agent A was intact while 95 percent of agent B was intact on the roof at three minutes after application. At 30 minutes after application, 94 percent of agent A was intact while only 20 percent of agent B remained. For blower air in 3.3 m/s (650 fpm) ventilation, the results were dependent upon nozzle type. Three different nozzles were tested. At 30 min after application, 74 to 92 percent of foam agent A remained, while 3 to 50 percent of foam agent B remained. Compressed air foam seems to remain intact for longer durations and is easier to apply than blower air foam. However, more water drained from the foam when using compressed air foam, which demonstrates that blower air foam retains more water at the roof surface. Agent A seemed to be the better performer as far as roof application is concerned. This testing demonstrates that roof application of foam is feasible and is able to withstand a typical face ventilation velocity, establishing this technique’s potential for longwall shield dust control. PMID:29563765

  2. Characterization of Ovarian Steroid Patterns in Female African Lions (Panthera leo), and the Effects of Contraception on Reproductive Function

    PubMed Central

    Putman, Sarah B.; Brown, Janine L.; Franklin, Ashley D.; Schneider, Emily C.; Boisseau, Nicole P.; Asa, Cheryl S.; Pukazhenthi, Budhan S.

    2015-01-01

    Because of poor reproduction after the lifting of an 8-year breeding moratorium, a biomedical survey of female lions in U.S. zoos was initiated in 2007. Fecal estrogen (FEM), progestagen (FPM) and glucocorticoid (FGM) metabolites were analyzed in samples collected 3–4 times per wk from 28 lions at 17 facilities (0.9–13.8 yr of age) for 4 mo—3.5 yr and body weights were obtained ~monthly from 17 animals at eight facilities (0.0–3.0 yr of age). Based on FEM, estrous cycle length averaged 17.5 ± 0.4 d in duration, with estrus lasting 4.4 ± 0.2 d. All but one female exhibited waves of estrogenic activity indicative of follicular activity; however, not all females expressed estrous behaviors (73%), suggesting silent estrus was common. Female lions experienced puberty earlier than expected; waves of estrogenic activity were observed as young as 1.1 yr of age, which may be related to a faster growth rate of captive vs. wild lions. Mean gestation length was 109.5 ± 1.0 d, whereas the non-pregnant luteal phase was less than half (46.0 ± 1.2 d). Non-mating induced increases in FPM were observed in 33% of females housed without a male, consistent with spontaneous ovulation. A number of study animals had been contracepted, and the return to cyclicity after treatment withdrawal, while variable, was ~4.0 yr and longer than the 1-yr expected efficacy, especially for those implanted with Suprelorin. For FGM, there were no differences in overall, baseline or peak mean concentrations among the age groups or across seasons, nor were there any relationships between reproductive parameters and FGM concentrations. Overall, results suggest that poor reproduction in lions after the breeding moratorium was not related to altered adrenal or ovarian steroid activity, but for some females may have been a consequence of individual institutions’ management decisions. PMID:26460849

  3. Long-term liquid storage and reproductive evaluation of an innovative boar semen extender (Formula12®) containing a non-reducing disaccharide and an enzymatic agent.

    PubMed

    Bresciani, Carla; Bianchera, Annalisa; Bettini, Ruggero; Buschini, Annamaria; Marchi, Laura; Cabassi, Clotilde Silvia; Sabbioni, Alberto; Righi, Federico; Mazzoni, Claudio; Parmigiani, Enrico

    2017-05-01

    There are no reports of saccharolytic enzymes being used in the preparation of formulations for animal semen extenders. In the present study, the use of an innovative semen extender (Formula12 ® ) in the long-term liquid storage of boar semen at 17°C was evaluated. The formulation included use of a disaccharide (sucrose) as the energy source precursor coupled to an enzymatic agent (invertase). The innovative extender was evaluated and compared in vitro to a commercial extender (Vitasem LD ® ) for the following variables: Total Motility (TM), Forward Progressive Motility (FPM), sperm morphology, membrane integrity, acrosome integrity, and chromatin instability. Boar sperm diluted in Formula12 ® and stored for 12 days at 17°C maintained a commercially acceptable FPM (>70%). Using the results from the in vitro study, an AI field trial was performed. A total of 170 females were inseminated (135 with Formula12 ® and 35 with Vitasem LD ® ). The pregnancy rates were 97.8% compared with 91.4%, and the farrowing rates were 96.3% compared with 88.6% when Formula12 ® and Vitasem LD ® were used, respectively. The mean number of piglets born/sow were 14.92±0.46 compared with 13.83±0.70, and the number of piglets born alive/sow were 14.07±0.46 compared with 12.12±0.70 (P<0.05). The results obtained in this study demonstrated that use of the innovative concept to provide a precursor of glucose and fructose as energy sources for an enzymatic agent in an extender allowed for meeting the metabolic requirements of boar sperm during storage at 17°C. It is suggested that there was a beneficial effect on fertilizing capacity of boar sperm in the female reproductive tract with use of these technologies. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. TU-F-18C-09: Mammogram Surveillance Using Texture Analysis for Breast Cancer Patients After Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuo, H; Tome, W; FOX, J

    2014-06-15

    Purpose: To study the feasibility of applying a cancer risk model established from treated patients to predict the risk of recurrence on follow-up mammography after radiation therapy for both the ipsilateral and contralateral breast. Methods: An extensive set of textural feature functions was applied to a set of 196 mammograms from 50 patients. 56 mammograms from 28 patients were used as the training set, 44 mammograms from 22 patients were used as the test set, and the rest were used for prediction. Feature functions include Histogram, Gradient, Co-Occurrence Matrix, Run-Length Matrix and Wavelet Energy. An optimum subset of the feature functions was selected by Fisher Coefficient (FO) or Mutual Information (MI) (up to the top 10 features) or a method combining FO, MI and Principal Component (FMP) (up to the top 30 features). One-Nearest Neighbor (1-NN), Linear Discriminant Analysis (LDA) and Nonlinear Discriminant Analysis (NDA) were utilized to build a risk model of breast cancer from the training set of mammograms at the time of diagnosis. The risk model was then used to predict the risk of recurrence from mammograms taken one year and three years after RT. Results: FPM with NDA has the best classification power in classifying the training-set mammograms with lesions versus those without lesions. The model of FPM with NDA achieved a true positive (TP) rate of 82%, compared to 45.5% when using FO with 1-NN. The best false positive (FP) rates were 0% and 3.6% in the contralateral breast at 1 year and 3 years after RT, and 10.9% in the ipsilateral breast at 3 years after RT. Conclusion: Texture analysis offers a high-dimensional representation to differentiate breast tissue in mammograms. Using NDA to classify mammograms with lesions from mammograms without lesions can achieve rather high TP and low FP rates in the surveillance of mammograms for patients with conservative surgery combined with RT.
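
    A minimal sketch in the spirit of this record's pipeline: select the top texture features by mutual information, then compare a 1-nearest-neighbor classifier with linear discriminant analysis under cross-validation. Synthetic data stands in for the mammogram texture features, and the feature-selection and classifier choices are illustrative, not the authors' exact FMP/NDA configuration:

        # Hedged sketch of a texture-feature risk model: MI-based feature
        # selection followed by 1-NN and LDA classifiers, cross-validated.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        # 56 "mammograms", 300 texture features (histogram, GLCM, run-length, wavelet, ...)
        X, y = make_classification(n_samples=56, n_features=300, n_informative=15,
                                   random_state=0)

        for name, clf in [("1-NN", KNeighborsClassifier(n_neighbors=1)),
                          ("LDA", LinearDiscriminantAnalysis())]:
            model = make_pipeline(SelectKBest(mutual_info_classif, k=10), clf)
            acc = cross_val_score(model, X, y, cv=5).mean()
            print(f"{name}: cross-validated accuracy = {acc:.2f}")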

  5. Characterization of Ovarian Steroid Patterns in Female African Lions (Panthera leo), and the Effects of Contraception on Reproductive Function.

    PubMed

    Putman, Sarah B; Brown, Janine L; Franklin, Ashley D; Schneider, Emily C; Boisseau, Nicole P; Asa, Cheryl S; Pukazhenthi, Budhan S

    2015-01-01

    Because of poor reproduction after the lifting of an 8-year breeding moratorium, a biomedical survey of female lions in U.S. zoos was initiated in 2007. Fecal estrogen (FEM), progestagen (FPM) and glucocorticoid (FGM) metabolites were analyzed in samples collected 3-4 times per wk from 28 lions at 17 facilities (0.9-13.8 yr of age) for 4 mo-3.5 yr and body weights were obtained ~monthly from 17 animals at eight facilities (0.0-3.0 yr of age). Based on FEM, estrous cycle length averaged 17.5 ± 0.4 d in duration, with estrus lasting 4.4 ± 0.2 d. All but one female exhibited waves of estrogenic activity indicative of follicular activity; however, not all females expressed estrous behaviors (73%), suggesting silent estrus was common. Female lions experienced puberty earlier than expected; waves of estrogenic activity were observed as young as 1.1 yr of age, which may be related to a faster growth rate of captive vs. wild lions. Mean gestation length was 109.5 ± 1.0 d, whereas the non-pregnant luteal phase was less than half (46.0 ± 1.2 d). Non-mating induced increases in FPM were observed in 33% of females housed without a male, consistent with spontaneous ovulation. A number of study animals had been contracepted, and the return to cyclicity after treatment withdrawal, while variable, was ~4.0 yr and longer than the 1-yr expected efficacy, especially for those implanted with Suprelorin. For FGM, there were no differences in overall, baseline or peak mean concentrations among the age groups or across seasons, nor were there any relationships between reproductive parameters and FGM concentrations. Overall, results suggest that poor reproduction in lions after the breeding moratorium was not related to altered adrenal or ovarian steroid activity, but for some females may have been a consequence of individual institutions' management decisions.

  6. A Procedure to Edit Deep-Towed Navigation Data

    DTIC Science & Technology

    2003-02-28


  7. Radiation resistance of elastomeric O-rings in mixed neutron and gamma fields: Testing methodology and experimental results

    NASA Astrophysics Data System (ADS)

    Zenoni, A.; Bignotti, F.; Donzella, A.; Donzella, G.; Ferrari, M.; Pandini, S.; Andrighetto, A.; Ballan, M.; Corradetti, S.; Manzolaro, M.; Monetti, A.; Rossignoli, M.; Scarpa, D.; Alloni, D.; Prata, M.; Salvini, A.; Zelaschi, F.

    2017-11-01

    Materials and components employed in the presence of intense neutron and gamma fields are expected to absorb high dose levels that may induce deep modifications of their physical and mechanical properties, possibly causing loss of their function. A protocol for irradiating elastomeric materials in reactor mixed neutron and gamma fields and for testing the evolution of their main mechanical and physical properties with absorbed dose has been developed. Four elastomeric compounds used for vacuum O-rings, one fluoroelastomer polymer (FPM) based and three ethylene propylene diene monomer rubber (EPDM) based, presently available on the market have been selected for the test. One EPDM is rated as radiation resistant in gamma fields, while the other elastomers are general purpose products. Particular care has been devoted to dosimetry calculations, since absorbed dose in neutron fields, unlike pure gamma fields, is strongly dependent on the material composition and, in particular, on the hydrogen content. The products have been tested up to about 2 MGy absorbed dose. The FPM based elastomer, in spite of its lower dose absorption in fast neutron fields, features the largest variations of properties, with a dramatic increase in stiffness and brittleness. Out of the three EPDM based compounds, one shows large and rapid changes in the main mechanical properties, whereas the other two feature more stable behaviors. The performance of the EPDM rated as radiation resistant in pure gamma fields does not appear significantly better than that of the standard product. The predictive capability of the accelerated irradiation tests performed as well as the applicable concepts of threshold of radiation damage is discussed in view of the use of the examined products in the selective production of exotic species facility, now under construction at the Legnaro National Laboratories of the Italian Istituto Nazionale di Fisica Nucleare. It results that a careful account of dose rate effects and oxygen penetration in the material, both during test irradiations and in operating conditions, is needed to obtain reliable predictions.

  8. Radiation resistance of elastomeric O-rings in mixed neutron and gamma fields: Testing methodology and experimental results.

    PubMed

    Zenoni, A; Bignotti, F; Donzella, A; Donzella, G; Ferrari, M; Pandini, S; Andrighetto, A; Ballan, M; Corradetti, S; Manzolaro, M; Monetti, A; Rossignoli, M; Scarpa, D; Alloni, D; Prata, M; Salvini, A; Zelaschi, F

    2017-11-01

    Materials and components employed in the presence of intense neutron and gamma fields are expected to absorb high dose levels that may induce deep modifications of their physical and mechanical properties, possibly causing loss of their function. A protocol for irradiating elastomeric materials in reactor mixed neutron and gamma fields and for testing the evolution of their main mechanical and physical properties with absorbed dose has been developed. Four elastomeric compounds used for vacuum O-rings, one fluoroelastomer polymer (FPM) based and three ethylene propylene diene monomer rubber (EPDM) based, presently available on the market have been selected for the test. One EPDM is rated as radiation resistant in gamma fields, while the other elastomers are general purpose products. Particular care has been devoted to dosimetry calculations, since absorbed dose in neutron fields, unlike pure gamma fields, is strongly dependent on the material composition and, in particular, on the hydrogen content. The products have been tested up to about 2 MGy absorbed dose. The FPM based elastomer, in spite of its lower dose absorption in fast neutron fields, features the largest variations of properties, with a dramatic increase in stiffness and brittleness. Out of the three EPDM based compounds, one shows large and rapid changes in the main mechanical properties, whereas the other two feature more stable behaviors. The performance of the EPDM rated as radiation resistant in pure gamma fields does not appear significantly better than that of the standard product. The predictive capability of the accelerated irradiation tests performed as well as the applicable concepts of threshold of radiation damage is discussed in view of the use of the examined products in the selective production of exotic species facility, now under construction at the Legnaro National Laboratories of the Italian Istituto Nazionale di Fisica Nucleare. It results that a careful account of dose rate effects and oxygen penetration in the material, both during test irradiations and in operating conditions, is needed to obtain reliable predictions.

  9. DMD-based quantitative phase microscopy and optical diffraction tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie

    2018-02-01

    Digital micromirror devices (DMDs), which offer high speed and a high degree of freedom in steering light illumination, have been increasingly applied to optical microscopy systems in recent years. Lately, we introduced DMDs into digital holography to enable new imaging modalities and break existing imaging limitations. In this paper, we will first present our progress in using DMDs for demonstrating laser-illumination Fourier ptychographic microscopy (FPM) with shot-noise-limited detection. After that, we will present a novel common-path quantitative phase microscopy (QPM) system based on using a DMD. Building on those early developments, a DMD-based high-speed optical diffraction tomography (ODT) system has been recently demonstrated, and the results will also be presented. This ODT system is able to achieve video-rate 3D refractive-index imaging, which can potentially enable observations of high-speed 3D sample structural changes.

  10. 2D modeling of direct laser metal deposition process using a finite particle method

    NASA Astrophysics Data System (ADS)

    Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.

    2018-05-01

    Direct laser metal deposition is one of the material additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single-layer cladding.

  11. Method of and apparatus for determining deposition-point temperature

    DOEpatents

    Mansure, A.J.; Spates, J.J.; Martin, S.J.

    1998-10-27

    Acoustic-wave sensor apparatus and method are disclosed for analyzing a normally liquid petroleum-based composition for monitoring deposition-point temperature. The apparatus includes at least one acoustic-wave device such as SAW, QCM, FPM, TSM or APM type devices in contact with the petroleum-based composition for sensing or detecting the surface temperature at which deposition occurs and/or rate of deposition as a function of temperature by sensing an accompanying change in frequency, phase shift, damping voltage or damping current of an electrical oscillator to a known calibrated condition. The acoustic wave device is actively cooled to monitor the deposition of constituents such as paraffins by determining the point at which solids from the liquid composition begin to form on the acoustic wave device. The acoustic wave device can be heated to melt or boil off the deposits to reset the monitor and the process can be repeated. 5 figs.

  12. Method of and apparatus for determining deposition-point temperature

    DOEpatents

    Mansure, Arthur J.; Spates, James J.; Martin, Stephen J.

    1998-01-01

    Acoustic-wave sensor apparatus and method for analyzing a normally liquid petroleum-based composition for monitoring deposition-point temperature. The apparatus includes at least one acoustic-wave device such as SAW, QCM, FPM, TSM or APM type devices in contact with the petroleum-based composition for sensing or detecting the surface temperature at which deposition occurs and/or rate of deposition as a function of temperature by sensing an accompanying change in frequency, phase shift, damping voltage or damping current of an electrical oscillator to a known calibrated condition. The acoustic wave device is actively cooled to monitor the deposition of constituents such as paraffins by determining the point at which solids from the liquid composition begin to form on the acoustic wave device. The acoustic wave device can be heated to melt or boil off the deposits to reset the monitor and the process can be repeated.

  13. Comparison of the impact of the Tobacco Heating System 2.2 and a cigarette on indoor air quality.

    PubMed

    Mitova, Maya I; Campelos, Pedro B; Goujon-Ginglinger, Catherine G; Maeder, Serge; Mottier, Nicolas; Rouget, Emmanuel G R; Tharin, Manuel; Tricker, Anthony R

    2016-10-01

    The impact of the Tobacco Heating System 2.2 (THS 2.2) on indoor air quality was evaluated in an environmentally controlled room using ventilation conditions recommended for simulating "Office", "Residential" and "Hospitality" environments and was compared with smoking a lit-end cigarette (Marlboro Gold) under identical experimental conditions. The concentrations of eighteen indoor air constituents (respirable suspended particles (RSP) < 2.5 μm in diameter, ultraviolet particulate matter (UVPM), fluorescent particulate matter (FPM), solanesol, 3-ethenylpyridine, nicotine, 1,3-butadiene, acrylonitrile, benzene, isoprene, toluene, acetaldehyde, acrolein, crotonaldehyde, formaldehyde, carbon monoxide, nitrogen oxide, and combined oxides of nitrogen) were measured. In simulations evaluating THS 2.2, the concentrations of most studied analytes did not exceed the background concentrations determined when non-smoking panelists were present in the environmentally controlled room under equivalent conditions. Only acetaldehyde and nicotine concentrations were increased above background concentrations in the "Office" (3.65 and 1.10 μg/m(3)), "Residential" (5.09 and 1.81 μg/m(3)) and "Hospitality" (1.40 and 0.66 μg/m(3)) simulations, respectively. Smoking Marlboro Gold resulted in greater increases in the concentrations of acetaldehyde (58.8, 83.8 and 33.1 μg/m(3)) and nicotine (34.7, 29.1 and 34.6 μg/m(3)) as well as all other measured indoor air constituents in the "Office", "Residential" and "Hospitality" simulations, respectively. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  14. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    NASA Astrophysics Data System (ADS)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tails. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block-scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of a similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
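
    As a small illustration of the temporal-moment step mentioned above, the sketch below computes low-order temporal moments of block-wise particle arrival times. The function name and the sample arrival times are hypothetical, and the subsequent matching of these moments to an MRMT model is not shown.

```python
import numpy as np

def temporal_moments(arrival_times, max_order=2):
    """Raw count, mean, and central temporal moments of particle arrival times.

    Hypothetical helper: the abstract matches such moments to an MRMT model,
    but that matching step is not reproduced here.
    """
    t = np.asarray(arrival_times, dtype=float)
    m0 = t.size                      # zeroth moment: number of particles
    mean = t.mean()                  # first moment: mean arrival time
    central = {k: np.mean((t - mean) ** k) for k in range(2, max_order + 1)}
    return m0, mean, central

# Example: arrival times (days) of tracer particles leaving one flow-aligned block
print(temporal_moments([12.3, 15.1, 14.7, 40.2, 18.9]))
```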

  15. Methane emissions and airflow patterns on a longwall face: Potential influences from longwall gob permeability distributions on a bleederless longwall panel.

    PubMed

    Schatzel, S J; Krog, R B; Dougherty, H

    2017-01-01

    Longwall face ventilation is an important component of the overall coal mine ventilation system. Increased production rates due to higher-capacity mining equipment tend to also increase methane emission rates from the coal face, which must be diluted by the face ventilation. Increases in panel length, with some mines exceeding 6,100 m (20,000 ft), and panel width provide additional challenges to face ventilation designs. To assess the effectiveness of current face ventilation practices at a study site, a face monitoring study with continuous monitoring of methane concentrations and automated recording of longwall shearer activity was combined with a tracer gas test on a longwall face. The study was conducted at a U.S. longwall mine operating in a thick, bituminous coal seam and using a U-type, bleederless ventilation system. Multiple gob gas ventholes were located near the longwall face. These boreholes had some unusual design concepts, including a system of manifolds to modify borehole vacuum and flow and completion depths close to the horizon of the mined coalbed that enabled direct communication with the mine atmosphere. The mine operator also had the capacity to inject nitrogen into the longwall gob, which occurred during the monitoring study. The results show that methane concentrations along the longwall face increased only slightly from headgate to tailgate, despite the occurrence of methane delays during monitoring. Average face air velocities were 3.03 m/s (596 fpm) at shield 57 and 2.20 m/s (433 fpm) at shield 165. The time required for the sulfur hexafluoride (SF6) peak to occur at each monitoring location has been interpreted as being representative of the movement of the tracer slug. The rate of movement of the slug was much slower in reaching the first monitoring location at shield 57 compared with the other face locations. This lower rate of movement, compared with the main face ventilation, is thought to be the product of a flow path within and behind the shields that is moving in the general direction of the headgate to the tailgate. Barometric pressure variations were pronounced over the course of the study and varied on a diurnal basis.

  16. Inhibition of human alcohol and aldehyde dehydrogenases by acetaminophen: Assessment of the effects on first-pass metabolism of ethanol.

    PubMed

    Lee, Yung-Pin; Liao, Jian-Tong; Cheng, Ya-Wen; Wu, Ting-Lun; Lee, Shou-Lun; Liu, Jong-Kang; Yin, Shih-Jiun

    2013-11-01

    Acetaminophen is one of the most widely used over-the-counter analgesic and antipyretic medications. Use of acetaminophen and alcohol is commonly associated. Previous studies showed that acetaminophen might affect bioavailability of ethanol by inhibiting gastric alcohol dehydrogenase (ADH). However, potential inhibitions by acetaminophen of first-pass metabolism (FPM) of ethanol, catalyzed by the human ADH family and by relevant aldehyde dehydrogenase (ALDH) isozymes, remain undefined. ADH and ALDH both exhibit racially distinct allozymes and tissue-specific distribution of isozymes, and are principal enzymes responsible for ethanol metabolism in humans. In this study, we investigated acetaminophen inhibition of ethanol oxidation with recombinant human ADH1A, ADH1B1, ADH1B2, ADH1B3, ADH1C1, ADH1C2, ADH2, and ADH4, and inhibition of acetaldehyde oxidation with recombinant human ALDH1A1 and ALDH2. The investigations were done at near physiological pH 7.5 and with a cytoplasmic coenzyme concentration of 0.5 mM NAD(+). Acetaminophen acted as a noncompetitive inhibitor for ADH enzymes, with the slope inhibition constants (Kis) ranging from 0.90 mM (ADH2) to 20 mM (ADH1A), and the intercept inhibition constants (Kii) ranging from 1.4 mM (ADH1C allozymes) to 19 mM (ADH1A). Acetaminophen exhibited noncompetitive inhibition for ALDH2 (Kis = 3.0 mM and Kii = 2.2 mM), but competitive inhibition for ALDH1A1 (Kis = 0.96 mM). The metabolic interactions between acetaminophen and ethanol/acetaldehyde were assessed by computer simulation using inhibition equations and the determined kinetic constants. At therapeutic to subtoxic plasma levels of acetaminophen (i.e., 0.2-0.5 mM) and physiologically relevant concentrations of ethanol (10 mM) and acetaldehyde (10 μM) in target tissues, acetaminophen could inhibit ADH1C allozymes (12-26%) and ADH2 (14-28%) in the liver and small intestine, ADH4 (15-31%) in the stomach, and ALDH1A1 (16-33%) and ALDH2 (8.3-19%) in all 3 tissues. The results suggest that inhibition by acetaminophen of hepatic and gastrointestinal FPM of ethanol through ADH and ALDH pathways might become significant at higher, subtoxic levels of acetaminophen. Copyright © 2013 Elsevier Inc. All rights reserved.
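
    The "computer simulation using inhibition equations" can be illustrated with the standard textbook rate laws for competitive and noncompetitive (mixed) inhibition; the snippet below is a minimal sketch under that assumption. The Km value in the example is hypothetical and chosen only to show the calculation, not taken from the paper.

```python
def fractional_inhibition(s, i, km, kis, kii=None):
    """Fractional inhibition 1 - v_I/v_0 from standard inhibition rate laws.

    v = Vmax*S / (Km*(1 + I/Kis) + S*(1 + I/Kii))   (noncompetitive/mixed)
    v = Vmax*S / (Km*(1 + I/Kis) + S)               (competitive, kii=None)
    Vmax cancels in the ratio, so it is not needed here.
    """
    denom_uninhibited = km + s
    if kii is None:
        denom_inhibited = km * (1 + i / kis) + s
    else:
        denom_inhibited = km * (1 + i / kis) + s * (1 + i / kii)
    return 1 - denom_uninhibited / denom_inhibited

# Example: ALDH1A1 (competitive, Kis = 0.96 mM) at 0.010 mM acetaldehyde and
# 0.5 mM acetaminophen, with a hypothetical Km of 0.02 mM for illustration only
print(round(fractional_inhibition(s=0.010, i=0.5, km=0.02, kis=0.96), 3))
```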

  17. ETS levels in hospitality environments satisfying ASHRAE standard 62-1989: "ventilation for acceptable indoor air quality"

    NASA Astrophysics Data System (ADS)

    Moschandreas, D. J.; Vuilleumier, K. L.

    Prior to this study, indoor air constituent levels and ventilation rates of hospitality environments had not been measured simultaneously. This investigation measured indoor Environmental Tobacco Smoke-related (ETS-related) constituent levels in two restaurants, a billiard hall and a casino. The objective of this study was to characterize ETS-related constituent levels inside hospitality environments when the ventilation rates satisfy the requirements of the ASHRAE 62-1989 Ventilation Standard. The ventilation rate of each selected hospitality environment was measured and adjusted. The study advanced only if the requirements of the ASHRAE 62-1989 Ventilation Standard - the pertinent standard of the American Society of Heating, Refrigerating and Air-Conditioning Engineers - were satisfied. The supply rates of outdoor air and occupant density were measured intermittently to assure that the ventilation rate of each facility satisfied the standard under occupied conditions. Six ETS-related constituents were measured: respirable suspended particulate (RSP) matter, fluorescent particulate matter (FPM, an estimate of the ETS particle concentrations), ultraviolet particulate matter (UVPM, a second estimate of the ETS particle concentrations), solanesol, nicotine and 3-ethenylpyridine (3-EP). ETS-related constituent levels in smoking sections, non-smoking sections and outdoors were sampled daily for eight consecutive days at each hospitality environment. This study found that the difference between the concentrations of ETS-related constituents in indoor smoking and non-smoking sections was statistically significant. Differences between indoor non-smoking sections and outdoor ETS-related constituent levels were identified but were not statistically significant. Similarly, differences between weekday and weekend evenings were identified but were not statistically significant. The difference between indoor smoking sections and outdoors was statistically significant. Most importantly, ETS-related constituent concentrations measured indoors did not exceed existing occupational standards. It was concluded that if the measured ventilation rates of the sampled facilities satisfied the ASHRAE 62-1989 Ventilation Standard requirements, the corresponding ETS-related constituents were measured at concentrations below known harmful levels as specified by the American Conference of Governmental Industrial Hygienists (ACGIH).

  18. Novel psychoactive substances: overdose of 3-fluorophenmetrazine (3-FPM) and etizolam in a 33-year-old man.

    PubMed

    Benesch, Matthew G K; Iqbal, Sahar J

    2018-06-08

    Though illegal in the UK, in many countries novel psychoactive substances are quasi-legal synthetic compounds that are widely available online under the guise of research chemicals. These substances are relatively cheap and are often undetectable in standard drug screens. Nearly 200 such compounds are introduced yearly, and little is usually known about their metabolism or physiological effects. Consequently, managing patients in overdose situations on largely unknown substances usually involves supportive care; however, anticipating and managing atypical side effects is challenging in the absence of knowledge of these compounds. In this report, we discuss our encounter with a 33-year-old unconscious man presenting after coingestion of the novel stimulant 3-fluorophenmetrazine and the rarely used benzodiazepine etizolam. This patient developed seizure-like activity and delayed widespread T-wave inversions, both of which ultimately resolved without sequelae. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  19. A combined approach of AHP and TOPSIS methods applied in the field of integrated software systems

    NASA Astrophysics Data System (ADS)

    Berdie, A. D.; Osaci, M.; Muscalagiu, I.; Barz, C.

    2017-05-01

    Adopting the most appropriate technology for developing applications on an integrated software system for enterprises may result in great savings in both cost and hours of work. This paper proposes a research study for the determination of a hierarchy between three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro (WD), Floorplan Manager (FPM) and CRM WebClient UI (CRM WCUI) are evaluated against multiple criteria, based on the performance obtained by implementing the same web business application with each of them. To establish the hierarchy, a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software. This software is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the hierarchy of the technologies.
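
    To make the TOPSIS step concrete, here is a minimal sketch of the closeness-coefficient calculation with AHP-style weights supplied externally. The scores, weights and criterion directions are invented for illustration and are not taken from the paper or from the SuperDecision software.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Minimal TOPSIS ranking (illustrative sketch, names are hypothetical).

    decision_matrix: alternatives x criteria scores
    weights: criteria weights (e.g. obtained from AHP), summing to 1
    benefit: True for criteria to maximise, False for criteria to minimise
    Returns closeness coefficients; higher means closer to the ideal solution.
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    V = w * X / np.linalg.norm(X, axis=0)           # vector-normalise, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)

# Three hypothetical technologies scored on two benefit criteria and one cost criterion
scores = [[8, 7, 120], [6, 9, 90], [7, 6, 150]]
closeness = topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
print(closeness.argsort()[::-1])  # ranking of alternatives, best first
```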

  20. A Summer Math and Physics Program for High School Students: Student Performance and Lessons Learned in the Second Year

    NASA Astrophysics Data System (ADS)

    Timme, Nicholas; Baird, Michael; Bennett, Jake; Fry, Jason; Garrison, Lance; Maltese, Adam

    2013-05-01

    For the past two years, the Foundations in Physics and Mathematics (FPM) summer program has been held at Indiana University in order to fulfill two goals: provide additional physics and mathematics instruction at the high school level, and provide physics graduate students with experience and autonomy in designing curricula and teaching courses. In this paper we will detail changes made to the program for its second year and the motivation for these changes, as well as implications for future iterations of the program. We gauge the impact of the changes on student performance using pre-/post-test scores, student evaluations, and anecdotal evidence. These data show that the program has a positive impact on student knowledge and this impact was greater in magnitude in the second year of the program. We attribute this improvement primarily to the inclusion of more inquiry-driven activities. All activities, worksheets, and lesson plans used in the program are available online.

  1. Theoretical models of parental HIV disclosure: a critical review.

    PubMed

    Qiao, Shan; Li, Xiaoming; Stanton, Bonita

    2013-01-01

    This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support to these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.

  2. [Sexual and reproductive health in university students at an institution of higher learning in Colombia].

    PubMed

    Gómez-Camargo, Doris E; Ochoa-Diaz, Margarita M; Canchila-Barrios, Carlos A; Ramos-Clason, Enrique C; Salguedo-Madrid, Germán I; Malambo-García, Dacia I

    2014-01-01

    To investigate the state of sexual and reproductive health in students at a public university in the Colombian Caribbean, with an emphasis on sexually transmitted diseases (STDs), fertility, sexuality, pregnancy and violence. Cross-sectional survey study. University students enrolled in the second semester of 2010 who completed a self-administered survey based on the Reproductive Health survey of the Pan American Health Organization were selected. Qualitative data were tabulated and graphed; measures of central tendency were used for quantitative variables. The population studied was around 20 years old, mostly from urban areas (57.9%; 95% CI = 54.7-61.1), predominantly heterosexual (89.7%), with initiation of sexual activity before 18 years of age, 11.8% reporting multiple partners, and the condom as the main Family Planning Method (FPM) (55%). Although they had prior information on sexual health, STDs and FPMs, their behavior did not reflect it, owing to limited knowledge of HIV transmission routes, low uptake of serological testing for STDs, and high-risk behavior (sex/alcohol/drugs). It was observed that 12.3% had a history of pregnancy, 21.6% of physical violence and 4.6% of sexual violence, with most victims of sexual abuse remaining silent (61.8%). The sample reflects the student population in this region of Colombia. We plan to organize a health program with medical and psychological support to reduce the rates of STDs and unplanned pregnancies, preparing the adolescent for this important step in their life and serving as a model for other Latin American universities.

  3. Application of magnetic resonance imaging to the investigation of the diffusivity of 1,1,1,2-tetrafluorethane in two polymers.

    PubMed

    Mayele, M; Oellrich, L R

    2004-03-01

    In order to evaluate the suitability of a polymer as a sealing material for certain working fluids used in process plants, information about the fluid diffusivity into the polymer or the polymer permeability to the fluid is a prerequisite. The fluid of interest in the present work is 1,1,1,2-tetrafluorethane, CH(2)FCF(3), a partly fluorinated hydrocarbon (HFC) commonly known as refrigerant R134a. HFCs are increasingly used in refrigeration, air conditioning, and heat pump applications as substitutes for the chlorofluorocarbons (CFCs) or hydrochlorofluorocarbons (HCFCs) that are believed to be responsible for ozone depletion in the stratosphere. The polymers studied were FPM, a perfluoroelastomer, and EPDM, an ethylene-propylene-diene rubber. The study was carried out using magnetic resonance imaging (MRI). The contact time dependence of diffusion of the fluid into the polymer, as well as the spatial distributions of spin-lattice, T(1), and spin-spin, T(2), relaxation times, were used as indicators of the influence of the EPDM matrix on the mobility of R134a molecules.

  4. Quiet Clean Short-haul Experimental Engine (QCSEE) main reduction gears test program

    NASA Technical Reports Server (NTRS)

    Misel, O. W.

    1977-01-01

    Sets of under-the-wing (UTW) engine reduction gears and sets of over-the-wing (OTW) engine reduction gears were fabricated for rig testing and subsequent installation in engines. The UTW engine reduction gears, which have a ratio of 2.465:1 and a design rating of 9712 kW at 3157 rpm fan speed, were operated at up to 105% speed at 60% torque and 100% speed at 125% torque. The OTW engine reduction gears, which have a ratio of 2.062:1 and a design rating of 12,615 kW at 3861 rpm fan speed, were operated at up to 95% speed at 50% torque and 80% speed at 109% torque. Satisfactory operation was demonstrated at powers up to 12,172 kW, mechanical efficiency up to 99.1% (UTW), and a maximum gear pitch line velocity of 112 m/s (22,300 fpm) with a corresponding star gear spherical roller bearing DN of 850,000 (OTW). Oil and star gear bearing temperatures, oil churning, heat rejection, and vibratory characteristics were acceptable for engine installation.

  5. Estimating the concordance probability in a survival analysis with a discrete number of risk groups.

    PubMed

    Heller, Glenn; Mo, Qianxing

    2016-04-01

    A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
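
    As a concrete reference point, the sketch below computes a plain Harrell-type c-index with discrete risk scores, counting risk ties as one half; it does not implement the inverse probability censoring weighted estimator proposed in the paper, pairs with tied observed times are ignored, and the toy data are invented.

```python
def c_index(time, event, risk):
    """Plain c-index for survival data with discrete risk scores.

    A pair is usable when the subject with the shorter observed time had the
    event; it is concordant when that subject also has the higher risk score.
    Ties in risk score (common with a discrete classifier) count as 1/2.
    """
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if time[i] < time[j] and event[i] == 1:   # i's event observed first
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# Toy data: times, event indicators (1 = event, 0 = censored), discrete risk groups
print(c_index([5, 8, 3, 12], [1, 0, 1, 1], [2, 1, 3, 1]))
```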

  6. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  7. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    PubMed

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
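
    The threshold-selection criterion can be illustrated independently of Bloom filters and the EM algorithm: given match scores for candidate record pairs and (in a simulation) a set of known true matches, sweep thresholds and keep the one with the highest F-measure. The sketch below is a simplified illustration with invented pair identifiers and scores; in the paper the threshold is estimated from EM output without ground truth.

```python
def f_measure(tp, fp, fn):
    """F-measure: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def best_threshold(scored_pairs, truth, thresholds):
    """Return the score threshold maximising F-measure (simulation setting)."""
    best = (None, -1.0)
    for t in thresholds:
        predicted = {p for p, s in scored_pairs.items() if s >= t}
        tp = len(predicted & truth)
        fp = len(predicted - truth)
        fn = len(truth - predicted)
        f = f_measure(tp, fp, fn)
        if f > best[1]:
            best = (t, f)
    return best

pairs = {"a-b": 0.92, "a-c": 0.40, "b-c": 0.75, "c-d": 0.10}
print(best_threshold(pairs, truth={"a-b", "b-c"}, thresholds=[0.2, 0.5, 0.8]))
```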

  8. Quantum probability rule: a generalization of the theorems of Gleason and Busch

    NASA Astrophysics Data System (ADS)

    Barnett, Stephen M.; Cresser, James D.; Jeffers, John; Pegg, David T.

    2014-04-01

    Busch's theorem deriving the standard quantum probability rule can be regarded as a more general form of Gleason's theorem. Here we show that a further generalization is possible by reducing the number of quantum postulates used by Busch. We do not assume that the positive measurement outcome operators are effects or that they form a probability operator measure. We derive a more general probability rule from which the standard rule can be obtained from the normal laws of probability when there is no measurement outcome information available, without the need for further quantum postulates. Our general probability rule has prediction-retrodiction symmetry and we show how it may be applied in quantum communications and in retrodictive quantum theory.
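
    For reference, the standard quantum probability rule that Busch's theorem derives can be written, for a state ρ and a probability operator measure {π_m}, as below; the generalized rule derived in the paper is not reproduced here.

```latex
P(m) \;=\; \operatorname{Tr}\!\left(\rho\,\pi_m\right),
\qquad \pi_m \ge 0, \qquad \sum_m \pi_m = \mathbb{1}.
```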

  9. Probability in the Many-Worlds Interpretation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Vaidman, Lev

    It is argued that, although in the Many-Worlds Interpretation of quantum mechanics there is no "probability" for an outcome of a quantum experiment in the usual sense, we can understand why we have an illusion of probability. The explanation involves: (a) A "sleeping pill" gedanken experiment which establishes a correspondence between an illegitimate question: "What is the probability of an outcome of a quantum measurement?" and a legitimate question: "What is the probability that `I' am in the world corresponding to that outcome?"; (b) A gedanken experiment which splits the world into several worlds which are identical according to some symmetry condition; and (c) Relativistic causality, which together with (b) explain the Born rule of standard quantum mechanics. The Quantum Sleeping Beauty controversy and "caring measure" replacing probability measure are discussed.

  10. Conditional Probabilities and Collapse in Quantum Measurements

    NASA Astrophysics Data System (ADS)

    Laura, Roberto; Vanni, Leonardo

    2008-09-01

    We show that, by including both the system and the apparatus in the quantum description of the measurement process and using the concept of conditional probabilities, it is possible to deduce the statistical operator of the system after a measurement with a given result, which gives the probability distribution for all possible consecutive measurements on the system. This statistical operator, representing the state of the system after the first measurement, is in general not the same as would be obtained using the postulate of collapse.

  11. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.

  12. Fault Tolerant Frequent Pattern Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

    FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing,more » though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.« less
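
    For readers unfamiliar with frequent pattern mining, the brute-force sketch below shows what FP-Growth computes: itemsets whose support count meets a threshold. It is not the FP-tree algorithm itself, nor the fault-tolerant MPI version described in the abstract, and the example baskets are invented.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=3):
    """Brute-force frequent-itemset miner (illustration only).

    FP-Growth reaches the same answer far more efficiently via an FP-tree;
    this sketch only shows the definition of a frequent pattern: an itemset
    whose support count is at least min_support.
    """
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for subset in combinations(items, k):
                counts[subset] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

baskets = [("a", "b", "c"), ("a", "c"), ("a", "d"), ("b", "c")]
print(frequent_itemsets(baskets, min_support=2))  # e.g. ('a',): 3, ('a', 'c'): 2
```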

  13. Liquid Dynamics from Spacelab to Sloshsat

    NASA Astrophysics Data System (ADS)

    Vreeburg, Jan P. B.

    2009-01-01

    The European participation in manned spaceflight had a strong impact on research in the natural sciences because weightlessness became available as experimental condition. Preparation for Spacelab required many decisions on organization, funding and allocation of resources. Lessons were learned from results obtained in precursors like Skylab or in unmanned programs such as TEXUS. ESA with scientists from the major disciplines instituted Working Groups that acted as consultant bodies. European experiment hardware has been realized by industry using specifications and not, traditionally, by evolution in a laboratory. The development of the Fluid Physics Module preceded many instruments for liquid research in space. The training of Payload Specialists for the operation of the FPM included theory of fluids and laboratory instruction. The dynamics of spacecraft with a partially filled tank can be studied in weightlessness only. Observation of the liquid behaviour inside the tank is a challenging problem but the momentum of the rigid part of the spacecraft can be tracked accurately. Analytical expressions for transient liquid flow in a moving tank should be identified, together with the tank motion. A validated model of liquid momentum transfer during spacecraft manoeuvres will make many missions more efficient and less costly. Sloshsat FLEVO was flown to provide reference data for this purpose.

  14. Characteristic Functional of a Probability Measure Absolutely Continuous with Respect to a Gaussian Radon Measure

    DTIC Science & Technology

    1984-08-01

    Hiroshi Sato; technical report, August 1984. Let μ and μ1 be probability measures on a locally convex Hausdorff real topological linear space E. C.R. Baker [1] posed the …

  15. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  16. THE SEMIGROUP OF METRIC MEASURE SPACES AND ITS INFINITELY DIVISIBLE PROBABILITY MEASURES

    PubMed Central

    EVANS, STEVEN N.; MOLCHANOV, ILYA

    2015-01-01

    A metric measure space is a complete, separable metric space equipped with a probability measure that has full support. Two such spaces are equivalent if they are isometric as metric spaces via an isometry that maps the probability measure on the first space to the probability measure on the second. The resulting set of equivalence classes can be metrized with the Gromov–Prohorov metric of Greven, Pfaffelhuber and Winter. We consider the natural binary operation ⊞ on this space that takes two metric measure spaces and forms their Cartesian product equipped with the sum of the two metrics and the product of the two probability measures. We show that the metric measure spaces equipped with this operation form a cancellative, commutative, Polish semigroup with a translation invariant metric. There is an explicit family of continuous semicharacters that is extremely useful for, inter alia, establishing that there are no infinitely divisible elements and that each element has a unique factorization into prime elements. We investigate the interaction between the semigroup structure and the natural action of the positive real numbers on this space that arises from scaling the metric. For example, we show that for any given positive real numbers a, b, c, the trivial space is the only space X that satisfies aX ⊞ bX = cX. We establish that there is no analogue of the law of large numbers: if X1, X2, … is an identically distributed independent sequence of random spaces, then no subsequence of (1/n) ⊞_{k=1}^{n} X_k converges in distribution unless each Xk is almost surely equal to the trivial space. We characterize the infinitely divisible probability measures and the Lévy processes on this semigroup, characterize the stable probability measures and establish a counterpart of the LePage representation for the latter class. PMID:28065980

  17. The effects of smoking status and ventilation on environmental tobacco smoke concentrations in public areas of UK pubs and bars

    NASA Astrophysics Data System (ADS)

    Carrington, Joanna; Watson, Adrian F. R.; Gee, Ivan L.

    UK public houses generally allow smoking to occur and consequently customer ETS exposure can take place. To address this, in 1999 the UK Government and the hospitality industry initiated the Public Places Charter (PPC) to increase non-smoking facilities and provide better ventilation in public houses. A study involving 60 UK pubs, located in Greater Manchester, was conducted to investigate the effects of smoking area status and ventilation on ETS concentrations. ETS markers RSP, UVPM, FPM, SolPM and nicotine were sampled and analysed using established methodologies. ETS marker concentrations were significantly higher (P < 0.05) in the smoking areas compared to the non-smoking areas of pubs that contained both smoking and non-smoking sections. Median concentrations of RSP and nicotine were reduced by 18% and 68%, respectively, in non-smoking areas. UVPM, FPM and SolPM median concentrations were reduced by 27%, 34% and 39%, demonstrating the increased tobacco-specificity of the particulate markers and the impact of non-smoking areas. Levels of particulate phase ETS markers were also found to be higher in the smoking sections of pubs that allowed smoking throughout compared to the smoking sections of pubs with other areas where smoking was prohibited. The presence of a non-smoking section has the effect of reducing concentrations even in the smoking areas. This may be caused by migration of smoke into the non-smoking section thereby diluting the smoking area or by smokers tending to avoid pubs with non-smoking areas thus reducing source strengths in the smoking areas of these pubs. Nicotine concentrations were not found to be significantly different in smoking areas of the two types of establishment indicating that nicotine is not as mobile in these environments and tends to remain in the smoking areas. This result, together with the much higher reductions in nicotine concentrations between smoking and non-smoking areas compared to other markers, suggests that nicotine is not the most suitable marker to use in these environments as an indicator of the effectiveness of tobacco control policies. The use of ventilation systems (sophisticated HVAC systems and extractor fans in either the on or off mode) did not have a significant effect (P > 0.05) on ETS marker concentrations in either the smoking or non-smoking areas. The PPC aims to reduce non-smoking customers' exposure through segregation and ventilation and provide customer choice through appropriate signs. This study indicates that although ETS levels are lower in non-smoking sections and signs will assist customers in reducing their exposure, some exposure will still occur because ETS was detected in non-smoking areas. Existing ventilation provision was not effective in reducing exposure and signs advertising ventilated premises may be misleading to customers. Improvements in the design and management of ventilation systems in pubs and bars are required to reduce customer exposure to ETS, if the aims of the PPC are to be met.

  18. Suppressive composts from organic wastes as agents of biological control of fusariosis in Tatarstan Republic (Russia)

    NASA Astrophysics Data System (ADS)

    Gumerova, Raushaniya; Galitskaya, Polina; Beru, Franchesca; Selivanovskaya, Svetlana

    2015-04-01

    Plant diseases are among the factors that seriously limit agricultural efficiency around the world. Diseases caused by fungi are the major threat to plants. Crop protection in modern agriculture heavily depends on chemical fungicides. The disadvantages of chemical pesticides soon became apparent in the form of environmental damage and hazards to human health. In this regard, the use of biopesticides becomes an attractive alternative method of plant protection. For biological control of fungal plant diseases, separate bacterial or fungal strains as well as their communities can be used. Biopreparations must consist of microbes that are typical of local climate and soil conditions and therefore able to survive in the environment for a long time. Another option for biological control of plant pests is the use of suppressive composts made of agricultural or other organic wastes. These composts can not only prevent the development of plant diseases, but also improve soil fertility. The objective of this work was to estimate the potential of composts, and of strains isolated from these composts, as agents for biological control of fusariosis, one of the most widespread soil-borne plant diseases. The composts were made from agricultural wastes commonly produced in the Tatarstan Republic (Russia). Fusarium oxysporum f. sp. radicis-lycopersici was used as a model phytopathogen. Ten types of organic waste (Goat manure (GM), Chicken dung (CD), Chicken dung with straw addition (CS), Rabbit dung (RD), Cow manure (CM), Rerotting pork manure (RPM), Fresh pork manure (FPM), Pork manure with sawdust and straw (PMS), the remains of plants and leaves (PL), the vegetable waste (VW)) were sampled on large farms situated in the Tatarstan Republic, which is one of the main agricultural regions of Russia. The initial wastes were composted for 150 days. Further, the following characteristics of the composts were assessed: pH, electrical conductivity, TOC, DOC, Ntot. On Petri dishes with meat-peptone agar, the composts and their water extracts were tested for their ability to inhibit growth of F. oxysporum. It was shown that three composts - CD, FPM and RD - possessed suppressiveness towards the model phytopathogen. From these three wastes, 28 bacterial and fungal strains were isolated and, in turn, tested for their ability to inhibit F. oxysporum. It was demonstrated that five of the isolated strains were highly suppressive to the model test object (the growth area of F. oxysporum did not exceed 30%), six of the strains were moderately suppressive (the growth area of F. oxysporum ranged from 35% to 60%), and the other strains had no negative effect on the model phytopathogen. Further, we will check the composts and the isolated strains using the model system "soil - tomato plant - phytopathogen". As a result, effective composts and strains will be recommended as agents for biological control of fungal diseases in the region. In addition, the structure of the bacterial and fungal communities of the composts with suppressive properties will be assessed using 454 pyrosequencing.

  19. Economic Choices Reveal Probability Distortion in Macaque Monkeys

    PubMed Central

    Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-01-01

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. PMID:25698750
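
    The abstract does not state which parametric family was fitted; a commonly used one-parameter weighting function with the inverted-S shape it describes is the Tversky–Kahneman form, shown here only as an example of such a function, where γ < 1 overweights low probabilities and underweights high ones:

```latex
w(p) \;=\; \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}},
\qquad 0 < \gamma < 1.
```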

  20. Economic choices reveal probability distortion in macaque monkeys.

    PubMed

    Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-02-18

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.

  1. Fundamental Studies of Molecular Secondary Ion Mass Spectrometry Ionization Probability Measured With Femtosecond, Infrared Laser Post-Ionization

    NASA Astrophysics Data System (ADS)

    Popczun, Nicholas James

    The work presented in this dissertation is focused on increasing the fundamental understanding of molecular secondary ion mass spectrometry (SIMS) ionization probability by measuring neutral molecule behavior with femtosecond, mid-infrared laser post-ionization (LPI). To accomplish this, a model system was designed with a homogeneous organic film comprised of coronene, a polycyclic hydrocarbon which provides substantial LPI signal. Careful consideration was given to signal lost to photofragmentation and undersampling of the sputtered plume that is contained within the extraction volume of the mass spectrometer. This study provided the first ionization probability for an organic compound measured directly by the relative secondary ions and sputtered neutral molecules using a strong-field ionization (SFI) method. The measured value of ~10^-3 is near the upper limit of previous estimations of ionization probability for organic molecules. The measurement method was refined, and then applied to a homogeneous guanine film, which produces protonated secondary ions. This measurement found the probability of protonation to be on the order of 10^-3, although with less uncertainty than that of the coronene. Finally, molecular depth profiles were obtained for SIMS and LPI signals as a function of primary ion fluence to determine the effect of ionization probability on the depth resolution of chemical interfaces. The interfaces chosen were organic/inorganic interfaces to limit chemical mixing. It is shown that approaching the inorganic chemical interface can enhance or suppress the ionization probability for the organic molecule, which can lead to artificially sharpened or broadened depths, respectively. Overall, the research described in this dissertation provides new methods for measuring ionization efficiency in SIMS in both absolute and relative terms, and will both inform innovation in the technique and increase understanding of depth-dependent experiments.

  2. Weak measurements measure probability amplitudes (and very little else)

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2016-04-01

    Conventional quantum mechanics describes a pre- and post-selected system in terms of virtual (Feynman) paths via which the final state can be reached. In the absence of probabilities, a weak measurement (WM) determines the probability amplitudes for the paths involved. The weak values (WV) can be identified with these amplitudes, or their linear combinations. This allows us to explain the "unusual" properties of the WV, and avoid the "paradoxes" often associated with the WM.
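
    For reference, the weak value of an observable A for a system pre-selected in |ψ⟩ and post-selected in |φ⟩ is conventionally written as below; the notation is the standard one from the weak-measurement literature and is not taken from the abstract itself.

```latex
A_w \;=\; \frac{\langle \varphi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \varphi \,|\, \psi \rangle}
```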

  3. What is preexisting strength? Predicting free association probabilities, similarity ratings, and cued recall probabilities.

    PubMed

    Nelson, Douglas L; Dyrdal, Gunvor M; Goodmon, Leilani B

    2005-08-01

    Measuring lexical knowledge poses a challenge to the study of the influence of preexisting knowledge on the retrieval of new memories. Many tasks focus on word pairs, but words are embedded in associative networks, so how should preexisting pair strength be measured? It has been measured by free association, similarity ratings, and co-occurrence statistics. Researchers interpret free association response probabilities as unbiased estimates of forward cue-to-target strength. In Study 1, analyses of large free association and extralist cued recall databases indicate that this interpretation is incorrect. Competitor and backward strengths bias free association probabilities, and as with other recall tasks, preexisting strength is described by a ratio rule. In Study 2, associative similarity ratings are predicted by forward and backward, but not by competitor, strength. Preexisting strength is not a unitary construct, because its measurement varies with method. Furthermore, free association probabilities predict extralist cued recall better than do ratings and co-occurrence statistics. The measure that most closely matches the criterion task may provide the best estimate of the identity of preexisting strength.

  4. Measuring attention using the Posner cuing paradigm: the role of across and within trial target probabilities

    PubMed Central

    Hayward, Dana A.; Ristic, Jelena

    2013-01-01

    Numerous studies conducted within the recent decades have utilized the Posner cuing paradigm for eliciting, measuring, and theoretically characterizing attentional orienting. However, the data from recent studies suggest that the Posner cuing task might not provide an unambiguous measure of attention, as reflexive spatial orienting has been found to interact with extraneous processes engaged by the task's typical structure, i.e., the probability of target presence across trials, which affects tonic alertness, and the probability of target presence within trials, which affects voluntary temporal preparation. To understand the contribution of each of these two processes to the measurement of attentional orienting we assessed their individual and combined effects on reflexive attention elicited by a spatially nonpredictive peripheral cue. Our results revealed that the magnitude of spatial orienting was modulated by joint changes in the global probability of target presence across trials and the local probability of target presence within trials, while the time course of spatial orienting was susceptible to changes in the probability of target presence across trials. These data thus raise important questions about the choice of task parameters within the Posner cuing paradigm and their role in both the measurement and theoretical attributions of the observed attentional effects. PMID:23730280

  5. CPROB: A COMPUTATIONAL TOOL FOR CONDUCTING CONDITIONAL PROBABILITY ANALYSIS

    EPA Science Inventory

    Conditional probability analysis measures the probability of observing one event given that another event has occurred. In an environmental context, conditional probability analysis helps assess the association between an environmental contaminant (i.e. the stressor) and the ec...

  6. Evaluation of the 1077 keV γ-ray emission probability from 68Ga decay

    NASA Astrophysics Data System (ADS)

    Huang, Xiao-Long; Jiang, Li-Yang; Chen, Xiong-Jun; Chen, Guo-Chang

    2014-04-01

    68Ga decays to the excited states of 68Zn through the electron capture decay mode. New recommended values for the emission probability of the 1077 keV γ-ray given by the ENSDF and DDEP databases all use data from absolute measurements. In 2011, JIANG Li-Yang deduced a new value for the 1077 keV γ-ray emission probability by measuring the 69Ga(n,2n)68Ga reaction cross section. The new value is about 20% lower than values obtained from previous absolute measurements and evaluations. In this paper, the discrepancies among the measurements and evaluations are analyzed carefully and the new values are re-recommended. Our recommended value for the emission probability of the 1077 keV γ-ray is (2.72±0.16)%.

  7. Quantum-Bayesian coherence

    NASA Astrophysics Data System (ADS)

    Fuchs, Christopher A.; Schack, Rüdiger

    2013-10-01

    In the quantum-Bayesian interpretation of quantum theory (or QBism), the Born rule cannot be interpreted as a rule for setting measurement-outcome probabilities from an objective quantum state. But if not, what is the role of the rule? In this paper, the argument is given that it should be seen as an empirical addition to Bayesian reasoning itself. Particularly, it is shown how to view the Born rule as a normative rule in addition to usual Dutch-book coherence. It is a rule that takes into account how one should assign probabilities to the consequences of various intended measurements on a physical system, but explicitly in terms of prior probabilities for and conditional probabilities consequent upon the imagined outcomes of a special counterfactual reference measurement. This interpretation is exemplified by representing quantum states in terms of probabilities for the outcomes of a fixed, fiducial symmetric informationally complete measurement. The extent to which the general form of the new normative rule implies the full state-space structure of quantum mechanics is explored.

  8. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  9. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
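
    As a quick illustration of the formulas quoted above, the following minimal Python sketch (not the authors' code; the function names are ours) maps a fitted group-effect coefficient β from a cumulative link model to the corresponding ordinal superiority measure for the probit, log-log, and logit links.

        # Minimal sketch: ordinal superiority measure from the group-effect
        # coefficient beta of a cumulative link model, using the link-specific
        # expressions quoted in the abstract (exact for probit and log-log,
        # approximate for logit).
        from math import exp, sqrt, erf

        def std_normal_cdf(x):
            return 0.5 * (1.0 + erf(x / sqrt(2.0)))   # Phi(x)

        def ordinal_superiority(beta, link="probit"):
            """P(an observation from group 1 exceeds an independent one from group 0)."""
            if link == "probit":
                return std_normal_cdf(beta / 2.0)                 # Phi(beta/2)
            if link == "loglog":
                return exp(beta) / (1.0 + exp(beta))
            if link == "logit":
                return exp(beta / 2.0) / (1.0 + exp(beta / 2.0))  # approximation
            raise ValueError("unknown link")

        # Example: beta = 0.9 from a probit cumulative link model gives ~0.67
        print(round(ordinal_superiority(0.9, "probit"), 3))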

  10. High probability neurotransmitter release sites represent an energy efficient design

    PubMed Central

    Lu, Zhongmin; Chouhan, Amit K.; Borycz, Jolanta A.; Lu, Zhiyuan; Rossano, Adam J; Brain, Keith L.; Zhou, You; Meinertzhagen, Ian A.; Macleod, Gregory T.

    2016-01-01

    Nerve terminals contain multiple sites specialized for the release of neurotransmitters. Release usually occurs with low probability, a design thought to confer many advantages. High probability release sites are not uncommon but their advantages are not well understood. Here we test the hypothesis that high probability release sites represent an energy efficient design. We examined release site probabilities and energy efficiency at the terminals of two glutamatergic motor neurons synapsing on the same muscle fiber in Drosophila larvae. Through electrophysiological and ultrastructural measurements we calculated release site probabilities to differ considerably between terminals (0.33 vs. 0.11). We estimated the energy required to release and recycle glutamate from the same measurements. The energy required to remove calcium and sodium ions subsequent to nerve excitation was estimated through microfluorimetric and morphological measurements. We calculated energy efficiency as the number of glutamate molecules released per ATP molecule hydrolyzed, and high probability release site terminals were found to be more efficient (0.13 vs. 0.06). Our analytical model indicates that energy efficiency is optimal (~0.15) at high release site probabilities (~0.76). As limitations in energy supply constrain neural function, high probability release sites might ameliorate such constraints by demanding less energy. Energy efficiency can be viewed as one aspect of nerve terminal function, in balance with others, because high efficiency terminals depress significantly during episodic bursts of activity. PMID:27593375

  11. Probability of misclassifying biological elements in surface waters.

    PubMed

    Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna

    2017-11-24

    Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
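
    The core Monte-Carlo idea can be sketched in a few lines. The snippet below is illustrative only and is not the authors' four M-C models: the class boundaries, the Gaussian error model, and the 10% relative error are hypothetical placeholders.

        # Illustrative sketch: perturb a measured biological index with random
        # error, reclassify each perturbed value, and report the fraction that
        # falls outside the "true" class assigned to the unperturbed value.
        import random

        BOUNDS = [0.8, 0.6, 0.4, 0.2]               # hypothetical class edges (classes 1-5)

        def classify(x):
            return sum(x < b for b in BOUNDS) + 1   # 1 = best status, 5 = worst

        def misclassification_probability(measured_value, rel_error, n_sim=100_000, seed=1):
            random.seed(seed)
            true_class = classify(measured_value)
            errors = sum(
                classify(random.gauss(measured_value, rel_error * measured_value)) != true_class
                for _ in range(n_sim)
            )
            return errors / n_sim

        # Example: an index measured at 0.63 with 10 % relative measurement error
        print(misclassification_probability(0.63, 0.10))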

  12. Nitrous oxide as a tracer gas in the ASHRAE 110-1995 Standard.

    PubMed

    Burke, Martin; Wong, Larry; Gonzales, Ben A; Knutson, Gerhard

    2014-01-01

    ANSI/ASHRAE Standard 110 provides a quantitative method for testing the performance of laboratory fume hoods. Through release of a known quantity (4.0 Lpm) of a tracer gas, and subsequent monitoring of the tracer gas concentration in the "breathing zone" of a mannequin positioned in front of the hood, this method allows for evaluation of laboratory hood performance. Standard 110 specifies sulfur hexafluoride (SF6) as the tracer gas; however, suitable alternatives are allowed. Through three series of performance tests, this analysis serves to investigate the use of nitrous oxide (N2O) as an alternate tracer gas for hood performance testing. Single gas tests were performed according to ASHRAE Standard 110-1995 with each tracer gas individually. These tests showed identical results using an acceptance criterion of AU 0.1 with the sash half open, nominal 18 inches (0.46 m) high, and the face velocity at a nominal 60 fpm (0.3 m/s). Most data collected in these single gas tests, for both tracer gases, were below the minimum detection limit, thus two dual gas tests were developed for simultaneous sampling of both tracer gases. Dual gas dual ejector tests were performed with both tracer gases released simultaneously through two ejectors, and the concentration measured with two detectors using a common sampling probe. Dual gas single ejector tests were performed with both tracer gases released through a single ejector, and the concentration measured in the same manner as the dual gas dual ejector tests. The dual gas dual ejector tests showed excellent correlation, with R typically greater than 0.9. Variance was observed in the resulting regression line for each hood, likely due to non-symmetry between the two challenges caused by variables beyond the control of the investigators. Dual gas single ejector tests resulted in exceptional correlation, with R>0.99 typically for the consolidated data, with a slope of 1.0. These data indicate equivalent results for ASHRAE 110 performance testing using either SF6 or N2O, indicating N2O as an applicable alternate tracer gas.

  13. Measurement of absolute gamma emission probabilities

    NASA Astrophysics Data System (ADS)

    Sumithrarachchi, Chandana S.; Rengan, Krish; Griffin, Henry C.

    2003-06-01

    The energies and emission probabilities (intensities) of gamma-rays emitted in radioactive decays of particular nuclides are the most important characteristics by which to quantify mixtures of radionuclides. Often, quantification is limited by uncertainties in measured intensities. A technique was developed to reduce these uncertainties. The method involves obtaining a pure sample of a nuclide using radiochemical techniques, and using appropriate fractions for beta and gamma measurements. The beta emission rates were measured using a liquid scintillation counter, and the gamma emission rates were measured with a high-purity germanium detector. Results were combined to obtain absolute gamma emission probabilities. All sources of uncertainties greater than 0.1% were examined. The method was tested with 38Cl and 88Rb.
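
    The arithmetic behind combining the two measurements can be illustrated as follows; this is only a schematic of the general idea, assuming a nuclide (such as those studied) for which essentially every decay is a beta decay, and all numbers below are hypothetical.

        # Schematic only: absolute gamma emission probability from an LSC-measured
        # beta (decay) rate and an efficiency-corrected HPGe gamma peak rate.
        def gamma_emission_probability(beta_count_rate, beta_efficiency,
                                       gamma_peak_rate, gamma_efficiency):
            decay_rate = beta_count_rate / beta_efficiency          # decays per second
            gamma_emission_rate = gamma_peak_rate / gamma_efficiency
            return gamma_emission_rate / decay_rate

        # Hypothetical numbers: 5000 s^-1 of betas at 95 % counting efficiency,
        # 42 s^-1 in the gamma peak at 1.2 % full-energy-peak efficiency
        print(gamma_emission_probability(5000, 0.95, 42, 0.012))    # ~0.665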

  14. Measures of School Integration: Comparing Coleman's Index to Measures of Species Diversity.

    ERIC Educational Resources Information Center

    Mercil, Steven Bray; Williams, John Delane

    This study used species diversity indices developed in ecology as a measure of socioethnic diversity, and compared them to Coleman's Index of Segregation. The twelve indices were Simpson's Concentration Index ("ell"), Simpson's Index of Diversity, Hurlbert's Probability of Interspecific Encounter (PIE), Simpson's Probability of…

  15. How weak values emerge in joint measurements on cloned quantum systems.

    PubMed

    Hofmann, Holger F

    2012-07-13

    A statistical analysis of optimal universal cloning shows that it is possible to identify an ideal (but nonpositive) copying process that faithfully maps all properties of the original Hilbert space onto two separate quantum systems, resulting in perfect correlations for all observables. The joint probabilities for noncommuting measurements on separate clones then correspond to the real parts of the complex joint probabilities observed in weak measurements on a single system, where the measurements on the two clones replace the corresponding sequence of weak measurement and postselection. The imaginary parts of weak measurement statistics can be obtained by replacing the cloning process with a partial swap operation. A controlled-swap operation combines both processes, making the complete weak measurement statistics accessible as a well-defined contribution to the joint probabilities of fully resolved projective measurements on the two output systems.

  16. Gap probability - Measurements and models of a pecan orchard

    NASA Technical Reports Server (NTRS)

    Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

    1992-01-01

    Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs of 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown level parameters include the shape of the crown envelope and spacing of crowns; leaf level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.

  17. Command and Control Systems Requirements Analysis. Volume 2. Measuring C2 Effectiveness with Decision Probability

    DTIC Science & Technology

    1990-09-01

    Table of contents fragment: 5.0 Expressing Requirements with Probability.

  18. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  19. Asymptotic Equivalence of Probability Measures and Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Touchette, Hugo

    2018-03-01

    Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.

  20. Open-air sprays for capturing and controlling airborne float coal dust on longwall faces

    PubMed Central

    Beck, T.W.; Seaman, C.E.; Shahan, M.R.; Mischler, S.E.

    2018-01-01

    Float dust deposits in coal mine return airways pose a risk in the event of a methane ignition. Controlling airborne dust prior to deposition in the return would make current rock dusting practices more effective and reduce the risk of coal-dust-fueled explosions. The goal of this U.S. National Institute for Occupational Safety and Health study is to determine the potential of open-air water sprays to reduce concentrations of airborne float coal dust, smaller than 75 µm in diameter, in longwall face airstreams. This study evaluated unconfined water sprays in a featureless tunnel ventilated at a typical longwall face velocity of 3.6 m/s (700 fpm). Experiments were conducted for two nozzle orientations and two water pressures for hollow cone, full cone, flat fan, air atomizing and hydraulic atomizing spray nozzles. Gravimetric samples show that airborne float dust removal efficiencies averaged 19.6 percent for all sprays under all conditions. The results indicate that the preferred spray nozzle should be operated at high fluid pressures to produce smaller droplets and move more air. These findings agree with past respirable dust control research, providing guidance on spray selection and spray array design in ongoing efforts to control airborne float dust over the entire longwall ventilated opening. PMID:29348700

  1. Fourier Ptychographic Microscopy for Rapid, High-Resolution Imaging of Circulating Tumor Cells Enriched by Microfiltration.

    PubMed

    Williams, Anthony; Chung, Jaebum; Yang, Changhuei; Cote, Richard J

    2017-01-01

    Examining the hematogenous compartment for evidence of metastasis has increased significantly within the oncology research community in recent years, due to the development of technologies aimed at the enrichment of circulating tumor cells (CTCs), the subpopulation of primary tumor cells that gain access to the circulatory system and are responsible for colonization at distant sites. In contrast to other technologies, filtration-based CTC enrichment, which exploits differences in size between larger tumor cells and surrounding smaller, non-tumor blood cells, has the potential to improve CTC characterization through isolation of tumor cell populations with greater molecular heterogeneity. However, microscopic analysis of uneven filtration surfaces containing CTCs is laborious, time-consuming, and inconsistent, preventing widespread use of filtration-based enrichment technologies. Here, integrated with a microfiltration-based CTC and rare cell enrichment device we have previously described, we present a protocol for Fourier Ptychographic Microscopy (FPM), a method that, unlike many automated imaging platforms, produces high-speed, high-resolution images that can be digitally refocused, allowing users to observe objects of interest present on multiple focal planes within the same image frame. The development of a cost-effective and high-throughput CTC analysis system for filtration-based enrichment technologies could have profound clinical implications for improved CTC detection and analysis.

  2. Simulations of dynamics of plunge and pitch of a three-dimensional flexible wing in a low Reynolds number flow

    NASA Astrophysics Data System (ADS)

    Qi, Dewei; Liu, Yingming; Shyy, Wei; Aono, Hikaru

    2010-09-01

    The lattice Boltzmann flexible particle method (LBFPM) is used to simulate fluid-structure interaction and motion of a flexible wing in a three-dimensional space. In the method, a beam with a rectangular cross section is discretized into a chain of rigid segments. The segments are connected through ball and socket joints at their ends and may be bent and twisted. Deformation of the flexible structure is treated with a linear elasticity model through bending and twisting. It is demonstrated that the flexible particle method (FPM) can approximate the nonlinear Euler-Bernoulli beam equation without resorting to a nonlinear elasticity model. Simulations of the plunge and pitch of a flexible wing at Reynolds number Re=136 are conducted under hovering conditions using the LBFPM. It is found that both lift and drag forces increase first, then decrease dramatically as the bending rigidity in the spanwise direction decreases, and that the lift and drag forces are sensitive to rigidity in a certain range. It is shown that the downwash flows induced by the wing tip and trailing vortices in the wake area are larger for a flexible wing than for a rigid wing, lead to a smaller effective angle of attack, and result in a larger lift force.

  3. HIGH-ENERGY OBSERVATIONS OF PSR B1259–63/LS 2883 THROUGH THE 2014 PERIASTRON PASSAGE: CONNECTING X-RAYS TO THE GeV FLARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tam, P. H. T.; Li, K. L.; Kong, A. K. H.

    2015-01-01

    The binary system PSR B1259–63/LS 2883 is well sampled in radio, X-rays, and TeV γ-rays, and shows orbital-phase-dependent variability in these frequencies. The first detection of GeV γ-rays from the system was made around the 2010 periastron passage. In this Letter, we present an analysis of X-ray and γ-ray data obtained by the Swift/XRT, NuSTAR/FPM, and Fermi/LAT, through the recent periastron passage which occurred on 2014 May 4. While PSR B1259–63/LS 2883 was not detected by the Large Area Telescope before and during this passage, we show that the GeV flares occurred at a similar orbital phase as in early 2011, thus establishing the repetitive nature of the post-periastron GeV flares. Multiple flares each lasting for a few days have been observed and short-term variability is seen as well. We also found X-ray flux variation contemporaneous with the GeV flare for the first time. Strong evidence of the keV-to-GeV connection came from the broadband high-energy spectra, which we interpret as synchrotron radiation from the shocked pulsar wind.

  4. Open-air sprays for capturing and controlling airborne float coal dust on longwall faces.

    PubMed

    Beck, T W; Seaman, C E; Shahan, M R; Mischler, S E

    2018-01-01

    Float dust deposits in coal mine return airways pose a risk in the event of a methane ignition. Controlling airborne dust prior to deposition in the return would make current rock dusting practices more effective and reduce the risk of coal-dust-fueled explosions. The goal of this U.S. National Institute for Occupational Safety and Health study is to determine the potential of open-air water sprays to reduce concentrations of airborne float coal dust, smaller than 75 µm in diameter, in longwall face airstreams. This study evaluated unconfined water sprays in a featureless tunnel ventilated at a typical longwall face velocity of 3.6 m/s (700 fpm). Experiments were conducted for two nozzle orientations and two water pressures for hollow cone, full cone, flat fan, air atomizing and hydraulic atomizing spray nozzles. Gravimetric samples show that airborne float dust removal efficiencies averaged 19.6 percent for all sprays under all conditions. The results indicate that the preferred spray nozzle should be operated at high fluid pressures to produce smaller droplets and move more air. These findings agree with past respirable dust control research, providing guidance on spray selection and spray array design in ongoing efforts to control airborne float dust over the entire longwall ventilated opening.

  5. The influence of a surface tension minimum on the convective motion of a fluid in microgravity (D1 mission results)

    NASA Astrophysics Data System (ADS)

    Limbourg, M. C.; Legros, J. C.; Petre, G.

    The experiment STEM (Surface Tension Minimum) was performed in an experimental cell integrated in the FPM (Fluid Physics Module) during the D1 mission of Spacelab. The observation volume of (1×2×3) cm³ was constituted by a stainless steel frame and two optical Pyrex windows, and was fixed on the front disk of the FPM. The cell was filled under microgravity conditions with an aqueous solution of n-heptanol, 6.04×10⁻³ molal. At equilibrium this system presents a minimum of surface tension as a function of temperature around 40°C. The fluid was heated from the front-disk side of the cell. A temperature difference of 35°C was maintained between two opposite sides of the cell by using the large heat capacity of a water reservoir in thermal contact with the cold side of the cell. The thermal gradient was parallel to the liquid/gas interface. The motions of the fluid were recorded on video tape and the velocities were determined by following latex particles used as tracers. The convective pattern is analysed and compared with ground experiments, in which the tracer trajectories allow determination of the convective patterns and the velocities are determined by laser Doppler anemometry.

  6. Average fidelity between random quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5

    2005-03-01

    We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.
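
    For readers who want to reproduce such averages numerically, the sketch below estimates the mean fidelity for the Hilbert-Schmidt measure by Monte-Carlo, using the standard construction of Hilbert-Schmidt-distributed states from normalized complex Ginibre matrices; it is not the paper's analytical derivation, and the sample size is arbitrary.

        # Monte-Carlo sketch: mean fidelity of pairs of random density matrices
        # drawn from the Hilbert-Schmidt measure (rho = G G^dagger / Tr[G G^dagger]
        # with G a complex Ginibre matrix).
        import numpy as np
        from scipy.linalg import sqrtm

        def random_hs_state(n, rng):
            g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            rho = g @ g.conj().T
            return rho / np.trace(rho).real

        def fidelity(rho, sigma):
            s = sqrtm(rho)
            return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

        rng = np.random.default_rng(0)
        n, samples = 2, 2000
        mean_f = np.mean([fidelity(random_hs_state(n, rng), random_hs_state(n, rng))
                          for _ in range(samples)])
        print(f"estimated mean fidelity for N={n}: {mean_f:.3f}")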

  7. Quantum probability and quantum decision-making.

    PubMed

    Yukalov, V I; Sornette, D

    2016-01-13

    A rigorous general definition of quantum probability is given, which is valid not only for elementary events but also for composite events, for operationally testable measurements as well as for inconclusive measurements, and also for non-commuting observables in addition to commutative observables. Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical footing. Conditions are formulated for the case when quantum decision theory reduces to its classical counterpart and for the situation where the use of quantum decision theory is necessary. © 2015 The Author(s).

  8. The contribution of threat probability estimates to reexperiencing symptoms: a prospective analog study.

    PubMed

    Regambal, Marci J; Alden, Lynn E

    2012-09-01

    Individuals with posttraumatic stress disorder (PTSD) are hypothesized to have a "sense of current threat." Perceived threat from the environment (i.e., external threat), can lead to overestimating the probability of the traumatic event reoccurring (Ehlers & Clark, 2000). However, it is unclear if external threat judgments are a pre-existing vulnerability for PTSD or a consequence of trauma exposure. We used trauma analog methodology to prospectively measure probability estimates of a traumatic event, and investigate how these estimates were related to cognitive processes implicated in PTSD development. 151 participants estimated the probability of being in car-accident related situations, watched a movie of a car accident victim, and then completed a measure of data-driven processing during the movie. One week later, participants re-estimated the probabilities, and completed measures of reexperiencing symptoms and symptom appraisals/reactions. Path analysis revealed that higher pre-existing probability estimates predicted greater data-driven processing which was associated with negative appraisals and responses to intrusions. Furthermore, lower pre-existing probability estimates and negative responses to intrusions were both associated with a greater change in probability estimates. Reexperiencing symptoms were predicted by negative responses to intrusions and, to a lesser degree, by greater changes in probability estimates. The undergraduate student sample may not be representative of the general public. The reexperiencing symptoms are less severe than what would be found in a trauma sample. Threat estimates present both a vulnerability and a consequence of exposure to a distressing event. Furthermore, changes in these estimates are associated with cognitive processes implicated in PTSD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Measurements of gas hydrate formation probability distributions on a quasi-free water droplet

    NASA Astrophysics Data System (ADS)

    Maeda, Nobuo

    2014-06-01

    A High Pressure Automated Lag Time Apparatus (HP-ALTA) can measure gas hydrate formation probability distributions from water in a glass sample cell. In an HP-ALTA gas hydrate formation originates near the edges of the sample cell and gas hydrate films subsequently grow across the water-guest gas interface. It would ideally be desirable to be able to measure gas hydrate formation probability distributions of a single water droplet or mist that is freely levitating in a guest gas, but this is technically challenging. The next best option is to let a water droplet sit on top of a denser, immiscible, inert, and wall-wetting hydrophobic liquid to avoid contact of a water droplet with the solid walls. Here we report the development of a second generation HP-ALTA which can measure gas hydrate formation probability distributions of a water droplet which sits on a perfluorocarbon oil in a container that is coated with 1H,1H,2H,2H-Perfluorodecyltriethoxysilane. It was found that the gas hydrate formation probability distributions of such a quasi-free water droplet were significantly lower than those of water in a glass sample cell.
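
    The data reduction common to ALTA-style instruments can be sketched generically: each cooling run yields one formation event, and ranking the runs gives an empirical formation probability curve. The snippet below is an illustration with made-up subcoolings, not the instrument's software.

        # Generic sketch: empirical gas hydrate formation probability distribution
        # from repeated cooling runs (one formation event per run).
        def formation_probability_curve(subcoolings_at_formation):
            """Return (subcooling, cumulative formation probability) pairs."""
            ordered = sorted(subcoolings_at_formation)
            n = len(ordered)
            # median-rank style plotting position; other choices work similarly
            return [(dt, (i + 0.5) / n) for i, dt in enumerate(ordered)]

        # Hypothetical subcoolings (K) at which hydrate formed in ten runs
        runs = [8.1, 9.4, 10.2, 10.8, 11.0, 11.5, 12.3, 12.9, 13.6, 15.0]
        for dt, p in formation_probability_curve(runs):
            print(f"{dt:5.1f} K  ->  P(formed) = {p:.2f}")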

  10. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    PubMed

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Biases in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
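
    The general idea, stripped of the clinical detail, is sketched below: inside each bootstrap replicate the condition status of every resampled patient is imputed as a Bernoulli draw from the model-derived probability, and the quantity of interest (here simply prevalence) is recomputed. The probabilities and replicate counts are hypothetical, and this is not the study's code.

        # Illustrative sketch: bootstrap imputation of condition status from
        # model-based probabilities, instead of thresholding them or trusting codes.
        import random

        def bootstrap_prevalence(probabilities, n_boot=1000, seed=42):
            random.seed(seed)
            estimates = []
            n = len(probabilities)
            for _ in range(n_boot):
                sample = [random.choice(probabilities) for _ in range(n)]    # resample patients
                imputed = [1 if random.random() < p else 0 for p in sample]  # impute status
                estimates.append(sum(imputed) / n)
            estimates.sort()
            return estimates[n_boot // 2], estimates[25], estimates[-26]     # median and ~95% interval

        # Hypothetical model-derived probabilities of severe renal failure
        probs = [0.02, 0.10, 0.85, 0.40, 0.05, 0.95, 0.30, 0.01]
        print(bootstrap_prevalence(probs))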

  11. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super prompt critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with a very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and Monte-Carlo calculations based on the algorithm presented in [7]. (authors)

  12. Trapping dynamics of xenon on Pt(111)

    NASA Astrophysics Data System (ADS)

    Arumainayagam, Christopher R.; Madix, Robert J.; Mcmaster, Mark C.; Suzawa, Valerie M.; Tully, John C.

    1990-02-01

    The dynamics of Xe trapping on Pt(111) was studied using supersonic atomic beam techniques. Initial trapping probabilities (S_0) were measured directly as a function of incident translational energy (E_T) and angle of incidence (θ_i) at a surface temperature (T_s) of 95 K. The initial trapping probability decreases smoothly with increasing E_T cosθ_i, rather than E_T cos²θ_i, suggesting participation of parallel momentum in the trapping process. Accordingly, the measured initial trapping probability falls off more slowly with increasing incident translational energy than predicted by one-dimensional theories. This finding is in near agreement with previous mean translational energy measurements for Xe desorbing near the Pt(111) surface normal, assuming detailed balance applies. Three-dimensional stochastic classical trajectory calculations presented herein also exhibit the importance of tangential momentum in trapping and satisfactorily reproduce the experimental initial trapping probabilities.

  13. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.

  14. Transitional probability-based model for HPV clearance in HIV-1-positive adolescent females.

    PubMed

    Kravchenko, Julia; Akushevich, Igor; Sudenga, Staci L; Wilson, Craig M; Levitan, Emily B; Shrestha, Sadeep

    2012-01-01

    HIV-1-positive patients clear the human papillomavirus (HPV) infection less frequently than HIV-1-negative patients. Datasets for estimating HPV clearance probability often have irregular measurements of HPV status and risk factors. A new transitional probability-based model for estimation of probability of HPV clearance was developed to fully incorporate information on HIV-1-related clinical data, such as CD4 counts, HIV-1 viral load (VL), highly active antiretroviral therapy (HAART), and risk factors (measured quarterly), and HPV infection status (measured at 6-month intervals). Data from 266 HIV-1-positive and 134 at-risk HIV-1-negative adolescent females from the Reaching for Excellence in Adolescent Care and Health (REACH) cohort were used in this study. First, the associations were evaluated using the Cox proportional hazard model, and the variables that demonstrated significant effects on HPV clearance were included in transitional probability models. The new model established the efficacy of CD4 cell counts as a main clearance predictor for all type-specific HPV phylogenetic groups. The 3-month probability of HPV clearance in HIV-1-infected patients significantly increased with increasing CD4 counts for HPV16/16-like (p<0.001), HPV18/18-like (p<0.001), HPV56/56-like (p = 0.05), and low-risk HPV (p<0.001) phylogenetic groups, with the lowest probability found for HPV16/16-like infections (21.60±1.81% at CD4 level 200 cells/mm³, p<0.05; and 28.03±1.47% at CD4 level 500 cells/mm³). HIV-1 VL was a significant predictor for clearance of low-risk HPV infections (p<0.05). HAART (with protease inhibitor) was a significant predictor of probability of HPV16 clearance (p<0.05). HPV16/16-like and HPV18/18-like groups showed heterogeneity (p<0.05) in terms of how CD4 counts, HIV VL, and HAART affected probability of clearance of each HPV infection. This new model predicts the 3-month probability of HPV infection clearance based on CD4 cell counts and other HIV-1-related clinical measurements.

  15. Betting on the outcomes of measurements: a Bayesian theory of quantum probability

    NASA Astrophysics Data System (ADS)

    Pitowsky, Itamar

    We develop a systematic approach to quantum probability as a theory of rational betting in quantum gambles. In these games of chance, the agent is betting in advance on the outcomes of several (finitely many) incompatible measurements. One of the measurements is subsequently chosen and performed and the money placed on the other measurements is returned to the agent. We show how the rules of rational betting imply all the interesting features of quantum probability, even in such finite gambles. These include the uncertainty principle and the violation of Bell's inequality among others. Quantum gambles are closely related to quantum logic and provide a new semantics for it. We conclude with a philosophical discussion on the interpretation of quantum mechanics.

  16. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  17. Experimental investigation of the intensity fluctuation joint probability and conditional distributions of the twin-beam quantum state.

    PubMed

    Zhang, Yun; Kasai, Katsuyuki; Watanabe, Masayoshi

    2003-01-13

    We give the intensity fluctuation joint probability of the twin-beam quantum state, which was generated with an optical parametric oscillator operating above threshold. Then we present what to our knowledge is the first measurement of the intensity fluctuation conditional probability distributions of twin beams. The measured inference variance of twin beams, 0.62 ± 0.02, which is less than the standard quantum limit of unity, indicates inference with a precision better than that of separable states. The measured photocurrent variance exhibits a quantum correlation of as much as -4.9 ± 0.2 dB between the signal and the idler.

  18. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  19. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  20. A discrimination method for the detection of pneumonia using chest radiograph.

    PubMed

    Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R

    2010-03-01

    This paper presents a statistical method for the detection of lobar pneumonia when using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures which is then multiplied by the orthogonal matrix Q2. The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The result of this study recommends the detection of pneumonia by constructing probability ellipsoids or discriminant function using maximum energy and maximum column sum energy texture measures where misclassification probabilities were less than 0.15. Copyright © 2009 Elsevier Ltd. All rights reserved.
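
    The statistical core of such a classifier, leaving aside the wavelet texture extraction and the Q2 transform, can be sketched as follows; all class parameters below are hypothetical and equal priors are assumed.

        # Sketch of the discrimination step only: classify a 2-D feature vector with
        # class-conditional bivariate normal densities (equal priors) and estimate
        # the misclassification probability by simulation.
        import numpy as np
        from scipy.stats import multivariate_normal

        mu = {"pneumonia": np.array([2.0, 1.0]), "normal": np.array([0.0, 0.0])}
        cov = {"pneumonia": np.array([[1.0, 0.3], [0.3, 0.8]]),
               "normal":    np.array([[0.9, 0.2], [0.2, 1.1]])}
        dists = {c: multivariate_normal(mu[c], cov[c]) for c in mu}

        def misclassification_probability(true_class, n=50_000, seed=0):
            rng = np.random.default_rng(seed)
            draws = rng.multivariate_normal(mu[true_class], cov[true_class], size=n)
            other = next(c for c in dists if c != true_class)
            wrong = dists[other].logpdf(draws) > dists[true_class].logpdf(draws)
            return float(np.mean(wrong))

        print(misclassification_probability("pneumonia"))
        print(misclassification_probability("normal"))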

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuevas, F.A.; Curilef, S., E-mail: scurilef@ucn.cl; Plastino, A.R., E-mail: arplastino@ugr.es

    The spread of a wave-packet (or its deformation) is a very important topic in quantum mechanics. Understanding this phenomenon is relevant in connection with the study of diverse physical systems. In this paper we apply various 'spreading measures' to characterize the evolution of an initially localized wave-packet in a tight-binding lattice, with special emphasis on information-theoretical measures. We investigate the behavior of both the probability distribution associated with the wave packet and the concomitant probability current. Complexity measures based upon Renyi entropies appear to be particularly good descriptors of the details of the delocalization process. Highlights: • Spread of a highly localized wave-packet in the tight-binding lattice. • Entropic and information-theoretical characterization is used to understand the delocalization. • The behavior of both the probability distribution and the concomitant probability current is investigated. • Renyi entropies appear to be good descriptors of the details of the delocalization process.

  2. Lattice Theory, Measures and Probability

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2007-11-01

    In this tutorial, I will discuss the concepts behind generalizing ordering to measuring and apply these ideas to the derivation of probability theory. The fundamental concept is that anything that can be ordered can be measured. Since we are in the business of making statements about the world around us, we focus on ordering logical statements according to implication. This results in a Boolean lattice, which is related to the fact that the corresponding logical operations form a Boolean algebra. The concept of logical implication can be generalized to degrees of implication by generalizing the zeta function of the lattice. The rules of probability theory arise naturally as a set of constraint equations. Through this construction we are able to neatly connect the concepts of order, structure, algebra, and calculus. The meaning of probability is inherited from the meaning of the ordering relation, implication, rather than being imposed in an ad hoc manner at the start.

  3. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

  4. Determination of the measurement threshold in gamma-ray spectrometry.

    PubMed

    Korun, M; Vodenik, B; Zorko, B

    2017-03-01

    In gamma-ray spectrometry the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it presents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability for making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of radon daughters ²¹⁴Pb and ²¹⁴Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold the probability for detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak area uncertainty, were determined for 54 peaks where the probability for detection varied between a few percent and about 95% and the relative peak area uncertainty between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located, although the true value of the measurand exceeds the decision threshold. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Inconclusive quantum measurements and decisions under uncertainty

    NASA Astrophysics Data System (ADS)

    Yukalov, Vyacheslav; Sornette, Didier

    2016-04-01

    We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.

  6. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  7. Measurement of Two- and Three-Nucleon Short-Range Correlation Probabilities in Nuclei

    NASA Astrophysics Data System (ADS)

    Egiyan, K. S.; Dashyan, N. B.; Sargsian, M. M.; Strikman, M. I.; Weinstein, L. B.; Adams, G.; Ambrozewicz, P.; Anghinolfi, M.; Asavapibhop, B.; Asryan, G.; Avakian, H.; Baghdasaryan, H.; Baillie, N.; Ball, J. P.; Baltzell, N. A.; Batourine, V.; Battaglieri, M.; Bedlinskiy, I.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Biselli, A. S.; Bonner, B. E.; Bouchigny, S.; Boiarinov, S.; Bradford, R.; Branford, D.; Brooks, W. K.; Bültmann, S.; Burkert, V. D.; Bultuceanu, C.; Calarco, J. R.; Careccia, S. L.; Carman, D. S.; Carnahan, B.; Chen, S.; Cole, P. L.; Coltharp, P.; Corvisiero, P.; Crabb, D.; Crannell, H.; Cummings, J. P.; Sanctis, E. De; Devita, R.; Degtyarenko, P. V.; Denizli, H.; Dennis, L.; Dharmawardane, K. V.; Djalali, C.; Dodge, G. E.; Donnelly, J.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Elouadrhiri, L.; Empl, A.; Eugenio, P.; Fatemi, R.; Fedotov, G.; Feuerbach, R. J.; Forest, T. A.; Funsten, H.; Gavalian, G.; Gevorgyan, N. G.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hardie, J.; Hersman, F. W.; Hicks, K.; Hleiqawi, I.; Holtrop, M.; Hu, J.; Huertas, M.; Hyde-Wright, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Ito, M. M.; Jenkins, D.; Jo, H. S.; Joo, K.; Juengst, H. G.; Kellie, J. D.; Khandaker, M.; Kim, K. Y.; Kim, K.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A.; Klusman, M.; Kramer, L. H.; Kubarovsky, V.; Kuhn, J.; Kuhn, S. E.; Kuleshov, S.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Lee, T.; Livingston, K.; Maximon, L. C.; McAleer, S.; McKinnon, B.; McNabb, J. W.; Mecking, B. A.; Mestayer, M. D.; Meyer, C. A.; Mibe, T.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Mokeev, V.; Morrow, S. A.; Mueller, J.; Mutchler, G. S.; Nadel-Turonski, P.; Napolitano, J.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. B.; Niyazov, R. A.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Pasyuk, E.; Peterson, C.; Pierce, J.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Protopopescu, D.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Ronchetti, F.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Sabatié, F.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Shaw, J.; Smith, E. S.; Smith, L. C.; Sober, D. I.; Stavinsky, A.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strauch, S.; Suleiman, R.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thompson, R.; Tkabladze, A.; Tkachenko, S.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Weygand, D. P.; Williams, M.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zana, L.; Zhang, J.

    2006-03-01

    The ratios of inclusive electron scattering cross sections of 4He, 12C, and 56Fe to 3He have been measured at 1 < x_B < 3. At Q² > 1.4 GeV², the ratios exhibit two separate plateaus, at 1.5 < x_B < 2 and at x_B > 2.25. This pattern is predicted by models that include 2- and 3-nucleon short-range correlations (SRC). Relative to A=3, the per-nucleon probabilities of 3-nucleon SRC are 2.3, 3.1, and 4.4 times larger for A=4, 12, and 56. This is the first measurement of 3-nucleon SRC probabilities in nuclei.

  8. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
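
    The postulated model is easy to simulate, which is one way to see how a non-Gaussian velocity distribution can arise from locally Gaussian components. The toy sketch below (all time scales and amplitudes are invented) superposes a slowly varying Gaussian mean-wind component and a turbulence component whose local standard deviation itself fluctuates slowly; a positive excess kurtosis of the resulting velocity signals the departure from a single Gaussian.

        # Toy simulation of a locally Gaussian velocity model (illustrative only).
        import numpy as np

        rng = np.random.default_rng(3)
        n, dt = 100_000, 0.05

        def slow_gaussian(n, tau, rng):
            # unit-variance AR(1) process, i.e. a slowly varying Gaussian component
            a = np.exp(-dt / tau)
            x = np.zeros(n)
            for i in range(1, n):
                x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * rng.normal()
            return x

        mean_wind = 2.0 * slow_gaussian(n, tau=60.0, rng=rng)            # slow mean-speed changes
        local_sigma = np.clip(1.0 + 0.5 * slow_gaussian(n, tau=20.0, rng=rng), 0.1, None)
        u = mean_wind + local_sigma * rng.normal(size=n)                 # locally Gaussian velocity

        excess_kurtosis = np.mean((u - u.mean()) ** 4) / np.var(u) ** 2 - 3.0
        print(f"excess kurtosis of the simulated velocity: {excess_kurtosis:.2f}")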

  9. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after the Born probability for a single observable. Instead, various definitions were suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert space with dimension larger than two. If measurement contexts are included in the definition, joint probabilities are not excluded anymore, but they are still constrained by imprecise probabilities.

  10. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
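
    For the two-class case the underlying optimization can be sketched directly: project the measurement vectors onto a single direction b and minimize the one-dimensional Bayes misclassification probability of the two projected normal densities. The snippet below is a simplified illustration with invented class parameters, not the LFSPMC program itself, which handles m classes.

        # Two-class sketch: find a linear combination b'x minimizing the
        # one-dimensional probability of misclassification of the projected
        # normal densities (equal priors assumed here).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm
        from scipy.integrate import quad

        mu1, mu2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.5, -0.5])   # hypothetical
        S1 = np.diag([1.0, 2.0, 0.5])
        S2 = np.array([[1.5, 0.2, 0.0], [0.2, 1.0, 0.1], [0.0, 0.1, 0.8]])
        p1 = p2 = 0.5

        def misclassification(b):
            b = b / np.linalg.norm(b)
            m1, m2 = b @ mu1, b @ mu2
            s1, s2 = np.sqrt(b @ S1 @ b), np.sqrt(b @ S2 @ b)
            # Bayes error of the two projected 1-D normal densities
            err = lambda x: min(p1 * norm.pdf(x, m1, s1), p2 * norm.pdf(x, m2, s2))
            lo = min(m1, m2) - 8 * max(s1, s2)
            hi = max(m1, m2) + 8 * max(s1, s2)
            return quad(err, lo, hi, limit=200)[0]

        res = minimize(misclassification, x0=mu2 - mu1, method="Nelder-Mead")
        print("optimal direction:", res.x / np.linalg.norm(res.x))
        print("minimal 1-D misclassification probability:", res.fun)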

  11. Alpha-particle emission probabilities of ²³⁶U obtained by alpha spectrometry.

    PubMed

    Marouli, M; Pommé, S; Jobbágy, V; Van Ammel, R; Paepen, J; Stroh, H; Benedik, L

    2014-05-01

    High-resolution alpha-particle spectrometry was performed with an ion-implanted silicon detector in vacuum on a homogeneously electrodeposited (236)U source. The source was measured at different solid angles subtended by the detector, varying between 0.8% and 2.4% of 4π sr, to assess the influence of coincidental detection of alpha-particles and conversion electrons on the measured alpha-particle emission probabilities. Additional measurements were performed using a bending magnet to eliminate conversion electrons, the results of which coincide with normal measurements extrapolated to an infinitely small solid angle. The measured alpha emission probabilities for the three main peaks - 74.20 (5)%, 25.68 (5)% and 0.123 (5)%, respectively - are consistent with literature data, but their precision has been improved by at least one order of magnitude in this work. © 2013 Published by Elsevier Ltd.

  12. Syntax for calculation of discounting indices from the monetary choice questionnaire and probability discounting questionnaire.

    PubMed

    Gray, Joshua C; Amlung, Michael T; Palmer, Abraham A; MacKillop, James

    2016-09-01

    The 27-item Monetary Choice Questionnaire (MCQ; Kirby, Petry, & Bickel, 1999) and 30-item Probability Discounting Questionnaire (PDQ; Madden, Petry, & Johnson, 2009) are widely used, validated measures of preferences for immediate versus delayed rewards and guaranteed versus risky rewards, respectively. The MCQ measures delayed discounting by asking individuals to choose between rewards available immediately and larger rewards available after a delay. The PDQ measures probability discounting by asking individuals to choose between guaranteed rewards and a chance at winning larger rewards. Numerous studies have implicated these measures in addiction and other health behaviors. Unlike typical self-report measures, the MCQ and PDQ generate inferred hyperbolic temporal and probability discounting functions by comparing choice preferences to arrays of functions to which the individual items are preconfigured. This article provides R and SPSS syntax for processing the MCQ and PDQ. Specifically, for the MCQ, the syntax generates k values, consistency of the inferred k, and immediate choice ratios; for the PDQ, the syntax generates h indices, consistency of the inferred h, and risky choice ratios. The syntax is intended to increase the accessibility of these measures, expedite the data processing, and reduce risk for error. © 2016 Society for the Experimental Analysis of Behavior.
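
    The published record provides R and SPSS syntax; the Python sketch below only illustrates the underlying scoring idea (the candidate k grid and item parameters are hypothetical), assigning the hyperbolic discounting rate whose predicted choices best match the observed ones.

      # Hedged sketch: pick the candidate k whose hyperbolic predictions agree
      # with the largest share of observed MCQ-style choices.
      import numpy as np

      def infer_k(small, large, delay, chose_delayed,
                  candidates=np.logspace(-4, 0, 200)):
          """small/large/delay: item parameters; chose_delayed: 0/1 choices."""
          small, large, delay = map(np.asarray, (small, large, delay))
          chose_delayed = np.asarray(chose_delayed)
          best_k, best_consistency = None, -1.0
          for k in candidates:
              predicted = large / (1.0 + k * delay) > small   # hyperbolic value of delayed reward
              consistency = np.mean(predicted == chose_delayed.astype(bool))
              if consistency > best_consistency:
                  best_k, best_consistency = k, consistency
          immediate_ratio = 1.0 - chose_delayed.mean()        # share of immediate choices
          return best_k, best_consistency, immediate_ratio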

  13. Using Rasch Analysis to Explore What Students Learn about Probability Concepts

    ERIC Educational Resources Information Center

    Mahmud, Zamalia; Porter, Anne

    2015-01-01

    Students' understanding of probability concepts have been investigated from various different perspectives. This study was set out to investigate perceived understanding of probability concepts of forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW. Rasch measurement which is…

  14. 40 CFR 201.28 - Testing by railroad to determine probable compliance with the standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Testing by railroad to determine...; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.28 Testing by railroad to determine probable compliance... whether it should institute noise abatement, a railroad may take measurements on its own property at...

  15. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.

  16. Molar incisor hypomineralization: proportion and severity in primary public school children in Graz, Austria.

    PubMed

    Buchgraber, Barbara; Kqiku, Lumnije; Ebeleseder, Kurt A

    2018-03-01

    The aim of this study was to determine the proportion and severity of molar incisor hypomineralization (MIH) in primary school children in Graz (southeast of Austria). In 1111 children aged 6 to 12 years (mean age 9.0 ± 1.2), a wet examination of all teeth was performed by three trained examiners using a dental chair, optimal illumination, a dental mirror, and a dental explorer. All teeth with MIH lesions were registered so that different definitions of MIH were applicable. According to the European Academy of Pediatric Dentistry criteria that were considered valid at the time of the investigation, MIH was diagnosed when at least one first permanent molar (FPM) was affected. MIH was present in 78 children (7.0%). In 64 children (5.8%), at least one molar and one incisor were affected (so-called M + IH). Additionally, in 9 children, only incisors were affected. In 7 affected children, teeth other than FPMs and incisors had MIH lesions. Almost an equal number of males (38) and females (40) were affected. The upper and lower molars were equally affected. The upper incisors were more frequently affected than the lower ones. Demarcated enamel opacities were the predominant types of defects. The proportion of MIH was 7.0% in Graz, which is similar to other comparable trials. This study has shown that MIH is an existing dental problem in Graz.

  17. Evaluation of an Ensemble Dispersion Calculation.

    NASA Astrophysics Data System (ADS)

    Draxler, Roland R.

    2003-02-01

    A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marleau, Peter; Monterial, Mateusz; Clarke, Shaun

    A Bayesian approach is proposed for pulse shape discrimination of photons and neutrons in liquid organic scintillators. Instead of drawing a decision boundary, each pulse is assigned a photon or neutron confidence probability, which allows photon and neutron classification on an event-by-event basis. The sum of those confidence probabilities is used to estimate the number of photon and neutron instances in the data. An iterative scheme, similar to an expectation-maximization algorithm for Gaussian mixtures, is used to infer the ratio of photons to neutrons in each measurement; the probability space therefore adapts to data with varying photon-to-neutron ratios. A time-correlated measurement of Am–Be and separate measurements of 137Cs, 60Co and 232Th photon sources were used to construct libraries of neutrons and photons. These libraries were then used to produce synthetic data sets with varying ratios of photons to neutrons. The probability-weighted method that we implemented was found to maintain a neutron acceptance rate of up to 90% for photon-to-neutron ratios of up to 2000, and performed 9% better than the decision-boundary approach. Furthermore, the iterative approach appropriately adjusted the probability space with an increasing number of photons, which kept the neutron population estimate from increasing unrealistically.
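
    A rough sketch of such an iterative, probability-weighted scheme is given below; the one-dimensional Gaussian templates and all numerical values are illustrative assumptions and do not reproduce the measured photon and neutron libraries.

      # Sketch: iterate the photon fraction so that each pulse's photon/neutron
      # confidence probabilities adapt to the photon-to-neutron ratio of the data.
      import numpy as np
      from scipy.stats import norm

      def classify(psd, photon_pdf, neutron_pdf, n_iter=50):
          """psd: pulse-shape parameters; *_pdf: callables giving template densities."""
          f_gamma = 0.5                                    # initial guess for photon fraction
          for _ in range(n_iter):
              lg = f_gamma * photon_pdf(psd)               # weighted photon likelihood
              ln = (1.0 - f_gamma) * neutron_pdf(psd)      # weighted neutron likelihood
              p_gamma = lg / (lg + ln)                     # per-pulse photon confidence
              f_gamma = p_gamma.mean()                     # update the mixture weight
          counts = (p_gamma.sum(), (1.0 - p_gamma).sum())  # expected photon / neutron counts
          return p_gamma, counts

      # Toy templates: photons near psd = 0.20, neutrons near psd = 0.35.
      photon_pdf = lambda x: norm.pdf(x, 0.20, 0.03)
      neutron_pdf = lambda x: norm.pdf(x, 0.35, 0.04)
      rng = np.random.default_rng(1)
      data = np.concatenate([rng.normal(0.20, 0.03, 2000), rng.normal(0.35, 0.04, 10)])
      p_gamma, (n_gamma, n_neutron) = classify(data, photon_pdf, neutron_pdf)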

  19. The Formalism of Generalized Contexts and Decay Processes

    NASA Astrophysics Data System (ADS)

    Losada, Marcelo; Laura, Roberto

    2013-04-01

    The formalism of generalized contexts for quantum histories is used to investigate the possibility of considering the survival probability as the probability of the no-decay property at a given time conditional on the no-decay property at an earlier time. A negative result is found for an isolated system. The inclusion of two quantum measurement instruments at two different times makes it possible to interpret the survival probability as a conditional probability of the whole system.

  20. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failures. However, few studies consider failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios, so when calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
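
    The exact LIFPMS metric is not reproduced in the record; the sketch below only illustrates the general idea of weighting independent link failure probabilities by a link importance taken as the fraction of destinations cut off by that link (the toy tree, probabilities, and importance definition are assumptions).

      # Sketch: importance-weighted failure measure for a multicast tree under
      # independent link failures.
      def link_importance(tree, root, destinations, link):
          """tree: dict child -> parent; importance = share of destinations below the link."""
          child, parent = link
          below = set()
          for d in destinations:
              node = d
              while node != root:
                  if (node, tree[node]) == (child, parent):
                      below.add(d)
                      break
                  node = tree[node]
          return len(below) / len(destinations)

      def weighted_failure_probability(tree, root, destinations, fail_prob):
          """fail_prob: dict (child, parent) -> independent link failure probability."""
          total, survive_all = 0.0, 1.0
          for link, p in fail_prob.items():
              total += p * link_importance(tree, root, destinations, link)
              survive_all *= (1.0 - p)
          # expected fraction of destinations lost (first order) and plain tree failure prob.
          return total, 1.0 - survive_all

      # Toy light-tree: root A feeds B; B feeds destinations C and D.
      tree = {'B': 'A', 'C': 'B', 'D': 'B'}
      fail_prob = {('B', 'A'): 0.01, ('C', 'B'): 0.02, ('D', 'B'): 0.02}
      print(weighted_failure_probability(tree, 'A', ['C', 'D'], fail_prob))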

  1. Sufficient Statistics for Divergence and the Probability of Misclassification

    NASA Technical Reports Server (NTRS)

    Quirein, J.

    1972-01-01

    One particular aspect of the feature selection problem is considered, namely, the aspect which results from the transformation x = Bz, where B is a k by n matrix of rank k with k ≤ n. It is shown that, in general, such a transformation results in a loss of information. In terms of the divergence, this is equivalent to the fact that the average divergence computed using the variable x is less than or equal to the average divergence computed using the variable z. A loss of information in terms of the probability of misclassification is shown to be equivalent to the fact that the probability of misclassification computed using variable x is greater than or equal to the probability of misclassification computed using variable z. First, the necessary facts relating k-dimensional and n-dimensional integrals are derived. Then the stated results about the divergence and probability of misclassification are derived. Finally, it is shown that if no information is lost (in x = Bz) as measured by the divergence, then no information is lost as measured by the probability of misclassification.

  2. Segmentation and automated measurement of chronic wound images: probability map approach

    NASA Astrophysics Data System (ADS)

    Ahmad Fauzi, Mohammad Faizal; Khansa, Ibrahim; Catignani, Karen; Gordillo, Gayle; Sen, Chandan K.; Gurcan, Metin N.

    2014-03-01

    An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than 25 billion US dollars and countless hours spent annually on all aspects of chronic wound care. There is a need to develop software tools that analyze wound images, characterize wound tissue composition, measure wound size, and monitor changes over time. This process, when done manually, is time-consuming and subject to intra- and inter-reader variability. In this paper, we propose a method that can characterize chronic wounds containing granulation, slough and eschar tissues. First, we generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the region growing segmentation process. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, found in wound tissues, while the white probability map is designed to detect the white label card used for measurement calibration. The innovative aspects of this work include: 1) definition of a wound-characteristics-specific probability map for segmentation; 2) computationally efficient region growing on a 4D map; 3) auto-calibration of measurements with the content of the image. The method was applied to 30 wound images provided by the Ohio State University Wexner Medical Center, with the ground truth independently generated by the consensus of two clinicians. While the inter-reader agreement between the readers is 85.5%, the computer achieves an accuracy of 80%.
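
    A rough sketch of probability-map-guided region growing is shown below; the reference colours, similarity kernel, and threshold are assumptions for illustration and are not the paper's calibrated RYKW model.

      # Sketch: per-pixel pseudo-probability maps from colour similarity, then
      # 4-connected region growing from a seed through high-probability pixels.
      import numpy as np
      from collections import deque

      REFS = {'red': (180, 40, 40), 'yellow': (200, 180, 60),
              'black': (30, 30, 30), 'white': (240, 240, 240)}   # illustrative RGB references

      def probability_maps(img, scale=80.0):
          """img: HxWx3 float array; returns dict of HxW maps in [0, 1]."""
          maps = {}
          for name, ref in REFS.items():
              dist = np.linalg.norm(img - np.array(ref, float), axis=-1)
              maps[name] = np.exp(-(dist / scale) ** 2)           # similarity -> pseudo-probability
          return maps

      def region_grow(prob, seed, threshold=0.5):
          """Region growing guided by a single probability map."""
          h, w = prob.shape
          mask = np.zeros((h, w), bool)
          queue = deque([seed])
          while queue:
              y, x = queue.popleft()
              if not (0 <= y < h and 0 <= x < w) or mask[y, x] or prob[y, x] < threshold:
                  continue
              mask[y, x] = True
              queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
          return mask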

  3. Measurement of Plutonium-240 Angular Momentum Dependent Fission Probabilities Using the Alpha-Alpha' Reaction

    NASA Astrophysics Data System (ADS)

    Koglin, Johnathon

    Accurate nuclear reaction data from a few keV to tens of MeV and across the table of nuclides is essential to a number of applications of nuclear physics, including national security, nuclear forensics, nuclear astrophysics, and nuclear energy. Precise determination of (n, f) and neutron capture cross sections for reactions in high-flux environments is particularly important for a proper understanding of nuclear reactor performance and stellar nucleosynthesis. In these extreme environments, reactions on short-lived and otherwise difficult-to-produce isotopes play a significant role in system evolution and provide insights into the types of nuclear processes taking place; a detailed understanding of these processes is necessary to properly determine cross sections far from stability. Indirect methods are often attempted to measure cross sections on isotopes that are difficult to separate in a laboratory setting. Using the surrogate approach, the same compound nucleus as in the reaction of interest is created through a "surrogate" reaction on a different isotope and the resulting decay is measured. This result is combined with appropriate reaction theory for compound nucleus population, from which the desired cross sections can be inferred. This method has shown promise, but the theoretical framework often lacks the necessary experimental data to constrain models. In this work, dual arrays of silicon telescope particle identification detectors and photovoltaic (solar) cell fission fragment detectors have been used to measure the fission probability of the 240Pu(alpha, alpha'f) reaction - a surrogate for 239Pu(n, f) - and fission fragment angular distributions at a beam energy of 35.9(2) MeV, at eleven scattering angles from 40° to 140° in 10° intervals, and at nuclear excitation energies up to 16 MeV. Within experimental uncertainty, the maximum fission probability was observed at the neutron separation energy for each alpha scattering angle. Fission probabilities were separated into five 500 keV bins from 5.5 MeV to 8.0 MeV and one bin from 4.5 MeV to 5.5 MeV. Across energy bins, the fission probability increases approximately linearly with increasing excitation energy; at 90° the fission probability increases from 0.069(6) in the lowest energy bin to 0.59(2) in the highest. Likewise, within a single energy bin the fission probability increases with alpha' scattering angle: within the 6.5 MeV to 7.0 MeV energy bin, the fission probability increased from 0.41(1) at 60° to 0.81(10) at 140°. Fission fragment angular distributions were also measured, integrated over each energy bin. These distributions were fit to theoretical distributions based on combinations of transitional nuclear vibrational and rotational excitations at the saddle point. Contributions from specific K vibrational states were extracted and combined with fission probability measurements to determine the relative fission probability of each state as a function of nuclear excitation energy. Within a given excitation energy bin, contributions from K states greater than the minimum K = 0 state tend to increase with increasing alpha' scattering angle; this is attributed to an increase in the transferred angular momentum associated with larger scattering angles. The 90° alpha' scattering angle produced the highest quality results. The relative contributions of K states do not show a discernible trend across the energy spectrum. The energy-binned results confirm existing measurements that place a K = 2 state in the first energy bin, with the opening of K = 1 and K = 4 states at energies above 5.5 MeV. This experiment represents the first of its kind in which fission probabilities and angular distributions are simultaneously measured at a large number of scattering angles. The acquired fission probabilities, angular distributions, and K state contributions provide a diverse dataset against which microscopic fission models can be constrained, furthering the understanding of the properties of 240Pu fission.

  4. Enhancing the Possibility of Success by Measuring the Probability of Failure in an Educational Program.

    ERIC Educational Resources Information Center

    Brookhart, Susan M.; And Others

    1997-01-01

    Process Analysis is described as a method for identifying and measuring the probability of events that could cause the failure of a program, resulting in a cause-and-effect tree structure of events. The method is illustrated through the evaluation of a pilot instructional program at an elementary school. (SLD)

  5. Optimum measurement for unambiguously discriminating two mixed states: General considerations and special cases

    NASA Astrophysics Data System (ADS)

    Herzog, Ulrike; Bergou, János A.

    2006-04-01

    Based on our previous publication [U. Herzog and J. A. Bergou, Phys. Rev. A 71, 050301(R)(2005)] we investigate the optimum measurement for the unambiguous discrimination of two mixed quantum states that occur with given prior probabilities. Unambiguous discrimination of nonorthogonal states is possible in a probabilistic way, at the expense of a nonzero probability of inconclusive results, where the measurement fails. Along with a discussion of the general problem, we give an example illustrating our method of solution. We also provide general inequalities for the minimum achievable failure probability and discuss in more detail the necessary conditions that must be fulfilled when its absolute lower bound, proportional to the fidelity of the states, can be reached.

  6. The estimation of tree posterior probabilities using conditional clade probability distributions.

    PubMed

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
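
    A minimal sketch of the conditional-clade idea follows, with rooted trees written as nested tuples; this only illustrates the calculation and is not the author's software.

      # Sketch: estimate a tree's posterior probability as the product, over its
      # clades, of the sampled frequency of each clade's split conditional on the
      # clade itself appearing in the sample.
      from collections import Counter

      def clade(tree):
          """Set of taxa below a node; a tree is a taxon string or a (left, right) tuple."""
          return frozenset([tree]) if isinstance(tree, str) else clade(tree[0]) | clade(tree[1])

      def splits(tree, out=None):
          """List of (clade, {left_clade, right_clade}) for every internal node."""
          out = [] if out is None else out
          if not isinstance(tree, str):
              left, right = clade(tree[0]), clade(tree[1])
              out.append((left | right, frozenset([left, right])))
              splits(tree[0], out)
              splits(tree[1], out)
          return out

      def conditional_clade_probability(tree, sample):
          clade_counts, split_counts = Counter(), Counter()
          for t in sample:
              for c, s in splits(t):
                  clade_counts[c] += 1
                  split_counts[(c, s)] += 1
          prob = 1.0
          for c, s in splits(tree):
              if clade_counts[c] == 0:
                  return 0.0                     # clade never sampled: not estimable
              prob *= split_counts[(c, s)] / clade_counts[c]
          return prob

      # Toy posterior sample of rooted 4-taxon trees.
      sample = [(('A', 'B'), ('C', 'D'))] * 7 + [(('A', 'C'), ('B', 'D'))] * 3
      print(conditional_clade_probability((('A', 'B'), ('C', 'D')), sample))  # 0.7 here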

  7. Gambling, Delay, and Probability Discounting in Adults With and Without ADHD.

    PubMed

    Dai, Zhijie; Harrow, Sarah-Eve; Song, Xianwen; Rucklidge, Julia J; Grace, Randolph C

    2016-11-01

    We investigated the relationship between impulsivity, as measured by delay and probability discounting, and gambling-related cognitions and behavior in adults with and without ADHD. Adults who met Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnostic criteria for ADHD (n = 31) and controls (n = 29) were recruited from the community. All completed an interview that included an assessment of psychiatric disorders, gambling questionnaires, and simulated gambling, delay, and probability discounting tasks. The ADHD group was more likely to meet the criteria for problem gambling and was more impulsive than controls based on a composite discounting measure. ADHD symptoms were correlated with gambling-related cognitions and behavior. Probability, but not delay discounting, explained significant variance in gambling-related measures after controlling for ADHD symptoms. Results confirm an association between adult ADHD and gambling, and suggest that the facets of impulsivity related to risk proneness may be an independent risk factor for problem gambling in this population. © The Author(s) 2013.

  8. Threatened species and the potential loss of phylogenetic diversity: conservation scenarios based on estimated extinction probabilities and phylogenetic risk analysis.

    PubMed

    Faith, Daniel P

    2008-12-01

    New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species. Probabilistic PD provides a framework for single-species assessment that is well-integrated with a broader measurement of impacts on PD owing to climate change and other factors.
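
    A small worked sketch of expected PD under independent extinction probabilities follows; the tree, branch lengths, and probabilities are made up for illustration. Each branch contributes its length times the probability that at least one descendant species survives.

      # Sketch: expected phylogenetic diversity given per-species extinction probabilities.
      def expected_pd(branches, extinction):
          """branches: list of (length, [descendant tips]); extinction: dict tip -> P(extinct)."""
          total = 0.0
          for length, tips in branches:
              p_all_lost = 1.0
              for t in tips:
                  p_all_lost *= extinction[t]
              total += length * (1.0 - p_all_lost)
          return total

      # Three-species tree ((A,B),C): terminal branches plus the internal branch above A and B.
      branches = [(1.0, ['A']), (1.0, ['B']), (2.0, ['C']), (1.5, ['A', 'B'])]
      extinction = {'A': 0.8, 'B': 0.8, 'C': 0.1}
      print(expected_pd(branches, extinction))
      # = 1*0.2 + 1*0.2 + 2*0.9 + 1.5*(1 - 0.64) = 2.74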

  9. Determination of photon emission probabilities for the main gamma-rays of ²²³Ra in equilibrium with its progeny.

    PubMed

    Pibida, L; Zimmerman, B; Fitzgerald, R; King, L; Cessna, J T; Bergeron, D E

    2015-07-01

    The currently published (223)Ra gamma-ray emission probabilities display a wide variation in the values depending on the source of the data. The National Institute of Standards and Technology performed activity measurements on a (223)Ra solution that was used to prepare several sources that were used to determine the photon emission probabilities for the main gamma-rays of (223)Ra in equilibrium with its progeny. Several high purity germanium (HPGe) detectors were used to perform the gamma-ray spectrometry measurements. Published by Elsevier Ltd.

  10. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    NASA Astrophysics Data System (ADS)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule to contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We also show how the introduced model can be applied to decision problems.

  11. Autonomous learning derived from experimental modeling of physical laws.

    PubMed

    Grabec, Igor

    2013-05-01

    This article deals with the experimental description of physical laws by the probability density function of measured data. The Gaussian mixture model, specified by representative data and related probabilities, is utilized for this purpose. The information cost function of the model is described in terms of information entropy as the sum of the estimation error and the redundancy. A new method is proposed for searching for the minimum of the cost function. The number of the resulting prototype data depends on the accuracy of measurement. Their adaptation resembles a self-organized, highly non-linear cooperation between neurons in an artificial neural network. A prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method does not include any free parameters except the objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since representative data are generally less numerous than the measured ones, the method is applicable to a rather general and objective compression of overwhelming experimental data in automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and on measured traffic flow data. The flow over a day is described by a vector of 24 components. The set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and related probabilities. These vectors represent the flow on normal working days and on weekends or holidays, while the related probabilities correspond to the relative frequencies of these days. This example reveals that autonomous learning yields a new basis for the interpretation of representative data and the optimal model structure. Copyright © 2012 Elsevier Ltd. All rights reserved.
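
    As a loose illustration of this kind of compression (using a standard Gaussian mixture fit rather than the author's entropy-based cost function; the synthetic weekday/weekend profiles are assumptions):

      # Sketch: compress a year of 24-component daily flow vectors into a few
      # prototype days with associated probabilities.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      hours = np.arange(24)
      weekday = 500 + 300 * np.exp(-0.5 * ((hours - 8) / 2.0) ** 2) \
                    + 350 * np.exp(-0.5 * ((hours - 17) / 2.0) ** 2)
      weekend = 300 + 200 * np.exp(-0.5 * ((hours - 14) / 4.0) ** 2)
      days = np.array([weekday if d % 7 < 5 else weekend for d in range(365)])
      days = days + rng.normal(0, 20, days.shape)          # measurement noise

      gm = GaussianMixture(n_components=2, covariance_type='diag', random_state=0).fit(days)
      prototypes = gm.means_                                # representative daily profiles
      probabilities = gm.weights_                           # roughly 5/7 and 2/7 of the days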

  12. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
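
    A Monte Carlo sketch of the qualitative claim, under an assumed log-normal loss and a plug-in quantile estimate of the threshold (this is an illustration, not the paper's closed-form calculation):

      # Sketch: set the threshold at the estimated (1 - p) quantile from a finite
      # sample and check that the realized failure frequency exceeds the nominal p.
      import numpy as np
      from scipy.stats import norm

      def expected_failure_prob(p=0.01, n=30, trials=20000, mu=0.0, sigma=1.0, seed=0):
          rng = np.random.default_rng(seed)
          z = norm.ppf(1.0 - p)
          fail = np.empty(trials)
          for i in range(trials):
              sample = rng.normal(mu, sigma, n)             # log-losses
              m, s = sample.mean(), sample.std(ddof=1)
              threshold = m + z * s                          # estimated (1 - p) quantile
              fail[i] = norm.sf(threshold, mu, sigma)        # true exceedance probability
          return fail.mean()                                 # > p because of parameter error

      print(expected_failure_prob())   # typically noticeably above the nominal 0.01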

  13. Adaptation or recovery after health shocks? Evidence using subjective and objective health measures.

    PubMed

    Baji, Petra; Bíró, Anikó

    2018-05-01

    In this paper, we analyse the effect of an onset of a health shock on subjective survival probability and compare it with objective survival probability and self-reported health measures. In particular, we are interested in whether expectations of people respond to health shocks and whether these follow the evolution of objective life expectations and self-reported health measures over time. Using longitudinal data from the Health and Retirement Study, we estimate fixed effects models of adaptation for the objective and subjective survival probabilities and for some self-reported health measures. The results show that after cancer diagnosis, conditional on surviving, both the objective and subjective longevity and self-reported health measures drift back to the before diagnosis trajectories. For stroke and heart attack, in spite of their persistent negative effect on survival, subjective life expectations and self-reported health measures seem to indicate only a transient effect of the health shock. The differences between the objective and subjective measures are in line with the concept of adaptation. We discuss the policy implications of our results. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Probability distributions of continuous measurement results for conditioned quantum evolution

    NASA Astrophysics Data System (ADS)

    Franquet, A.; Nazarov, Yuli V.

    2017-02-01

    We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by the postselection in the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states demonstrating anomalously big average outputs and sudden jump in time-integrated output. We present and discuss the numerical evaluation of the probability distribution aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.

  15. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  16. Transition probabilities of Br II

    NASA Technical Reports Server (NTRS)

    Bengtson, R. D.; Miller, M. H.

    1976-01-01

    Absolute transition probabilities of the three most prominent visible Br II lines are measured in emission. Results compare well with Coulomb approximations and with line strengths extrapolated from trends in homologous atoms.

  17. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.

  18. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  19. Prospect evaluation as a function of numeracy and probability denominator.

    PubMed

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Measurement of 240Pu Angular Momentum Dependent Fission Probabilities Using the (α ,α') Reaction

    NASA Astrophysics Data System (ADS)

    Koglin, Johnathon; Burke, Jason; Fisher, Scott; Jovanovic, Igor

    2017-09-01

    The surrogate reaction method often lacks the theoretical framework and the experimental data needed to constrain models, especially when rectifying differences in angular momentum states between the desired and surrogate reactions. In this work, dual arrays of silicon telescope particle identification detectors and photovoltaic (solar) cell fission fragment detectors have been used to measure the fission probability of the 240Pu(α, α'f) reaction - a surrogate for 239Pu(n, f) - and fission fragment angular distributions. Fission probability measurements were performed at a beam energy of 35.9(2) MeV at eleven scattering angles from 40° to 140° in 10° intervals and at nuclear excitation energies up to 16 MeV. Fission fragment angular distributions were measured in six bins from 4.5 MeV to 8.0 MeV and fit to the expected distributions dependent on the vibrational and rotational excitations at the saddle point. In this way, the contributions to the total fission probability from specific states of K angular momentum projection on the symmetry axis are extracted. A sizable data collection is presented to be considered when constraining microscopic cross section calculations.

  1. A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2010-12-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.

  2. A quantile-based Time at Risk: A new approach for assessing risk in financial markets

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam; Raei, Reza

    2013-11-01

    In this paper, we provide a new measure for evaluating risk in financial markets, based on the return interval of critical events in financial markets or other investment situations. Our main goal was to devise a model analogous to Value at Risk (VaR). Just as VaR, for a given financial asset, probability level, and time horizon, gives a critical value such that the probability that the loss on the asset over the time horizon exceeds this value equals the given probability level, our concept of Time at Risk (TaR), using a probability distribution function of return intervals, provides a critical time such that the probability that the return interval of a critical event exceeds this time equals the given probability level. As an empirical application, we applied our model to data from the Tehran Stock Exchange Price Index (TEPIX) as a financial asset (market portfolio) and report the results.
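
    A hedged sketch of how such a TaR could be read off empirical data follows; the loss threshold, confidence level, and simulated returns are illustrative assumptions.

      # Sketch: collect return intervals between "critical events" (returns below a
      # loss threshold) and take the quantile exceeded with the chosen probability.
      import numpy as np

      def time_at_risk(returns, loss_threshold, alpha=0.05):
          """TaR: time t* with P(return interval of a critical event > t*) = alpha."""
          returns = np.asarray(returns)
          event_times = np.flatnonzero(returns <= loss_threshold)
          intervals = np.diff(event_times)                  # waiting times between critical events
          if intervals.size == 0:
              raise ValueError("no critical events in the sample")
          return np.quantile(intervals, 1.0 - alpha)        # empirical (1 - alpha) quantile

      # Example on simulated daily returns; threshold chosen as a 2% daily loss.
      rng = np.random.default_rng(42)
      r = rng.normal(0.0005, 0.01, 5000)
      print(time_at_risk(r, loss_threshold=-0.02, alpha=0.05))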

  3. The Reliability and Stability of an Inferred Phylogenetic Tree from Empirical Data.

    PubMed

    Katsura, Yukako; Stanley, Craig E; Kumar, Sudhir; Nei, Masatoshi

    2017-03-01

    The reliability of a phylogenetic tree obtained from empirical data is usually measured by the bootstrap probability (Pb) of the interior branches of the tree. If the bootstrap probability is high for most branches, the tree is considered to be reliable. If some interior branches show relatively low bootstrap probabilities, we are not sure that the inferred tree is really reliable. Here, we propose another quantity measuring the reliability of the tree, called the stability of a subtree. This quantity, Ps, is the probability of obtaining a given subtree of the inferred tree. We then show that if the tree is to be reliable, both Pb and Ps must be high. We also show that Ps is given by the bootstrap probability of the subtree with the closest outgroup sequence, and a computer program, RESTA, for computing the Pb and Ps values is presented. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  4. Probability elicitation to inform early health economic evaluations of new medical technologies: a case study in heart failure disease management.

    PubMed

    Cao, Qi; Postmus, Douwe; Hillege, Hans L; Buskens, Erik

    2013-06-01

    Early estimates of the commercial headroom available to a new medical device can assist producers of health technology in making appropriate product investment decisions. The purpose of this study was to illustrate how this quantity can be captured probabilistically by combining probability elicitation with early health economic modeling. The technology considered was a novel point-of-care testing device in heart failure disease management. First, we developed a continuous-time Markov model to represent the patients' disease progression under the current care setting. Next, we identified the model parameters that are likely to change after the introduction of the new device and interviewed three cardiologists to capture the probability distributions of these parameters. Finally, we obtained the probability distribution of the commercial headroom available per measurement by propagating the uncertainty in the model inputs to uncertainty in modeled outcomes. For a willingness-to-pay value of €10,000 per life-year, the median headroom available per measurement was €1.64 (interquartile range €0.05-€3.16) when the measurement frequency was assumed to be daily. In the subsequently conducted sensitivity analysis, this median value increased to a maximum of €57.70 for different combinations of the willingness-to-pay threshold and the measurement frequency. Probability elicitation can successfully be combined with early health economic modeling to obtain the probability distribution of the headroom available to a new medical technology. Subsequently feeding this distribution into a product investment evaluation method enables stakeholders to make more informed decisions regarding to which markets a currently available product prototype should be targeted. Copyright © 2013. Published by Elsevier Inc.

  5. Computer-aided mathematical analysis of probability of intercept for ground-based communication intercept system

    NASA Astrophysics Data System (ADS)

    Park, Sang Chul

    1989-09-01

    We develop a mathematical analysis model to calculate the probability of intercept (POI) for a ground-based communication intercept (COMINT) system. The POI is a measure of the effectiveness of the intercept system. We define the POI as the product of the probability of detection and the probability of coincidence. The probability of detection is a measure of the receiver's capability to detect a signal in the presence of noise. The probability of coincidence is the probability that an intercept system is available, actively listening in the proper frequency band, in the right direction, and at the same time that the signal is received. We investigate the behavior of the POI with respect to the observation time, the separation distance, antenna elevations, the frequency of the signal, and the receiver bandwidths. We observe that the coincidence characteristic between the receiver scanning parameters and the signal parameters is the key factor determining the time needed to obtain a given POI. This model can be used to find the optimal parameter combination that maximizes the POI in a given scenario. We also extend the model to multiple systems. The analysis is implemented on a personal computer to provide portability, and the model is flexible enough to be applied under different situations.
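
    The sketch below only illustrates the product structure POI = P(detection) × P(coincidence) with a simplified scanning-receiver model; every parameter and the Gaussian detection model are assumptions, not the report's equations.

      # Sketch: simplified POI = P(detect) * P(coincide) for a scanning receiver.
      import math

      def probability_of_intercept(snr_db, required_snr_db, sigma_db,
                                   dwell, scan_period, signal_duration,
                                   beamwidth_deg, observation_time):
          # Detection: probability that received SNR exceeds the threshold,
          # assuming a Gaussian spread (sigma_db) around the nominal link budget.
          z = (snr_db - required_snr_db) / sigma_db
          p_detect = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

          # Coincidence per scan: right frequency step while the signal is up,
          # and looking in the right direction.
          p_freq = min(1.0, (dwell + signal_duration) / scan_period)
          p_direction = beamwidth_deg / 360.0
          p_per_scan = p_freq * p_direction

          # At least one coincidence over the observation time.
          n_scans = max(1, int(observation_time / scan_period))
          p_coincide = 1.0 - (1.0 - p_per_scan) ** n_scans
          return p_detect * p_coincide

      print(probability_of_intercept(snr_db=15, required_snr_db=12, sigma_db=3,
                                     dwell=0.05, scan_period=2.0, signal_duration=0.5,
                                     beamwidth_deg=30, observation_time=60))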

  6. Weak Measurement and Quantum Smoothing of a Superconducting Qubit

    NASA Astrophysics Data System (ADS)

    Tan, Dian

    In quantum mechanics, the measurement outcome of an observable in a quantum system is intrinsically random, yielding a probability distribution. The state of the quantum system can be described by a density matrix rho(t), which depends on the information accumulated until time t, and represents our knowledge about the system. The density matrix rho(t) gives probabilities for the outcomes of measurements at time t. Further probing of the quantum system allows us to refine our prediction in hindsight. In this thesis, we experimentally examine a quantum smoothing theory in a superconducting qubit by introducing an auxiliary matrix E(t) which is conditioned on information obtained from time t to a final time T. With the complete information before and after time t, the pair of matrices [rho(t), E(t)] can be used to make smoothed predictions for the measurement outcome at time t. We apply the quantum smoothing theory in the case of continuous weak measurement unveiling the retrodicted quantum trajectories and weak values. In the case of strong projective measurement, while the density matrix rho(t) with only diagonal elements in a given basis |n〉 may be treated as a classical mixture, we demonstrate a failure of this classical mixture description in determining the smoothed probabilities for the measurement outcome at time t with both diagonal rho(t) and diagonal E(t). We study the correlations between quantum states and weak measurement signals and examine aspects of the time symmetry of continuous quantum measurement. We also extend our study of quantum smoothing theory to the case of resonance fluorescence of a superconducting qubit with homodyne measurement and observe some interesting effects such as the modification of the excited state probabilities, weak values, and evolution of the predicted and retrodicted trajectories.

  7. Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, 
G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2009-04-17

    We report a measurement of the top-quark mass $M_t$ in the dilepton decay channel $t\bar{t} \to b\ell'^{+}\nu_{\ell'}\,\bar{b}\ell^{-}\bar{\nu}_{\ell}$. Events are selected with a neural network which has been directly optimized for statistical precision in the top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading-order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb$^{-1}$ of $p\bar{p}$ collisions collected with the CDF II detector, yielding a measurement of $M_t = 171.2 \pm 2.7(\mathrm{stat}) \pm 2.9(\mathrm{syst})$ GeV/$c^2$.
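
    The joint-likelihood construction described above can be illustrated with a toy calculation. The sketch below is not the CDF analysis: it replaces the matrix-element and detector-resolution convolutions with hypothetical Gaussian per-event densities and simulated event estimates, and only shows how per-event densities multiply into a joint likelihood whose negative-log minimum gives the mass estimate and an approximate statistical uncertainty.

        # Toy illustration (not the CDF analysis): combine per-event probability
        # densities p_i(M_t) into a joint likelihood and extract the mass at the
        # maximum. Gaussian per-event densities stand in for the matrix-element
        # convolutions used in the paper.
        import numpy as np

        rng = np.random.default_rng(0)
        true_mass = 171.2                 # GeV/c^2, central value quoted in the abstract
        per_event_width = 14.0            # hypothetical per-event resolution (GeV/c^2)
        n_events = 344                    # number of candidate events in the abstract

        # Simulated "measured" mass for each candidate event.
        event_estimates = rng.normal(true_mass, per_event_width, size=n_events)

        mass_grid = np.linspace(150.0, 190.0, 801)   # scan over hypothesized top-quark mass

        def per_event_density(m_t, x):
            """Hypothetical per-event probability density p(x | M_t)."""
            return np.exp(-0.5 * ((x - m_t) / per_event_width) ** 2) / (
                per_event_width * np.sqrt(2.0 * np.pi))

        # Joint negative log-likelihood: sum of -log p_i over events, on the grid.
        nll = np.array([-np.sum(np.log(per_event_density(m, event_estimates)))
                        for m in mass_grid])
        best = mass_grid[np.argmin(nll)]

        # Approximate 1-sigma statistical uncertainty from the Delta(-log L) = 0.5 interval.
        inside = mass_grid[nll <= nll.min() + 0.5]
        print(f"M_t estimate: {best:.1f} GeV/c^2, ~1 sigma: +/- {(inside[-1] - inside[0]) / 2:.1f}")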

  8. Measures for a multidimensional multiverse

    NASA Astrophysics Data System (ADS)

    Chung, Hyeyoun

    2015-04-01

    We explore the phenomenological implications of generalizing the causal patch and fat geodesic measures to a multidimensional multiverse, where the vacua can have differing numbers of large dimensions. We consider a simple model in which the vacua are nucleated from a D-dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the time scales for the existence of observers, for vacuum domination, and for curvature domination (t_obs, t_Λ, and t_c, respectively). In this work we restrict ourselves to bubbles with positive cosmological constant, Λ. We find that in the case of the causal patch cutoff, when the bubble universes have p + 1 large spatial dimensions with p ≥ 2, the shape of the probability distribution is such that we obtain the coincidence of time scales t_obs ∼ t_Λ ∼ t_c. Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case p = 2, compared to p ≥ 3. In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all p ≥ 2, and we once again obtain the coincidence t_obs ∼ t_Λ ∼ t_c. These results require only very mild conditions on the prior probability of the distribution of vacua in the landscape. Our work shows that the observed double coincidence of time scales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular Universe being (3+1)-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.

  9. Influence of the random walk finite step on the first-passage probability

    NASA Astrophysics Data System (ADS)

    Klimenkova, Olga; Menshutin, Anton; Shchur, Lev

    2018-01-01

    A well-known connection between the first-passage probability of a random walk and the distribution of electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length and measure the first-passage probability to touch an absorbing sphere of radius R in 2D. We found a regular deviation of the first-passage probability from the exact function, which we attribute to the finiteness of the random walk step.
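
    A minimal Monte Carlo sketch of this setup is given below, under assumptions of my own (the walker starts at a fixed interior point and step directions are uniform). It estimates where a fixed-step 2D walk first touches an absorbing circle of radius R and compares the hitting distribution with the exact harmonic measure (Poisson kernel) for Brownian motion; the residual difference is the kind of finite-step deviation the abstract refers to.

        # Monte Carlo sketch: first-passage (hitting) distribution of a fixed-step
        # 2D random walk on an absorbing circle of radius R, compared with the
        # exact Poisson-kernel harmonic measure for Brownian motion. All
        # parameters below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        R = 1.0                          # radius of the absorbing circle
        step = 0.05                      # fixed step length of the walk
        start = np.array([0.5, 0.0])     # assumed starting point inside the circle
        n_walkers = 5000

        hit_angles = np.empty(n_walkers)
        for i in range(n_walkers):
            pos = start.copy()
            while np.hypot(pos[0], pos[1]) < R:   # absorbed once the circle is touched
                phi = rng.uniform(0.0, 2.0 * np.pi)
                pos += step * np.array([np.cos(phi), np.sin(phi)])
            hit_angles[i] = np.arctan2(pos[1], pos[0])

        # Empirical first-passage density versus the exact Poisson kernel.
        theta = np.linspace(-np.pi, np.pi, 181)
        r0 = np.hypot(*start)
        poisson = (R**2 - r0**2) / (2.0 * np.pi * (R**2 - 2.0 * R * r0 * np.cos(theta) + r0**2))
        hist, edges = np.histogram(hit_angles, bins=60, range=(-np.pi, np.pi), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        print("max |empirical - exact| over bins:",
              np.max(np.abs(hist - np.interp(centers, theta, poisson))))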

  10. Tree attenuation at 20 GHz: Foliage effects

    NASA Technical Reports Server (NTRS)

    Vogel, Wolfhard J.; Goldhirsh, Julius

    1993-01-01

    Static tree attenuation measurements at 20 GHz (K-Band) on a 30 deg slant path through a mature Pecan tree with and without leaves showed median fades exceeding approximately 23 dB and 7 dB, respectively. The corresponding 1% probability fades were 43 dB and 25 dB. Previous 1.6 GHz (L-Band) measurements for the bare tree case showed fades larger than those at K-Band by 3.4 dB for the median and smaller by approximately 7 dB at the 1% probability. While the presence of foliage had only a small effect on fading at L-Band (approximately 1 dB additional for the median to 1% probability range), the attenuation increase was significant at K-Band, where it increased by about 17 dB over the same probability range.

  11. Tree attenuation at 20 GHz: Foliage effects

    NASA Astrophysics Data System (ADS)

    Vogel, Wolfhard J.; Goldhirsh, Julius

    1993-08-01

    Static tree attenuation measurements at 20 GHz (K-Band) on a 30 deg slant path through a mature Pecan tree with and without leaves showed median fades exceeding approximately 23 dB and 7 dB, respectively. The corresponding 1% probability fades were 43 dB and 25 dB. Previous 1.6 GHz (L-Band) measurements for the bare tree case showed fades larger than those at K-Band by 3.4 dB for the median and smaller by approximately 7 dB at the 1% probability. While the presence of foliage had only a small effect on fading at L-Band (approximately 1 dB additional for the median to 1% probability range), the attenuation increase was significant at K-Band, where it increased by about 17 dB over the same probability range.

  12. Application of Bayes' theorem for pulse shape discrimination

    NASA Astrophysics Data System (ADS)

    Monterial, Mateusz; Marleau, Peter; Clarke, Shaun; Pozzi, Sara

    2015-09-01

    A Bayesian approach is proposed for pulse shape discrimination of photons and neutrons in liquid organic scintillators. Instead of drawing a decision boundary, each pulse is assigned a photon or neutron confidence probability. This allows for photon and neutron classification on an event-by-event basis. The sum of those confidence probabilities is used to estimate the number of photon and neutron instances in the data. An iterative scheme, similar to an expectation-maximization algorithm for Gaussian mixtures, is used to infer the ratio of photons-to-neutrons in each measurement. Therefore, the probability space adapts to data with varying photon-to-neutron ratios. A time-correlated measurement of Am-Be and separate measurements of 137Cs, 60Co and 232Th photon sources were used to construct libraries of neutrons and photons. These libraries were then used to produce synthetic data sets with varying ratios of photons-to-neutrons. The probability-weighted method that we implemented was found to maintain a neutron acceptance rate of up to 90% for photon-to-neutron ratios of up to 2000, and performed 9% better than the decision boundary approach. Furthermore, the iterative approach appropriately changed the probability space with an increasing number of photons, which kept the neutron population estimate from unrealistically increasing.
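
    The confidence-probability assignment and the iterative ratio update can be sketched as follows. This is a minimal toy, not the authors' implementation: the pulse-shape feature is assumed to follow Gaussian class-conditional distributions learned from the photon and neutron libraries, and the photon-to-neutron mixing ratio is re-estimated with an EM-style update.

        # Minimal sketch under assumed Gaussian class-conditional densities for a
        # pulse-shape feature (e.g. a tail-to-total ratio): each pulse gets a
        # neutron confidence probability via Bayes' theorem, and the mixing ratio
        # is re-estimated iteratively (an EM-style update for a two-component mixture).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)

        # Hypothetical feature distributions built from "library" measurements.
        photon_lib = dict(mean=0.10, std=0.02)    # e.g. from 137Cs/60Co/232Th data
        neutron_lib = dict(mean=0.20, std=0.03)   # e.g. from tagged Am-Be data

        # Synthetic data set with a large photon-to-neutron ratio.
        n_photons, n_neutrons = 10000, 50
        x = np.concatenate([rng.normal(photon_lib["mean"], photon_lib["std"], n_photons),
                            rng.normal(neutron_lib["mean"], neutron_lib["std"], n_neutrons)])

        pi_n = 0.5                                # initial guess for the neutron fraction
        for _ in range(50):                       # iterate until the fraction stabilizes
            l_n = norm.pdf(x, neutron_lib["mean"], neutron_lib["std"])
            l_p = norm.pdf(x, photon_lib["mean"], photon_lib["std"])
            post_n = pi_n * l_n / (pi_n * l_n + (1.0 - pi_n) * l_p)  # P(neutron | pulse)
            pi_n = post_n.mean()                  # EM-style update of the mixing ratio

        print(f"estimated neutron fraction: {pi_n:.4f} "
              f"(true: {n_neutrons / (n_photons + n_neutrons):.4f})")
        print(f"estimated neutron count: {post_n.sum():.1f}")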

  13. Application of Bayes' theorem for pulse shape discrimination

    DOE PAGES

    Marleau, Peter; Monterial, Mateusz; Clarke, Shaun; ...

    2015-06-14

    A Bayesian approach is proposed for pulse shape discrimination of photons and neutrons in liquid organic scintillators. Instead of drawing a decision boundary, each pulse is assigned a photon or neutron confidence probability. In addition, this allows for photon and neutron classification on an event-by-event basis. The sum of those confidence probabilities is used to estimate the number of photon and neutron instances in the data. An iterative scheme, similar to an expectation-maximization algorithm for Gaussian mixtures, is used to infer the ratio of photons-to-neutrons in each measurement. Therefore, the probability space adapts to data with varying photon-to-neutron ratios. A time-correlated measurement of Am–Be and separate measurements of 137Cs, 60Co and 232Th photon sources were used to construct libraries of neutrons and photons. These libraries were then used to produce synthetic data sets with varying ratios of photons-to-neutrons. The probability-weighted method that we implemented was found to maintain a neutron acceptance rate of up to 90% for photon-to-neutron ratios of up to 2000, and performed 9% better than the decision boundary approach. Furthermore, the iterative approach appropriately changed the probability space with an increasing number of photons, which kept the neutron population estimate from unrealistically increasing.

  14. Invariance of separability probability over reduced states in 4 × 4 bipartite systems

    NASA Astrophysics Data System (ADS)

    Lovas, Attila; Andai, Attila

    2017-07-01

    The geometric separability probability of composite quantum systems has been extensively studied in recent decades. One of the simplest but most strikingly difficult problems is to compute the separability probability of qubit-qubit and rebit-rebit quantum states with respect to the Hilbert-Schmidt measure. Numerous numerical simulations confirm the conjectured probabilities $P_{\text{rebit-rebit}}=\frac{29}{64}$ and $P_{\text{qubit-qubit}}=\frac{8}{33}$. We provide a rigorous proof for the separability probability in the real case and we give explicit integral formulas for the complex and quaternionic cases. Milz and Strunz studied the separability probability with respect to given subsystems. They conjectured that the separability probability of qubit-qubit (and qubit-qutrit) states of the form $\left(\begin{array}{cc} D_1 & C \\ C^{*} & D_2 \end{array}\right)$ depends on $D=D_1+D_2$ (on single-qubit subsystems); moreover, it depends only on the Bloch radius (r) of D and is constant in r. Using the Peres-Horodecki criterion for separability, we give a mathematical proof for the $\frac{29}{64}$ probability and we present an integral formula for the complex case which hopefully will help to prove the $\frac{8}{33}$ probability, too. We prove Milz and Strunz's conjecture for rebit-rebit and qubit-qubit states. The case when the state space is endowed with the volume form generated by the operator monotone function $f(x)=\sqrt{x}$ is also studied in detail. We show that even in this setting Milz and Strunz's conjecture holds true and we give an integral formula for the separability probability according to this measure.

  15. Prospective risk factors for new-onset post-traumatic stress disorder in National Guard soldiers deployed to Iraq.

    PubMed

    Polusny, M A; Erbes, C R; Murdoch, M; Arbisi, P A; Thuras, P; Rath, M B

    2011-04-01

    National Guard troops are at increased risk for post-traumatic stress disorder (PTSD); however, little is known about risk and resilience in this population. The Readiness and Resilience in National Guard Soldiers Study is a prospective, longitudinal investigation of 522 Army National Guard troops deployed to Iraq from March 2006 to July 2007. Participants completed measures of PTSD symptoms and potential risk/protective factors 1 month before deployment. Of these, 81% (n=424) completed measures of PTSD, deployment stressor exposure and post-deployment outcomes 2-3 months after returning from Iraq. New onset of probable PTSD 'diagnosis' was measured by the PTSD Checklist - Military (PCL-M). Independent predictors of new-onset probable PTSD were identified using hierarchical logistic regression analyses. At baseline prior to deployment, 3.7% had probable PTSD. Among soldiers without PTSD symptoms at baseline, 13.8% reported post-deployment new-onset probable PTSD. Hierarchical logistic regression adjusted for gender, age, race/ethnicity and military rank showed that reporting more stressors prior to deployment predicted new-onset probable PTSD [odds ratio (OR) 2.20] as did feeling less prepared for deployment (OR 0.58). After accounting for pre-deployment factors, new-onset probable PTSD was predicted by exposure to combat (OR 2.19) and to combat's aftermath (OR 1.62). Reporting more stressful life events after deployment (OR 1.96) was associated with increased odds of new-onset probable PTSD, while post-deployment social support (OR 0.31) was a significant protective factor in the etiology of PTSD. Combat exposure may be unavoidable in military service members, but other vulnerability and protective factors also predict PTSD and could be targets for prevention strategies.

  16. Probability shapes perceptual precision: A study in orientation estimation.

    PubMed

    Jabar, Syaheed B; Anderson, Britt

    2015-12-01

    Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
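
    As a small illustration of the kind of summary statistic described above (the data and parameters here are my own simulated stand-ins, not the study's), excess kurtosis can be used to index how far an error distribution departs from a Gaussian shape:

        # Simulated orientation-estimation errors (degrees) for high- versus
        # low-probability tilts, summarized by excess kurtosis. The distributional
        # choices are assumptions made only for this illustration.
        import numpy as np
        from scipy.stats import kurtosis, laplace, norm

        rng = np.random.default_rng(4)

        errors_high = laplace.rvs(scale=3.0, size=5000, random_state=rng)  # peaked, heavy-tailed
        errors_low = norm.rvs(scale=8.0, size=5000, random_state=rng)      # Gaussian

        for name, err in [("high-probability", errors_high), ("low-probability", errors_low)]:
            print(f"{name:17s}  sd = {err.std():5.2f} deg   "
                  f"excess kurtosis = {kurtosis(err):5.2f}")  # 0 for a Gaussian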

  17. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  18. Kolmogorov complexity, statistical regularization of inverse problems, and Birkhoff's formalization of beauty

    NASA Astrophysics Data System (ADS)

    Kreinovich, Vladik; Longpre, Luc; Koshelev, Misha

    1998-09-01

    Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting technique turns out to be interestingly related to the qualitative esthetic measure introduced by G. Birkhoff as order/complexity.

  19. Calibrating perceived understanding and competency in probability concepts: A diagnosis of learning difficulties based on Rasch probabilistic model

    NASA Astrophysics Data System (ADS)

    Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md

    2015-12-01

    Students' understanding of probability concepts has been investigated from various different perspectives. Competency, on the other hand, is often measured separately in the form of a test structure. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of the students perceived little understanding about conditional and independent events prior to learning about them but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is an indication of some increase in learning and knowledge about some probability concepts at the end of the two weeks of lessons on probability concepts.

  20. Going through a quantum phase

    NASA Technical Reports Server (NTRS)

    Shapiro, Jeffrey H.

    1992-01-01

    Phase measurements on a single-mode radiation field are examined from a system-theoretic viewpoint. Quantum estimation theory is used to establish the primacy of the Susskind-Glogower (SG) phase operator; its phase eigenkets generate the probability operator measure (POM) for maximum likelihood phase estimation. A commuting observables description for the SG-POM on a signal × apparatus state space is derived. It is analogous to the signal-band × image-band formulation for optical heterodyne detection. Because heterodyning realizes the annihilation operator POM, this analogy may help realize the SG-POM. The wave function representation associated with the SG POM is then used to prove the duality between the phase measurement and the number operator measurement, from which a number-phase uncertainty principle is obtained, via Fourier theory, without recourse to linearization. Fourier theory is also employed to establish the principle of number-ket causality, leading to a Paley-Wiener condition that must be satisfied by the phase-measurement probability density function (PDF) for a single-mode field in an arbitrary quantum state. Finally, a two-mode phase measurement is shown to afford phase-conjugate quantum communication at zero error probability with finite average photon number. Application of this construct to interferometric precision measurements is briefly discussed.

  1. A Web-based interface to calculate phonotactic probability for words and nonwords in English

    PubMed Central

    VITEVITCH, MICHAEL S.; LUCE, PAUL A.

    2008-01-01

    Phonotactic probability refers to the frequency with which phonological segments and sequences of phonological segments occur in words in a given language. We describe one method of estimating phonotactic probabilities based on words in American English. These estimates of phonotactic probability have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface. Instructions for using the interface, as well as details regarding how the measures were derived, are provided in the present article. The Phonotactic Probability Calculator can be accessed at http://www.people.ku.edu/~mvitevit/PhonoProbHome.html. PMID:15641436
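
    A much-simplified sketch of the two measures is shown below, using plain type counts over a tiny toy lexicon; the actual calculator is built from a large American English dictionary and uses log-frequency-weighted counts, so the numbers here only illustrate the shape of the computation.

        # Simplified phonotactic-probability sketch: position-specific segment
        # probabilities and biphone probabilities from a toy lexicon (type counts
        # only; the real calculator uses frequency-weighted counts).
        from collections import Counter

        # Toy lexicon: each word is a tuple of phoneme symbols (assumed transcription).
        lexicon = [("k", "ae", "t"), ("k", "ae", "p"), ("b", "ae", "t"),
                   ("t", "ae", "p"), ("k", "ih", "t")]

        max_len = max(len(w) for w in lexicon)
        pos_counts = [Counter() for _ in range(max_len)]
        biphone_counts = [Counter() for _ in range(max_len - 1)]
        for word in lexicon:
            for i, seg in enumerate(word):
                pos_counts[i][seg] += 1
            for i in range(len(word) - 1):
                biphone_counts[i][word[i:i + 2]] += 1

        def positional_segment_prob(word):
            """Average probability of each segment occurring in its position."""
            probs = [pos_counts[i][seg] / sum(pos_counts[i].values())
                     for i, seg in enumerate(word)]
            return sum(probs) / len(probs)

        def biphone_prob(word):
            """Average probability of each adjacent segment pair in its positions."""
            probs = [biphone_counts[i][word[i:i + 2]] / sum(biphone_counts[i].values())
                     for i in range(len(word) - 1)]
            return sum(probs) / len(probs) if len(word) > 1 else 0.0

        nonword = ("k", "ae", "p")
        print("positional segment probability:", round(positional_segment_prob(nonword), 3))
        print("biphone probability:", round(biphone_prob(nonword), 3))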

  2. Public attitudes toward stuttering in Turkey: probability versus convenience sampling.

    PubMed

    Ozdemir, R Sertan; St Louis, Kenneth O; Topbaş, Seyhun

    2011-12-01

    A Turkish translation of the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S) was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. A convenience sample of adults in Eskişehir, Turkey was compared with two replicates of a school-based, probability cluster sampling scheme. The two replicates of the probability sampling scheme yielded similar demographic samples, both of which were different from the convenience sample. Components of subscores on the POSHA-S were significantly different in more than half of the comparisons between convenience and probability samples, indicating important differences in public attitudes. If POSHA-S users intend to generalize to specific geographic areas, results of this study indicate that probability sampling is a better research strategy than convenience sampling. The reader will be able to: (1) discuss the difference between convenience sampling and probability sampling; (2) describe a school-based probability sampling scheme; and (3) describe differences in POSHA-S results from convenience sampling versus probability sampling. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. On Probability Domains IV

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2017-12-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one — a quantum phenomenon — and, dually, an observable can map a crisp random event to a genuine fuzzy random event — a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  4. Exploiting target amplitude information to improve multi-target tracking

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Blair, W. Dale

    2006-05-01

    Closely-spaced (but resolved) targets pose a challenge for measurement-to-track data association algorithms. Since the Mahalanobis distances between measurements collected on closely-spaced targets and tracks are similar, several elements of the corresponding kinematic measurement-to-track cost matrix are also similar. Lacking any other information on which to base assignments, it is not surprising that data association algorithms make mistakes. One ad hoc approach for mitigating this problem is to multiply the kinematic measurement-to-track likelihoods by amplitude likelihoods. However, this can actually be detrimental to the measurement-to-track association process. With that in mind, this paper pursues a rigorous treatment of the hypothesis probabilities for kinematic measurements and features. Three simple scenarios are used to demonstrate the impact of basing data association decisions on these hypothesis probabilities for Rayleigh, fixed-amplitude, and Rician targets. The first scenario assumes that the tracker carries two tracks but only one measurement is collected. This provides insight into more complex scenarios in which there are fewer measurements than tracks. The second scenario includes two measurements and one track. This extends naturally to the case with more measurements than tracks. Two measurements and two tracks are present in the third scenario, which provides insight into the performance of this method when the number of measurements equals the number of tracks. In all cases, basing data association decisions on the hypothesis probabilities leads to good results.
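
    The generic setup can be sketched as follows: a measurement-to-track cost matrix built from kinematic (Mahalanobis) log-likelihoods, augmented here with a Rayleigh-target amplitude log-likelihood ratio, and solved as an optimal assignment. All numbers are assumptions for illustration; this is the simple combined-likelihood baseline and does not reproduce the paper's rigorous treatment of the hypothesis probabilities.

        # Toy two-measurement, two-track association with a combined kinematic and
        # amplitude log-likelihood cost matrix (illustrative values only).
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Two closely spaced tracks (predicted positions) with identical covariance.
        track_pred = np.array([[0.0, 0.0], [1.0, 0.0]])
        S = np.eye(2) * 1.0                       # innovation covariance (assumed)

        # Two measurements (position, amplitude) near the two tracks.
        meas_pos = np.array([[0.4, 0.1], [0.7, -0.1]])
        meas_amp = np.array([3.0, 1.2])           # measured amplitudes (SNR-like units)

        snr = np.array([8.0, 8.0])                # assumed mean SNR of each track

        def kinematic_loglik(z, x_pred):
            d = z - x_pred
            m2 = d @ np.linalg.solve(S, d)        # squared Mahalanobis distance
            return -0.5 * m2 - 0.5 * np.log((2 * np.pi) ** 2 * np.linalg.det(S))

        def amplitude_llr(a, snr_t):
            # Rayleigh target-plus-noise vs noise-only amplitude log-likelihood ratio.
            return np.log(1.0 / (1.0 + snr_t)) + (a ** 2 / 2.0) * (snr_t / (1.0 + snr_t))

        cost = np.zeros((2, 2))
        for i, z in enumerate(meas_pos):
            for j, x in enumerate(track_pred):
                cost[i, j] = -(kinematic_loglik(z, x) + amplitude_llr(meas_amp[i], snr[j]))

        rows, cols = linear_sum_assignment(cost)   # minimize total negative log-likelihood
        print("measurement -> track assignment:", dict(zip(rows.tolist(), cols.tolist())))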

  5. Absolute Transition Probabilities of Lines in the Spectra of Astrophysical Atoms, Molecules, and Ions

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Smith, P. L.; Yoshino, K.

    1984-01-01

    Progress in the investigation of absolute transition probabilities (A-values or F values) for ultraviolet lines is reported. A radio frequency ion trap was used for measurement of transition probabilities for intersystem lines seen in astronomical spectra. The intersystem line at 2670 A in Al II, which is seen in pre-main sequence stars and symbiotic stars, was studied.

  6. Lexicographic Probability, Conditional Probability, and Nonstandard Probability

    DTIC Science & Technology

    2009-11-11

    1994; Hammond 1999; Kohlberg and Reny 1997; Kreps and Wilson 1982; Myerson 1986; Selten 1965; Selten 1975]). It also arises in the analysis of...sets of measure 0): BBD considered three; Kohlberg and Reny [1997] considered two others. It turns out that these notions are perhaps best understood...number of characterizations of solution concepts depend on independence (see, for example, [Battigalli 1996; Kohlberg and Reny 1997; Battigalli and

  7. Coherent nature of the radiation emitted in delayed luminescence of leaves

    PubMed

    Bajpai

    1999-06-07

    After exposure to light, a living system emits a photon signal of characteristic shape. The signal has a small decay region and a long tail region. The flux of photons in the decay region changes by 2 to 3 orders of magnitude, but remains almost constant in the tail region. The decaying part is attributed to delayed luminescence and the constant part to ultra-weak luminescence. Biophoton emission is the common name given to both kinds of luminescence, and photons emitted are called biophotons. The decay character of the biophoton signal is not exponential, which is suggestive of a coherent signal. We sought to establish the coherent nature by measuring the conditional probability of zero photon detection in a small interval Delta. Our measurements establish the coherent nature of biophotons emitted by different leaves at various temperatures in the range 15-50 degrees C. Our set up could measure the conditional probability for Delta

  8. Probabilistic safety analysis of earth retaining structures during earthquakes

    NASA Astrophysics Data System (ADS)

    Grivas, D. A.; Souflis, C.

    1982-07-01

    A procedure is presented for determining the probability of failure of earth retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. As earth retaining structures may fail in four distinct modes, a system analysis can provide a single estimate for the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure for the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred by a retaining wall during an earthquake to provide an improved estimate of its probability of failure during future seismic events.
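
    As a simple worked illustration of how the four modes could combine into a single system estimate (assuming, for this example only, that the four modes are statistically independent, which the paper's combinatorial analysis does not require):

        P_f \;=\; 1 - \prod_{i=1}^{4}\left(1 - p_i\right),
        \qquad p_i = P(\text{failure in mode } i).

    With p_i = 0.01 for each mode, this gives P_f ≈ 1 - 0.99^4 ≈ 0.039, roughly four times the probability suggested by any single mode considered in isolation.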

  9. Relations between the single-pass and double-pass transition probabilities in quantum systems with two and three states

    NASA Astrophysics Data System (ADS)

    Vitanov, Nikolay V.

    2018-05-01

    In the experimental determination of the population transfer efficiency between discrete states of a coherently driven quantum system it is often inconvenient to measure the population of the target state. Instead, after the interaction that transfers the population from the initial state to the target state, a second interaction is applied which brings the system back to the initial state, the population of which is easy to measure and normalize. If the transition probability is p in the forward process, then classical intuition suggests that the probability to return to the initial state after the backward process should be p2. However, this classical expectation is generally misleading because it neglects interference effects. This paper presents a rigorous theoretical analysis based on the SU(2) and SU(3) symmetries of the propagators describing the evolution of quantum systems with two and three states, resulting in explicit analytic formulas that link the two-step probabilities to the single-step ones. Explicit examples are given with the popular techniques of rapid adiabatic passage and stimulated Raman adiabatic passage. The present results suggest that quantum-mechanical probabilities degrade faster in repeated processes than classical probabilities. Therefore, the actual single-pass efficiencies in various experiments, calculated from double-pass probabilities, might have been greater than the reported values.

  10. Design of a High Intensity Turbulent Combustion System

    DTIC Science & Technology

    2015-05-01

    Front-matter excerpt (figure list and text fragments): Figure 2.3, velocity measurement on the nth repetition of a turbulent-flow experiment; u(t) = U + u'(t); the probability of an event such as P[U < N m/s]; the random variable U can be characterized by its probability density function (PDF).

  11. Statistical hydrodynamics and related problems in spaces of probability measures

    NASA Astrophysics Data System (ADS)

    Dostoglou, Stamatios

    2017-11-01

    A rigorous theory of statistical solutions of the Navier-Stokes equations, suitable for exploring Kolmogorov's ideas, has been developed by M.I. Vishik and A.V. Fursikov, culminating in their monograph "Mathematical problems of Statistical Hydromechanics." We review some progress made in recent years following this approach, with emphasis on problems concerning the correlation of velocities and corresponding questions in the space of probability measures on Hilbert spaces.

  12. Origin of probabilities and their application to the multiverse

    NASA Astrophysics Data System (ADS)

    Albrecht, Andreas; Phillips, Daniel

    2014-12-01

    We argue using simple models that all successful practical uses of probabilities originate in quantum fluctuations in the microscopic physical world around us, often propagated to macroscopic scales. Thus we claim there is no physically verified fully classical theory of probability. We comment on the general implications of this view, and specifically question the application of purely classical probabilities to cosmology in cases where key questions are known to have no quantum answer. We argue that the ideas developed here may offer a way out of the notorious measure problems of eternal inflation.

  13. Scale-Invariant Transition Probabilities in Free Word Association Trajectories

    PubMed Central

    Costa, Martin Elias; Bonomo, Flavia; Sigman, Mariano

    2009-01-01

    Free-word association has been used as a vehicle to understand the organization of human thoughts. The original studies relied mainly on qualitative assertions, yielding the widely intuitive notion that trajectories of word associations are structured, yet considerably more random than organized linguistic text. Here we set out to determine a precise characterization of this space, generating a large number of word association trajectories in a web-implemented game. We embedded the trajectories in the graph of word co-occurrences from a linguistic corpus. To constrain possible transport models we measured the memory loss and the cycling probability. These two measures could not be reconciled by a bounded diffusive model, since the cycling probability was very high (16% of order-2 cycles), implying a majority of short-range associations, whereas the memory loss was very rapid (converging to the asymptotic value in ∼7 steps), which, in turn, forced a high fraction of long-range associations. We show that memory loss and cycling probabilities of free word association trajectories can be simultaneously accounted for by a model in which transitions are determined by a scale-invariant probability distribution. PMID:19826622

  14. Assessing agreement with relative area under the coverage probability curve.

    PubMed

    Barnhart, Huiman X

    2016-08-15

    There has been substantial statistical literature in the last several decades on assessing agreement, and the coverage probability approach was selected as a preferred index for assessing and improving measurement agreement in a core laboratory setting. With this approach, a satisfactory agreement is based on a pre-specified, high satisfactory coverage probability (e.g., 95%), given one pre-specified acceptable difference. In practice, we may want to have quality control on more than one pre-specified difference, or we may simply want to summarize the agreement based on differences up to a maximum acceptable difference. We propose to assess agreement via the coverage probability curve, which provides a full spectrum of measurement error at various differences/disagreements. The relative area under the coverage probability curve is proposed as a summary of overall agreement, and this new summary index can be used for comparison of different intra-method or inter-method/lab/observer agreement. Simulation studies and a blood pressure example are used for illustration of the methodology. Copyright © 2016 John Wiley & Sons, Ltd.
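
    A minimal sketch of the coverage probability curve and its relative area is given below, using simulated paired measurements and an assumed maximum acceptable difference (both are my own stand-ins, not the paper's blood pressure data).

        # Coverage probability curve CP(d) = P(|difference| <= d) and its relative
        # area under the curve, computed from simulated paired measurements.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 200
        truth = rng.normal(120.0, 15.0, n)                 # e.g. true blood pressures
        method_a = truth + rng.normal(0.0, 4.0, n)          # method/lab A
        method_b = truth + 2.0 + rng.normal(0.0, 5.0, n)    # method/lab B with a small bias

        diff = np.abs(method_a - method_b)
        d_max = 15.0                                        # assumed maximum acceptable difference
        d_grid = np.linspace(0.0, d_max, 151)

        # Coverage probability curve.
        cp = np.array([(diff <= d).mean() for d in d_grid])

        # Relative area under the CP curve (1.0 would indicate perfect agreement).
        relative_auc = np.sum((cp[1:] + cp[:-1]) * np.diff(d_grid)) / 2.0 / d_max
        print(f"CP at d = 10: {np.interp(10.0, d_grid, cp):.3f}")
        print(f"relative area under the CP curve: {relative_auc:.3f}")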

  15. Wolf Attack Probability: A Theoretical Security Measure in Biometric Authentication Systems

    NASA Astrophysics Data System (ADS)

    Une, Masashi; Otsuka, Akira; Imai, Hideki

    This paper will propose a wolf attack probability (WAP) as a new measure for evaluating security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding “wolves” into the system to be attacked. The “wolf” means an input value which can be falsely accepted as a match with multiple templates. WAP is defined as a maximum success probability of the wolf attack with one wolf sample. In this paper, we give a rigorous definition of the new security measure which gives strength estimation of an individual biometric authentication system against impersonation attacks. We show that if one reestimates using our WAP measure, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern based algorithm. Surprisingly, we show that there exists an extremely strong wolf which falsely matches all templates for any threshold value.

  16. Equidistribution for Nonuniformly Expanding Dynamical Systems, and Application to the Almost Sure Invariance Principle

    NASA Astrophysics Data System (ADS)

    Korepanov, Alexey

    2017-12-01

    Let $T : M \to M$ be a nonuniformly expanding dynamical system, such as a logistic or intermittent map. Let $v : M \to \mathbb{R}^d$ be an observable and $v_n = \sum_{k=0}^{n-1} v \circ T^k$ denote the Birkhoff sums. Given a probability measure $\mu$ on M, we consider $v_n$ as a discrete-time random process on the probability space $(M, \mu)$. In smooth ergodic theory there are various natural choices of $\mu$, such as the Lebesgue measure, or the absolutely continuous T-invariant measure. They give rise to different random processes. We investigate the relation between such processes. We show that in a large class of measures, it is possible to couple (redefine on a new probability space) every two processes so that they are almost surely close to each other, with explicit estimates of "closeness". The purpose of this work is to close a gap in the proof of the almost sure invariance principle for nonuniformly hyperbolic transformations by Melbourne and Nicol.

  17. Why anthropic reasoning cannot predict Lambda.

    PubMed

    Starkman, Glenn D; Trotta, Roberto

    2006-11-17

    We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.

  18. Kβ to Kα X-ray intensity ratios and K to L shell vacancy transfer probabilities of Co, Ni, Cu, and Zn

    NASA Astrophysics Data System (ADS)

    Anand, L. F. M.; Gudennavar, S. B.; Bubbly, S. G.; Kerur, B. R.

    2015-12-01

    The K to L shell total vacancy transfer probabilities of low-Z elements Co, Ni, Cu, and Zn are estimated by measuring the Kβ to Kα intensity ratio adopting the 2π geometry. The target elements were excited by 32.86 keV barium K-shell X-rays from a weak 137Cs γ-ray source. The emitted K-shell X-rays were detected using a low-energy HPGe X-ray detector coupled to a 16k MCA. The measured intensity ratios and the total vacancy transfer probabilities are compared with theoretical results and others' work, establishing a good agreement.

  19. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
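
    The probability metric itself is straightforward to compute once posterior draws of the composite score are available. The sketch below uses simulated normal draws as stand-ins for the MCMC replications from the hierarchical model; for each facility it estimates the probability of falling in the top quintile.

        # Probability-in-top-quintile metric from (simulated) posterior draws of a
        # composite quality score; all numbers are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(6)
        n_facilities, n_draws = 112, 4000

        # Hypothetical posterior draws: each facility has its own mean score and
        # posterior uncertainty (in reality these come from the hierarchical model).
        true_means = rng.normal(0.0, 1.0, n_facilities)
        post_sd = rng.uniform(0.3, 0.8, n_facilities)
        draws = rng.normal(true_means, post_sd, size=(n_draws, n_facilities))

        # For every posterior draw, rank facilities and flag the top quintile.
        cutoff_rank = int(np.ceil(0.8 * n_facilities))      # 80th percentile rank
        ranks = draws.argsort(axis=1).argsort(axis=1)        # 0 = worst
        in_top_quintile = ranks >= cutoff_rank

        prob_top = in_top_quintile.mean(axis=0)               # the probability metric
        best = np.argsort(prob_top)[::-1][:5]
        print("five facilities with highest P(top quintile):")
        for f in best:
            print(f"  facility {f:3d}: P = {prob_top[f]:.3f}")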

  20. Astrometry in the globular cluster M13. II. Membership probabilities from old proper motions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cudworth, K.

    Astrometric cluster membership probabilities have been derived from proper motions measured by other authors for stars in the region of the globular cluster M13. Several stars of individual interest are discussed.

  1. Estimation of the POD function and the LOD of a qualitative microbiological measurement method.

    PubMed

    Wilrich, Cordula; Wilrich, Peter-Theodor

    2009-01-01

    Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
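
    A minimal sketch of fitting the complementary log-log POD model and reading off LOD_50% is given below; the contamination levels, replicate counts, and parameter values are illustrative assumptions, and the fit is done by direct maximum likelihood rather than any particular statistical package routine.

        # Fit POD(c) = 1 - exp(-exp(a + b*log(c))) to simulated detect/non-detect
        # data by maximum likelihood, then invert the model to obtain LOD_p.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)

        # Simulated experiment: n tubes at each contamination level (CFU/g), with
        # assumed "true" parameters a, b generating detect (1) / not-detect (0) results.
        levels = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
        n_per_level = 20
        a_true, b_true = 0.2, 1.0
        conc = np.repeat(levels, n_per_level)
        pod_true = 1.0 - np.exp(-np.exp(a_true + b_true * np.log(conc)))
        y = rng.binomial(1, pod_true)

        def neg_log_lik(params):
            a, b = params
            eta = a + b * np.log(conc)
            p = 1.0 - np.exp(-np.exp(eta))          # complementary log-log link
            p = np.clip(p, 1e-12, 1.0 - 1e-12)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1.0 - p))

        fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
        a_hat, b_hat = fit.x

        def lod(p):
            """Contamination detected with probability p under the fitted model."""
            return np.exp((np.log(-np.log(1.0 - p)) - a_hat) / b_hat)

        print(f"fitted a = {a_hat:.2f}, b = {b_hat:.2f}, LOD50% = {lod(0.5):.2f} CFU/g")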

  2. Volume-weighted measure for eternal inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winitzki, Sergei

    2008-08-15

    I propose a new volume-weighted probability measure for cosmological 'multiverse' scenarios involving eternal inflation. The 'reheating-volume (RV) cutoff' calculates the distribution of observable quantities on a portion of the reheating hypersurface that is conditioned to be finite. The RV measure is gauge-invariant, does not suffer from the 'youngness paradox', and is independent of initial conditions at the beginning of inflation. In slow-roll inflationary models with a scalar inflaton, the RV-regulated probability distributions can be obtained by solving nonlinear diffusion equations. I discuss possible applications of the new measure to 'landscape' scenarios with bubble nucleation. As an illustration, I compute the predictions of the RV measure in a simple toy landscape.

  3. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  4. Wide-field Fourier ptychographic microscopy using laser illumination source

    PubMed Central

    Chung, Jaebum; Lu, Hangwen; Ou, Xiaoze; Zhou, Haojiang; Yang, Changhuei

    2016-01-01

    Fourier ptychographic (FP) microscopy is a coherent imaging method that can synthesize an image with a higher bandwidth using multiple low-bandwidth images captured at different spatial frequency regions. The method’s demand for multiple images drives the need for a brighter illumination scheme and a high-frame-rate camera for a faster acquisition. We report the use of a guided laser beam as an illumination source for an FP microscope. It uses a mirror array and a 2-dimensional scanning Galvo mirror system to provide a sample with plane-wave illuminations at diverse incidence angles. The use of a laser presents speckles in the image capturing process due to reflections between glass surfaces in the system. They appear as slowly varying background fluctuations in the final reconstructed image. We are able to mitigate these artifacts by including a phase image obtained by differential phase contrast (DPC) deconvolution in the FP algorithm. We use a 1-Watt laser configured to provide a collimated beam with 150 mW of power and beam diameter of 1 cm to allow for the total capturing time of 0.96 seconds for 96 raw FPM input images in our system, with the camera sensor’s frame rate being the bottleneck for speed. We demonstrate a factor of 4 resolution improvement using a 0.1 NA objective lens over the full camera field-of-view of 2.7 mm by 1.5 mm. PMID:27896016

  5. Quantum probability assignment limited by relativistic causality.

    PubMed

    Han, Yeong Deok; Choi, Taeseung

    2016-03-14

    Quantum theory has nonlocal correlations, which bothered Einstein, but which were found to satisfy relativistic causality. Correlation for a shared quantum state manifests itself, in the standard quantum framework, by joint probability distributions that can be obtained by applying state reduction and a probability assignment rule called the Born rule. Quantum correlations, which show nonlocality when the shared state is entangled, can be changed if we apply a different probability assignment rule. As a result, the amount of nonlocality in the quantum correlation will be changed. The issue is whether a change of the rule of quantum probability assignment breaks relativistic causality. We have shown that the Born rule for quantum measurement is derived by requiring the relativistic causality condition. This shows how relativistic causality limits the upper bound of quantum nonlocality through quantum probability assignment.

  6. On the Possibility to Combine the Order Effect with Sequential Reproducibility for Quantum Measurements

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei

    2015-10-01

    In this paper we study the problem of the possibility to use quantum observables to describe a possible combination of the order effect with sequential reproducibility for quantum measurements. By the order effect we mean a dependence of probability distributions (of measurement results) on the order of measurements. We consider two types of sequential reproducibility: adjacent reproducibility (A-A) (the standard perfect repeatability) and separated reproducibility (A-B-A). The first one is reproducibility with probability 1 of a result of measurement of some observable A measured twice, one A measurement after the other. The second one, A-B-A, is reproducibility with probability 1 of a result of A measurement when another quantum observable B is measured between the two A's. Heuristically, it is clear that the second type of reproducibility is complementary to the order effect. We show that, surprisingly, this may not be the case. The order effect can coexist with separated reproducibility as well as adjacent reproducibility for both observables A and B. However, the additional constraint in the form of separated reproducibility of the B-A-B type makes this coexistence impossible. The problem under consideration was motivated by attempts to apply the quantum formalism outside of physics, especially in cognitive psychology and psychophysics. However, it is also important for foundations of quantum physics as a part of the problem about the structure of sequential quantum measurements.

  7. Quantum probabilistic logic programming

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2015-05-01

    We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature to capture relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM with classical probability measures defined on the Herbrand base and extending it to the quantum context. In the classical case, H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating the model generations and verifications via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples to combine statistical ensembles and predicates of first order logic to reason with situations involving uncertainty.

  8. BODY SENSING SYSTEM

    NASA Technical Reports Server (NTRS)

    Mah, Robert W. (Inventor)

    2005-01-01

    System and method for performing one or more relevant measurements at a target site in an animal body, using a probe. One or more of a group of selected internal measurements is performed at the target site, is optionally combined with one or more selected external measurements, and is optionally combined with one or more selected heuristic information items, in order to reduce to a relatively small number the probable medical conditions associated with the target site. One or more of the internal measurements is optionally used to navigate the probe to the target site. Neural net information processing is performed to provide a reduced set of probable medical conditions associated with the target site.

  9. A quantum measure of the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilenkin, Alexander, E-mail: vilenkin@cosmos.phy.tufts.edu

    2014-05-01

    It has been recently suggested that probabilities of different events in the multiverse are given by the frequencies at which these events are encountered along the worldline of a geodesic observer (the ''watcher''). Here I discuss an extension of this probability measure to quantum theory. The proposed extension is gauge-invariant, as is the classical version of this measure. Observations of the watcher are described by a reduced density matrix, and the frequencies of events can be found using the decoherent histories formalism of Quantum Mechanics (adapted to open systems). The quantum watcher measure makes predictions in agreement with the standard Born rule of QM.

  10. Probabilistic teleportation via multi-parameter measurements and partially entangled states

    NASA Astrophysics Data System (ADS)

    Wei, Jiahua; Shi, Lei; Han, Chen; Xu, Zhiyan; Zhu, Yu; Wang, Gang; Wu, Hao

    2018-04-01

    In this paper, a novel scheme for probabilistic teleportation is presented with multi-parameter measurements via a non-maximally entangled state. This contrasts with most previous schemes, in which the kinds of measurements used for quantum teleportation are usually fixed. The detailed implementation procedures for our proposal are given using appropriate local unitary operations. Moreover, the total success probability and classical information cost of this proposal are calculated. It is demonstrated that the success probability and classical cost change with the multi-measurement parameters and the entanglement factor of the quantum channel. Our scheme could enlarge the research scope of probabilistic teleportation.

  11. p-adic stochastic hidden variable model

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrew

    1998-03-01

    We propose a stochastic hidden variables model in which the hidden variables have a p-adic probability distribution ρ(λ) while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretical axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of relative frequencies ν_n, but in the p-adic metric. We study a model with p-adic stochastics on the level of the hidden-variables description. But, of course, responses of macroapparatuses have to be described by ordinary stochastics. Thus our model describes a mixture of p-adic stochastics of the microworld and ordinary stochastics of macroapparatuses. In this model, probabilities for physical observables are ordinary probabilities. At the same time, Bell's inequality is violated.

  12. Research on quantitative relationship between NIIRS and the probabilities of discrimination

    NASA Astrophysics Data System (ADS)

    Bai, Honggang

    2011-08-01

    There are a large number of electro-optical (EO) and infrared (IR) sensors used on military platforms including ground vehicles, low-altitude air vehicles, high-altitude air vehicles, and satellite systems. Ground vehicle and low-altitude air vehicle (rotary and fixed-wing aircraft) sensors typically use the probabilities of discrimination (detection, recognition, and identification) as design requirements and system performance indicators. High-altitude air vehicles and satellite sensors have traditionally used the National Imagery Interpretability Rating Scale (NIIRS) performance measures for guidance in design and measures of system performance. Recently, there has been a large effort to make strategic sensor information available to tactical forces, or to make target acquisition information usable by strategic systems. In this paper, the two techniques, the probabilities of discrimination and NIIRS, are presented separately for sensor design. For typical infrared remote sensor design parameters, the probability of recognition and the NIIRS scale are given as functions of the distance R for two targets of different size, the Standard NATO Target and the M1 Abrams, based on the algorithms for predicting field performance and NIIRS. For four targets of different size (the Standard NATO Target, M1 Abrams, F-15, and B-52), the conversions from NIIRS to the probabilities of discrimination are derived and calculated, and the similarities and differences between NIIRS and the probabilities of discrimination are analyzed based on the results of the calculation. Comparisons with preliminary calculation results show that conversion between NIIRS and the probabilities of discrimination is feasible, although more validation experiments are needed.

  13. Negative values of quasidistributions and quantum wave and number statistics

    NASA Astrophysics Data System (ADS)

    Peřina, J.; Křepelka, J.

    2018-04-01

    We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.

  14. Determination of photon emission probability for the main gamma ray and half-life measurements of 64Cu.

    PubMed

    Pibida, L; Zimmerman, B; Bergeron, D E; Fitzgerald, R; Cessna, J T; King, L

    2017-11-01

    The National Institute of Standards and Technology (NIST) performed new standardization measurements for 64Cu. As part of this work the photon emission probability for the main gamma-ray line and the half-life were determined using several high-purity germanium (HPGe) detectors. Half-life determinations were also carried out with a NaI(Tl) well counter and two pressurized ionization chambers. Published by Elsevier Ltd.

  15. Quantum fluctuation theorems and generalized measurements during the force protocol.

    PubMed

    Watanabe, Gentaro; Venkatesh, B Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010); Phys. Rev. E 83, 041114 (2011)], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.
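
    As a self-contained numerical illustration (not part of the paper), the Jarzynski equality <exp(-βW)> = exp(-βΔF) can be checked for a classical toy protocol: an instantaneous quench of a harmonic trap's stiffness from k0 to k1, for which ΔF = ln(k1/k0)/(2β). All parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      beta, k0, k1, n = 1.0, 1.0, 2.0, 200_000

      # Draw initial positions from the equilibrium (Boltzmann) distribution of the k0 trap.
      x = rng.normal(0.0, np.sqrt(1.0 / (beta * k0)), size=n)

      # Work done on the system by an instantaneous stiffness quench k0 -> k1 at fixed x.
      work = 0.5 * (k1 - k0) * x**2

      dF = 0.5 * np.log(k1 / k0) / beta      # free-energy difference between the two traps
      lhs = np.mean(np.exp(-beta * work))    # <exp(-beta W)> over the sampled work values
      rhs = np.exp(-beta * dF)               # exp(-beta dF)
      print(f"<exp(-bW)> = {lhs:.4f}, exp(-b dF) = {rhs:.4f}")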

  16. Disruptive effects of prefeeding and haloperidol administration on multiple measures of food-maintained behavior in rats

    PubMed Central

    Hayashi, Yusuke; Wirth, Oliver

    2015-01-01

    Four rats responded under a choice reaction-time procedure. At the beginning of each trial, the rats were required to hold down a center lever for a variable duration, release it following a high- or low-pitched tone, and press either a left or right lever, conditional on the tone. Correct choices were reinforced with a probability of .95 or .05 under blinking or static houselights, respectively. After performance stabilized, the disruptive effects of free access to food pellets prior to sessions (prefeeding) and of intraperitoneal injections of haloperidol were examined on multiple behavioral measures (i.e., the number of trials completed, percent of correct responses, and reaction time). Resistance to prefeeding depended on the probability of food delivery for the number of trials completed and for reaction time. Resistance to haloperidol, on the other hand, was not systematically affected by the probability of food delivery for any of the dependent measures. PMID:22209910

  17. Structural Features of Algebraic Quantum Notations

    ERIC Educational Resources Information Center

    Gire, Elizabeth; Price, Edward

    2015-01-01

    The formalism of quantum mechanics includes a rich collection of representations for describing quantum systems, including functions, graphs, matrices, histograms of probabilities, and Dirac notation. The varied features of these representations affect how computations are performed. For example, identifying probabilities of measurement outcomes…

  18. Laser damage metrology in biaxial nonlinear crystals using different test beams

    NASA Astrophysics Data System (ADS)

    Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille

    2008-01-01

    Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects that are specific to these materials or greatly enhanced in them. Before discussing these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, so only a few sites are available for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that can modify the maximum intensity in a biaxial nonlinear crystal are focusing aberration, walk-off, and self-focusing. Depending on the focusing conditions, propagation direction, polarization of the light, and the position of the focal point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for completeness. Finally, parasitic second harmonic generation (SHG) may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
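
    The paper's flow diagram is not reproduced here; as a sketch, and under the common assumption that each test site is an independent Bernoulli trial, exact (Clopper-Pearson) confidence bounds on a damage probability measured from k damaged sites out of n can be computed as below. The 10-site example is hypothetical.

      from scipy.stats import beta

      def damage_prob_interval(k: int, n: int, conf: float = 0.95):
          """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
          alpha = 1.0 - conf
          lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
          hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
          return lo, hi

      # Hypothetical result: 3 of 10 irradiated sites damaged at a given fluence.
      print(damage_prob_interval(3, 10))   # the wide interval reflects the small sample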

  19. Computing under-ice discharge: A proof-of-concept using hydroacoustics and the Probability Concept

    NASA Astrophysics Data System (ADS)

    Fulton, John W.; Henneberg, Mark F.; Mills, Taylor J.; Kohn, Michael S.; Epstein, Brian; Hittle, Elizabeth A.; Damschen, William C.; Laveau, Christopher D.; Lambrecht, Jason M.; Farmer, William H.

    2018-07-01

    Under-ice discharge is estimated using open-water reference hydrographs; however, the ratings for ice-affected sites are generally qualified as poor. The U.S. Geological Survey (USGS), in collaboration with the Colorado Water Conservation Board, conducted a proof-of-concept to develop an alternative method for computing under-ice discharge using hydroacoustics and the Probability Concept. The study site was located south of Minturn, Colorado (CO), USA, and was selected because of (1) its proximity to the existing USGS streamgage 09064600 Eagle River near Minturn, CO, and (2) its ease-of-access to verify discharge using a variety of conventional methods. From late September 2014 to early March 2015, hydraulic conditions varied from open water to under ice. These temporal changes led to variations in water depth and velocity. Hydroacoustics (tethered and uplooking acoustic Doppler current profilers and acoustic Doppler velocimeters) were deployed to measure the vertical-velocity profile at a singularly important vertical of the channel cross section. Because the velocity profile was non-standard and cannot be characterized using a Power Law or Log Law, velocity data were analyzed using the Probability Concept, which is a probabilistic formulation of the velocity distribution. The Probability Concept-derived discharge was compared to conventional methods including stage-discharge and index-velocity ratings and concurrent field measurements; each is complicated by the dynamics of ice formation, pressure influences on stage measurements, and variations in cross-sectional area due to ice formation. No particular discharge method was assigned as truth. Rather, one statistical metric (Kolmogorov-Smirnov; KS), agreement plots, and concurrent measurements provided a measure of comparability between the various methods. Regardless of the method employed, comparisons between each method revealed encouraging results depending on the flow conditions and the absence or presence of ice cover. For example, during lower discharges dominated by under-ice and transition (intermittent open-water and under-ice) conditions, the KS metric suggests there is not sufficient information to reject the null hypothesis and implies that the Probability Concept and index-velocity rating represent similar distributions. During high-flow, open-water conditions, the comparisons are less definitive; therefore, it is important that the appropriate analytical method and instrumentation be selected. Six conventional discharge measurements were collected concurrently with Probability Concept-derived discharges, with percent differences of -9.0%, -21%, -8.6%, 17.8%, 3.6%, and -2.3%. This proof-of-concept demonstrates that riverine discharges can be computed using the Probability Concept for a range of hydraulic extremes (variations in discharge, open-water and under-ice conditions) immediately after the siting phase is complete, which typically requires one day. Computing real-time discharges is particularly important at sites where (1) new streamgages are planned, (2) river hydraulics are complex, and (3) shifts in the stage-discharge rating are needed to correct the streamflow record. Use of the Probability Concept does not preclude the need to maintain a stage-area relation. Both the Probability Concept and index-velocity rating offer water-resource managers and decision makers alternatives for computing real-time discharge for open-water and under-ice conditions.
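
    For orientation only (not the authors' implementation), a minimal sketch of the probabilistic velocity distribution usually attributed to Chiu, on which the Probability Concept builds: u(F) = (u_max/M) ln[1 + (e^M - 1) F], where F is the cumulative probability of sampling a velocity below u and M is an entropy parameter; the mean-to-maximum velocity ratio is then phi(M) = e^M/(e^M - 1) - 1/M. The parameter values below are placeholders, not values from this study.

      import numpy as np

      def chiu_velocity(F, u_max, M):
          """Velocity at cumulative probability F for the Chiu-type distribution."""
          return (u_max / M) * np.log(1.0 + (np.exp(M) - 1.0) * F)

      def phi(M):
          """Mean/maximum velocity ratio implied by the entropy parameter M."""
          return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

      # Placeholders: u_max from the measured vertical, area from a stage-area relation.
      u_max, M, area = 1.8, 2.2, 12.5          # m/s, dimensionless, m^2
      mean_velocity = phi(M) * u_max
      print(f"phi(M) = {phi(M):.3f}, Q approx {mean_velocity * area:.2f} m^3/s")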

  20. Computing under-ice discharge: A proof-of-concept using hydroacoustics and the Probability Concept

    USGS Publications Warehouse

    Fulton, John W.; Henneberg, Mark F.; Mills, Taylor J.; Kohn, Michael S.; Epstein, Brian; Hittle, Elizabeth A.; Damschen, William C.; Laveau, Christopher D.; Lambrecht, Jason M.; Farmer, William H.

    2018-01-01

    Under-ice discharge is estimated using open-water reference hydrographs; however, the ratings for ice-affected sites are generally qualified as poor. The U.S. Geological Survey (USGS), in collaboration with the Colorado Water Conservation Board, conducted a proof-of-concept to develop an alternative method for computing under-ice discharge using hydroacoustics and the Probability Concept. The study site was located south of Minturn, Colorado (CO), USA, and was selected because of (1) its proximity to the existing USGS streamgage 09064600 Eagle River near Minturn, CO, and (2) its ease-of-access to verify discharge using a variety of conventional methods. From late September 2014 to early March 2015, hydraulic conditions varied from open water to under ice. These temporal changes led to variations in water depth and velocity. Hydroacoustics (tethered and uplooking acoustic Doppler current profilers and acoustic Doppler velocimeters) were deployed to measure the vertical-velocity profile at a singularly important vertical of the channel cross section. Because the velocity profile was non-standard and cannot be characterized using a Power Law or Log Law, velocity data were analyzed using the Probability Concept, which is a probabilistic formulation of the velocity distribution. The Probability Concept-derived discharge was compared to conventional methods including stage-discharge and index-velocity ratings and concurrent field measurements; each is complicated by the dynamics of ice formation, pressure influences on stage measurements, and variations in cross-sectional area due to ice formation. No particular discharge method was assigned as truth. Rather, one statistical metric (Kolmogorov-Smirnov; KS), agreement plots, and concurrent measurements provided a measure of comparability between the various methods. Regardless of the method employed, comparisons between each method revealed encouraging results depending on the flow conditions and the absence or presence of ice cover. For example, during lower discharges dominated by under-ice and transition (intermittent open-water and under-ice) conditions, the KS metric suggests there is not sufficient information to reject the null hypothesis and implies that the Probability Concept and index-velocity rating represent similar distributions. During high-flow, open-water conditions, the comparisons are less definitive; therefore, it is important that the appropriate analytical method and instrumentation be selected. Six conventional discharge measurements were collected concurrently with Probability Concept-derived discharges, with percent differences of −9.0%, −21%, −8.6%, 17.8%, 3.6%, and −2.3%. This proof-of-concept demonstrates that riverine discharges can be computed using the Probability Concept for a range of hydraulic extremes (variations in discharge, open-water and under-ice conditions) immediately after the siting phase is complete, which typically requires one day. Computing real-time discharges is particularly important at sites where (1) new streamgages are planned, (2) river hydraulics are complex, and (3) shifts in the stage-discharge rating are needed to correct the streamflow record. Use of the Probability Concept does not preclude the need to maintain a stage-area relation. Both the Probability Concept and index-velocity rating offer water-resource managers and decision makers alternatives for computing real-time discharge for open-water and under-ice conditions.

  1. Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification

    PubMed Central

    Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang

    2016-01-01

    Biomass gasification technology has developed rapidly in recent years, but fire and poisoning accidents caused by gas leakage restrict its development and promotion. Therefore, probabilistic safety assessment (PSA) is necessary for biomass gasification systems. Accordingly, Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). The causes of gas leakage and the accidents triggered by it can be obtained from the bow-tie analysis, and the BN was used to identify the critical nodes of accidents by introducing three corresponding importance measures. Meanwhile, PSA requires occurrence probabilities of failure. In view of the insufficient failure data for biomass gasification, occurrence probabilities of failure that cannot be obtained from standard reliability data sources were determined by fuzzy methods based on expert judgment. An improved approach that uses expert weighting to aggregate fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. The theoretical occurrence probabilities (per year) of gas leakage and the accidents it causes were reduced to 1/10.3 of their original values by these safety measures. PMID:27463975
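
    The paper's exact aggregation and weighting scheme is not reproduced here; as an illustrative sketch, expert opinions expressed as triangular fuzzy numbers can be combined by a weighted average, defuzzified to a fuzzy possibility score (FPS), and mapped to a failure probability with the Onisawa-type transform often cited in fuzzy fault-tree and bow-tie studies. The expert weights and fuzzy judgments below are hypothetical.

      import numpy as np

      def aggregate_triangular(fuzzy_numbers, weights):
          """Weighted average of triangular fuzzy numbers given as (a, b, c) triples."""
          w = np.asarray(weights, dtype=float)
          return tuple(np.average(np.asarray(fuzzy_numbers, dtype=float),
                                  axis=0, weights=w / w.sum()))

      def defuzzify_centroid(tri):
          """Centroid defuzzification of a triangular fuzzy number."""
          a, b, c = tri
          return (a + b + c) / 3.0

      def onisawa_fp(fps):
          """Map a fuzzy possibility score in (0, 1] to a failure probability."""
          if fps <= 0.0:
              return 0.0
          k = 2.301 * ((1.0 - fps) / fps) ** (1.0 / 3.0)
          return 10.0 ** (-k)

      experts = [(0.1, 0.2, 0.3), (0.2, 0.3, 0.4), (0.1, 0.3, 0.5)]   # hypothetical judgments
      weights = [0.5, 0.3, 0.2]                                       # hypothetical expert weights
      fps = defuzzify_centroid(aggregate_triangular(experts, weights))
      print(f"FPS = {fps:.3f}, failure probability approx {onisawa_fp(fps):.2e}")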

  2. Methodology for assessing the probability of corrosion in concrete structures on the basis of half-cell potential and concrete resistivity measurements.

    PubMed

    Sadowski, Lukasz

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. The potential E_corr and the concrete resistivity ρ were measured at each point of the applied grid. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures.

  3. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

    It is well known that anthropic selection from a landscape with a flat prior distribution of the cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and the gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  4. Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Lith, Janneke

    The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.

  5. The Influence of Optical Coherence Tomography Measurements of Retinal Nerve Fiber Layer on Decision-Making in Glaucoma Diagnosis.

    PubMed

    Fu, Lanxing; Aspinall, Peter; Bennett, Gary; Magidson, Jay; Tatham, Andrew J

    2017-04-01

    To quantify the influence of spectral domain optical coherence tomography (SDOCT) on decision-making in patients with suspected glaucoma. A prospective cross-sectional study involving 40 eyes of 20 patients referred by community optometrists due to suspected glaucoma. All patients had disc photographs and standard automated perimetry (SAP), and results were presented to 13 ophthalmologists who estimated pre-test probability of glaucoma (0-100%) for a total of 520 observations. Ophthalmologists were then permitted to modify probabilities of disease based on SDOCT retinal nerve fiber layer (RNFL) measurements (post-test probability). The effect of information from SDOCT on decision to treat, monitor, or discharge was assessed. Agreement among graders was assessed using intraclass correlation coefficients (ICC) and correlated component regression (CCR) was used to identify variables influencing management decisions. Patients had an average age of 69.0 ± 10.1 years, SAP mean deviation of 2.71 ± 3.13 dB, and RNFL thickness of 86.2 ± 16.7 μm. Average pre-test probability of glaucoma was 37.0 ± 33.6% with SDOCT resulting in a 13.3 ± 18.1% change in estimated probability. Incorporating information from SDOCT improved agreement regarding probability of glaucoma (ICC = 0.50 (95% CI 0.38 to 0.64) without SDOCT versus 0.64 (95% CI 0.52 to 0.76) with SDOCT). SDOCT led to a change from decision to "treat or monitor" to "discharge" in 22 of 520 cases and a change from "discharge" to "treat or monitor" in 11 of 520 cases. Pre-test probability and RNFL thickness were predictors of post-test probability of glaucoma, contributing 69 and 31% of the variance in post-test probability, respectively. Information from SDOCT altered estimated probability of glaucoma and improved agreement among clinicians in those suspected of having the disease.

  6. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
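
    As a minimal sketch of the MPP search step described above (independent of the paper's sensitivity derivations), the MPP in standard normal space is the point on the limit-state surface g(u) = 0 closest to the origin; its distance β gives the first-order (FORM) failure probability estimate Φ(-β). The limit-state function below is a hypothetical example, not one from the paper.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def g(u):
          """Hypothetical limit-state function in standard normal space (g < 0 means failure)."""
          return 3.0 - u[0] - 0.5 * u[1] ** 2

      # Most probable point: minimize ||u||^2 subject to g(u) = 0.
      res = minimize(lambda u: np.dot(u, u), x0=np.array([1.0, 1.0]),
                     constraints=[{"type": "eq", "fun": g}])
      u_star = res.x
      beta_idx = np.linalg.norm(u_star)          # reliability index
      print(f"MPP = {u_star}, beta = {beta_idx:.3f}, FORM pf = {norm.cdf(-beta_idx):.3e}")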

  7. Development of spatial-temporal ventilation heterogeneity and probability analysis tools for hyperpolarized 3He magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Choy, S.; Ahmed, H.; Wheatley, A.; McCormack, D. G.; Parraga, G.

    2010-03-01

    We developed image analysis tools to evaluate spatial and temporal 3He magnetic resonance imaging (MRI) ventilation in asthma and cystic fibrosis. We also developed temporal ventilation probability maps to provide a way to describe and quantify ventilation heterogeneity over time, as a way to test respiratory exacerbations or treatment predictions and to provide a discrete probability measurement of 3He ventilation defect persistence.
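
    A minimal sketch of how a temporal ventilation probability map can be formed (not the group's pipeline): given co-registered binary ventilation masks from repeated 3He scans, the per-voxel probability of being ventilated is the voxel-wise mean across time points, and defect persistence is the mean of the complementary mask. The array shapes below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical stand-in: 5 co-registered binary ventilation masks (time points) of a 64^3 volume.
      masks = rng.random((5, 64, 64, 64)) > 0.2

      ventilation_probability = masks.mean(axis=0)     # P(voxel ventilated) across time points
      defect_persistence = (~masks).mean(axis=0)       # P(voxel lies within a defect) across time points
      print(ventilation_probability.shape, float(defect_persistence.mean()))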

  8. Kβ to Kα X-ray intensity ratios and K to L shell vacancy transfer probabilities of Co, Ni, Cu, and Zn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anand, L. F. M.; Gudennavar, S. B., E-mail: shivappa.b.gudennavar@christuniversity.in; Bubbly, S. G.

    The K to L shell total vacancy transfer probabilities of the low-Z elements Co, Ni, Cu, and Zn are estimated by measuring the Kβ to Kα intensity ratio in a 2π geometry. The target elements were excited by 32.86 keV barium K-shell X-rays from a weak 137Cs γ-ray source. The emitted K-shell X-rays were detected using a low-energy HPGe X-ray detector coupled to a 16k MCA. The measured intensity ratios and the total vacancy transfer probabilities are compared with theoretical results and other published work, establishing good agreement.

  9. BANYAN_Sigma: Bayesian classifier for members of young stellar associations

    NASA Astrophysics Data System (ADS)

    Gagné, Jonathan; Mamajek, Eric E.; Malo, Lison; Riedel, Adric; Rodriguez, David; Lafrenière, David; Faherty, Jacqueline K.; Roy-Loubier, Olivier; Pueyo, Laurent; Robin, Annie C.; Doyon, René

    2018-01-01

    BANYAN_Sigma calculates the membership probability that a given astrophysical object belongs to one of the currently known 27 young associations within 150 pc of the Sun, using Bayesian inference. This tool uses the sky position and proper motion measurements of an object, with optional radial velocity (RV) and distance (D) measurements, to derive a Bayesian membership probability. By default, the priors are adjusted such that a probability threshold of 90% will recover 50%, 68%, 82% or 90% of true association members depending on what observables are input (only sky position and proper motion, with RV, with D, with both RV and D, respectively). The algorithm is implemented in a Python package, in IDL, and is also implemented as an interactive web page.
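
    BANYAN_Sigma's multivariate kinematic models are not reproduced here; as a minimal sketch of the underlying Bayesian step, a membership probability is the likelihood of the observables under each hypothesis weighted by its prior and renormalized over all hypotheses. The likelihood values and priors below are hypothetical.

      # Hypothetical likelihoods of an object's observables under each membership hypothesis.
      likelihood = {"beta_pic": 2.1e-4, "ab_dor": 5.0e-5, "field": 8.0e-6}
      prior = {"beta_pic": 0.01, "ab_dor": 0.02, "field": 0.97}   # hypothetical priors

      evidence = sum(likelihood[h] * prior[h] for h in likelihood)
      posterior = {h: likelihood[h] * prior[h] / evidence for h in likelihood}
      print(posterior)   # membership probabilities summing to 1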

  10. Bayesian learning

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    In 1983 and 1984, the Infrared Astronomical Satellite (IRAS) detected 5,425 stellar objects and measured their infrared spectra. In 1987 a program called AUTOCLASS used Bayesian inference methods to discover the classes present in these data and determine the most probable class of each object, revealing unknown phenomena in astronomy. AUTOCLASS has rekindled the old debate on the suitability of Bayesian methods, which are computationally intensive, interpret probabilities as plausibility measures rather than frequencies, and appear to depend on a subjective assessment of the probability of a hypothesis before the data were collected. Modern statistical methods have, however, recently been shown to also depend on subjective elements. These debates bring into question the whole tradition of scientific objectivity and offer scientists a new way to take responsibility for their findings and conclusions.

  11. Teleportation of entangled states without Bell-state measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Wesley B.; Baseia, B.; Avelar, A.T.

    2005-10-15

    In a recent paper [Phys. Rev. A 70, 025803 (2004)] we presented a scheme to teleport an entanglement of zero- and one-photon states from a bimodal cavity to another one, with 100% success probability. Here, inspired by recent results in the literature, we have modified our previous proposal to teleport the same entangled state without using Bell-state measurements. For comparison, the time spent, the fidelity, and the success probability for this teleportation are considered.

  12. Transition probabilities of Ce I obtained from Boltzmann analysis of visible and near-infrared emission spectra

    NASA Astrophysics Data System (ADS)

    Nitz, D. E.; Curry, J. J.; Buuck, M.; DeMann, A.; Mitchell, N.; Shull, W.

    2018-02-01

    We report radiative transition probabilities for 5029 emission lines of neutral cerium within the wavelength range 417-1110 nm. Transition probabilities for only 4% of these lines have been previously measured. These results are obtained from a Boltzmann analysis of two high resolution Fourier transform emission spectra used in previous studies of cerium, obtained from the digital archives of the National Solar Observatory at Kitt Peak. The set of transition probabilities used for the Boltzmann analysis are those published by Lawler et al (2010 J. Phys. B: At. Mol. Opt. Phys. 43 085701). Comparisons of branching ratios and transition probabilities for lines common to the two spectra provide important self-consistency checks and test for the presence of self-absorption effects. Estimated 1σ uncertainties for our transition probability results range from 10% to 18%.

  13. Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.

    PubMed

    Allen, Jeff; Ghattas, Andrew

    2016-06-01

    Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
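
    The authors' estimator is not reproduced here; as a generic sketch of the Bayesian step the abstract describes, the posterior probability of copying, given that the answer-copying statistic exceeded its observed value, combines the population copying rate with the tail probability of the statistic under copying and under the null hypothesis (the latter being the p value). All inputs below are hypothetical.

      def posterior_copying(prior_copy, p_value, power_at_threshold):
          """Posterior probability of copying given the statistic exceeded the observed value.
          power_at_threshold = P(statistic >= observed value | copying)."""
          num = prior_copy * power_at_threshold
          den = num + (1.0 - prior_copy) * p_value
          return num / den

      # Hypothetical inputs: 1% base rate of copying, p = 0.001, 60% power at this threshold.
      print(posterior_copying(prior_copy=0.01, p_value=0.001, power_at_threshold=0.60))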

  14. A discussion on the origin of quantum probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holik, Federico, E-mail: olentiev2@gmail.com; Departamento de Matemática - Ciclo Básico Común, Universidad de Buenos Aires - Pabellón III, Ciudad Universitaria, Buenos Aires; Sáenz, Manuel

    We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox's method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.

  15. Bayesian statistics in radionuclide metrology: measurement of a decaying source

    NASA Astrophysics Data System (ADS)

    Bochud, François O.; Bailat, Claude J.; Laedermann, Jean-Pascal

    2007-08-01

    The most intuitive way of defining a probability is perhaps through the frequency at which it appears when a large number of trials are realized in identical conditions. The probability derived from the obtained histogram characterizes the so-called frequentist or conventional statistical approach. In this sense, probability is defined as a physical property of the observed system. By contrast, in Bayesian statistics, a probability is not a physical property or a directly observable quantity, but a degree of belief or an element of inference. The goal of this paper is to show how Bayesian statistics can be used in radionuclide metrology and what its advantages and disadvantages are compared with conventional statistics. This is performed through the example of an yttrium-90 source typically encountered in environmental surveillance measurement. Because of the very low activity of this kind of source and the small half-life of the radionuclide, this measurement takes several days, during which the source decays significantly. Several methods are proposed to compute simultaneously the number of unstable nuclei at a given reference time, the decay constant and the background. Asymptotically, all approaches give the same result. However, Bayesian statistics produces coherent estimates and confidence intervals in a much smaller number of measurements. Apart from the conceptual understanding of statistics, the main difficulty that could deter radionuclide metrologists from using Bayesian statistics is the complexity of the computation.
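
    A toy version of the inference problem (not the paper's implementation): counts in successive intervals follow a Poisson law whose mean combines a decaying source term and a constant background, and a grid posterior over the initial number of unstable nuclei follows from Bayes' theorem with a flat prior. All numbers below are synthetic, and the decay constant and background are treated as known for brevity.

      import numpy as np
      from scipy.stats import poisson

      rng = np.random.default_rng(1)
      lam = np.log(2) / 64.1           # 90Y decay constant in 1/h (half-life about 64.1 h)
      eff, dt, bkg = 0.3, 1.0, 0.5     # detection efficiency, counting interval (h), background (counts/h)
      t = np.arange(0.0, 240.0, dt)    # ten days of hourly counting intervals

      n0_true = 5.0e4                  # synthetic "true" number of unstable nuclei at t = 0
      decayed = np.exp(-lam * t) - np.exp(-lam * (t + dt))   # fraction decaying in each interval
      counts = rng.poisson(eff * n0_true * decayed + bkg * dt)

      # Grid posterior over N0 with a flat prior.
      n0_grid = np.linspace(1e4, 1e5, 400)
      log_post = np.array([poisson.logpmf(counts, eff * n0 * decayed + bkg * dt).sum()
                           for n0 in n0_grid])
      post = np.exp(log_post - log_post.max())
      post /= np.trapz(post, n0_grid)
      print("posterior mean of N0:", np.trapz(n0_grid * post, n0_grid))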

  16. Optimal minimal measurements of mixed states

    NASA Astrophysics Data System (ADS)

    Vidal, G.; Latorre, J. I.; Pascual, P.; Tarrach, R.

    1999-07-01

    The optimal and minimal measuring strategy is obtained for a two-state system prepared in a mixed state with a probability given by any isotropic a priori distribution. We explicitly construct the specific optimal and minimal generalized measurements, which turn out to be independent of the a priori probability distribution, obtaining the best guesses for the unknown state as well as a closed expression for the maximal mean-average fidelity. We do this for up to three copies of the unknown state in a way that leads to the generalization to any number of copies, which we then present and prove.

  17. Parallel Low-Loss Measurement of Multiple Atomic Qubits

    NASA Astrophysics Data System (ADS)

    Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.

    2017-11-01

    We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2% and the initial hyperfine state is preserved with >98% probability.

  18. Relationship between Adolescent Risk Preferences on a Laboratory Task and Behavioral Measures of Risk-taking

    PubMed Central

    Rao, Uma; Sidhartha, Tanuj; Harker, Karen R.; Bidesi, Anup S.; Chen, Li-Ann; Ernst, Monique

    2010-01-01

    Purpose: The goal of the study was to assess individual differences in risk-taking behavior among adolescents in the laboratory. A second aim was to evaluate whether the laboratory-based risk-taking behavior is associated with other behavioral and psychological measures associated with risk-taking behavior. Methods: Eighty-two adolescents with no personal history of psychiatric disorder completed a computerized decision-making task, the Wheel of Fortune (WOF). By offering choices between clearly defined probabilities and real monetary outcomes, this task assesses risk preferences when participants are confronted with potential rewards and losses. The participants also completed a variety of behavioral and psychological measures associated with risk-taking behavior. Results: Performance on the task varied based on the probability and anticipated outcomes. In the winning sub-task, participants selected low probability-high magnitude reward (high-risk choice) less frequently than high probability-low magnitude reward (low-risk choice). In the losing sub-task, participants selected low probability-high magnitude loss more often than high probability-low magnitude loss. On average, the selection of probabilistic rewards was optimal and similar to performance in adults. There were, however, individual differences in performance, and one-third of the adolescents made high-risk choice more frequently than low-risk choice while selecting a reward. After controlling for sociodemographic and psychological variables, high-risk choice on the winning task predicted "real-world" risk-taking behavior and substance-related problems. Conclusions: These findings highlight individual differences in risk-taking behavior. Preliminary data on face validity of the WOF task suggest that it might be a valuable laboratory tool for studying behavioral and neurobiological processes associated with risk-taking behavior in adolescents. PMID:21257113

  19. Urban sprawl and delayed ambulance arrival in the U.S.

    PubMed

    Trowbridge, Matthew J; Gurka, Matthew J; O'Connor, Robert E

    2009-11-01

    Minimizing emergency medical service (EMS) response time is a central objective of prehospital care, yet the potential influence of built environment features such as urban sprawl on EMS system performance is often not considered. This study measures the association between urban sprawl and EMS response time to test the hypothesis that features of sprawling development increase the probability of delayed ambulance arrival. In 2008, EMS response times for 43,424 motor-vehicle crashes were obtained from the Fatality Analysis Reporting System, a national census of crashes involving ≥1 fatality. Sprawl at each crash location was measured using a continuous county-level index previously developed by Ewing et al. The association between sprawl and the probability of a delayed ambulance arrival (≥8 minutes) was then measured using generalized linear mixed modeling to account for correlation among crashes from the same county. Urban sprawl is significantly associated with increased EMS response time and a higher probability of delayed ambulance arrival (p=0.03). This probability increases quadratically as the severity of sprawl increases while controlling for nighttime crash occurrence, road conditions, and presence of construction. For example, in sprawling counties (e.g., Fayette County GA), the probability of a delayed ambulance arrival for daytime crashes in dry conditions without construction was 69% (95% CI=66%, 72%) compared with 31% (95% CI=28%, 35%) in counties with prominent smart-growth characteristics (e.g., Delaware County PA). Urban sprawl is significantly associated with increased EMS response time and a higher probability of delayed ambulance arrival following motor-vehicle crashes in the U.S. The results of this study suggest that promotion of community design and development that follows smart-growth principles and regulates urban sprawl may improve EMS performance and reliability.

  20. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus

    PubMed Central

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-01-01

    DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure. PMID:29800455

  1. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus.

    PubMed

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-05-01

    DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure.

  2. Applications of the first digit law to measure correlations.

    PubMed

    Gramm, R; Yost, J; Su, Q; Grobe, R

    2017-04-01

    The quasiempirical Benford law predicts that the distribution of the first significant digit of random numbers obtained from mixed probability distributions is surprisingly meaningful and reveals some universal behavior. We generalize this finding to examine the joint first-digit probability of a pair of two random numbers and show that undetectable correlations by means of the usual covariance-based measure can be identified in the statistics of the corresponding first digits. We illustrate this new measure by analyzing the correlations and anticorrelations of the positions of two interacting particles in their quantum mechanical ground state. This suggests that by using this measure, the presence or absence of correlations can be determined even if only the first digit of noisy experimental data can be measured accurately.
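
    A rough sketch of the idea (the authors' exact measure may differ): tabulate the joint distribution of the first significant digits of paired samples and compare it with the product of the marginals; a nonzero divergence signals dependence in the digit statistics. The dependent pair below is a hypothetical example.

      import numpy as np

      def first_digit(x):
          """First significant digit of |x| (all entries must be nonzero)."""
          x = np.abs(np.asarray(x, dtype=float))
          return np.floor(x / 10.0 ** np.floor(np.log10(x))).astype(int)

      rng = np.random.default_rng(0)
      u = rng.lognormal(0.0, 2.0, 100_000)
      x, y = u, 1.0 / u                      # a strongly dependent pair (hypothetical example)

      dx, dy = first_digit(x), first_digit(y)
      joint = np.zeros((9, 9))
      np.add.at(joint, (dx - 1, dy - 1), 1.0)
      joint /= joint.sum()
      marg_x, marg_y = joint.sum(axis=1), joint.sum(axis=0)

      # Mutual information of the digit pair (zero if and only if the digits are independent).
      nz = joint > 0
      mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(marg_x, marg_y)[nz]))
      print(f"first-digit mutual information = {mi:.4f} nats")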

  3. Improved Measures of Integrated Information

    PubMed Central

    Tegmark, Max

    2016-01-01

    Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally unfeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented where they are each characterized by their choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options) and choice of measure for comparing probability distributions (7 options). When requiring the Φ-measures to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands. PMID:27870846

  4. Metrics for More than Two Points at Once

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    The conventional definition of a topological metric over a space specifies properties that must be obeyed by any measure of "how separated" two points in that space are. Here it is shown how to extend that definition, and in particular the triangle inequality, to concern arbitrary numbers of points. Such a measure of how separated the points within a collection are can be bootstrapped, to measure "how separated" from each other are two (or more) collections. The measure presented here also allows fractional membership of an element in a collection. This means it directly concerns measures of "how spread out" a probability distribution over a space is. When such a measure is bootstrapped to compare two collections, it allows us to measure how separated two probability distributions are, or more generally, how separated a distribution of distributions is.

  5. Effects of delay and probability combinations on discounting in humans

    PubMed Central

    Cox, David J.; Dallery, Jesse

    2017-01-01

    To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n = 212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n = 98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. PMID:27498073

  6. A monogamy-of-entanglement game with applications to device-independent quantum cryptography

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Fehr, Serge; Kaniewski, Jędrzej; Wehner, Stephanie

    2013-10-01

    We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement. We explore several applications of this result. Firstly, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device independent. Secondly, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.

  7. The reaction probability of N2O5 with sulfuric acid aerosols at stratospheric temperatures and compositions

    NASA Technical Reports Server (NTRS)

    Fried, Alan; Henry, Bruce E.; Calvert, Jack G.; Mozurkewich, Michael

    1994-01-01

    We have measured the rate of reaction of N2O5 with H2O on monodisperse, submicrometer H2SO4 particles in a low-temperature flow reactor. Measurements were carried out at temperatures between 225 K and 293 K on aerosol particles with sizes and compositions comparable to those found in the stratosphere. At 273 K, the reaction probability was found to be 0.103 ± 0.0006, independent of H2SO4 composition from 64 to 81 wt%. At 230 K, the reaction probability increased from 0.077 for compositions near 60% H2SO4 to 0.146 for compositions near 70% H2SO4. Intermediate conditions gave intermediate results except for low reaction probabilities of about 0.045 at 260 K on aerosols with about 78% H2SO4. The reaction probability did not depend on particle size. These results imply that the reaction occurs essentially at the surface of the particle. A simple model for this type of reaction that reproduces the general trends observed is presented. The presence of formaldehyde did not affect the reaction rate.

  8. On the extinction probability in models of within-host infection: the role of latency and immunity.

    PubMed

    Yan, Ada W C; Cao, Pengxing; McCaw, James M

    2016-10-01

    Not every exposure to virus establishes infection in the host; instead, the small amount of initial virus could become extinct due to stochastic events. Different diseases and routes of transmission have a different average number of exposures required to establish an infection. Furthermore, the host immune response and antiviral treatment affect not only the time course of the viral load, provided infection occurs, but can prevent infection altogether by increasing the extinction probability. We show that the extinction probability when there is a time-dependent immune response depends on the chosen form of the model; specifically, on the presence or absence of a delay between infection of a cell and production of virus, and on the distribution of latent and infectious periods of an infected cell. We hypothesise that experimentally measuring the extinction probability when the virus is introduced at different stages of the immune response, alongside the viral load which is usually measured, will improve parameter estimates and determine the most suitable mathematical form of the model.
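
    For intuition only (the paper analyses more detailed within-host models), the extinction probability of the simplest branching description of early infection can be written down directly: if each infectious unit independently produces new units at rate β and is cleared at rate δ, a lineage started by one unit dies out with probability min(1, δ/β), and an inoculum of n units goes extinct with that value raised to the power n. The rates below are hypothetical.

      def extinction_probability(beta: float, delta: float, n0: int) -> float:
          """Extinction probability of a linear birth-death process started with n0 units."""
          per_lineage = min(1.0, delta / beta)
          return per_lineage ** n0

      # Hypothetical per-capita production (beta) and clearance (delta) rates.
      for n0 in (1, 5, 20):
          print(n0, extinction_probability(beta=2.0, delta=1.2, n0=n0))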

  9. Evaluating Perceived Probability of Threat-Relevant Outcomes and Temporal Orientation in Flying Phobia.

    PubMed

    Mavromoustakos, Elena; Clark, Gavin I; Rock, Adam J

    2016-01-01

    Probability bias regarding threat-relevant outcomes has been demonstrated across anxiety disorders but has not been investigated in flying phobia. Individual temporal orientation (time perspective) may be hypothesised to influence estimates of negative outcomes occurring. The present study investigated whether probability bias could be demonstrated in flying phobia and whether probability estimates of negative flying events were predicted by time perspective. Sixty flying-phobic and fifty-five non-flying-phobic adults were recruited to complete an online questionnaire. Participants completed the Flight Anxiety Scale, the Probability Scale (measuring perceived probability of flying-negative, general-negative and general-positive events) and the Past-Negative, Future and Present-Hedonistic subscales of the Zimbardo Time Perspective Inventory (variables argued to predict mental travel forward and backward in time). The flying-phobic group estimated the probability of flying-negative and general-negative events occurring as significantly higher than did the non-flying-phobic group. Past-Negative scores (positively) and Present-Hedonistic scores (negatively) predicted probability estimates of flying-negative events. The Future Orientation subscale did not significantly predict probability estimates. This study is the first to demonstrate probability bias for threat-relevant outcomes in flying phobia. Results suggest that time perspective may influence the perceived probability of threat-relevant outcomes, but the nature of this relationship remains to be determined.

  10. Evaluating Perceived Probability of Threat-Relevant Outcomes and Temporal Orientation in Flying Phobia

    PubMed Central

    Mavromoustakos, Elena; Clark, Gavin I.; Rock, Adam J.

    2016-01-01

    Probability bias regarding threat-relevant outcomes has been demonstrated across anxiety disorders but has not been investigated in flying phobia. Individual temporal orientation (time perspective) may be hypothesised to influence estimates of negative outcomes occurring. The present study investigated whether probability bias could be demonstrated in flying phobia and whether probability estimates of negative flying events were predicted by time perspective. Sixty flying-phobic and fifty-five non-flying-phobic adults were recruited to complete an online questionnaire. Participants completed the Flight Anxiety Scale, the Probability Scale (measuring perceived probability of flying-negative, general-negative and general-positive events) and the Past-Negative, Future and Present-Hedonistic subscales of the Zimbardo Time Perspective Inventory (variables argued to predict mental travel forward and backward in time). The flying-phobic group estimated the probability of flying-negative and general-negative events occurring as significantly higher than did the non-flying-phobic group. Past-Negative scores (positively) and Present-Hedonistic scores (negatively) predicted probability estimates of flying-negative events. The Future Orientation subscale did not significantly predict probability estimates. This study is the first to demonstrate probability bias for threat-relevant outcomes in flying phobia. Results suggest that time perspective may influence the perceived probability of threat-relevant outcomes, but the nature of this relationship remains to be determined. PMID:27557054

  11. The role of the uncertainty of measurement of serum creatinine concentrations in the diagnosis of acute kidney injury.

    PubMed

    Kin Tekce, Buket; Tekce, Hikmet; Aktas, Gulali; Uyeturk, Ugur

    2016-01-01

    Uncertainty of measurement is the numeric expression of the errors associated with all measurements taken in clinical laboratories. Serum creatinine concentration is the most common diagnostic marker for acute kidney injury. The goal of this study was to determine the effect of the uncertainty of measurement of serum creatinine concentrations on the diagnosis of acute kidney injury. We calculated the uncertainty of measurement of serum creatinine according to the Nordtest Guide. Retrospectively, we identified 289 patients who were evaluated for acute kidney injury. Of the total patient pool, 233 were diagnosed with acute kidney injury using the AKIN classification scheme and were then compared using statistical analysis. We determined nine probabilities based on the uncertainty of measurement of serum creatinine concentrations. There was a statistically significant difference in the number of patients diagnosed with acute kidney injury when the uncertainty of measurement was taken into consideration (first versus fifth probability, p = 0.023; first versus ninth probability, p = 0.012). We found that the uncertainty of measurement of serum creatinine concentrations was an important factor for correctly diagnosing acute kidney injury. In addition, based on the AKIN classification scheme, minimizing the total allowable error levels for serum creatinine concentrations is necessary for the accurate diagnosis of acute kidney injury by clinicians.
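
    The study's uncertainty budget is not reproduced here; as a sketch of the Nordtest-style approach it cites, the combined standard uncertainty is typically formed from within-laboratory reproducibility and a bias component estimated from reference materials or external quality assessment, then expanded with a coverage factor of 2. The input values below are hypothetical.

      import math

      def nordtest_expanded_uncertainty(u_rw, rms_bias, u_cref, k=2.0):
          """Combined and expanded uncertainty in the Nordtest style:
          u_bias = sqrt(RMS_bias^2 + u_Cref^2), u_c = sqrt(u_Rw^2 + u_bias^2), U = k * u_c."""
          u_bias = math.sqrt(rms_bias**2 + u_cref**2)
          u_c = math.sqrt(u_rw**2 + u_bias**2)
          return u_c, k * u_c

      # Hypothetical relative components (%): reproducibility, RMS bias, reference-value uncertainty.
      u_c, U = nordtest_expanded_uncertainty(u_rw=3.0, rms_bias=2.0, u_cref=1.5)
      print(f"combined u_c = {u_c:.2f}%, expanded U (k=2) = {U:.2f}%")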

  12. Recent trends in the probability of high out-of-pocket medical expenses in the United States

    PubMed Central

    Baird, Katherine E

    2016-01-01

    Objective: This article measures the probability that out-of-pocket expenses in the United States exceed a threshold share of income. It calculates this probability separately by individuals’ health condition, income, and elderly status and estimates changes occurring in these probabilities between 2010 and 2013. Data and Method: This article uses nationally representative household survey data on 344,000 individuals. Logistic regressions estimate the probabilities that out-of-pocket expenses exceed 5% and alternatively 10% of income in the two study years. These probabilities are calculated for individuals based on their income, health status, and elderly status. Results: Despite favorable changes in both health policy and the economy, large numbers of Americans continue to be exposed to high out-of-pocket expenditures. For instance, the results indicate that in 2013 over a quarter of nonelderly low-income citizens in poor health spent 10% or more of their income on out-of-pocket expenses, and over 40% of this group spent more than 5%. Moreover, for Americans as a whole, the probability of spending in excess of 5% of income on out-of-pocket costs increased by 1.4 percentage points between 2010 and 2013, with the largest increases occurring among low-income Americans; the probability of Americans spending more than 10% of income grew from 9.3% to 9.6%, with the largest increases also occurring among the poor. Conclusion: The magnitude of out-of-pocket’s financial burden and the most recent upward trends in it underscore a need to develop good measures of the degree to which health care policy exposes individuals to financial risk, and to closely monitor the Affordable Care Act’s success in reducing Americans’ exposure to large medical bills. PMID:27651901

  13. Measurement of the Errors of Service Altimeter Installations During Landing-Approach and Take-Off Operations

    NASA Technical Reports Server (NTRS)

    Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.

    1960-01-01

    The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
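
    Under the common reading that the "probable value" and "maximum probable value" are the 50th- and 99.7th-percentile magnitudes of the error distribution about its bias (an interpretation of the report's terminology, not taken from it), the summary statistics could be computed from raw error samples roughly as follows, with a synthetic sample standing in for the flight data:

      import numpy as np

      def error_summary(errors):
          """Bias, probable error (50%), and maximum probable error (99.7%) of a sample of errors.
          Assumes the 'probable' values are percentiles of |error - bias|."""
          errors = np.asarray(errors, dtype=float)
          bias = errors.mean()
          dev = np.abs(errors - bias)
          return bias, np.percentile(dev, 50), np.percentile(dev, 99.7)

      # Illustrative synthetic sample of landing-approach altimeter errors (feet)
      rng = np.random.default_rng(1)
      sample = rng.normal(loc=10.0, scale=53.0, size=415)
      bias, pe, mpe = error_summary(sample)
      print(f"bias {bias:+.0f} ft, probable error +/-{pe:.0f} ft, max probable error +/-{mpe:.0f} ft")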

  14. Physical deposit measures and commercial potential: The case of titanium-bearing heavy-mineral deposits

    USGS Publications Warehouse

    Attanasi, E.D.; DeYoung, J.H.

    1988-01-01

    Physical measures of mineral deposit characteristics, such as grade and tonnage, long have been used in both subjective and analytic models to predict favorability of areas for the occurrence of mineral deposits of particular types. After a deposit has been identified, however, the explorationist must decide whether to continue data collection, begin an economic feasibility study, or abandon the prospect. The decision maker can estimate the probability that a deposit will be commercial by examining physical measures. The amount of sampling data required before such a probability estimate can be considered reliable can be determined. A logit probability model estimated from onshore titanium-bearing heavy-mineral deposit data identifies and quantifies the relative influence of a deposit's physical measures on the chances of the deposit becoming commercial. A principal conclusion that can be drawn from the analysis is that, along with a measure of deposit size, the characteristics most important in predicting commercial potential are grades of the constituent minerals. Total heavy-mineral-bearing sand grade or even total titanium grade (without data on constituent mineral grades) are poor predictors of the deposit's commercial potential. © 1988 International Association for Mathematical Geology.

  15. Quantum fluctuation theorems and generalized measurements during the force protocol

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010), 10.1103/PhysRevLett.105.140601; Phys. Rev. E 83, 041114 (2011), 10.1103/PhysRevE.83.041114], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.
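
    For reference, the two fluctuation relations whose validity under generalized measurements is analyzed here have the standard forms (β the inverse temperature, W the work, ΔF the free-energy difference between the final and initial equilibrium states; forward and backward work densities p_F and p_B):

      \langle e^{-\beta W} \rangle = e^{-\beta \Delta F},
      \qquad
      \frac{p_F(W)}{p_B(-W)} = e^{\beta (W - \Delta F)}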

  16. The cognitive behavioural prevention of suicide in psychosis: a clinical trial.

    PubMed

    Tarrier, Nicholas; Kelly, James; Maqsood, Sehar; Snelson, Natasha; Maxwell, Janet; Law, Heather; Dunn, Graham; Gooding, Patricia

    2014-07-01

    Suicide behaviour in psychosis is a significant clinical and social problem. There is a dearth of evidence for psychological interventions designed to reduce suicide risk in this population. To evaluate a novel, manualised, cognitive behavioural treatment protocol (CBSPp) based upon an empirically validated theoretical model. A randomised controlled trial with independent and masked allocation and assessment of CBSPp with TAU (n=25, 24 sessions) compared to TAU alone (n=24) using standardised assessments. Measures of suicide probability and suicidal ideation were the primary outcomes and measures of hopelessness, depression, psychotic symptoms, functioning, and self-esteem were the secondary outcomes, assessed at 4 and 6 months follow-up. The CBSPp group improved differentially to the TAU group on two out of three primary outcome measures of suicidal ideation and suicide probability, and on secondary outcomes of hopelessness related to suicide probability, depression, some psychotic symptoms and self-esteem. CBSPp is a feasible intervention which has the potential to reduce proxy measures of suicide in psychotic patients. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Prediction suppression in monkey inferotemporal cortex depends on the conditional probability between images.

    PubMed

    Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R

    2016-01-01

    When monkeys view two images in fixed sequence repeatedly over days and weeks, neurons in area TE of the inferotemporal cortex come to exhibit prediction suppression. The trailing image elicits only a weak response when presented following the leading image that preceded it during training. Induction of prediction suppression might depend either on the contiguity of the images, as determined by their co-occurrence and captured in the measure of joint probability P(A,B), or on their contingency, as determined by their correlation and as captured in the measures of conditional probability P(A|B) and P(B|A). To distinguish between these possibilities, we measured prediction suppression after imposing training regimens that held P(A,B) constant but varied P(A|B) and P(B|A). We found that reducing either P(A|B) or P(B|A) during training attenuated prediction suppression as measured during subsequent testing. We conclude that prediction suppression depends on contingency, as embodied in the predictive relations between the images, and not just on contiguity, as embodied in their co-occurrence. Copyright © 2016 the American Physiological Society.
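
    The distinction the authors exploit can be made concrete with trial counts: two training regimens can present the pair (A, B) equally often, so that the joint probability P(A,B) is identical, while differing in how often A appears without B, which changes the conditional probability P(B|A). A small illustrative computation (numbers invented for illustration):

      def probs(n_ab, n_a_only, n_b_only, n_other):
          """Joint and conditional probabilities from trial counts."""
          total = n_ab + n_a_only + n_b_only + n_other
          p_ab = n_ab / total
          p_a = (n_ab + n_a_only) / total
          p_b = (n_ab + n_b_only) / total
          return p_ab, p_ab / p_a, p_ab / p_b   # P(A,B), P(B|A), P(A|B)

      # Regimen 1: A and B always occur together
      print(probs(n_ab=40, n_a_only=0, n_b_only=0, n_other=60))   # P(B|A) = 1.0
      # Regimen 2: same joint probability, but A often appears without B
      print(probs(n_ab=40, n_a_only=40, n_b_only=0, n_other=20))  # P(B|A) = 0.5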

  18. Universal scheme for finite-probability perfect transfer of arbitrary multispin states through spin chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn

    In combination with the theories of open system and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements taking into account the current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.

  19. An information measure for class discrimination. [in remote sensing of crop observation]

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
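
    For a two-class mixture with densities f_1 and f_2 and mixing proportion p, so that f(x) = p f_1(x) + (1 − p) f_2(x), the Fisher information about p and the resulting lower bound on the variance of any unbiased estimator from n observations take the familiar textbook forms (the article's notation may differ):

      I(p) = \int \frac{\left[f_1(x) - f_2(x)\right]^2}{p\,f_1(x) + (1-p)\,f_2(x)}\,dx,
      \qquad
      \operatorname{Var}(\hat{p}) \ge \frac{1}{n\,I(p)}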

  20. Symptoms of major depression in people with spinal cord injury: implications for screening.

    PubMed

    Bombardier, Charles H; Richards, J Scott; Krause, James S; Tulsky, David; Tate, Denise G

    2004-11-01

    To provide psychometric data on a self-report measure of major depressive disorder (MDD) and to determine whether somatic symptoms are nonspecific or count toward the diagnosis. Survey. Data from the National Spinal Cord Injury Statistical Center representing 16 Model Spinal Cord Injury Systems. Eight hundred forty-nine people with spinal cord injury who completed a standardized follow-up evaluation 1 year after injury. Not applicable. The Patient Health Questionnaire-9 (PHQ-9), a measure of MDD as defined by the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition . We computed descriptive statistics on rates of depressive symptoms and probable MDD, evaluated internal consistency and construct validity, and analyzed the accuracy of individual items as predictors of MDD. Exactly 11.4% of participants met criteria for probable MDD. Probable MDD was associated with poorer subjective health, lower satisfaction with life, and more difficulty in daily role functioning. Probable MDD was not related to most demographic or injury-related variables. Both somatic and psychologic symptoms predicted probable MDD. The PHQ-9 has promise as a tool with which to identify probable MDD in people with SCI. Somatic symptoms should be counted toward the diagnosis and should alert health care providers to the likelihood of MDD. More efficient screening is only one of the quality improvement efforts needed to enhance management of MDD.

  1. The Preliminary Development of a Robotic Laser System Used for Ophthalmic Surgery

    DTIC Science & Technology

    1988-01-01

    proposed design, there is not sufficient computer time to ensure a zero probability of error. But, what’s more important than a zero probability of...even zero proved to shorten the computation time. 4.3.6 The User Interface To put things in perspective, the step by step procedure for using the routine...was measured from the identified slice. The sectional area was measured using a Summagraphic digitizing pad and the Sigma-scan program from Jandel

  2. Systematic sampling for suspended sediment

    Treesearch

    Robert B. Thomas

    1991-01-01

    Abstract - Because of high costs or complex logistics, scientific populations cannot be measured entirely and must be sampled. Accepted scientific practice holds that sample selection be based on statistical principles to assure objectivity when estimating totals and variances. Probability sampling--obtaining samples with known probabilities--is the only method that...

  3. Probabilistic Relational Structures and Their Applications

    ERIC Educational Resources Information Center

    Domotor, Zoltan

    The principal objects of the investigation reported were, first, to study qualitative probability relations on Boolean algebras, and secondly, to describe applications in the theories of probability logic, information, automata, and probabilistic measurement. The main contribution of this work is stated in 10 definitions and 20 theorems. The basic…

  4. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
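
    The abundance adjustment described above is straightforward once the cumulative capture probability has been predicted: the catch is divided by the probability that a fish was captured in at least one of the electrofishing passes. A hedged sketch with invented per-pass capture probabilities (not values from the study):

      def cumulative_capture_probability(per_pass_p):
          """Probability of being caught in at least one pass, given per-pass capture probabilities."""
          miss = 1.0
          for p in per_pass_p:
              miss *= (1.0 - p)
          return 1.0 - miss

      # Illustrative values only: three passes with declining capture probability
      p_cum = cumulative_capture_probability([0.45, 0.35, 0.30])
      n_captured = 62
      abundance_estimate = n_captured / p_cum
      print(f"cumulative p = {p_cum:.2f}, estimated abundance = {abundance_estimate:.0f}")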

  5. [WebSurvCa: web-based estimation of death and survival probabilities in a cohort].

    PubMed

    Clèries, Ramon; Ameijide, Alberto; Buxó, Maria; Vilardell, Mireia; Martínez, José Miguel; Alarcón, Francisco; Cordero, David; Díez-Villanueva, Ana; Yasui, Yutaka; Marcos-Gragera, Rafael; Vilardell, Maria Loreto; Carulla, Marià; Galceran, Jaume; Izquierdo, Ángel; Moreno, Víctor; Borràs, Josep M

    2018-01-19

    Relative survival has been used as a measure of the temporal evolution of the excess risk of death of a cohort of patients diagnosed with cancer, taking into account the mortality of a reference population. Once the excess risk of death has been estimated, three probabilities can be computed at time T: 1) the crude probability of death associated with the cause of initial diagnosis (disease under study), 2) the crude probability of death associated with other causes, and 3) the probability of absolute survival in the cohort at time T. This paper presents the WebSurvCa application (https://shiny.snpstats.net/WebSurvCa/), whereby hospital-based and population-based cancer registries and registries of other diseases can estimate such probabilities in their cohorts by selecting the mortality of the relevant region (reference population). Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  6. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  7. Newsvendor problem under complete uncertainty: a case of innovative products.

    PubMed

    Gaspars-Wieloch, Helena

    2017-01-01

    The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because there are no data available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.

  8. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
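
    The model described (normally distributed true effect sizes, replication jitter, and measurement error) lends itself to a direct Monte Carlo check of the aggregate replication probability. The sketch below uses invented parameter values purely to show the structure of such a simulation, not the authors' analytical results:

      import numpy as np

      rng = np.random.default_rng(42)
      n_sim = 200_000

      # Invented research-context parameters (standardized effect-size units)
      mu_context, sd_context = 0.4, 0.2   # distribution of true effect sizes
      sd_jitter = 0.1                     # replication jitter between procedures
      se_measure = 0.15                   # standard error of an observed effect size

      true_effect = rng.normal(mu_context, sd_context, n_sim)
      original = true_effect + rng.normal(0, se_measure, n_sim)
      replication_true = true_effect + rng.normal(0, sd_jitter, n_sim)
      replication = replication_true + rng.normal(0, se_measure, n_sim)

      # Condition on the original study having found a positive, significant effect
      significant = original > 1.96 * se_measure
      same_direction = (replication > 0) & significant
      also_significant = (replication > 1.96 * se_measure) & significant

      print(f"P(same direction | significant original)    = {same_direction.sum() / significant.sum():.2f}")
      print(f"P(also significant | significant original)  = {also_significant.sum() / significant.sum():.2f}")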

  9. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5 Ēd, where Ēd is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.

  10. Factors influencing reporting and harvest probabilities in North American geese

    USGS Publications Warehouse

    Zimmerman, G.S.; Moser, T.J.; Kendall, W.L.; Doherty, P.F.; White, Gary C.; Caswell, D.F.

    2009-01-01

    We assessed variation in reporting probabilities of standard bands among species, populations, harvest locations, and size classes of North American geese to enable estimation of unbiased harvest probabilities. We included reward (US$10, $20, $30, $50, or $100) and control ($0) banded geese from 16 recognized goose populations of 4 species: Canada (Branta canadensis), cackling (B. hutchinsii), Ross's (Chen rossii), and snow geese (C. caerulescens). We incorporated spatially explicit direct recoveries and live recaptures into a multinomial model to estimate reporting, harvest, and band-retention probabilities. We compared various models for estimating harvest probabilities at country (United States vs. Canada), flyway (5 administrative regions), and harvest area (i.e., flyways divided into northern and southern sections) scales. Mean reporting probability of standard bands was 0.73 (95% CI 0.69-0.77). Point estimates of reporting probabilities for goose populations or spatial units varied from 0.52 to 0.93, but confidence intervals for individual estimates overlapped and model selection indicated that models with species, population, or spatial effects were less parsimonious than those without these effects. Our estimates were similar to recently reported estimates for mallards (Anas platyrhynchos). We provide current harvest probability estimates for these populations using our direct measures of reporting probability, improving the accuracy of previous estimates obtained from recovery probabilities alone. Goose managers and researchers throughout North America can use our reporting probabilities to correct recovery probabilities estimated from standard banding operations for deriving spatially explicit harvest probabilities.
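
    The correction the authors apply is, at its core, division of a direct recovery probability by the band reporting probability. Using the mean reporting probability quoted above (0.73) and an invented recovery probability purely for illustration:

      # Harvest probability = direct recovery probability / reporting probability
      reporting_probability = 0.73      # mean for standard bands, from the abstract
      recovery_probability = 0.05       # illustrative value, not from the study

      harvest_probability = recovery_probability / reporting_probability
      print(f"harvest probability = {harvest_probability:.3f}")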

  11. MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, A

    2016-06-15

    Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: The target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97% which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.

  12. On variational definition of quantum entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belavkin, Roman V.

    Entropy of distribution P can be defined in at least three different ways: 1) as the expectation of the Kullback-Leibler (KL) divergence of P from elementary δ-measures (in this case, it is interpreted as expected surprise); 2) as a negative KL-divergence of some reference measure ν from the probability measure P; 3) as the supremum of Shannon’s mutual information taken over all channels such that P is the output probability, in which case it is dual of some transportation problem. In classical (i.e. commutative) probability, all three definitions lead to the same quantity, providing only different interpretations of entropy. In non-commutative (i.e. quantum) probability, however, these definitions are not equivalent. In particular, the third definition, where the supremum is taken over all entanglements of two quantum systems with P being the output state, leads to the quantity that can be twice the von Neumann entropy. It was proposed originally by V. Belavkin and Ohya [1] and called the proper quantum entropy, because it allows one to define quantum conditional entropy that is always non-negative. Here we extend these ideas to define also quantum counterpart of proper cross-entropy and cross-information. We also show inequality for the values of classical and quantum information.
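
    In the classical (commutative) case the equivalence of the first two definitions can be seen directly when the reference measure ν is the counting measure; writing δ_x for the point mass at x, the standard identities are (a sketch of the classical case only, not the paper's quantum generalization):

      H(P) = \sum_x P(x)\, D_{\mathrm{KL}}(\delta_x \,\|\, P)
           = -\sum_x P(x)\log P(x)
           = -D_{\mathrm{KL}}(P \,\|\, \nu),
      \qquad \nu(x) \equiv 1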

  13. Methodology for Assessing the Probability of Corrosion in Concrete Structures on the Basis of Half-Cell Potential and Concrete Resistivity Measurements

    PubMed Central

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. Potential Ecorr and concrete resistivity ρ at each point of the applied grid were measured. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures. PMID:23766706

  14. Rate and reaction probability of the surface reaction between ozone and dihydromyrcenol measured in a bench scale reactor and a room-sized chamber

    NASA Astrophysics Data System (ADS)

    Shu, Shi; Morrison, Glenn C.

    2012-02-01

    Low volatility terpenoids emitted from consumer products can react with ozone on surfaces and may significantly alter concentrations of ozone, terpenoids and reaction products in indoor air. We measured the reaction probability and a second-order surface-specific reaction rate for the ozonation of dihydromyrcenol, a representative indoor terpenoid, adsorbed onto polyvinylchloride (PVC), glass, and latex paint coated spheres. The reaction probability ranged from (0.06-8.97) × 10⁻⁵ and was very sensitive to humidity, substrate and mass adsorbed. The average surface reaction probability is about 10 times greater than that for the gas-phase reaction. The second-order surface-specific rate coefficient ranged from (0.32-7.05) × 10⁻¹⁵ cm⁴ s⁻¹ molecule⁻¹ and was much less sensitive to humidity, substrate, or mass adsorbed. We also measured the ozone deposition velocity due to adsorbed dihydromyrcenol on painted drywall in a room-sized chamber. Based on that, we calculated the rate coefficient ((0.42-1.6) × 10⁻¹⁵ cm⁴ molecule⁻¹ s⁻¹), which was consistent with that derived from bench-scale experiments for the latex paint under similar conditions. We predict that more than 95% of dihydromyrcenol oxidation takes place on indoor surfaces, rather than in building air.

  15. Auditory Processing of Older Adults with Probable Mild Cognitive Impairment

    ERIC Educational Resources Information Center

    Edwards, Jerri D.; Lister, Jennifer J.; Elias, Maya N.; Tetlow, Amber M.; Sardina, Angela L.; Sadeq, Nasreen A.; Brandino, Amanda D.; Bush, Aryn L. Harrison

    2017-01-01

    Purpose: Studies suggest that deficits in auditory processing predict cognitive decline and dementia, but those studies included limited measures of auditory processing. The purpose of this study was to compare older adults with and without probable mild cognitive impairment (MCI) across two domains of auditory processing (auditory performance in…

  16. Public Attitudes toward Stuttering in Turkey: Probability versus Convenience Sampling

    ERIC Educational Resources Information Center

    Ozdemir, R. Sertan; St. Louis, Kenneth O.; Topbas, Seyhun

    2011-01-01

    Purpose: A Turkish translation of the "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S") was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. Method: A convenience sample of adults in Eskisehir, Turkey was compared with two replicates of a school-based,…

  17. Definition and Measurement of Selection Bias: From Constant Ratio to Constant Difference

    ERIC Educational Resources Information Center

    Cahan, Sorel; Gamliel, Eyal

    2006-01-01

    Despite its intuitive appeal and popularity, Thorndike's constant ratio (CR) model for unbiased selection is inherently inconsistent in "n"-free selection. Satisfaction of the condition for unbiased selection, when formulated in terms of success/acceptance probabilities, usually precludes satisfaction by the converse probabilities of…

  18. Alternative Frameworks for the Reconciliation of Probability Assessments

    DTIC Science & Technology

    1980-11-01

    probability. The mental gymnastics required are too difficult. In fact, we see that the only feelings of discomfort S truly has concern moving from q...some form of entropy measure might well be appropriate here. A more serious difficulty arises with the actual hill-climbing algorithm. We cannot be

  19. Spatial Probability Cuing and Right Hemisphere Damage

    ERIC Educational Resources Information Center

    Shaqiri, Albulena; Anderson, Britt

    2012-01-01

    In this experiment we studied statistical learning, inter-trial priming, and visual attention. We assessed healthy controls and right brain damaged (RBD) patients with and without neglect, on a simple visual discrimination task designed to measure priming effects and probability learning. All participants showed a preserved priming effect for item…

  20. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
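
    The L1-median (geometric median) used here as a robust location estimator is commonly computed with Weiszfeld's iteration; a compact sketch of that standard algorithm (not the authors' implementation) is:

      import numpy as np

      def l1_median(X, n_iter=200, eps=1e-10):
          """Weiszfeld iteration for the L1-median (geometric median) of the rows of X."""
          X = np.asarray(X, dtype=float)
          m = X.mean(axis=0)                       # start from the sample mean
          for _ in range(n_iter):
              d = np.linalg.norm(X - m, axis=1)
              d = np.maximum(d, eps)               # guard against division by zero
              w = 1.0 / d
              m_new = (w[:, None] * X).sum(axis=0) / w.sum()
              if np.linalg.norm(m_new - m) < eps:
                  break
              m = m_new
          return m

      # The L1-median is far less sensitive to outliers than the mean:
      rng = np.random.default_rng(0)
      data = np.vstack([rng.normal(0, 1, (100, 2)), [[50.0, 50.0]]])   # one gross outlier
      print("mean     :", data.mean(axis=0))
      print("L1-median:", l1_median(data))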

  1. Measurement error in earnings data: Using a mixture model approach to combine survey and register data.

    PubMed

    Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.

  2. Some practical universal noiseless coding techniques

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1979-01-01

    Some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms to solving practical problems is obtained because most real data sources can be simply transformed into this form by appropriate preprocessing. These algorithms have exhibited performance only slightly above all entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably under a measured average data entropy may be observed when data characteristics are changing over the measurement span.

  3. Probabilistic metrology or how some measurement outcomes render ultra-precise estimates

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.

    2016-10-01

    We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequence of outcomes). For probe systems subject to local dephasing, we quantify such precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can set back the detrimental effects of noise.

  4. In-beam Fission Study at JAEA

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  5. In-beam fission study for Heavy Element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  6. Quantitative analysis of the probability of introducing equine encephalosis virus (EEV) into The Netherlands.

    PubMed

    Fischer, Egil Andreas Joor; Martínez López, Evelyn Pamela; De Vos, Clazien J; Faverjon, Céline

    2016-09-01

    Equine encephalosis is a midge-borne viral disease of equines caused by equine encephalosis virus (EEV, Orbivirus, Reoviridae), and closely related to African horse sickness virus (AHSV). EEV and AHSV share common vectors and show similar transmission patterns. Until now EEV has caused outbreaks in Africa and Israel. This study aimed to provide insight into the probability of an EEV outbreak in The Netherlands caused by infected vectors or hosts, the contribution of potential source areas (risk regions) to this probability, and the effectiveness of preventive measures (sanitary regimes). A stochastic risk model constructed for risk assessment of AHSV introduction was adapted to EEV. Source areas were categorized in risk regions (high, low, and very low risk) based on EEV history and the presence of competent vectors. Two possible EEV introduction pathways were considered: importation of infected equines and importation of infected vectors along with their vertebrate hosts. The probability of EEV introduction (PEEV) was calculated by combining the probability of EEV release by either pathway and the probability of EEV establishment. The median current annual probability of EEV introduction by an infected equine was estimated at 0.012 (90% uncertainty interval 0.002-0.020), and by an infected vector at 4.0 × 10⁻⁵ (90% uncertainty interval 5.3 × 10⁻⁶ to 2.0 × 10⁻⁴). Equines from high risk regions contributed most to the probability of EEV introduction with 74% of the EEV introduction by equines, whereas low and very low risk regions contributed 18% and 8%, respectively. International movements of horses participating in equestrian events contributed most to the probability of EEV introduction by equines from high risk regions (86%), but also contributed substantially for low and very low risk regions with 47% and 56%. The probability of introducing EEV into The Netherlands is much higher than the probability of introducing AHSV, with equines from high risk countries contributing most. The introduction by an infected equine is the most likely pathway. Control measures before exportation of equines were shown to have a strong mitigating effect on the probability of EEV introduction. The risk of EEV outbreaks should be taken into account when altering these import regulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Estimate of tephra accumulation probabilities for the U.S. Department of Energy's Hanford Site, Washington

    USGS Publications Warehouse

    Hoblitt, Richard P.; Scott, William E.

    2011-01-01

    In response to a request from the U.S. Department of Energy, we estimate the thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded at the Hanford Site in south-central Washington State, where a project to build the Tank Waste Treatment and Immobilization Plant is underway. We follow the methodology of a 1987 probabilistic assessment of tephra accumulation in the Pacific Northwest. For a given thickness of tephra, we calculate the product of three probabilities: (1) the annual probability of an eruption producing 0.1 km³ (bulk volume) or more of tephra, (2) the probability that the wind will be blowing toward the Hanford Site, and (3) the probability that tephra accumulations will equal or exceed the given thickness at a given distance. Mount St. Helens, which lies about 200 km upwind from the Hanford Site, has been the most prolific source of tephra fallout among Cascade volcanoes in the recent geologic past and its annual eruption probability based on this record (0.008) dominates assessment of future tephra falls at the site. The probability that the prevailing wind blows toward Hanford from Mount St. Helens is 0.180. We estimate exceedance probabilities of various thicknesses of tephra fallout from an analysis of 14 eruptions of the size expectable from Mount St. Helens and for which we have measurements of tephra fallout at 200 km. The result is that the estimated thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded is about 10 centimeters. It is likely that this thickness is a maximum estimate because we used conservative estimates of eruption and wind probabilities and because the 14 deposits we used probably provide an over-estimate. The use of deposits in this analysis that were mostly compacted by the time they were studied and measured implies that the bulk density of the tephra fallout we consider here is in the range of 1,000-1,250 kg/m³. The load of 10 cm of such tephra fallout on a flat surface would therefore be in the range of 100-125 kg/m²; addition of water from rainfall or snowmelt would provide additional load.
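
    The structure of the estimate is a simple product of three probabilities, which makes it easy to see how the 10 cm figure emerges: with an annual eruption probability of 0.008 and a wind probability of 0.180, the conditional exceedance probability at the site must be roughly 0.07 for the product to reach 1 in 10,000. A worked sketch using only the numbers quoted in the abstract:

      p_eruption = 0.008          # annual probability of a >= 0.1 km^3 eruption (from the abstract)
      p_wind = 0.180              # probability the wind blows from Mount St. Helens toward Hanford
      target_annual_probability = 1e-4

      # Conditional exceedance probability at ~200 km needed to reach the target annual probability
      p_exceed_needed = target_annual_probability / (p_eruption * p_wind)
      print(f"required P(thickness >= T | eruption, wind toward site) = {p_exceed_needed:.3f}")
      # The thickness T whose exceedance probability in the 14-eruption dataset is about this
      # value is what the authors report as roughly 10 cm.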

  8. Precise Determination of the Intensity of 226Ra Alpha Decay to the 186 keV Excited State

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.P. LaMont; R.J. Gehrke; S.E. Glover

    There is a significant discrepancy in the reported values for the emission probability of the 186 keV gamma-ray resulting from the alpha decay of 226Ra to the 186 keV excited state of 222Rn. Published values fall in the range of 3.28 to 3.59 gamma-rays per 100 alpha-decays. An interesting observation is that the lower value, 3.28, is based on measuring the 186 keV gamma-ray intensity relative to the 226Ra alpha-branch to the 186 keV level. The higher values, which are close to 3.59, are based on measuring the gamma-ray intensity from mass standards of 226Ra that are traceable to the mass standards prepared by Hönigschmid in the early 1930s. This discrepancy was resolved in this work by carefully measuring the 226Ra alpha-branch intensities, then applying the theoretical E2 multipolarity internal conversion coefficient of 0.692±0.007 to calculate the 186 keV gamma-ray emission probability. The measured value for the alpha branch to the 186 keV excited state was (6.16±0.03)%, which gives a 186 keV gamma-ray emission probability of (3.64±0.04)%. This value is in excellent agreement with the most recently reported 186 keV gamma-ray emission probabilities determined using 226Ra mass standards.
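
    The resolution of the discrepancy rests on a one-line calculation: the 186 keV gamma-ray emission probability is the measured alpha branch to the 186 keV level divided by one plus the total internal conversion coefficient. Reproducing the arithmetic from the numbers quoted above:

      alpha_branch = 0.0616            # alpha branch to the 186 keV level (6.16 %)
      conversion_coefficient = 0.692   # theoretical E2 internal conversion coefficient

      gamma_emission = alpha_branch / (1.0 + conversion_coefficient)
      print(f"186 keV gamma-ray emission probability = {100 * gamma_emission:.2f} %")   # about 3.64 %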

  9. Characterization of silicon photomultipliers and validation of the electrical model

    NASA Astrophysics Data System (ADS)

    Peng, Peng; Qiang, Yi; Ross, Steve; Burr, Kent

    2018-04-01

    This paper introduces a systematic way to measure most features of the silicon photomultipliers (SiPM). We implement an efficient two-laser procedure to measure the recovery time. Avalanche probability was found to play an important role in explaining the right behavior of the SiPM recovery process. Also, we demonstrate how equivalent circuit parameters measured by optical tests can be used in SPICE modeling to predict details of the time constants relevant to the pulse shape. The SiPM properties measured include breakdown voltage, gain, diode capacitor, quench resistor, quench capacitor, dark count rate, photodetection efficiency, cross-talk and after-pulsing probability, and recovery time. We apply these techniques on the SiPMs from two companies: Hamamatsu and SensL.

  10. Mayer control problem with probabilistic uncertainty on initial positions

    NASA Astrophysics Data System (ADS)

    Marigonda, Antonio; Quincampoix, Marc

    2018-03-01

    In this paper we introduce and study an optimal control problem in Mayer form in the space of probability measures on Rn endowed with the Wasserstein distance. Our aim is to study optimality conditions when the knowledge of the initial state and velocity is subject to some uncertainty, which are modeled by a probability measure on Rd and by a vector-valued measure on Rd, respectively. We provide a characterization of the value function of such a problem as the unique solution of a Hamilton-Jacobi-Bellman equation in the space of measures in a suitable viscosity sense. Some applications to a pursuit-evasion game with uncertainty in the state space are also discussed, proving the existence of a value for the game.

  11. DNA binding sites characterization by means of Rényi entropy measures on nucleotide transitions.

    PubMed

    Perera, Alexandre; Vallverdu, Montserrat; Claria, Francesc; Soria, José Manuel; Caminal, Pere

    2006-01-01

    In this work, parametric information-theory measures for the characterization of binding sites in DNA are extended with the use of transitional probabilities on the sequence. We propose the use of parametric uncertainty measures such as Rényi entropies obtained from the transition probabilities for the study of the binding sites, in addition to nucleotide frequency based Rényi measures. Results are reported in this manuscript comparing transition frequencies (i.e. dinucleotides) and base frequencies for Shannon and parametric Rényi measures for a number of binding sites found in E. coli, lambda and T7 organisms. We observe that, for the evaluated datasets, the information provided by both approaches is not redundant, as they evolve differently under increasing Rényi orders.

  12. Author Credit for Transdisciplinary Collaboration

    PubMed Central

    Xu, Jian; Ding, Ying; Malic, Vincent

    2015-01-01

    Transdisciplinary collaboration is the key for innovation. An evaluation mechanism is necessary to ensure that academic credit for this costly process can be allocated fairly among coauthors. This paper proposes a set of quantitative measures (e.g., t_credit and t_index) to reflect authors’ transdisciplinary contributions to publications. These measures are based on paper-topic probability distributions and author-topic probability distributions. We conduct an empirical analysis of the information retrieval domain which demonstrates that these measures effectively improve the results of harmonic_credit and h_index measures by taking into account the transdisciplinary contributions of authors. The definitions of t_credit and t_index provide a fair and effective way for research organizations to assign credit to authors of transdisciplinary publications. PMID:26375678

  13. Delirium superimposed on dementia: defining disease states and course from longitudinal measurements of a multivariate index using latent class analysis and hidden Markov chains.

    PubMed

    Ciampi, Antonio; Dyachenko, Alina; Cole, Martin; McCusker, Jane

    2011-12-01

    The study of mental disorders in the elderly presents substantial challenges due to population heterogeneity, coexistence of different mental disorders, and diagnostic uncertainty. While reliable tools have been developed to collect relevant data, new approaches to study design and analysis are needed. We focus on a new analytic approach. Our framework is based on latent class analysis and hidden Markov chains. From repeated measurements of a multivariate disease index, we extract the notion of underlying state of a patient at a time point. The course of the disorder is then a sequence of transitions among states. States and transitions are not observable; however, the probability of being in a state at a time point, and the transition probabilities from one state to another over time can be estimated. Data from 444 patients with and without diagnosis of delirium and dementia were available from a previous study. The Delirium Index was measured at diagnosis, and at 2 and 6 months from diagnosis. Four latent classes were identified: fairly healthy, moderately ill, clearly sick, and very sick. Dementia and delirium could not be separated on the basis of these data alone. Indeed, as the probability of delirium increased, so did the probability of decline of mental functions. Eight most probable courses were identified, including good and poor stable courses, and courses exhibiting various patterns of improvement. Latent class analysis and hidden Markov chains offer a promising tool for studying mental disorders in the elderly. Its use may show its full potential as new data become available.

  14. Prevalence rate, predictors and long-term course of probable posttraumatic stress disorder after major trauma: a prospective cohort study

    PubMed Central

    2012-01-01

    Background Among trauma patients relatively high prevalence rates of posttraumatic stress disorder (PTSD) have been found. To identify opportunities for prevention and early treatment, predictors and course of PTSD need to be investigated. Long-term follow-up studies of injury patients may help gain more insight into the course of PTSD and subgroups at risk for PTSD. The aim of our long-term prospective cohort study was to assess the prevalence rate and predictors, including pre-hospital trauma care (assistance of physician staffed Emergency Medical Services (EMS) at the scene of the accident), of probable PTSD in a sample of major trauma patients at one and two years after injury. The second aim was to assess the long-term course of probable PTSD following injury. Methods A prospective cohort study was conducted of 332 major trauma patients with an Injury Severity Score (ISS) of 16 or higher. We used data from the hospital trauma registry and self-assessment surveys that included the Impact of Event Scale (IES) to measure probable PTSD symptoms. An IES-score of 35 or higher was used as indication for the presence of probable PTSD. Results One year after injury measurements of 226 major trauma patients were obtained (response rate 68%). Of these patients 23% had an IES-score of 35 or higher, indicating probable PTSD. At two years after trauma the prevalence rate of probable PTSD was 20%. Female gender and co-morbid disease were strong predictors of probable PTSD one year following injury, whereas minor to moderate head injury and injury of the extremities (AIS less than 3) were strong predictors of this disorder at two year follow-up. Of the patients with probable PTSD at one year follow-up 79% had persistent PTSD symptoms a year later. Conclusions Up to two years after injury probable PTSD is highly prevalent in a population of patients with major trauma. The majority of patients suffered from prolonged effects of PTSD, underlining the importance of prevention, early detection, and treatment of injury-related PTSD. PMID:23270522

  15. Bipartite discrimination of independently prepared quantum states as a counterexample to a parallel repetition conjecture

    NASA Astrophysics Data System (ADS)

    Akibue, Seiseki; Kato, Go

    2018-04-01

    For distinguishing quantum states sampled from a fixed ensemble, the gap in bipartite and single-party distinguishability can be interpreted as a nonlocality of the ensemble. In this paper, we consider bipartite state discrimination in a composite system consisting of N subsystems, where each subsystem is shared between two parties and the state of each subsystem is randomly sampled from a particular ensemble comprising the Bell states. We show that the success probability of perfectly identifying the state converges to 1 as N →∞ if the entropy of the probability distribution associated with the ensemble is less than 1, even if the success probability is less than 1 for any finite N . In other words, the nonlocality of the N -fold ensemble asymptotically disappears if the probability distribution associated with each ensemble is concentrated. Furthermore, we show that the disappearance of the nonlocality can be regarded as a remarkable counterexample of a fundamental open question in theoretical computer science, called a parallel repetition conjecture of interactive games with two classically communicating players. Measurements for the discrimination task include a projective measurement of one party represented by stabilizer states, which enable the other party to perfectly distinguish states that are sampled with high probability.

  16. Measurement of spin-flip probabilities for ultracold neutrons interacting with nickel phosphorus coated surfaces

    DOE PAGES

    Tang, Zhaowen; Adamek, Evan Robert; Brandt, Aaron; ...

    2016-04-26

    In this paper, we report a measurement of the spin-flip probabilities for ultracold neutrons interacting with surfaces coated with nickel phosphorus. For 50 μm thick nickel phosphorus coated on stainless steel, the spin-flip probability per bounce was found to be β NiP on SS = (3.3 +1.8, -5.6) × 10⁻⁶. For 50 μm thick nickel phosphorus coated on aluminum, the spin-flip probability per bounce was found to be β NiP on Al = (3.6 +2.1, -5.9) × 10⁻⁶. For the copper guide used as reference, the spin flip probability per bounce was found to be β Cu = (6.7 +5.0, -2.5) × 10⁻⁶. The results on the nickel phosphorus-coated surfaces may be interpreted as upper limits, yielding β NiP on SS < 6.2 × 10⁻⁶ (90% C.L.) and β NiP on Al < 7.0 × 10⁻⁶ (90% C.L.) for 50 μm thick nickel phosphorus coated on stainless steel and 50 μm thick nickel phosphorus coated on aluminum, respectively. Finally, nickel phosphorus coated stainless steel or aluminum provides a solution when low-cost, mechanically robust, and non-depolarizing UCN guides with a high Fermi potential are needed.

  17. Delay or probability discounting in a model of impulsive behavior: effect of alcohol.

    PubMed Central

    Richards, J B; Zhang, L; Mitchell, S H; de Wit, H

    1999-01-01

    Little is known about the acute effects of drugs of abuse on impulsivity and self-control. In this study, impulsivity was assessed in humans using a computer task that measured delay and probability discounting. Discounting describes how much the value of a reward (or punisher) is decreased when its occurrence is either delayed or uncertain. Twenty-four healthy adult volunteers ingested a moderate dose of ethanol (0.5 or 0.8 g/kg ethanol: n = 12 at each dose) or placebo before completing the discounting task. In the task the participants were given a series of choices between a small, immediate, certain amount of money and $10 that was either delayed (0, 2, 30, 180, or 365 days) or probabilistic (i.e., certainty of receipt was 1.0, .9, .75, .5, or .25). The point at which each individual was indifferent between the smaller immediate or certain reward and the $10 delayed or probabilistic reward was identified using an adjusting-amount procedure. The results indicated that (a) delay and probability discounting were well described by a hyperbolic function; (b) delay and probability discounting were positively correlated within subjects; (c) delay and probability discounting were moderately correlated with personality measures of impulsivity; and (d) alcohol had no effect on discounting. PMID:10220927
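
    Both delay and probability discounting in this study were well described by the same hyperbolic form, V = A / (1 + kX), where X is either the delay or the odds against receipt, (1 − p)/p. A minimal sketch of fitting that function to indifference points, with synthetic data in place of the participants' adjusting-amount results:

      import numpy as np
      from scipy.optimize import curve_fit

      def hyperbolic(x, k):
          """Hyperbolic discounting: subjective value of a $10 reward delayed/uncertain by x."""
          return 10.0 / (1.0 + k * x)

      # Synthetic indifference points: delays in days, values in dollars (illustrative only)
      delays = np.array([0, 2, 30, 180, 365], dtype=float)
      indifference = np.array([10.0, 9.6, 7.4, 4.1, 2.8])

      (k_hat,), _ = curve_fit(hyperbolic, delays, indifference, p0=[0.01])
      print(f"estimated discount rate k = {k_hat:.4f} per day")

      # For probability discounting, replace the delay with the odds against receipt:
      p = np.array([1.0, 0.9, 0.75, 0.5, 0.25])
      odds_against = (1.0 - p) / p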

  18. Improving estimates of tree mortality probability using potential growth rate

    USGS Publications Warehouse

    Das, Adrian J.; Stephenson, Nathan L.

    2015-01-01

    Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.
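    As an illustration of how a growth index might enter a growth–mortality model, the sketch below uses a logistic form with hypothetical coefficients; it is not the authors' fitted model, only a minimal example of the kind of predictor the abstract describes:

```python
# Minimal sketch (hypothetical coefficients, not the authors' fitted model):
# a logistic growth-mortality model where mortality probability depends on
# the growth index GI = realized diameter growth rate / potential rate (PDGR).
import numpy as np

def mortality_probability(dgr, pdgr, b0=-1.0, b1=-4.0):
    """GI near 1 (growth near potential) gives low mortality probability."""
    gi = dgr / pdgr
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * gi)))

print(mortality_probability(dgr=0.05, pdgr=0.40))  # slow-growing tree, ~0.18
print(mortality_probability(dgr=0.35, pdgr=0.40))  # near-potential growth, ~0.01
```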

  19. Effects of delay and probability combinations on discounting in humans.

    PubMed

    Cox, David J; Dallery, Jesse

    2016-10-01

    To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n=212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n=98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. Copyright © 2016 Elsevier B.V. All rights reserved.
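    A minimal sketch of the 5-trial adjusting-delay idea follows; it implements a generic bisection over delays rather than the exact trial values of the published task, and the simulated participant is hypothetical:

```python
# Illustrative 5-trial adjusting-delay procedure (generic bisection, not
# necessarily the exact trial sequence of the published task): each choice
# narrows down the delay at which $1000 is worth half its amount ($500).
def five_trial_adjusting_delay(prefers_delayed, delay_bounds_days=(1, 3650)):
    lo, hi = delay_bounds_days
    for _ in range(5):
        mid = (lo + hi) / 2.0
        # prefers_delayed(mid) is True if the participant picks "$1000 in mid
        # days" over "$500 now"; the half-value delay is then longer than mid.
        if prefers_delayed(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0  # estimated delay at which $1000 loses half its value

# Simulated participant whose half-value delay is about 300 days:
print(five_trial_adjusting_delay(lambda d: d < 300))
```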

  20. Improving default risk prediction using Bayesian model uncertainty techniques.

    PubMed

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest, and this exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk; rating agencies, whose assessments date back to the 19th century, publish probabilities of default and transition probabilities for various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.

  1. Simplified tools for measuring retention in care in antiretroviral treatment program in Ethiopia: cohort and current retention in care.

    PubMed

    Assefa, Yibeltal; Worku, Alemayehu; Wouters, Edwin; Koole, Olivier; Haile Mariam, Damen; Van Damme, Wim

    2012-01-01

    Patient retention in care is a critical challenge for antiretroviral treatment programs, mainly because retention in care is related to adherence to treatment and patient survival. It is therefore imperative that health facilities and programs measure patient retention in care. However, the currently available tools for measuring retention in care, such as Kaplan-Meier analysis, have many practical limitations. The objective of this study was to develop simplified tools for measuring retention in care. Retrospective cohort data were collected from patient registers in nine health facilities in Ethiopia. Retention in care was the primary outcome for the study. Tools were developed to measure "current retention" in care during a specific period of time for a specific "ART-age group" and "cohort retention" in care among patients who were followed for the last "Y" number of years on ART. "Probability of retention" based on the tool for "cohort retention" in care was compared with "probability of retention" based on Kaplan-Meier analysis. We found that the new tools make it possible to measure "current retention" and "cohort retention" in care. We also found that the tools were easy to use and did not require advanced statistical skills. Both "current retention" and "cohort retention" are lower among patients in the first two "ART-age groups" and "ART-age cohorts" than in subsequent "ART-age groups" and "ART-age cohorts". The "probability of retention" based on the new tools was found to be similar to the "probability of retention" based on Kaplan-Meier analysis. The simplified tools for "current retention" and "cohort retention" will enable practitioners and program managers to measure and monitor rates of retention in care easily and appropriately. We therefore recommend that health facilities and programs start to use these tools in their efforts to improve retention in care and patient outcomes.

  2. Systematic review: Efficacy and safety of medical marijuana in selected neurologic disorders

    PubMed Central

    Koppel, Barbara S.; Brust, John C.M.; Fife, Terry; Bronstein, Jeff; Youssof, Sarah; Gronseth, Gary; Gloss, David

    2014-01-01

    Objective: To determine the efficacy of medical marijuana in several neurologic conditions. Methods: We performed a systematic review of medical marijuana (1948–November 2013) to address treatment of symptoms of multiple sclerosis (MS), epilepsy, and movement disorders. We graded the studies according to the American Academy of Neurology classification scheme for therapeutic articles. Results: Thirty-four studies met inclusion criteria; 8 were rated as Class I. Conclusions: The following were studied in patients with MS: (1) Spasticity: oral cannabis extract (OCE) is effective, and nabiximols and tetrahydrocannabinol (THC) are probably effective, for reducing patient-centered measures; it is possible both OCE and THC are effective for reducing both patient-centered and objective measures at 1 year. (2) Central pain or painful spasms (including spasticity-related pain, excluding neuropathic pain): OCE is effective; THC and nabiximols are probably effective. (3) Urinary dysfunction: nabiximols is probably effective for reducing bladder voids/day; THC and OCE are probably ineffective for reducing bladder complaints. (4) Tremor: THC and OCE are probably ineffective; nabiximols is possibly ineffective. (5) Other neurologic conditions: OCE is probably ineffective for treating levodopa-induced dyskinesias in patients with Parkinson disease. Oral cannabinoids are of unknown efficacy in non–chorea-related symptoms of Huntington disease, Tourette syndrome, cervical dystonia, and epilepsy. The risks and benefits of medical marijuana should be weighed carefully. Risk of serious adverse psychopathologic effects was nearly 1%. Comparative effectiveness of medical marijuana vs other therapies is unknown for these indications. PMID:24778283

  3. Systematic review: efficacy and safety of medical marijuana in selected neurologic disorders: report of the Guideline Development Subcommittee of the American Academy of Neurology.

    PubMed

    Koppel, Barbara S; Brust, John C M; Fife, Terry; Bronstein, Jeff; Youssof, Sarah; Gronseth, Gary; Gloss, David

    2014-04-29

    To determine the efficacy of medical marijuana in several neurologic conditions. We performed a systematic review of medical marijuana (1948-November 2013) to address treatment of symptoms of multiple sclerosis (MS), epilepsy, and movement disorders. We graded the studies according to the American Academy of Neurology classification scheme for therapeutic articles. Thirty-four studies met inclusion criteria; 8 were rated as Class I. The following were studied in patients with MS: (1) Spasticity: oral cannabis extract (OCE) is effective, and nabiximols and tetrahydrocannabinol (THC) are probably effective, for reducing patient-centered measures; it is possible both OCE and THC are effective for reducing both patient-centered and objective measures at 1 year. (2) Central pain or painful spasms (including spasticity-related pain, excluding neuropathic pain): OCE is effective; THC and nabiximols are probably effective. (3) Urinary dysfunction: nabiximols is probably effective for reducing bladder voids/day; THC and OCE are probably ineffective for reducing bladder complaints. (4) Tremor: THC and OCE are probably ineffective; nabiximols is possibly ineffective. (5) Other neurologic conditions: OCE is probably ineffective for treating levodopa-induced dyskinesias in patients with Parkinson disease. Oral cannabinoids are of unknown efficacy in non-chorea-related symptoms of Huntington disease, Tourette syndrome, cervical dystonia, and epilepsy. The risks and benefits of medical marijuana should be weighed carefully. Risk of serious adverse psychopathologic effects was nearly 1%. Comparative effectiveness of medical marijuana vs other therapies is unknown for these indications.

  4. Probability concepts in quality risk management.

    PubMed

    Claycamp, H Gregg

    2012-01-01

    Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although "risk" generally describes a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management methodologies and respective tools focus on managing severity but are relatively silent on the in-depth meaning and uses of "probability." Pharmaceutical manufacturers are expanding their use of quality risk management to identify and manage risks to the patient that might occur in phases of the pharmaceutical life cycle from drug development to manufacture, marketing to product discontinuation. A probability concept is typically applied by risk managers as a combination of data-based measures of probability and a subjective "degree of belief" meaning of probability. Probability as a concept that is crucial for understanding and managing risk is discussed through examples ranging from the most general, scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management. A rich history of probability in risk management applied to other fields suggests that high-quality risk management decisions benefit from the implementation of more thoughtful probability concepts in both risk modeling and risk management.

  5. Re-Conceptualization of Modified Angoff Standard Setting: Unified Statistical, Measurement, Cognitive, and Social Psychological Theories

    ERIC Educational Resources Information Center

    Iyioke, Ifeoma Chika

    2013-01-01

    This dissertation describes a design for training, in accordance with probability judgment heuristics principles, for the Angoff standard setting method. The new training with instruction, practice, and feedback tailored to the probability judgment heuristics principles was called the Heuristic training and the prevailing Angoff method training…

  6. Assessing bat detectability and occupancy with multiple automated echolocation detectors

    Treesearch

    Marcos P. Gorresen; Adam C. Miles; Christopher M. Todd; Frank J. Bonaccorso; Theodore J. Weller

    2008-01-01

    Occupancy analysis and its ability to account for differential detection probabilities is important for studies in which detecting echolocation calls is used as a measure of bat occurrence and activity. We examined the feasibility of remotely acquiring bat encounter histories to estimate detection probability and occupancy. We used echolocation detectors coupled o...

  7. Prescribed burning and wildfire risk in the 1998 fire season in Florida

    Treesearch

    John M. Pye; Jeffrey P. Prestemon; David T. Butry; Karen L. Abt

    2003-01-01

    Measures of understory burning activity in and around FIA plots in northeastern Florida were not significantly associated with reduced burning probability in the extreme fire season of 1998. In this unusual year, burn probability was greatest on ordinarily wetter sites, especially baldcypress stands, and positively associated with understory vegetation. Moderate...

  8. MEASUREMENT OF CHILDREN'S EXPOSURE TO PESTICIDES: ANALYSIS OF URINARY METABOLITE LEVELS IN A PROBABILITY-BASED SAMPLE

    EPA Science Inventory

    The Minnesota Children's Pesticide Exposure Study is a probability-based sample of 102 children 3-13 years old who were monitored for commonly used pesticides. During the summer of 1997, first-morning-void urine samples (1-3 per child) were obtained for 88% of study children a...

  9. MEASUREMENT OF MULTI-POLLUTANT AND MULTI-PATHWAY EXPOSURES IN A PROBABILITY-BASED SAMPLE OF CHILDREN: PRACTICAL STRATEGIES FOR EFFECTIVE FIELD STUDIES

    EPA Science Inventory

    The purpose of this manuscript is to describe the practical strategies developed for the implementation of the Minnesota Children's Pesticide Exposure Study (MNCPES), which is one of the first probability-based samples of multi-pathway and multi-pesticide exposures in children....

  10. Early diagnosis in glaucoma.

    PubMed

    Garway-Heath, David F

    2008-01-01

    This chapter reviews the evidence for the clinical application of vision function tests and imaging devices to identify early glaucoma, and sets out a scheme for the appropriate use and interpretation of test results in screening/case-finding and clinic settings. In early glaucoma, signs may be equivocal and the diagnosis is often uncertain. Either structural damage or vision function loss may be the first sign of glaucoma; neither one is consistently apparent before the other. Quantitative tests of visual function and measurements of optic-nerve head and retinal nerve fiber layer anatomy are useful to either raise or lower the probability that glaucoma is present. The posttest probability for glaucoma may be calculated from the pretest probability and the likelihood ratio of the diagnostic criterion, and the output of several diagnostic devices may be combined to achieve a final probability. However, clinicians need to understand how these diagnostic devices make their measurements, so that the validity of each test result can be adequately assessed. Only then should the result be used, together with the patient history and clinical examination, to derive a diagnosis.
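    The posttest-probability calculation described above is a short piece of arithmetic; the sketch below works it through with hypothetical numbers (the pretest probability and likelihood ratios are illustrative, not values from the chapter):

```python
# Worked example of the calculation described above: converting a pretest
# probability and a test's likelihood ratio into a posttest probability.
def posttest_probability(pretest_p, likelihood_ratio):
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# e.g. a 10% pretest probability of glaucoma and a positive imaging result
# with LR+ = 8 gives roughly a 47% posttest probability.
print(posttest_probability(0.10, 8.0))

# Several independent tests can be chained by multiplying their LRs:
print(posttest_probability(0.10, 8.0 * 2.5))
```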

  11. Laser Ignition Microthruster Experiments on KKS-1

    NASA Astrophysics Data System (ADS)

    Nakano, Masakatsu; Koizumi, Hiroyuki; Watanabe, Masashi; Arakawa, Yoshihiro

    A laser ignition microthruster has been developed for microsatellites. Thruster performances such as impulse and ignition probability were measured, using boron potassium nitrate (B/KNO3) solid propellant ignited by a 1 W CW laser diode. The measured impulses were 60 mNs ± 15 mNs with almost 100 % ignition probability. The effect of the mixture ratios of B/KNO3 on thruster performance was also investigated, and it was shown that mixture ratios between B/KNO3/binder = 28/70/2 and 38/60/2 exhibited both high ignition probability and high impulse. Laser ignition thrusters designed and fabricated based on these data became the first non-conventional microthrusters on the Kouku Kousen Satellite No. 1 (KKS-1) microsatellite that was launched by a H2A rocket as one of six piggyback satellites in January 2009.

  12. Improved log(gf) Values of Selected Lines in Mn I and Mn II for Abundance Determinations in FGK Dwarfs and Giants

    NASA Astrophysics Data System (ADS)

    Den Hartog, E. A.; Lawler, J. E.; Sobeck, J. S.; Sneden, C.; Cowan, J. J.

    2011-06-01

    The goal of the present work is to produce transition probabilities with very low uncertainties for a selected set of multiplets of Mn I and Mn II. Multiplets are chosen based upon their suitability for stellar abundance analysis. We report on new radiative lifetime measurements for 22 levels of Mn I from the e⁸D, z⁶P, z⁶D, z⁴F, e⁸S, and e⁶S terms and six levels of Mn II from the z⁵P and z⁷P terms using time-resolved laser-induced fluorescence on a slow atom/ion beam. New branching fractions for transitions from these levels, measured using a Fourier-transform spectrometer, are reported. When combined, these measurements yield transition probabilities for 47 transitions of Mn I and 15 transitions of Mn II. Comparisons are made to data from the literature and to Russell-Saunders (LS) theory. In keeping with the goal of producing a set of transition probabilities with the highest possible accuracy and precision, we recommend a weighted mean result incorporating our measurements on Mn I and II as well as independent measurements or calculations that we view as reliable and of a quality similar to ours. In a forthcoming paper, these Mn I/II transition probability data will be utilized to derive the Mn abundance in stars with spectra from both space-based and ground-based facilities over a 4000 Å wavelength range. With the employment of a local thermodynamic equilibrium line transfer code, the Mn I/II ionization balance will be determined for stars of different evolutionary states.
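    The combination of lifetimes and branching fractions into transition probabilities, and their conversion to log(gf), follows standard relations; the sketch below uses hypothetical numbers (not the paper's results) and assumes the usual convention with the wavelength in Å:

```python
# Hedged sketch of how lifetimes and branching fractions combine into
# transition probabilities and log(gf) values (standard relations; the
# numbers below are hypothetical, not the paper's results).
import math

def transition_probability(branching_fraction, lifetime_s):
    """A_ul = BF_ul / tau_u, in s^-1."""
    return branching_fraction / lifetime_s

def log_gf(A_ul, wavelength_angstrom, g_upper):
    """log(gf) via gf = 1.4992e-16 * lambda^2 * g_u * A_ul (lambda in Angstrom)."""
    gf = 1.4992e-16 * wavelength_angstrom**2 * g_upper * A_ul
    return math.log10(gf)

A = transition_probability(branching_fraction=0.60, lifetime_s=5.0e-9)
print(A)                                           # 1.2e8 s^-1
print(log_gf(A, wavelength_angstrom=4030.8, g_upper=8))
```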

  13. Comparison of particle tracking algorithms in commercial CFD packages: sedimentation and diffusion.

    PubMed

    Robinson, Risa J; Snyder, Pam; Oldham, Michael J

    2007-05-01

    Computational fluid dynamic modeling software has enabled microdosimetry patterns of inhaled toxins and toxicants to be predicted and visualized, and is being used in inhalation toxicology and risk assessment. These predicted microdosimetry patterns in airway structures are derived from predicted airflow patterns within these airways and particle tracking algorithms used in computational fluid dynamics (CFD) software packages. Although these commercial CFD codes have been tested for accuracy under various conditions, they have not been well tested for respiratory flows in general. Nor has their particle tracking algorithm accuracy been well studied. In this study, three software packages, Fluent Discrete Phase Model (DPM), Fluent Fine Particle Model (FPM), and ANSYS CFX, were evaluated. Sedimentation and diffusion were each isolated in a straight tube geometry and tested for accuracy. A range of flow rates corresponding to adult low activity (minute ventilation = 10 L/min) and to heavy exertion (minute ventilation = 60 L/min) were tested by varying the range of dimensionless diffusion and sedimentation parameters found using the Weibel symmetric 23 generation lung morphology. Numerical results for fully developed parabolic and uniform (slip) profiles were compared respectively, to Pich (1972) and Yu (1977) analytical sedimentation solutions. Schum and Yeh (1980) equations for sedimentation were also compared. Numerical results for diffusional deposition were compared to analytical solutions of Ingham (1975) for parabolic and uniform profiles. Significant differences were found among the various CFD software packages and between numerical and analytical solutions. Therefore, it is prudent to validate CFD predictions against analytical solutions in idealized geometry before tackling the complex geometries of the respiratory tract.

  14. Landsat 9 OLI 2 focal plane subsystem: design, performance, and status

    NASA Astrophysics Data System (ADS)

    Malone, Kevin J.; Schrein, Ronald J.; Bradley, M. Scott; Irwin, Ronda; Berdanier, Barry; Donley, Eric

    2017-09-01

    The Landsat 9 mission will continue the legacy of Earth remote sensing that started in 1972. The Operational Land Imager 2 (OLI 2) is one of two instruments on the Landsat 9 satellite. The OLI 2 instrument is essentially a copy of the OLI instrument flying on Landsat 8. A key element of the OLI 2 instrument is the focal plane subsystem, or FPS, which consists of the focal plane array (FPA), the focal plane electronics (FPE) box, and low-thermal conductivity cables. This paper presents design details of the OLI 2 FPS. The FPA contains 14 critically-aligned focal plane modules (FPM). Each module contains 6 visible/near-IR (VNIR) detector arrays and three short-wave infrared (SWIR) arrays. A complex multi-spectral optical filter is contained in each module. Redundant pixels for each array provide exceptional operability. Spare detector modules from OLI were recharacterized after six years of storage. Radiometric test results are presented and compared with data recorded in 2010. Thermal, optical, mechanical and structural features of the FPA will be described. Special attention is paid to the thermal design of the FPA since thermal stability is crucial to ensuring low-noise and low-drift operation of the detectors which operate at -63°C. The OLI 2 FPE provides power, timing, and control to the focal plane modules. It also digitizes the video data and formats it for the solid-state recorder. Design improvements to the FPA-FPE cables will be discussed and characterization data will be presented. The paper will conclude with the status of the flight hardware assembly and testing.

  15. Characterization of bifunctional L-glutathione synthetases from Actinobacillus pleuropneumoniae and Actinobacillus succinogenes for efficient glutathione biosynthesis.

    PubMed

    Yang, Jianhua; Li, Wei; Wang, Dezheng; Wu, Hui; Li, Zhimin; Ye, Qin

    2016-07-01

    Glutathione (GSH), an important bioactive substance, is widely applied in pharmaceutical and food industries. In this work, two bifunctional L-glutathione synthetases (GshF) from Actinobacillus pleuropneumoniae (GshFAp) and Actinobacillus succinogenes (GshFAs) were successfully expressed in Escherichia coli BL-21(DE3). Similar to the GshF from Streptococcus thermophilus (GshFSt), GshFAp and GshFAs can be applied for high titer GSH production because they are less sensitive to end-product inhibition (Ki values 33 and 43 mM, respectively). The active catalytic forms of GshFAs and GshFAp are dimers, consistent with those of GshFPm (GshF from Pasteurella multocida) and GshFSa (GshF from Streptococcus agalactiae), but are different from GshFSt (GshF from S. thermophilus) which is an active monomer. The analysis of the protein sequences and three dimensional structures of GshFs suggested that the binding sites of GshFs for substrates, L-cysteine, L-glutamate, γ-glutamylcysteine, adenosine-triphosphate, and glycine are highly conserved with only very few differences. With sufficient supply of the precursors, the recombinant strains BL-21(DE3)/pET28a-gshFas and BL-21(DE3)/pET28a-gshFap were able to produce 36.6 and 34.1 mM GSH, with the molar yield of 0.92 and 0.85 mol/mol, respectively, based on the added L-cysteine. The results showed that GshFAp and GshFAs are potentially good candidates for industrial GSH production.

  16. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcon, J. M.; Weiss, C.

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining chiral effective field theory ($\chi$EFT) and dispersion analysis. The spectral functions on the two-pion cut at $t > 4 M_\pi^2$ are constructed using the elastic unitarity relation and an $N/D$ representation. $\chi$EFT is used to calculate the real functions $J_\pm^1(t) = f_\pm^1(t)/F_\pi(t)$ (ratios of the complex $\pi\pi \rightarrow N \bar N$ partial-wave amplitudes and the timelike pion FF), which are free of $\pi\pi$ rescattering. Rescattering effects are included through the empirical timelike pion FF $|F_\pi(t)|^2$. The method allows us to compute the isovector EM spectral functions up to $t \sim 1$ GeV$^2$ with controlled accuracy (LO, NLO, and partial N2LO). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at $t = 0$ (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives with minimal uncertainties and explain their collective behavior. Finally, we estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-$Q^2$ FF data is achieved up to $\sim 0.5$ GeV$^2$ for $G_E$, and up to $\sim 0.2$ GeV$^2$ for $G_M$. Our results can be used to guide the analysis of low-$Q^2$ elastic scattering data and the extraction of the proton charge radius.

  17. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    DOE PAGES

    Alarcon, J. M.; Weiss, C.

    2018-05-08

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining chiral effective field theory ($\chi$EFT) and dispersion analysis. The spectral functions on the two-pion cut at $t > 4 M_\pi^2$ are constructed using the elastic unitarity relation and an $N/D$ representation. $\chi$EFT is used to calculate the real functions $J_\pm^1(t) = f_\pm^1(t)/F_\pi(t)$ (ratios of the complex $\pi\pi \rightarrow N \bar N$ partial-wave amplitudes and the timelike pion FF), which are free of $\pi\pi$ rescattering. Rescattering effects are included through the empirical timelike pion FF $|F_\pi(t)|^2$. The method allows us to compute the isovector EM spectral functions up to $t \sim 1$ GeV$^2$ with controlled accuracy (LO, NLO, and partial N2LO). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at $t = 0$ (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives with minimal uncertainties and explain their collective behavior. Finally, we estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-$Q^2$ FF data is achieved up to $\sim 0.5$ GeV$^2$ for $G_E$, and up to $\sim 0.2$ GeV$^2$ for $G_M$. Our results can be used to guide the analysis of low-$Q^2$ elastic scattering data and the extraction of the proton charge radius.
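    The subtracted dispersion relations referred to above are, schematically, of the standard once-subtracted form shown below; the precise normalization and subtraction choices follow the paper, so this is only an illustrative sketch:

```latex
% Schematic once-subtracted dispersion relation for a form factor F(t),
% with the spectral function Im F(t') constructed on the two-pion cut.
\[
  F(t) \;=\; F(0) \;+\; \frac{t}{\pi} \int_{4 M_\pi^2}^{\infty}
  \mathrm{d}t' \, \frac{\operatorname{Im} F(t')}{t' \,(t' - t - i\epsilon)} .
\]
```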

  18. Consequences of early extraction of compromised first permanent molar: a systematic review.

    PubMed

    Saber, Afnan M; Altoukhi, Doua H; Horaib, Mariam F; El-Housseiny, Azza A; Alamoudi, Najlaa M; Sabbagh, Heba J

    2018-04-05

    The aim of this study was to systematically review the literature to determine the sequelae of early extraction of compromised first permanent molars (FPMs) with regard to the skeletal and dental development of 5- to 15-year-old children. Meta-analysis was conducted when applicable. Our research protocol included a search strategy, inclusion/exclusion criteria, and a data extraction plan. The search engines used were PubMed, Scopus, and Science Direct. Study selection was performed independently by three reviewers. Articles published from 1960 to 2017 were reviewed based on inclusion and exclusion criteria. Meta-analysis was performed to compare space closure between upper and lower arches. Eleven studies fulfilled the inclusion criteria. The consequences were decrease in post extraction space, accelerated development and eruption of second permanents molars (SPMs) and third molars, a decrease in caries and/or fillings on the proximal surfaces of adjacent teeth, lingual tipping and retrusion of incisors, and counter clockwise rotation of the occlusal plane. There were several consequences of early extraction of FPMs, which were related to skeletal and dental development. Our systematic review suggests that comprehensive evaluation of the compromised FPMs should be performed before planning an extraction. The ideal time for FPM extraction is when the SPM is at the early bifurcation stage in order to achieve complete closure of the extraction space by the SPM. Benefits should be weighed over the risks to decrease the risk of unfavorable outcomes as much as possible. However, due to the limited evidence on the outcomes and variables that influence them, high-quality prospective studies are needed.

  19. Statistical approaches to the analysis of point count data: A little extra information can go a long way

    USGS Publications Warehouse

    Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
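    The proposed decomposition can be written as a one-line product, and counts can then be adjusted for incomplete detection; the sketch below uses hypothetical component probabilities, not estimates from the paper:

```python
# Sketch of the decomposition proposed above (illustrative values only): the
# probability a bird is detected during a count is the product of the
# probability it vocalizes (availability) and the probability an observer
# detects that vocalization.
def overall_detection_probability(p_vocalize, p_detect_given_vocalize):
    return p_vocalize * p_detect_given_vocalize

def estimated_abundance(count, p_vocalize, p_detect_given_vocalize):
    """Count adjusted for incomplete detection (assumes a fixed sampled area)."""
    return count / overall_detection_probability(p_vocalize, p_detect_given_vocalize)

p = overall_detection_probability(0.7, 0.8)   # hypothetical components
print(p)                                      # 0.56
print(estimated_abundance(count=14, p_vocalize=0.7, p_detect_given_vocalize=0.8))  # 25.0
```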

  20. Head multidetector computed tomography: emergency medicine physicians overestimate the pretest probability and legal risk of significant findings.

    PubMed

    Baskerville, Jerry Ray; Herrick, John

    2012-02-01

    This study focuses on clinically assigned prospective estimated pretest probability and pretest perception of legal risk as independent variables in the ordering of multidetector computed tomographic (MDCT) head scans. Our primary aim is to measure the association between pretest probability of a significant finding and pretest perception of legal risk. Secondarily, we measure the percentage of MDCT scans that physicians would not order if there was no legal risk. This study is a prospective, cross-sectional, descriptive analysis of patients 18 years and older for whom emergency medicine physicians ordered a head MDCT. We collected a sample of 138 patients subjected to head MDCT scans. The prevalence of a significant finding in our population was 6%, yet the pretest probability expectation of a significant finding was 33%. The legal risk presumed was even more dramatic at 54%. These data support the hypothesis that physicians presume the legal risk to be significantly higher than the risk of a significant finding. A total of 21% or 15% patients (95% confidence interval, ±5.9%) would not have been subjected to MDCT if there was no legal risk. Physicians overestimated the probability that the computed tomographic scan would yield a significant result and indicated an even greater perceived medicolegal risk if the scan was not obtained. Physician test-ordering behavior is complex, and our study queries pertinent aspects of MDCT testing. The magnification of legal risk vs the pretest probability of a significant finding is demonstrated. Physicians significantly overestimated pretest probability of a significant finding on head MDCT scans and presumed legal risk. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  2. Usefulness of antigen-specific IgE probability curves derived from the 3gAllergy assay in diagnosing egg, cow's milk, and wheat allergies.

    PubMed

    Sato, Sakura; Ogura, Kiyotake; Takahashi, Kyohei; Sato, Yasunori; Yanagida, Noriyuki; Ebisawa, Motohiro

    2017-04-01

    Specific IgE (sIgE) antibody detection using the Siemens IMMULITE® 3gAllergy™ (3gAllergy) assay has not been sufficiently examined for the diagnosis of food allergy. The aim of this study was to evaluate the utility of measuring sIgE levels using the 3gAllergy assay to diagnose allergic reactions to egg, milk, and wheat. This retrospective study was conducted on patients with diagnosed or suspected allergies to egg, milk and wheat. Patients were divided into two groups according to their clinical reactivity to these allergens based on oral food challenge outcomes and/or convincing histories of immediate reaction to causative food(s). The sIgE levels were measured using 3gAllergy and ImmunoCAP. Predicted probability curves were estimated using logistic regression analysis. We analyzed 1561 patients, ages 0-19 y (egg = 436, milk = 499, wheat = 626). The sIgE levels determined using 3gAllergy correlated with those of ImmunoCAP, classifying 355 patients as symptomatic: egg = 149, milk = 123, wheat = 83. 3gAllergy sIgE levels were significantly higher in symptomatic than in asymptomatic patients (P < 0.0001). Predictive probability for positive food allergy was significantly increased and correlated with increased sIgE levels. The cut-offs for allergic reaction with 95% predictive probability as determined by the 3gAllergy probability curves were different from those of ImmunoCAP. Measurements of sIgE against egg, milk, and wheat as determined by 3gAllergy may be used as a tool to facilitate the diagnosis of food allergy in subjects with suspected food allergies. However, these probability curves should not be applied interchangeably between different assays. Copyright © 2016 Japanese Society of Allergology. Production and hosting by Elsevier B.V. All rights reserved.
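    As a rough illustration of how such predicted probability curves are built, the sketch below fits a logistic curve of challenge outcome against log sIgE and reads off a 95% probability cut-off. The data are simulated and the curve parameters are hypothetical; this is not the study's analysis:

```python
# Illustrative sketch (simulated data, not the study's): a predicted
# probability curve for a positive food challenge as a logistic function of
# log10 specific IgE, and the sIgE level at which that probability reaches 95%.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
log_sige = rng.uniform(-1, 2, size=200)              # log10 kU_A/L, hypothetical
p_true = 1 / (1 + np.exp(-(3.0 * log_sige - 1.5)))   # hypothetical true curve
symptomatic = rng.binomial(1, p_true)

model = LogisticRegression().fit(log_sige.reshape(-1, 1), symptomatic)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# sIgE level with a 95% predicted probability of reacting:
log_sige_95 = (np.log(0.95 / 0.05) - b0) / b1
print(f"95% probability cut-off ~ {10 ** log_sige_95:.1f} kU_A/L")
```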

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Jackson, Andrew

    Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function relating them to MRI-derived portal venous perfusion values. The global function was fitted to a logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
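    A minimal reconstruction of the local/global structure described above is sketched below, assuming a logistic local-function curve calibrated to two values quoted in the abstract (probability 0.5 at 68.6 and 0.9 at about 98 mL/(100 g · min)); the paper's exact parameterization may differ, so this is illustrative only:

```python
# Minimal sketch of the local/global liver-function model described above,
# assuming a logistic local-function curve anchored to two reported values.
import numpy as np

P50 = 68.6                                   # perfusion giving probability 0.5
SLOPE = (98.0 - P50) / np.log(9.0)           # chosen so that f(98) = 0.9

def local_function_probability(perfusion):
    return 1.0 / (1.0 + np.exp(-(perfusion - P50) / SLOPE))

def global_function(perfusion_map):
    """Parallel-architecture assumption: global function = sum of subunit probabilities."""
    return np.sum(local_function_probability(np.asarray(perfusion_map)))

print(local_function_probability(98.0))      # ~0.9 by construction
print(global_function([30.0, 70.0, 110.0]))  # hypothetical three-voxel liver
```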

  4. Review of literature surface tension data for molten silicon

    NASA Technical Reports Server (NTRS)

    Hardy, S.

    1981-01-01

    Measurements of the surface tension of molten silicon are reported. For Marangoni flow, the important parameter is the variation of surface tension with temperature, not the absolute value of the surface tension. It is not possible to calculate temperature coefficients using surface tension measurements from different experiments because the systematic errors are usually larger than the changes in surface tension caused by temperature variations. The lack of good surface tension data for liquid silicon is probably due to its extreme chemical reactivity; no material has been found that resists attack by molten silicon. It is suggested that all of the sessile drop surface tension measurements are probably for silicon which is contaminated by the substrate materials.

  5. Application of fuzzy fault tree analysis based on modified fuzzy AHP and fuzzy TOPSIS for fire and explosion in the process industry.

    PubMed

    Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand

    2018-05-09

    This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), all probabilities of the basic events (BEs) should be available when the FTA is drawn; in this case, expert judgment can serve as an alternative when failure data are not available. The fuzzy analytical hierarchy process as a standard technique is used to give a specific weight to each expert, and fuzzy set theory is engaged for aggregating expert opinion. In this regard, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), the importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
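    Once crisp basic-event probabilities are in hand, the Boolean-algebra step mentioned above reduces to simple gate formulas; the fragment below is a generic illustration with hypothetical numbers and does not reproduce the fuzzy expert-weighting part of the method:

```python
# Generic sketch of the Boolean-algebra step: AND/OR gates combine independent
# basic-event probabilities into the top-event probability.
from functools import reduce

def and_gate(probs):
    """All independent basic events must occur."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """At least one independent basic event occurs."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical fire-and-explosion fragment:
ignition = or_gate([1e-3, 5e-4])           # spark OR hot surface
release = 2e-3                             # flammable release
top_event = and_gate([release, ignition])  # release AND ignition
print(top_event)                           # ~3.0e-6
```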

  6. Lightning Characteristics and Lightning Strike Peak Current Probabilities as Related to Aerospace Vehicle Operations

    NASA Technical Reports Server (NTRS)

    Johnson, Dale L.; Vaughan, William W.

    1998-01-01

    A summary is presented of basic lightning characteristics/criteria for current and future NASA aerospace vehicles. The paper estimates the probability of occurrence of a 200 kA peak lightning return current, should lightning strike an aerospace vehicle in various operational phases, i.e., roll-out, on-pad, launch, reenter/land, and return-to-launch site. A literature search was conducted for previous work concerning occurrence and measurement of peak lightning currents, modeling, and estimating probabilities of launch vehicles/objects being struck by lightning. This paper presents these results.

  7. The case of escape probability as linear in short time

    NASA Astrophysics Data System (ADS)

    Marchewka, A.; Schuss, Z.

    2018-02-01

    We derive rigorously the short-time escape probability of a quantum particle from its compactly supported initial state, which has a discontinuous derivative at the boundary of the support. We show that this probability is linear in time, which seems to be a new result. The novelty of our calculation is the inclusion of the boundary layer of the propagated wave function formed outside the initial support. This result has applications to the decay law of the particle, to the Zeno behaviour, quantum absorption, time of arrival, quantum measurements, and more.

  8. Measurement of the transition probability of the C III 190.9 nanometer intersystem line

    NASA Technical Reports Server (NTRS)

    Kwong, Victor H. S.; Fang, Z.; Gibbons, T. T.; Parkinson, W. H.; Smith, Peter L.

    1993-01-01

    A radio-frequency ion trap has been used to store C²⁺ ions created by electron bombardment of CO. The transition probability for the 2s2p ³P°₁ - 2s² ¹S₀ intersystem line of C III has been measured by recording the radiative decay at 190.9 nm. The measured A-value is 121 ± 7 s⁻¹ and agrees, within mutual uncertainty limits, with that of Laughlin et al. (1978), but is 20 percent larger than that of Nussbaumer and Storey (1978). The effective collision mixing rate coefficient among the fine-structure levels of ³P° and the combined quenching and charge-transfer rate coefficients out of the ³P°₁ level with the CO source gas have also been measured.

  9. Quantum interval-valued probability: Contextuality and the Born rule

    NASA Astrophysics Data System (ADS)

    Tai, Yu-Tsung; Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr

    2018-05-01

    We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.

  10. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Analysis of the impact of safeguards criteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mullen, M.F.; Reardon, P.T.

    As part of the US Program of Technical Assistance to IAEA Safeguards, the Pacific Northwest Laboratory (PNL) was asked to assist in developing and demonstrating a model for assessing the impact of setting criteria for the application of IAEA safeguards. This report presents the results of PNL's work on the task. The report is in three parts. The first explains the technical approach and methodology. The second contains an example application of the methodology. The third presents the conclusions of the study. PNL used the model and computer programs developed as part of Task C.5 (Estimation of Inspection Efforts) of the Program of Technical Assistance. The example application of the methodology involves low-enriched uranium conversion and fuel fabrication facilities. The effects of variations in seven parameters are considered: false alarm probability, goal probability of detection, detection goal quantity, the plant operator's measurement capability, the inspector's variables measurement capability, the inspector's attributes measurement capability, and annual plant throughput. Among the key results and conclusions of the analysis are the following: the variables with the greatest impact on the probability of detection are the inspector's measurement capability, the goal quantity, and the throughput; the variables with the greatest impact on inspection costs are the throughput, the goal quantity, and the goal probability of detection; there are important interactions between variables. That is, the effects of a given variable often depend on the level or value of some other variable. With the methodology used in this study, these interactions can be quantitatively analyzed; reasonably good approximate prediction equations can be developed using the methodology described here.

  12. Quantifying recent erosion and sediment delivery using probability sampling: A case study

    Treesearch

    Jack Lewis

    2002-01-01

    Abstract - Estimates of erosion and sediment delivery have often relied on measurements from locations that were selected to be representative of particular terrain types. Such judgement samples are likely to overestimate or underestimate the mean of the quantity of interest. Probability sampling can eliminate the bias due to sample selection, and it permits the...

  13. Comparison of Content Structure and Cognitive Structure in the Learning of Probability.

    ERIC Educational Resources Information Center

    Geeslin, William E.

    Digraphs, graphs, and task analysis were used to map out the content structure of a programed text (SMSG) in elementary probability. Mathematical structure was defined as the relationship between concepts within a set of abstract systems. The word association technique was used to measure the existing relations (cognitive structure) in S's memory…

  14. An Investigation of Biases and Framing Effects for Risk Analysis: An Information Technology Context

    ERIC Educational Resources Information Center

    Fox, Stuart A.

    2012-01-01

    An elusive and problematic theme of risk management has been managers' ability to effectively measure information technology (IT) risk in terms of degree of impact and probability of occurrence. The background of this problem delves deep into the rational understanding of probability, expected value, economic behavior, and subjective judgment.…

  15. A new model for bed load sampler calibration to replace the probability-matching method

    Treesearch

    Robert B. Thomas; Jack Lewis

    1993-01-01

    In 1977 extensive data were collected to calibrate six Helley-Smith bed load samplers with four sediment particle sizes in a flume at the St. Anthony Falls Hydraulic Laboratory at the University of Minnesota. Because sampler data cannot be collected at the same time and place as "true" trap measurements, the "probability-matching...

  16. A Short History of Probability Theory and Its Applications

    ERIC Educational Resources Information Center

    Debnath, Lokenath; Basu, Kanadpriya

    2015-01-01

    This paper deals with a brief history of probability theory and its applications to Jacob Bernoulli's famous law of large numbers and theory of errors in observations or measurements. Included are the major contributions of Jacob Bernoulli and Laplace. It is written to pay the tricentennial tribute to Jacob Bernoulli, since the year 2013…

  17. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  18. Delay, Probability, and Social Discounting in a Public Goods Game

    ERIC Educational Resources Information Center

    Jones, Bryan A.; Rachlin, Howard

    2009-01-01

    A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual…

  19. A detailed description of the sequential probability ratio test for 2-IMU FDI

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
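    A generic SPRT has a compact form: accumulate the log-likelihood ratio of the failed versus healthy hypotheses and compare it against Wald's thresholds. The sketch below uses Gaussian residuals with a hypothetical mean shift and is not the specific 2-IMU implementation described in the report:

```python
# Generic SPRT sketch (not the 2-IMU FDI implementation): accumulate the
# log-likelihood ratio of "failed" (mean shift mu1) vs "healthy" (mean 0)
# Gaussian residuals and compare against Wald's decision thresholds.
import math

def sprt(residuals, mu1, sigma, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)   # declare failure above this
    lower = math.log(beta / (1 - alpha))   # declare healthy below this
    llr = 0.0
    for k, r in enumerate(residuals, start=1):
        llr += (mu1 / sigma**2) * (r - mu1 / 2.0)   # Gaussian LLR increment
        if llr >= upper:
            return "failure detected", k
        if llr <= lower:
            return "no failure", k
    return "undecided", len(residuals)

print(sprt([0.1, 1.2, 0.9, 1.4, 1.1, 1.3], mu1=1.0, sigma=0.5))
```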

  20. The Precise Time Course of Lexical Activation: MEG Measurements of the Effects of Frequency, Probability, and Density in Lexical Decision

    ERIC Educational Resources Information Center

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkanen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick,…

  1. Investigation of translaminar fracture in fibre-reinforced composite laminates: applicability of linear elastic fracture mechanics and cohesive-zone model

    NASA Astrophysics Data System (ADS)

    Hou, Fang

    With the extensive application of fiber-reinforced composite laminates in industry, research on the fracture mechanisms of these materials has drawn increasing attention. A variety of fracture theories and models have been developed. Among them, linear elastic fracture mechanics (LEFM) and the cohesive-zone model (CZM) are two widely accepted fracture models, which have already shown applicability in the fracture analysis of fiber-reinforced composite laminates. However, challenges remain that prevent further application of the two models, such as the experimental measurement of fracture resistance. This dissertation focused primarily on the applicability of LEFM and CZM to the analysis of translaminar fracture in fiber-reinforced composite laminates. The research for each fracture model consisted of two parts: the analytical characterization of crack-tip fields and the experimental measurement of fracture-resistance parameters. In the study of LEFM, an experimental investigation based on full-field crack-tip displacement measurements was carried out to characterize subcritical and steady-state crack advance in translaminar fracture of fiber-reinforced composite laminates. Here, the laminates were approximated as anisotropic solids, and the investigation relied on LEFM theory modified to account for material anisotropy. First, the full-field crack-tip displacement fields were measured by Digital Image Correlation (DIC). Two methods, based respectively on the stress intensity approach and the energy approach, were then developed to extract the crack-tip field parameters from these displacement fields. The studied crack-tip field parameters included the stress intensity factor, the energy release rate and the effective crack length. Crack-growth resistance curves (R-curves) were constructed from the measured crack-tip field parameters, and an error analysis was carried out with emphasis on the influence of out-of-plane rotation of the specimen. In the study of CZM, two analytical inverse methods, namely the field projection method (FPM) and the separable nonlinear least-squares method, were developed for extracting cohesive fracture properties from crack-tip full-field displacements. First, analytical characterizations of the elastic fields around a crack-tip cohesive zone and of the cohesive variables within the zone were derived in terms of an eigenfunction expansion. Both inverse methods were then built on this analytical characterization; they allow the cohesive-zone law (CZL), cohesive-zone size and position to be computed inversely from the cohesive-crack-tip displacement fields. Comprehensive numerical tests were carried out to investigate the applicability and robustness of the two inverse methods. These tests showed that the field projection method was very sensitive to noise and thus had limited applicability in practice, whereas the separable nonlinear least-squares method was more noise-resistant and less ill-conditioned. Subsequently, the applicability of the separable nonlinear least-squares method was validated with the same translaminar fracture experiment used in the LEFM study. Finally, the experimental measurements of R-curves and CZL were found to agree closely, in both the fracture energy and the predicted load-carrying capability, demonstrating the validity of the present approach for the translaminar fracture of fiber-reinforced composite laminates.
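
    The separable structure exploited by the second inverse method can be illustrated generically. The sketch below is not the dissertation's implementation; it assumes a toy model that is linear in coefficients (c1, c2) and nonlinear in a single parameter tau, and solves the linear part by least squares inside every evaluation of the nonlinear residual (variable projection).

    ```python
    # Minimal sketch of a separable nonlinear least-squares fit (variable projection).
    # Hypothetical model: d(x) = c1*exp(-x/tau) + c2*x, linear in (c1, c2), nonlinear in tau.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 5.0, 60)
    d = 2.0 * np.exp(-x / 1.3) + 0.5 * x + 0.02 * rng.standard_normal(x.size)  # synthetic data

    def residual(theta):
        """For a trial nonlinear parameter tau, solve the linear coefficients by lstsq."""
        tau = theta[0]
        A = np.column_stack([np.exp(-x / tau), x])     # basis functions for the linear part
        c, *_ = np.linalg.lstsq(A, d, rcond=None)      # optimal c1, c2 given tau
        return A @ c - d                               # residual depends only on tau

    fit = least_squares(residual, x0=[0.5], bounds=(1e-3, 10.0))
    print("estimated tau:", fit.x[0])
    ```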

  2. Quantitative Risk Assessment for African Horse Sickness in Live Horses Exported from South Africa

    PubMed Central

    Sergeant, Evan S.

    2016-01-01

    African horse sickness (AHS) is a severe, often fatal, arbovirus infection of horses, transmitted by Culicoides spp. midges. AHS occurs in most of sub-Saharan Africa and is a significant impediment to export of live horses from infected countries, such as South Africa. A stochastic risk model was developed to estimate the probability of exporting an undetected AHS-infected horse through a vector protected pre-export quarantine facility, in accordance with OIE recommendations for trade from an infected country. The model also allows for additional risk management measures, including multiple PCR tests prior to and during pre-export quarantine and optionally during post-arrival quarantine, as well as for comparison of risk associated with exports from a demonstrated low-risk area for AHS and an area where AHS is endemic. If 1 million horses were exported from the low-risk area with no post-arrival quarantine we estimate the median number of infected horses to be 5.4 (95% prediction interval 0.5 to 41). This equates to an annual probability of 0.0016 (95% PI: 0.00015 to 0.012) assuming 300 horses exported per year. An additional PCR test while in vector-protected post-arrival quarantine reduced these probabilities by approximately 12-fold. Probabilities for horses exported from an area where AHS is endemic were approximately 15 to 17 times higher than for horses exported from the low-risk area under comparable scenarios. The probability of undetected AHS infection in horses exported from an infected country can be minimised by appropriate risk management measures. The final choice of risk management measures depends on the level of risk acceptable to the importing country. PMID:26986002
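
    The two headline figures quoted above are consistent under a simple independence assumption, as the back-of-envelope check below illustrates (this is not the paper's stochastic model, which simulates the quarantine pathway in detail):

    ```python
    # Back-of-envelope check of how the per-horse and annual figures relate,
    # assuming independence between exported horses (not the paper's full risk model).
    p_per_horse = 5.4 / 1_000_000          # median: 5.4 undetected infected horses per million exported
    horses_per_year = 300
    p_annual = 1.0 - (1.0 - p_per_horse) ** horses_per_year
    print(f"annual probability of at least one infected export: {p_annual:.4f}")  # ~0.0016
    ```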

  3. Quantitative Risk Assessment for African Horse Sickness in Live Horses Exported from South Africa.

    PubMed

    Sergeant, Evan S; Grewar, John D; Weyer, Camilla T; Guthrie, Alan J

    2016-01-01

    African horse sickness (AHS) is a severe, often fatal, arbovirus infection of horses, transmitted by Culicoides spp. midges. AHS occurs in most of sub-Saharan Africa and is a significant impediment to export of live horses from infected countries, such as South Africa. A stochastic risk model was developed to estimate the probability of exporting an undetected AHS-infected horse through a vector protected pre-export quarantine facility, in accordance with OIE recommendations for trade from an infected country. The model also allows for additional risk management measures, including multiple PCR tests prior to and during pre-export quarantine and optionally during post-arrival quarantine, as well as for comparison of risk associated with exports from a demonstrated low-risk area for AHS and an area where AHS is endemic. If 1 million horses were exported from the low-risk area with no post-arrival quarantine we estimate the median number of infected horses to be 5.4 (95% prediction interval 0.5 to 41). This equates to an annual probability of 0.0016 (95% PI: 0.00015 to 0.012) assuming 300 horses exported per year. An additional PCR test while in vector-protected post-arrival quarantine reduced these probabilities by approximately 12-fold. Probabilities for horses exported from an area where AHS is endemic were approximately 15 to 17 times higher than for horses exported from the low-risk area under comparable scenarios. The probability of undetected AHS infection in horses exported from an infected country can be minimised by appropriate risk management measures. The final choice of risk management measures depends on the level of risk acceptable to the importing country.

  4. Gas Hydrate Formation Probability Distributions: The Effect of Shear and Comparisons with Nucleation Theory.

    PubMed

    May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M

    2018-03-13

    Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
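
    A formation probability distribution of the kind measured here can be illustrated as the cumulative fraction of cooling cycles in which hydrate has formed by a given subcooling. The sketch below uses synthetic per-cycle formation subcoolings; it is not the authors' automated detection pipeline.

    ```python
    # Sketch: build an empirical formation-probability distribution from repeated cooling cycles.
    # Synthetic data stand in for the per-cycle subcoolings at which formation was detected.
    import numpy as np

    rng = np.random.default_rng(1)
    formation_subcooling_K = rng.normal(loc=8.0, scale=2.0, size=120)  # one value per cycle (hypothetical)

    grid = np.linspace(0.0, 16.0, 161)
    # P(formation by subcooling T) = fraction of cycles that had formed at or before T
    p_formed = np.array([(formation_subcooling_K <= t).mean() for t in grid])

    for t in (4.0, 8.0, 12.0):
        print(f"P(formed by {t:.0f} K subcooling) = {p_formed[grid.searchsorted(t)]:.2f}")
    ```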

  5. Measurement of the electron shake-off in the β-decay of laser-trapped 6He atoms

    NASA Astrophysics Data System (ADS)

    Hong, Ran; Bagdasarova, Yelena; Garcia, Alejandro; Storm, Derek; Sternberg, Matthew; Swanson, Erik; Wauters, Frederik; Zumwalt, David; Bailey, Kevin; Leredde, Arnaud; Mueller, Peter; O'Connor, Thomas; Flechard, Xavier; Liennard, Etienne; Knecht, Andreas; Naviliat-Cuncic, Oscar

    2016-03-01

    Electron shake-off is an important process in many high-precision nuclear β-decay measurements searching for physics beyond the standard model. 6He, being one of the lightest β-decaying isotopes, has a simple atomic structure and is thus well suited for testing calculations of shake-off effects. Shake-off probabilities from the 2³S₁ and 2³P₂ initial states of laser-trapped 6He matter for the ongoing β-neutrino correlation study at the University of Washington. These probabilities are obtained by analyzing the time-of-flight distribution of the recoil ions detected in coincidence with the beta particles. An analysis approach independent of the β-neutrino correlation was developed. The measured upper limit on the double shake-off probability is 2 × 10⁻⁴ at the 90% confidence level. This result is ~100 times lower than the most recent calculation by Schulhoff and Drake. This work is supported by DOE, Office of Nuclear Physics, under Contract Nos. DE-AC02-06CH11357 and DE-FG02-97ER41020.

  6. Meaner king uses biased bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-15

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell's original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  7. Meaner king uses biased bases

    NASA Astrophysics Data System (ADS)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-01

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell’s original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  8. TB3 - Measurement of vibrational-vibrational exchange of highly excited states of diatomic molecules where the collisional probability is approaching unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nachshon, Y.; Coleman, P.

    1975-08-01

    An experimental method, employing a fast population perturbation technique, is described to measure the vibrational-vibrational (VV) collisional probability P_{r,r-1}^{v,v+1} of a diatomic molecule for large vibrational quantum numbers r and v. The relaxation of the perturbed gain of a pair of vibrational levels is a function of the vibrational populations and the VV rate constants k_{r,r-1}^{v,v+1}. The numerical inversion of the VV master rate equations determining this relaxation does not give a unique value for k_{r,r-1}^{v,v+1} (or P_{r,r-1}^{v,v+1}), but lower bounds can be evaluated, and with empirical formulas having several adjustable constants it can be shown that probabilities of the order of unity are required to satisfy the experimental data. The method has been specifically applied to the CO molecule, but other molecules such as HX (X = F, Cl, Br), NO, etc., could also be measured.

  9. Isotropic probability measures in infinite-dimensional spaces

    NASA Technical Reports Server (NTRS)

    Backus, George

    1987-01-01

    Let R be the real numbers, R^n the linear space of all real n-tuples, and R^∞ the linear space of all infinite real sequences x = (x_1, x_2, ...). Let P_n : R^∞ → R^n be the projection operator with P_n(x) = (x_1, ..., x_n). Let p^∞ be a probability measure on the smallest sigma-ring of subsets of R^∞ which includes all of the cylinder sets P_n^{-1}(B_n), where B_n is an arbitrary Borel subset of R^n. Let p_n be the marginal distribution of p^∞ on R^n, so p_n(B_n) = p^∞(P_n^{-1}(B_n)) for each B_n. A measure on R^n is isotropic if it is invariant under all orthogonal transformations of R^n. All members of the set of all isotropic probability distributions on R^n are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.

  10. IMPROVED Ti II log(gf) VALUES AND ABUNDANCE DETERMINATIONS IN THE PHOTOSPHERES OF THE SUN AND METAL-POOR STAR HD 84937

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, M. P.; Lawler, J. E.; Sneden, C.

    2013-10-01

    Atomic transition probability measurements for 364 lines of Ti II in the UV through near-IR are reported. Branching fractions from data recorded using a Fourier transform spectrometer (FTS) and a new echelle spectrometer are combined with published radiative lifetimes to determine these transition probabilities. The new results are in generally good agreement with previously reported FTS measurements. Use of the new echelle spectrometer, independent radiometric calibration methods, and independent data analysis routines enables a reduction of systematic errors and overall improvement in transition probability accuracy over previous measurements. The new Ti II data are applied to high-resolution visible and UV spectra of the Sun and metal-poor star HD 84937 to derive new, more accurate Ti abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. The Ti abundances derived using Ti II for these two stars match those derived using Ti I and support the relative Ti/Fe abundance ratio versus metallicity seen in previous studies.

  11. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    DOE PAGES

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; ...

    2017-12-28

    A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the Feb 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead, evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.
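
    The screening idea, regressing a modeled concentration signature against successive segments of the measured series and recording a goodness-of-fit metric per segment, can be sketched generically. The code below uses synthetic series and squared correlation as the metric; it stands in for, and is much simpler than, the mesoscale-model signatures and metrics used in the study.

    ```python
    # Sketch of the segment-by-segment screening idea: slide a modeled signature along the
    # measured series and record a goodness-of-fit metric (here, squared correlation) per offset.
    # Synthetic series stand in for the model signature and station measurements.
    import numpy as np

    rng = np.random.default_rng(2)
    signature = np.exp(-0.5 * ((np.arange(24) - 10) / 3.0) ** 2)        # modeled pulse (hypothetical)
    measurements = 0.1 * rng.standard_normal(240)
    measurements[130:154] += 0.8 * signature                            # bury a scaled copy in noise

    scores = []
    for start in range(len(measurements) - len(signature) + 1):
        segment = measurements[start:start + len(signature)]
        r = np.corrcoef(signature, segment)[0, 1]
        scores.append(r * r)                                            # goodness of fit for this segment

    best = int(np.argmax(scores))
    print(f"best-matching segment starts at index {best} (score {scores[best]:.2f})")
    ```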

  12. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    NASA Astrophysics Data System (ADS)

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; Chiswell, S. R.

    2018-03-01

    A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the Feb 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead, evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.

  13. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.

    A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the Feb 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead, evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.

  14. Spatial probability of soil water repellency in an abandoned agricultural field in Lithuania

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Misiūnė, Ieva

    2015-04-01

    Water repellency is a natural soil property with implications for infiltration, erosion and plant growth. It depends on soil texture, type and amount of organic matter, fungi, microorganisms, and vegetation cover (Doerr et al., 2000). Human activities such as agriculture can affect soil water repellency (SWR) through tillage and the addition of organic compounds and fertilizers (Blanco-Canqui and Lal, 2009; Gonzalez-Penaloza et al., 2012). It is also assumed that SWR has a high small-scale variability (Doerr et al., 2000). The aim of this work is to study the spatial probability of SWR in an abandoned field by testing several geostatistical methods: Ordinary Kriging (OK), Simple Kriging (SK), Indicator Kriging (IK), Probability Kriging (PK) and Disjunctive Kriging (DK). The study area is located near the Vilnius urban area (54°49' N, 25°22' E, 104 m a.s.l.) in Lithuania (Pereira and Oliva, 2013). An experimental plot of 21 m² (7 × 3 m) was designed, and inside this area SWR was measured every 50 cm using the water drop penetration time (WDPT) test (Wessel, 1988). A total of 105 points were measured. The probability of SWR was classified from 0 (no probability) to 1 (high probability). The accuracy of the methods was assessed by cross-validation; the best interpolation method was the one with the lowest Root Mean Square Error (RMSE). The results showed that the most accurate probability method was SK (RMSE=0.436), followed by DK (RMSE=0.437), IK (RMSE=0.448), PK (RMSE=0.452) and OK (RMSE=0.537). Significant differences were identified among the probabilities predicted by the different methods (Kruskal-Wallis test = 199.7597, p < 0.001). On average, the predicted probability of SWR was highest with OK (0.58±0.08), followed by PK (0.49±0.18), SK (0.32±0.16), DK (0.32±0.15) and IK (0.31±0.16). The most accurate probability methods predicted a lower probability of SWR in the studied plot. The spatial distribution of SWR differed among the tested techniques: SK, DK, IK and PK identified high SWR probabilities in the northeastern and central parts of the plot, while OK placed them mainly in the south-western part. In conclusion, before predicting the spatial probability of SWR it is important to test several methods in order to identify the most accurate one. Acknowledgments: COST action ES1306 (Connecting European connectivity research). References: Blanco-Canqui, H., Lal, R. (2009) Extent of water repellency under long-term no-till soils. Geoderma, 149, 171-180. Doerr, S.H., Shakesby, R.A., Walsh, R.P.D. (2000) Soil water repellency: Its causes, characteristics and hydro-geomorphological significance. Earth-Science Reviews, 51, 33-65. Gonzalez-Penaloza, F.A., Cerda, A., Zavala, L.M., Jordan, A., Gimenez-Morera, A., Arcenegui, V. (2012) Do conservative agriculture practices increase soil water repellency? A case study in citrus-cropped soils. Soil and Tillage Research, 124, 233-239. Pereira, P., Oliva, M. (2013) Modelling soil water repellency in an abandoned agricultural field. Visnyk Geology, 4, 77-80. Wessel, A.T. (1988) On using the effective contact angle and the water drop penetration time for classification of water repellency in dune soils. Earth Surface Processes and Landforms, 13, 555-265.
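
    The accuracy comparison above rests on leave-one-out cross-validation RMSE. A minimal sketch of that procedure follows, with an inverse-distance-weighting interpolator standing in for the kriging variants (the kriging models themselves are not reproduced here).

    ```python
    # Sketch of leave-one-out cross-validation RMSE for a spatial interpolator.
    # Inverse-distance weighting stands in for the kriging variants compared in the study.
    import numpy as np

    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 7, size=(105, 2))                 # 105 sample points (hypothetical layout)
    p_swr = rng.uniform(0, 1, size=105)                   # observed repellency probabilities

    def idw(xy_train, z_train, xy_query, power=2.0):
        d = np.linalg.norm(xy_train - xy_query, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        return np.sum(w * z_train) / np.sum(w)

    errors = []
    for i in range(len(p_swr)):
        mask = np.arange(len(p_swr)) != i                 # leave point i out
        pred = idw(xy[mask], p_swr[mask], xy[i])
        errors.append(pred - p_swr[i])

    rmse = float(np.sqrt(np.mean(np.square(errors))))
    print(f"leave-one-out RMSE: {rmse:.3f}")
    ```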

  15. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  16. Estimating the probability that the Taser directly causes human ventricular fibrillation.

    PubMed

    Sun, H; Haemmerich, D; Rahko, P S; Webster, J G

    2010-04-01

    This paper describes the first methodology and results for estimating the order of probability for Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) current density near the human heart estimated by using 3D finite-element (FE) models; (2) prior data of the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.

  17. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  18. Efficient and faithful remote preparation of arbitrary three- and four-particle W-class entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu

    2015-06-01

    We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's location with a certain probability. Compared with previously existing schemes, the success probability of the current schemes is greatly increased. Moreover, the required classical communication cost is calculated. Further, the properties of the presented schemes, including the success probability and reducibility, are discussed. Remarkably, the proposed schemes can be achieved faithfully with unity total success probability when the employed channels are reduced to maximally entangled ones.

  19. Harsh environments and the evolution of multi-player cooperation.

    PubMed

    De Jaegher, Kris

    2017-02-01

    The game-theoretic model in this paper provides micro-foundations for the effect of a harsher environment on the probability of cooperation among multiple players. The harshness of the environment is alternatively measured by the degree of complementarity between the players' cooperative efforts in producing a public good, and by the number of attacks on an existing public good that the players can collectively defend, where it is shown that these two measures of the degree of adversity facing the players operate in a similar fashion. We show that the effect of the degree of adversity on the probability of cooperation is monotonic, with opposite signs for smaller and for larger cooperation costs. For intermediate cooperation costs, we show that the effect of a harsher environment on the probability of cooperation is hill-shaped. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
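
    A minimal sketch of the linear inverse setup Gm = d with prior bounds on m is given below. Bounded least squares is used purely as a stand-in solver for illustration; it is not the MRE estimate or the authors' MATLAB code.

    ```python
    # Sketch of the linear inverse problem G m = d with prior bounds on m.
    # Bounded least squares stands in here for the minimum-relative-entropy solution.
    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(4)
    n = 20
    G = np.exp(-0.3 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))  # smoothing kernel (hypothetical)
    m_true = np.zeros(n); m_true[6:10] = 1.0                                  # e.g. a source-history pulse
    d = G @ m_true + 0.01 * rng.standard_normal(n)                            # noisy measurements

    result = lsq_linear(G, d, bounds=(0.0, 2.0))                              # lower/upper bounds on m
    print("recovered m (rounded):", np.round(result.x, 2))
    ```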

  1. The Statistical Loop Analyzer (SLA)

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.

    1985-01-01

    The statistical loop analyzer (SLA) is designed to automatically measure the acquisition, tracking and frequency stability performance characteristics of symbol synchronizers, code synchronizers, carrier tracking loops, and coherent transponders. Automated phase lock and system level tests can also be made using the SLA. Standard baseband, carrier and spread spectrum modulation techniques can be accommodated. Through the SLA's phase error jitter and cycle slip measurements the acquisition and tracking thresholds of the unit under test are determined; any false phase and frequency lock events are statistically analyzed and reported in the SLA output in probabilistic terms. Automated signal drop-out tests can be performed in order to troubleshoot algorithms and evaluate the reacquisition statistics of the unit under test. Cycle slip rates and cycle slip probabilities can be measured using the SLA. These measurements, combined with bit error probability measurements, are all that are needed to fully characterize the acquisition and tracking performance of a digital communication system.

  2. Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft

    NASA Technical Reports Server (NTRS)

    Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.

    1987-01-01

    Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe, standard deviations and probability distributions of differences in turbulence measured between probes, and auto- and two-point spatial correlations and spectra. Procedures associated with calculations of two-point spatial correlations and spectra utilizing the data were addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. Results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.

  3. A comparison of LMC and SDL complexity measures on binomial distributions

    NASA Astrophysics Data System (ADS)

    Piqueira, José Roberto C.

    2016-02-01

    The concept of complexity has been widely discussed in the last forty years, with contributions from many areas of human knowledge, including Philosophy, Linguistics, History, Biology, Physics, Chemistry and many others, and with mathematicians trying to give it a rigorous formulation. In this sense, thermodynamics meets information theory and, by using the entropy definition, López-Ruiz, Mancini and Calbet proposed a definition for complexity that is referred to as the LMC measure. Shiner, Davison and Landsberg, by slightly changing the LMC definition, proposed the SDL measure, and both LMC and SDL are satisfactory measures of complexity for many problems. Here, the SDL and LMC measures are applied to the case of a binomial probability distribution, to clarify how the length of the data set and the success probability of the repeated trials determine the complexity of the whole set.
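
    As a concrete illustration of the quantities discussed, the sketch below evaluates one common form of the LMC measure, C = H·D (normalized Shannon entropy times disequilibrium), for a binomial distribution; normalization conventions vary across the literature, so the exact numbers depend on that choice.

    ```python
    # Sketch of the LMC complexity C = H * D for a binomial distribution over n+1 outcomes,
    # with H the Shannon entropy (normalized by log(n+1)) and D the disequilibrium.
    # Normalization conventions vary in the literature; this is one common choice.
    import math
    from scipy.stats import binom

    def lmc_complexity(n, p):
        probs = [binom.pmf(k, n, p) for k in range(n + 1)]
        H = -sum(q * math.log(q) for q in probs if q > 0) / math.log(n + 1)   # normalized entropy
        D = sum((q - 1.0 / (n + 1)) ** 2 for q in probs)                      # distance from uniformity
        return H * D

    for p in (0.1, 0.3, 0.5):
        print(f"n=20, p={p}: LMC complexity = {lmc_complexity(20, p):.4f}")
    ```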

  4. Acceptance Probability (Pa) Analysis for Process Validation Lifecycle Stages.

    PubMed

    Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep

    2016-04-01

    This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected AV range may be determined using the s_lo and s_hi values along with the worst-case estimates of the mean. This approach permits a risk-based assessment of future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products. Quality attributes such as deliverable volume and assay per spray have stage-wise acceptance that can be converted into an acceptance probability. Accepted statistical guidelines indicate processes with Cpk > 1.33 as performing well within statistical control and those with Cpk < 1.0 as "incapable" (1). A Cpk > 1.33 is associated with a centered process that will statistically produce less than 63 defective units per million. This is equivalent to an acceptance probability of >99.99%.
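
    The quoted correspondence between Cpk and defect rate can be checked directly: a centered process with Cpk = 4/3 (approximately 1.33) places each specification limit 4 sigma from the mean, as in the short calculation below.

    ```python
    # Check of the figure quoted above: a centered process with Cpk = 4/3 places the spec
    # limits at +/- 3*(4/3) = 4 sigma, giving roughly 63 defects per million opportunities.
    from scipy.stats import norm

    cpk = 4.0 / 3.0
    z = 3.0 * cpk                              # distance from mean to each spec limit, in sigmas
    defect_rate = 2.0 * norm.cdf(-z)           # both tails, centered process
    print(f"defects per million: {defect_rate * 1e6:.1f}")               # ~63
    print(f"probability a unit is within spec: {1 - defect_rate:.6f}")   # ~0.99994
    ```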

  5. Geospatial tools effectively estimate nonexceedance probabilities of daily streamflow at ungauged and intermittently gauged locations in Ohio

    USGS Publications Warehouse

    Farmer, William H.; Koltun, Greg

    2017-01-01

    Study regionThe state of Ohio in the United States, a humid, continental climate.Study focusThe estimation of nonexceedance probabilities of daily streamflows as an alternative means of establishing the relative magnitudes of streamflows associated with hydrologic and water-quality observations.New hydrological insights for the regionSeveral methods for estimating nonexceedance probabilities of daily mean streamflows are explored, including single-index methodologies (nearest-neighboring index) and geospatial tools (kriging and topological kriging). These methods were evaluated by conducting leave-one-out cross-validations based on analyses of nearly 7 years of daily streamflow data from 79 unregulated streamgages in Ohio and neighboring states. The pooled, ordinary kriging model, with a median Nash–Sutcliffe performance of 0.87, was superior to the single-site index methods, though there was some bias in the tails of the probability distribution. Incorporating network structure through topological kriging did not improve performance. The pooled, ordinary kriging model was applied to 118 locations without systematic streamgaging across Ohio where instantaneous streamflow measurements had been made concurrent with water-quality sampling on at least 3 separate days. Spearman rank correlations between estimated nonexceedance probabilities and measured streamflows were high, with a median value of 0.76. In consideration of application, the degree of regulation in a set of sample sites helped to specify the streamgages required to implement kriging approaches successfully.

  6. Decoupled choice-driven and stimulus-related activity in parietal neurons may be misrepresented by choice probabilities.

    PubMed

    Zaidel, Adam; DeAngelis, Gregory C; Angelaki, Dora E

    2017-09-28

    Trial-by-trial correlations between neural responses and choices (choice probabilities) are often interpreted to reflect a causal contribution of neurons to task performance. However, choice probabilities may arise from top-down, rather than bottom-up, signals. We isolated distinct sensory and decision contributions to single-unit activity recorded from the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas of monkeys during perception of self-motion. Superficially, neurons in both areas show similar tuning curves during task performance. However, tuning in MSTd neurons primarily reflects sensory inputs, whereas choice-related signals dominate tuning in VIP neurons. Importantly, the choice-related activity of VIP neurons is not predictable from their stimulus tuning, and these factors are often confounded in choice probability measurements. This finding was confirmed in a subset of neurons for which stimulus tuning was measured during passive fixation. Our findings reveal decoupled stimulus and choice signals in the VIP area, and challenge our understanding of choice signals in the brain. Choice-related signals in neuronal activity may reflect bottom-up sensory processes, top-down decision-related influences, or a combination of the two. Here the authors report that choice-related activity in VIP neurons is not predictable from their stimulus tuning, and that dominant choice signals can bias the standard metric of choice preference (choice probability).
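
    The choice-probability metric referred to here is conventionally computed as the area under the ROC curve comparing a neuron's response distributions on trials grouped by the animal's choice. The sketch below applies that standard definition to synthetic spike counts; it is not the authors' analysis code.

    ```python
    # Sketch of the standard choice-probability metric: the area under the ROC curve comparing
    # a neuron's response distributions on trials grouped by the animal's choice
    # (equivalent to the Mann-Whitney U statistic divided by n1*n2). Synthetic spike counts used.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(5)
    rates_choice_pref = rng.poisson(12, size=80)      # spike counts on "preferred" choice trials
    rates_choice_null = rng.poisson(10, size=70)      # spike counts on "null" choice trials

    u, _ = mannwhitneyu(rates_choice_pref, rates_choice_null, alternative="two-sided")
    choice_probability = u / (len(rates_choice_pref) * len(rates_choice_null))
    print(f"choice probability: {choice_probability:.2f}")   # 0.5 means no choice-related signal
    ```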

  7. Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar

    NASA Astrophysics Data System (ADS)

    Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio

    2018-03-01

    This work proposes a new method for hail precipitation detection and probability, based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground truth observations and atmospheric sounding data, a probability of hail index, which provides a simple estimate of the hail potential, has been trained and adapted to the Naples metropolitan study area. The probability of hail has been calculated starting from four different hail detection methods. The first two, based on (1) reflectivity data and temperature measurements and (2) on the vertically integrated liquid density product, respectively, have been selected from the available literature. The other two techniques are based on combined criteria of the above-mentioned methods: the first one (3) is based on linear discriminant analysis, whereas the other one (4) relies on the fuzzy-logic approach. The latter is an innovative criterion based on a fuzzification step performed through ramp membership functions. The performance of the four methods has been tested using an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index and area under the relative operating characteristic. An example of application of the proposed hail detection and probability products is also presented for a relevant hail event that occurred on 21 July 2014.
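
    The fuzzification step based on ramp membership functions can be sketched as follows; the indicator choices, thresholds and aggregation weights below are placeholders, not the calibrated values from the paper.

    ```python
    # Sketch of a ramp membership function and a simple fuzzy combination of two hail indicators.
    # The thresholds below are placeholders, not the calibrated values from the paper.
    def ramp(x, lo, hi):
        """Membership rising linearly from 0 at lo to 1 at hi."""
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    def hail_probability(reflectivity_dbz, vil_density):
        mu_z = ramp(reflectivity_dbz, 45.0, 60.0)       # membership from radar reflectivity
        mu_vil = ramp(vil_density, 2.0, 4.0)            # membership from VIL density (g/m^3)
        return 0.5 * (mu_z + mu_vil)                    # one simple aggregation rule (weights are assumptions)

    print(hail_probability(reflectivity_dbz=55.0, vil_density=3.5))
    ```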

  8. The influence of surface properties on the plasma dynamics in radio-frequency driven oxygen plasmas: Measurements and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greb, Arthur; Niemi, Kari; O'Connell, Deborah

    2013-12-09

    Plasma parameters and dynamics in capacitively coupled oxygen plasmas are investigated for different surface conditions. Metastable species concentration, electronegativity, spatial distribution of particle densities as well as the ionization dynamics are significantly influenced by the surface loss probability of metastable singlet delta oxygen (SDO). Simulated surface conditions are compared to experiments in the plasma-surface interface region using phase resolved optical emission spectroscopy. It is demonstrated how in-situ measurements of excitation features can be used to determine SDO surface loss probabilities for different surface materials.

  9. Effect of Noise on the Relaxation to an Invariant Probability Measure of Nonhyperbolic Chaotic Attractors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anishchenko, Vadim S.; Vadivasova, Tatjana E.; Kopeikin, Andrey S.

    2001-07-30

    We study the influence of external noise on the relaxation to an invariant probability measure for two types of nonhyperbolic chaotic attractors, a spiral (or coherent) and a noncoherent one. We find that for the coherent attractor the rate of mixing changes under the influence of noise, although the largest Lyapunov exponent remains almost unchanged. A mechanism of the noise influence on mixing is presented which is associated with the dynamics of the instantaneous phase of chaotic trajectories. This also explains why the noncoherent regime is robust against the presence of external noise.

  10. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016
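
    The first conditional question quoted above can be answered in closed form under a bivariate normal model. The sketch below uses hypothetical height/weight parameters, not the adolescent dataset used in the web application.

    ```python
    # Worked example of the kind of conditional question in the abstract, under a bivariate
    # normal model with hypothetical adolescent height/weight parameters.
    from scipy.stats import norm

    mu_h, sd_h = 165.0, 10.0      # height (cm), assumed
    mu_w, sd_w = 130.0, 20.0      # weight (lb), assumed
    rho = 0.5                     # assumed correlation

    h = mu_h                      # condition on "average height"
    cond_mean = mu_w + rho * sd_w * (h - mu_h) / sd_h
    cond_sd = sd_w * (1.0 - rho ** 2) ** 0.5

    p = norm.cdf(140, cond_mean, cond_sd) - norm.cdf(120, cond_mean, cond_sd)
    print(f"P(120 <= weight <= 140 | height = {h}): {p:.2f}")
    ```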

  11. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.

  12. Crash probability estimation via quantifying driver hazard perception.

    PubMed

    Li, Yang; Zheng, Yang; Wang, Jianqiang; Kodaka, Kenji; Li, Keqiang

    2018-07-01

    Crash probability estimation is an important method to predict the potential reduction of crash probability contributed by forward collision avoidance technologies (FCATs). In this study, we propose a practical approach to estimate crash probability, which combines a field operational test and numerical simulations of a typical rear-end crash model. To consider driver hazard perception characteristics, we define a novel hazard perception measure, called driver risk response time, by considering both time-to-collision (TTC) and driver braking response to impending collision risk in a near-crash scenario. Also, we establish a driving database under mixed Chinese traffic conditions based on a CMBS (Collision Mitigation Braking Systems)-equipped vehicle. Applying the crash probability estimation to this database, we estimate the potential decrease in crash probability owing to the use of CMBS. A comparison of the results with CMBS on and off shows a 13.7% reduction of crash probability in a typical rear-end near-crash scenario with a one-second delay of the driver's braking response. These results indicate that CMBS is effective in collision prevention, especially in the case of inattentive or older drivers. The proposed crash probability estimation offers a practical way for evaluating the safety benefits in the design and testing of FCATs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.

    PubMed

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them, the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on geodesic power Voronoi diagram. Compared to the conventional methods, our approach solely depends on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.
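
    For intuition, the Wasserstein distance between two discrete probability measures on a common 1D support can be computed directly, as below; the surface-based, Riemannian optimal-transport computation in the paper is considerably more involved.

    ```python
    # Minimal illustration of the Wasserstein distance between two discrete probability
    # measures on a shared 1D support (stand-in for the area-distortion densities in the paper).
    import numpy as np
    from scipy.stats import wasserstein_distance

    support = np.linspace(0.0, 1.0, 50)
    mu = np.exp(-((support - 0.3) ** 2) / 0.01); mu /= mu.sum()   # first measure, normalized
    nu = np.exp(-((support - 0.6) ** 2) / 0.02); nu /= nu.sum()   # second measure, normalized

    d = wasserstein_distance(support, support, u_weights=mu, v_weights=nu)
    print(f"1D Wasserstein distance: {d:.3f}")
    ```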

  14. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis

    PubMed Central

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them, the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on geodesic power Voronoi diagram. Comparing to the conventional methods, our approach solely depends on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method. PMID:26221691

  15. Measurements of wavelength-dependent double photoelectron emission from single photons in VUV-sensitive photomultiplier tubes

    NASA Astrophysics Data System (ADS)

    Faham, C. H.; Gehman, V. M.; Currie, A.; Dobi, A.; Sorensen, P.; Gaitskell, R. J.

    2015-09-01

    Measurements of double photoelectron emission (DPE) probabilities as a function of wavelength are reported for Hamamatsu R8778, R8520, and R11410 VUV-sensitive photomultiplier tubes (PMTs). In DPE, a single photon strikes the PMT photocathode and produces two photoelectrons instead of a single one. It was found that the fraction of detected photons that result in DPE emission is a function of the incident photon wavelength, and manifests itself below ~250 nm. For the xenon scintillation wavelength of 175 nm, a DPE probability of 18-24% was measured depending on the tube and measurement method. This wavelength-dependent single photon response has implications for the energy calibration and photon counting of current and future liquid xenon detectors such as LUX, LZ, XENON100/1T, Panda-X and XMASS.

  16. DNA binding site characterization by means of Rényi entropy measures on nucleotide transitions.

    PubMed

    Perera, A; Vallverdu, M; Claria, F; Soria, J M; Caminal, P

    2008-06-01

    In this work, parametric information-theory measures for the characterization of binding sites in DNA are extended with the use of transitional probabilities on the sequence. We propose the use of parametric uncertainty measures such as Rényi entropies obtained from the transition probabilities for the study of the binding sites, in addition to nucleotide frequency-based Rényi measures. Results are reported in this work comparing transition frequencies (i.e., dinucleotides) and base frequencies for Shannon and parametric Rényi entropies for a number of binding sites found in E. coli, lambda and T7 organisms. We observe that the information provided by both approaches is not redundant. Furthermore, under the presence of noise in the binding site matrix we observe overall improved robustness of nucleotide transition-based algorithms when compared with the nucleotide frequency-based method.
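
    The order-α Rényi entropy used here, H_α = (1/(1−α)) log₂ Σ p_i^α, can be computed from transition (dinucleotide) frequencies at a binding-site position as sketched below; the example counts are made up.

    ```python
    # Sketch of an order-alpha Renyi entropy computed from nucleotide-transition (dinucleotide)
    # frequencies at one position of a binding-site alignment; the counts below are made up.
    import math
    from collections import Counter

    def renyi_entropy(probs, alpha):
        if alpha == 1.0:                                   # limit case: Shannon entropy
            return -sum(p * math.log2(p) for p in probs if p > 0)
        return math.log2(sum(p ** alpha for p in probs)) / (1.0 - alpha)

    site_column = ["AT", "AT", "AC", "GT", "AT", "AC", "AT", "TT"]   # dinucleotides spanning positions i, i+1
    counts = Counter(site_column)
    probs = [c / len(site_column) for c in counts.values()]

    for alpha in (0.5, 1.0, 2.0):
        print(f"alpha={alpha}: H = {renyi_entropy(probs, alpha):.3f} bits")
    ```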

  17. Local Structure Theory for Cellular Automata.

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard Andrew

    The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measure to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations. Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
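
    The mean-field level that the local structure theory generalizes can be written down explicitly for elementary cellular automata: assuming no correlations between neighboring cells, the density of 1s is iterated through the rule table. The sketch below does this for rule 22 as an example; it is only the order-1 approximation, not the higher-order block-measure machinery of the thesis.

    ```python
    # Sketch of the mean-field approximation (the order-1 case that local structure theory
    # generalizes): for an elementary CA rule, the density of 1s is iterated assuming no
    # correlations between neighboring cells.
    def mean_field_map(rule_number, p):
        """One mean-field step: sum over 3-cell neighborhoods that map to 1 under the rule."""
        p_next = 0.0
        for n in range(8):                                  # neighborhoods 000..111
            if (rule_number >> n) & 1:                      # does the rule send this neighborhood to 1?
                ones = bin(n).count("1")
                p_next += (p ** ones) * ((1 - p) ** (3 - ones))
        return p_next

    p = 0.3
    for _ in range(50):                                     # iterate toward a fixed point
        p = mean_field_map(22, p)                           # rule 22 as an example
    print(f"mean-field limit density for rule 22: {p:.3f}")
    ```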

  18. Targeting the probability versus cost of feared outcomes in public speaking anxiety.

    PubMed

    Nelson, Elizabeth A; Deacon, Brett J; Lickel, James J; Sy, Jennifer T

    2010-04-01

    Cognitive-behavioral theory suggests that social phobia is maintained, in part, by overestimates of the probability and cost of negative social events. Indeed, empirically supported cognitive-behavioral treatments directly target these cognitive biases through the use of in vivo exposure or behavioral experiments. While cognitive-behavioral theories and treatment protocols emphasize the importance of targeting probability and cost biases in the reduction of social anxiety, few studies have examined specific techniques for reducing probability and cost bias, and thus the relative efficacy of exposure to the probability versus cost of negative social events is unknown. In the present study, 37 undergraduates with high public speaking anxiety were randomly assigned to a single-session intervention designed to reduce either the perceived probability or the perceived cost of negative outcomes associated with public speaking. Compared to participants in the probability treatment condition, those in the cost treatment condition demonstrated significantly greater improvement on measures of public speaking anxiety and cost estimates for negative social events. The superior efficacy of the cost treatment condition was mediated by greater treatment-related changes in social cost estimates. The clinical implications of these findings are discussed. Published by Elsevier Ltd.

  19. A re-evaluation of a case-control model with contaminated controls for resource selection studies

    Treesearch

    Christopher T. Rota; Joshua J. Millspaugh; Dylan C. Kesler; Chad P. Lehman; Mark A. Rumble; Catherine M. B. Jachowski

    2013-01-01

    A common sampling design in resource selection studies involves measuring resource attributes at sample units used by an animal and at sample units considered available for use. Few models can estimate the absolute probability of using a sample unit from such data, but such approaches are generally preferred over statistical methods that estimate a relative probability...

  20. A contemporary approach to the problem of determining physical parameters according to the results of measurements

    NASA Technical Reports Server (NTRS)

    Elyasberg, P. Y.

    1979-01-01

    The shortcomings of the classical approach are set forth, and the newer methods resulting from these shortcomings are explained. The problem was approached with the assumption that the probabilities of error were known, as well as without knowledge of the distribution of the probabilities of error. The advantages of the newer approach are discussed.

  1. Test-retest reliability of the Middlesex Assessment of Mental State (MEAMS): a preliminary investigation in people with probable dementia.

    PubMed

    Powell, T; Brooker, D J; Papadopolous, A

    1993-05-01

    Relative and absolute test-retest reliability of the MEAMS was examined in 12 subjects with probable dementia and 12 matched controls. Relative reliability was good. Measures of absolute reliability showed scores changing by up to 3 points over an interval of a week. An effect of test version was also evident.

  2. Multi-analyte analysis of saliva biomarkers as predictors of periodontal and pre-implant disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, Thomas; Giannobile, William V; Herr, Amy E

    The present invention relates to methods of measuring biomarkers to determine the probability of a periodontal and/or peri-implant disease. More specifically, the invention provides a panel of biomarkers that, when used in combination, can allow determination of the probability of a periodontal and/or peri-implant disease state with extremely high accuracy.

  3. Frame synchronization methods based on channel symbol measurements

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.

    1989-01-01

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.

  4. Functional-diversity indices can be driven by methodological choices and species richness.

    PubMed

    Poos, Mark S; Walker, Steven C; Jackson, Donald A

    2009-02-01

    Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive to such choices, and whether patterns in sensitivity were related to alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species into two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.

  5. Detection performance in clutter with variable resolution

    NASA Astrophysics Data System (ADS)

    Schmieder, D. E.; Weathersby, M. R.

    1983-07-01

    Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.

  6. An Experiment Quantifying The Effect Of Clutter On Target Detection

    NASA Astrophysics Data System (ADS)

    Weathersby, Marshall R.; Schmieder, David E.

    1985-01-01

    Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
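
    The clutter metric described in these two records lends itself to a short sketch. The snippet below is a rough illustration, not the authors' exact implementation: it averages the standard deviation of non-overlapping, target-sized scene cells and forms a signal-to-clutter ratio from an assumed target contrast; the scene, cell size, and contrast values are invented for the example.

      import numpy as np

      def clutter_metric(scene, cell):
          """Average of per-cell standard deviations over non-overlapping, target-sized cells."""
          h, w = scene.shape
          stds = [scene[i:i + cell, j:j + cell].std()
                  for i in range(0, h - cell + 1, cell)
                  for j in range(0, w - cell + 1, cell)]
          return float(np.mean(stds))

      rng = np.random.default_rng(0)
      scene = rng.normal(100.0, 15.0, size=(128, 128))   # synthetic background image
      target_contrast = 30.0                             # assumed |target - local background| signal
      scr = target_contrast / clutter_metric(scene, cell=16)
      print(round(scr, 2))                               # signal-to-clutter ratio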

  7. Automated measurement of spatial preference in the open field test with transmitted lighting.

    PubMed

    Kulikov, Alexander V; Tikhonova, Maria A; Kulikov, Victor A

    2008-05-30

    A new modification of the open field test was designed to improve automation of the test. The main innovations were: (1) transmitted lighting and (2) estimation of the probability of finding pixels associated with an animal in a selected region of the arena as an objective index of spatial preference. Transmitted (inverted) lighting significantly improved the contrast between an animal and the arena and allowed white animals to be tracked with efficacy similar to that of colored ones. Probability as a measure of preference for a selected region was mathematically proved and experimentally verified. A good correlation between probability and classic indices of spatial preference (number of region entries and time spent therein) was shown. The algorithm for calculating the probability of finding pixels associated with an animal in a selected region was implemented in the EthoStudio software. Significant interstrain differences in locomotion and in central zone preference (an index of anxiety) were shown using the inverted lighting and the EthoStudio software in mice of six inbred strains. The effects of arena shape (circle or square) and of a novel object placed in the center of the arena on open field behavior in mice were also studied.
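
    The preference index described above reduces, per video frame, to the fraction of animal-associated pixels lying inside the region of interest; the sketch below shows that computation on a made-up binary frame (averaging over frames would give the probability for a session). It is an illustration of the idea, not the EthoStudio implementation.

      import numpy as np

      def region_preference(animal_mask, region_mask):
          """Fraction of animal-associated pixels that fall inside the region of interest."""
          return float(np.logical_and(animal_mask, region_mask).sum() / animal_mask.sum())

      frame = np.zeros((100, 100), dtype=bool)
      frame[40:50, 45:55] = True                 # pixels classified as the animal
      center = np.zeros_like(frame)
      center[25:75, 25:75] = True                # central zone of the arena
      print(region_preference(frame, center))    # 1.0 for this toy frame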

  8. Rating competitors before tournament starts: How it's affecting team progression in a soccer tournament

    NASA Astrophysics Data System (ADS)

    Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini

    2014-12-01

    In professional sporting events, rating competitors before a tournament starts is a well-known approach for distinguishing the favorite team from the weaker teams. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round-robin simulation, and previous league rank position. The tournament metric we used to evaluate the different rating approaches is the tournament outcome characteristics measure, defined by the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on the simulation results, we found that different rating approaches produce different effects on the teams. Our simulation results show that, of the eight teams participating in a knockout tournament with standard seeding, Perak has the highest probability of winning when the least squares rating approach is used, PKNS has the highest probability of winning under the maximum likelihood strength ratio and the large round-robin simulation approaches, while Perak has the highest probability of winning a tournament using the previous league season approach.

  9. Computing Real-time Streamflow Using Emerging Technologies: Non-contact Radars and the Probability Concept

    NASA Astrophysics Data System (ADS)

    Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.

    2015-12-01

    Measuring streamflow and developing and maintaining rating curves at new streamgaging stations are time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership in identifying and evaluating new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally efficient methods such as the Probability Concept require significantly less field time, promote real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and are readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
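
    A minimal sketch of how a radar surface velocity might be turned into a streamflow estimate under Chiu's entropy-based velocity distribution is shown below; the mean-to-maximum velocity ratio phi(M) is the standard relation from that literature, but the entropy parameter M, the surface-to-maximum velocity ratio, and the cross-sectional area are illustrative assumptions, not values from this study.

      import math

      def phi(M):
          """Mean-to-maximum velocity ratio in Chiu's probability concept."""
          return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

      u_surface = 1.8    # m/s, radar-measured surface velocity (example value)
      alpha = 0.85       # assumed ratio of surface velocity to maximum velocity
      M = 2.1            # assumed entropy parameter calibrated for the site
      area = 42.0        # m^2, surveyed cross-sectional area (example value)

      u_max = u_surface / alpha
      u_mean = phi(M) * u_max
      print(round(u_mean * area, 1))   # streamflow estimate, m^3/s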

  10. Discharge rate measurements for Micromegas detectors in the presence of a longitudinal magnetic field

    NASA Astrophysics Data System (ADS)

    Moreno, B.; Aune, S.; Ball, J.; Charles, G.; Giganon, A.; Konczykowski, P.; Lahonde-Hamdoun, C.; Moutarde, H.; Procureur, S.; Sabatié, F.

    2011-10-01

    We present first discharge rate measurements for Micromegas detectors in the presence of a high longitudinal magnetic field in the GeV kinematical region. Measurements were performed using two Micromegas detectors and a photon beam impinging on a CH2 target in Hall B of the Jefferson Laboratory. One detector was equipped with an additional GEM foil, and a reduction of the discharge probability by two orders of magnitude compared to the stand-alone Micromegas was observed. The detectors were placed in the FROST solenoid, which provides a longitudinal magnetic field of up to 5 T, allowing precise measurements of the dependence of the discharge probability on a diffusion-reducing magnetic field. Between 0 and 5 T, the discharge probability increased by a factor of 10 for polar angles between 19° and 34°. A GEANT4-based simulation developed for sparking rate calculation was calibrated against these data in order to predict the sparking rate in a high longitudinal magnetic field environment. This simulation is then used to investigate the possible use of Micromegas in the Forward Vertex Tracker (FVT) of the future CLAS12 spectrometer. In the case of the FVT, a sparking rate of 1 Hz per detector was obtained at the anticipated CLAS12 luminosity.

  11. A Possible Operational Motivation for the Orthocomplementation in Quantum Structures

    NASA Astrophysics Data System (ADS)

    D'Hooghe, Bart

    2010-11-01

    In the foundations of quantum mechanics Gleason’s theorem dictates the uniqueness of the state transition probability via the inner product of the corresponding state vectors in Hilbert space, independent of which measurement context induces this transition. We argue that the state transition probability should not be regarded as a secondary concept which can be derived from the structure on the set of states and properties, but instead should be regarded as a primitive concept for which measurement context is crucial. Accordingly, we adopt an operational approach to quantum mechanics in which a physical entity is defined by the structure of its set of states, set of properties and the possible (measurement) contexts which can be applied to this entity. We put forward some elementary definitions to derive an operational theory from this State-COntext-Property (SCOP) formalism. We show that if the SCOP satisfies a Gleason-like condition, namely that the state transition probability is independent of which measurement context induces the change of state, then the lattice of properties is orthocomplemented, which is one of the ‘quantum axioms’ used in the Piron-Solèr representation theorem for quantum systems. In this sense we obtain a possible physical meaning for the orthocomplementation widely used in quantum structures.

  12. Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls

    PubMed Central

    Chae, Jeongsook; Jin, Yong; Sung, Yunsick

    2018-01-01

    Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, the motions of a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional types of data. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable motion among those estimated is treated as the final estimated motion. Thus, recognition accuracy can be improved when compared to traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When only orientation data are used, as in the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by about 12 percentage points. PMID:29324641
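
    The weighting idea can be illustrated with a toy fusion step. This is a hedged sketch of combining two per-motion probability vectors with fixed weights and taking the argmax, not the authors' exact weighted Bayesian formulation, and all numbers are invented.

      import numpy as np

      def fuse(p_orient, p_emg, w_orient=0.6, w_emg=0.4):
          """Weighted product of the two evidence sources, renormalized over motions."""
          scores = (p_orient ** w_orient) * (p_emg ** w_emg)
          return scores / scores.sum()

      p_orient = np.array([0.5, 0.2, 0.1, 0.1, 0.1])   # P(motion | orientation), 5 motions
      p_emg    = np.array([0.3, 0.4, 0.1, 0.1, 0.1])   # P(motion | EMG)
      posterior = fuse(p_orient, p_emg)
      print(int(posterior.argmax()), posterior.round(3))   # most probable motion and fused probabilities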

  13. Small violations of Bell inequalities for multipartite pure random states

    NASA Astrophysics Data System (ADS)

    Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.

    2018-05-01

    For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.

  14. Positive phase space distributions and uncertainty relations

    NASA Technical Reports Server (NTRS)

    Kruger, Jan

    1993-01-01

    In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.

  15. Lightning Strike Peak Current Probabilities as Related to Space Shuttle Operations

    NASA Technical Reports Server (NTRS)

    Johnson, Dale L.; Vaughan, William W.

    2000-01-01

    A summary is presented of basic lightning characteristics/criteria applicable to current and future aerospace vehicles. The paper provides estimates of the probability of occurrence of a 200 kA peak lightning return current, should lightning strike an aerospace vehicle in various operational phases, i.e., roll-out, on-pad, launch, reentry/landing, and return-to-launch site. A literature search was conducted for previous work concerning the occurrence and measurement of peak lightning currents, modeling, and estimating the probabilities of launch vehicles/objects being struck by lightning. This paper presents a summary of these results.

  16. Probabilistic Risk Analysis of Run-up and Inundation in Hawaii due to Distant Tsunamis

    NASA Astrophysics Data System (ADS)

    Gica, E.; Teng, M. H.; Liu, P. L.

    2004-12-01

    Risk assessment of natural hazards usually includes two aspects, namely, the probability of the natural hazard occurring and the degree of damage caused by the natural hazard. Our current study is focused on the first aspect, i.e., the development and evaluation of a methodology that can predict the probability of coastal inundation due to distant tsunamis in the Pacific Basin. The calculation of the probability of tsunami inundation would be a simple statistical problem if a sufficiently long record of field data on inundation were available. Unfortunately, such field data are very limited in the Pacific Basin because field measurement of inundation requires the physical presence of surveyors on site. In some areas, no field measurements were ever conducted in the past. Fortunately, there are more complete and reliable historical data on earthquakes in the Pacific Basin, partly because earthquakes can be measured remotely. There are also numerical simulation models, such as the Cornell COMCOT model, that can predict tsunami generation by an earthquake, propagation in the open ocean, and inundation onto a coastal land. Our objective is to develop a methodology that can link the probability of earthquakes in the Pacific Basin with the inundation probability in a coastal area. The probabilistic methodology applied here involves the following steps: first, the Pacific Rim is divided into blocks of potential earthquake sources based on the past earthquake record and fault information. Then the COMCOT model is used to predict the inundation at a distant coastal area due to a tsunami generated by an earthquake of a particular magnitude in each source block. This simulation generates a response relationship between the coastal inundation and an earthquake of a particular magnitude and location. Since the earthquake statistics are known for each block, by summing the probability over all earthquakes in the Pacific Rim, the probability of inundation in a coastal area can be determined through the response relationship. Although the idea of the statistical methodology applied here is not new, this study is the first to apply it to the probability of inundation caused by earthquake-generated distant tsunamis in the Pacific Basin. As a case study, the methodology is applied to predict the tsunami inundation risk in Hilo Bay in Hawaii. Since relatively more field data on tsunami inundation are available for Hilo Bay, this case study can help to evaluate the applicability of the methodology for predicting tsunami inundation risk in the Pacific Basin. Detailed results will be presented at the AGU meeting.
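
    The total-probability bookkeeping sketched in the abstract can be written compactly. In the toy version below, the source blocks, annual rates, and run-up response table are placeholders standing in for the earthquake catalogue and the COMCOT response relationship.

      import math

      # Placeholder annual earthquake rates by (source block, magnitude).
      annual_rate = {
          ("block_A", 8.0): 0.02,
          ("block_A", 8.5): 0.005,
          ("block_B", 8.0): 0.01,
          ("block_B", 8.5): 0.002,
      }

      def runup_at_site(block, magnitude):
          """Placeholder for the simulated response relationship (run-up in metres)."""
          table = {("block_A", 8.0): 1.2, ("block_A", 8.5): 3.4,
                   ("block_B", 8.0): 0.6, ("block_B", 8.5): 2.1}
          return table[(block, magnitude)]

      threshold = 2.0   # metres of run-up treated as "inundation"
      rate = sum(r for (blk, mag), r in annual_rate.items()
                 if runup_at_site(blk, mag) >= threshold)
      print(round(1.0 - math.exp(-rate), 4))   # annual probability of at least one inundating event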

  17. Absolute measures of the completeness of the fossil record

    NASA Technical Reports Server (NTRS)

    Foote, M.; Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)

    1999-01-01

    Measuring the completeness of the fossil record is essential to understanding evolution over long timescales, particularly when comparing evolutionary patterns among biological groups with different preservational properties. Completeness measures have been presented for various groups based on gaps in the stratigraphic ranges of fossil taxa and on hypothetical lineages implied by estimated evolutionary trees. Here we present and compare quantitative, widely applicable absolute measures of completeness at two taxonomic levels for a broader sample of higher taxa of marine animals than has previously been available. We provide an estimate of the probability of genus preservation per stratigraphic interval, and determine the proportion of living families with some fossil record. The two completeness measures use very different data and calculations. The probability of genus preservation depends almost entirely on the Palaeozoic and Mesozoic records, whereas the proportion of living families with a fossil record is influenced largely by Cenozoic data. These measurements are nonetheless highly correlated, with outliers quite explicable, and we find that completeness is rather high for many animal groups.

  18. Leptonic Unitarity Triangle and CP-Violation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farzan, Yasaman

    2002-02-01

    The area of the unitarity triangle is a measure of CP-violation. We introduce the leptonic unitarity triangles and study their properties. We consider the possibility of reconstructing the unitarity triangle in future oscillation and non-oscillation experiments. A set of measurements is suggested which will, in principle, allow us to measure all sides of the triangle, and consequently to establish CP-violation. For different values of the CP-violating phase, δ_D, the required accuracy of measurements is estimated. The key elements of the method include determination of |U_e3| and studies of the ν_μ - ν_μ survival probability in oscillations driven by the solar mass splitting Δm²_sun. We suggest additional astrophysical measurements which may help to reconstruct the triangle. The method of the unitarity triangle is complementary to the direct measurements of CP-asymmetry. It requires mainly studies of the survival probabilities and processes where oscillations are averaged or the coherence of the state is lost.

  19. Formal properties of the probability of fixation: identities, inequalities and approximations.

    PubMed

    McCandlish, David M; Epstein, Charles L; Plotkin, Joshua B

    2015-02-01

    The formula for the probability of fixation of a new mutation is widely used in theoretical population genetics and molecular evolution. Here we derive a series of identities, inequalities and approximations for the exact probability of fixation of a new mutation under the Moran process (equivalent results hold for the approximate probability of fixation under the Wright-Fisher process, after an appropriate change of variables). We show that the logarithm of the fixation probability has particularly simple behavior when the selection coefficient is measured as a difference of Malthusian fitnesses, and we exploit this simplicity to derive inequalities and approximations. We also present a comprehensive comparison of both existing and new approximations for the fixation probability, highlighting those approximations that induce a reversible Markov chain when used to describe the dynamics of evolution under weak mutation. To demonstrate the power of these results, we consider the classical problem of determining the total substitution rate across an ensemble of biallelic loci and prove that, at equilibrium, a strict majority of substitutions are due to drift rather than selection. Copyright © 2014 Elsevier Inc. All rights reserved.
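
    For orientation, the textbook fixation probability of a single new mutant with relative fitness r under the Moran process in a population of size N is shown below in one common parameterization; the paper's own identities and approximations are not reproduced here.

      \rho(r, N) \;=\; \frac{1 - r^{-1}}{1 - r^{-N}}, \qquad \lim_{r \to 1} \rho(r, N) = \frac{1}{N}.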

  20. Probability of assertive behaviour, interpersonal anxiety and self-efficacy of South African registered dietitians.

    PubMed

    Paterson, Marie; Green, J M; Basson, C J; Ross, F

    2002-02-01

    There is little information on the probability of assertive behaviour, interpersonal anxiety and self-efficacy in the literature regarding dietitians. The objective of this study was to establish baseline information of these attributes and the factors affecting them. Questionnaires collecting biographical information and self-assessment psychometric scales measuring levels of probability of assertiveness, interpersonal anxiety and self-efficacy were mailed to 350 subjects, who comprised a random sample of dietitians registered with the Health Professions Council of South Africa. Forty-one per cent (n=145) of the sample responded. Self-assessment inventory results were compared to test levels of probability of assertive behaviour, interpersonal anxiety and self-efficacy. The inventory results were compared with the biographical findings to establish statistical relationships between the variables. The hypotheses were formulated before data collection. Dietitians had acceptable levels of probability of assertive behaviour and interpersonal anxiety. The probability of assertive behaviour was significantly lower than the level noted in the literature and was negatively related to interpersonal anxiety and positively related to self-efficacy.

  1. The Birth Memories and Recall Questionnaire (BirthMARQ): development and evaluation

    PubMed Central

    2014-01-01

    Background Childbirth is a challenging and emotive experience that is accompanied by strong positive and/or negative emotions. Memories of birth may be associated with how women cognitively process birth events postpartum and potentially their adaptation to parenthood. Characteristics of memories for birth may also be associated with postnatal psychological wellbeing. This paper reports the development and evaluation of a questionnaire to measure characteristics of memories of childbirth and to examine the relationship between memories for birth and mental health. Methods The Birth Memories and Recall Questionnaire (BirthMARQ) was developed by generating items from literature reviews and general measures of memory characteristics to cover dimensions relevant to childbirth. Fifty nine items were administered to 523 women in the first year after childbirth (M = 23.7 weeks) as part of an online study of childbirth. Validity of the final scale was checked by examining differences between women with and without probable depression and PTSD. Results Principal components analysis identified 23 items representing six aspects of memory accounting for 64% of the variance. These were: Emotional memory, Centrality of memory to identity, Coherence, Reliving, Involuntary recall, and Sensory memory. Reliability was good (M alpha = .80). Women with probable depression or PTSD reported more emotional memory, centrality of memories and involuntary recall. Women with probable depression also reported more reliving, and those with probable PTSD reported less coherence and sensory memory. Conclusion The results suggest the BirthMARQ is a coherent and valid measure of the characteristics of memory for childbirth which may be important in postnatal mood and psychopathology. While further testing of its reliability and validity is needed, it is a measure capable of becoming a valuable tool for examining memory characteristics in the important context of childbirth. PMID:24950589

  2. The Birth Memories and Recall Questionnaire (BirthMARQ): development and evaluation.

    PubMed

    Foley, Suzanne; Crawley, Rosalind; Wilkie, Stephanie; Ayers, Susan

    2014-06-20

    Childbirth is a challenging and emotive experience that is accompanied by strong positive and/or negative emotions. Memories of birth may be associated with how women cognitively process birth events postpartum and potentially their adaptation to parenthood. Characteristics of memories for birth may also be associated with postnatal psychological wellbeing. This paper reports the development and evaluation of a questionnaire to measure characteristics of memories of childbirth and to examine the relationship between memories for birth and mental health. The Birth Memories and Recall Questionnaire (BirthMARQ) was developed by generating items from literature reviews and general measures of memory characteristics to cover dimensions relevant to childbirth. Fifty nine items were administered to 523 women in the first year after childbirth (M = 23.7 weeks) as part of an online study of childbirth. Validity of the final scale was checked by examining differences between women with and without probable depression and PTSD. Principal components analysis identified 23 items representing six aspects of memory accounting for 64% of the variance. These were: Emotional memory, Centrality of memory to identity, Coherence, Reliving, Involuntary recall, and Sensory memory. Reliability was good (M alpha = .80). Women with probable depression or PTSD reported more emotional memory, centrality of memories and involuntary recall. Women with probable depression also reported more reliving, and those with probable PTSD reported less coherence and sensory memory. The results suggest the BirthMARQ is a coherent and valid measure of the characteristics of memory for childbirth which may be important in postnatal mood and psychopathology. While further testing of its reliability and validity is needed, it is a measure capable of becoming a valuable tool for examining memory characteristics in the important context of childbirth.

  3. Employment Outcome Ten Years after Moderate to Severe Traumatic Brain Injury: A Prospective Cohort Study.

    PubMed

    Grauwmeijer, Erik; Heijenbrok-Kal, Majanka H; Haitsma, Ian K; Ribbers, Gerard M

    2017-09-01

    The objective of this prospective cohort study was to evaluate the probability of employment and predictors of employment in patients with moderate-to-severe traumatic brain injury (TBI) over a 10-year follow-up. One hundred nine patients (18-67 years) were included, with follow-up measurements at 3, 6, 12, 18, 24, and 36 months and 10 years post-TBI. Potential predictors of employment probability included patient characteristics, injury severity factors, and functional outcome measured at discharge from the hospital with the Glasgow Outcome Scale (GOS), Barthel Index (BI), Functional Independence Measure (FIM), and the Functional Assessment Measure (FAM). Forty-eight patients (42%) completed the 10-year follow-up. Three months post-TBI, 12% were employed, which gradually, but significantly, increased to 57% after the 2-year follow-up (p < 0.001), followed by a significant decrease to 43% (p = 0.041) after 10 years. Ten years post-TBI, we found that employed persons had less severe TBI, shorter length of hospital stay (LOS), and higher scores on the GOS, BI, FIM, and FAM at hospital discharge than unemployed persons. No significant differences in age, sex, educational level, living with partner/family or not, pre-injury employment, professional category, psychiatric symptoms, or discharge destination were found. Longitudinal multivariable analysis showed that time, pre-injury employment, FAM, and LOS were independent predictors of employment probability. We concluded that employment probability 10 years after moderate or severe TBI is related to injury severity and pre-injury employment. Future studies on vocational rehabilitation should focus on modifiable factors and take into consideration the effects of national legislation and national labor market forces.

  4. Grading Practice as Valid Measures of Academic Achievement of Secondary Schools Students for National Development

    ERIC Educational Resources Information Center

    Chiekem, Enwefa

    2015-01-01

    Assigning grades is probably the most important measurement decision that classroom teachers make. When teachers are provided with some measurement instruction, they still use subjective value judgments when assigning grades to students. This paper, therefore, examines grading practice as a valid measure of academic achievement in secondary…

  5. Traffic control device conspicuity.

    DOT National Transportation Integrated Search

    2013-08-01

    The conspicuity of a traffic control device (TCD) is defined as the probability that the device will be noticed. However, there is no agreed-upon measure of what constitutes being noticed. Various measures have been suggested, including eye fixations...

  6. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
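
    A hedged sketch of the zero-inflated Beta likelihood underlying such a model is given below; the parameter names and the simple point-mass-at-zero parameterization are illustrative, not the exact mixed-model specification used in the paper.

      import math

      def zib_logpdf(y, pi0, alpha, beta):
          """Log density: point mass pi0 at y = 0, Beta(alpha, beta) on (0, 1) otherwise."""
          if y == 0.0:
              return math.log(pi0)
          log_norm = math.lgamma(alpha) + math.lgamma(beta) - math.lgamma(alpha + beta)
          return (math.log(1.0 - pi0)
                  + (alpha - 1.0) * math.log(y)
                  + (beta - 1.0) * math.log(1.0 - y)
                  - log_norm)

      print(round(zib_logpdf(0.0, 0.3, 2.0, 5.0), 3), round(zib_logpdf(0.2, 0.3, 2.0, 5.0), 3))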

  7. Tables of stark level transition probabilities and branching ratios in hydrogen-like atoms

    NASA Technical Reports Server (NTRS)

    Omidvar, K.

    1980-01-01

    The transition probabilities, which are given in terms of n′, k′ and n, k, are tabulated. No additional summing or averaging is necessary. The electric quantum number k plays the role of the angular momentum quantum number l in the presence of an electric field. The branching ratios between Stark levels are also tabulated. Necessary formulas for the transition probabilities and branching ratios are given. Symmetries are discussed and selection rules are given. Some disagreements for some branching ratios are found between the present calculation and the measurement of Mark and Wierl. The transition probability multiplied by the statistical weight of the initial state is called the static intensity J_S, while the branching ratios are called the dynamic intensity J_D.

  8. Dynamics of a Landau-Zener non-dissipative system with fluctuating energy levels

    NASA Astrophysics Data System (ADS)

    Fai, L. C.; Diffo, J. T.; Ateuafack, M. E.; Tchoffo, M.; Fouokeng, G. C.

    2014-12-01

    This paper considers a Landau-Zener (two-level) system influenced by a three-dimensional Gaussian and non-Gaussian coloured noise and finds a general form of the time-dependent diabatic quantum bit (qubit) flip transition probabilities in the fast, intermediate and slow noise limits. The qubit flip probability is observed to mimic (for low-frequency noise) that of the standard LZ problem. The qubit flip probability is also observed to be a measure of the quantum coherence of states. The transition probability is observed to be tailored by non-Gaussian low-frequency noise and otherwise by Gaussian low-frequency coloured noise. The intermediate and fast noise limits are observed to alter the memory of the system in time and are found to improve and control quantum information processing.
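
    For reference, in the noise-free case and in one common parameterization (Hamiltonian H(t) = (vt/2)σ_z + Δσ_x, with sweep rate v and coupling Δ), the probability of remaining in the initial diabatic state in the standard Landau-Zener problem is

      P_{\mathrm{LZ}} \;=\; \exp\!\left(-\frac{2\pi\,\Delta^{2}}{\hbar\, v}\right),

    so the noise-free qubit flip probability referred to above is 1 - P_LZ; the noisy generalizations derived in the paper are not reproduced here.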

  9. Modeling and measurement of vesicle pools at the cone ribbon synapse: changes in release probability are solely responsible for voltage-dependent changes in release

    PubMed Central

    Thoreson, Wallace B.; Van Hook, Matthew J.; Parmelee, Caitlyn; Curto, Carina

    2015-01-01

    Post-synaptic responses are a product of quantal amplitude (Q), size of the releasable vesicle pool (N), and release probability (P). Voltage-dependent changes in presynaptic Ca2+ entry alter post-synaptic responses primarily by changing P but have also been shown to influence N. With simultaneous whole cell recordings from cone photoreceptors and horizontal cells in tiger salamander retinal slices, we measured N and P at cone ribbon synapses by using a train of depolarizing pulses to stimulate release and deplete the pool. We developed an analytical model that calculates the total pool size contributing to release under different stimulus conditions by taking into account the prior history of release and empirically-determined properties of replenishment. The model provided a formula that calculates vesicle pool size from measurements of the initial post-synaptic response and limiting rate of release evoked by a train of pulses, the fraction of release sites available for replenishment, and the time constant for replenishment. Results of the model showed that weak and strong depolarizing stimuli evoked release with differing probabilities but the same size vesicle pool. Enhancing intraterminal Ca2+ spread by lowering Ca2+ buffering or applying BayK8644 did not increase PSCs evoked with strong test steps showing there is a fixed upper limit to pool size. Together, these results suggest that light-evoked changes in cone membrane potential alter synaptic release solely by changing release probability. PMID:26541100
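
    A toy depletion-with-replenishment simulation (not the authors' analytical model) illustrates the roles of N, P, and Q during a pulse train: the weak and strong stimuli below differ only in release probability while drawing on the same vesicle pool, and all parameter values are invented.

      def train_responses(N, P, Q, n_pulses, refill_fraction):
          """Response to each pulse is (available vesicles) * P * Q; the pool partially refills between pulses."""
          available = float(N)
          out = []
          for _ in range(n_pulses):
              released = available * P
              out.append(released * Q)
              available -= released
              available += (N - available) * refill_fraction   # partial replenishment toward N
          return out

      weak = train_responses(N=20, P=0.2, Q=1.0, n_pulses=8, refill_fraction=0.3)
      strong = train_responses(N=20, P=0.7, Q=1.0, n_pulses=8, refill_fraction=0.3)
      print([round(x, 2) for x in weak])
      print([round(x, 2) for x in strong])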

  10. Ozone-surface interactions: Investigations of mechanisms, kinetics, mass transport, and implications for indoor air quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, Glenn Charles

    1999-12-01

    In this dissertation, results are presented of laboratory investigations and mathematical modeling efforts designed to better understand the interactions of ozone with surfaces. In the laboratory, carpet and duct materials were exposed to ozone, and ozone uptake kinetics and the ozone-induced emissions of volatile organic compounds were measured. To understand the results of the experiments, mathematical methods were developed to describe dynamic indoor aldehyde concentrations, mass transport of reactive species to smooth surfaces, the equivalent reaction probability of whole carpet due to the surface reactivity of fibers and carpet backing, and ozone aging of surfaces. Carpets, separated carpet fibers, and separated carpet backing all tended to release aldehydes when exposed to ozone. Secondary emissions were mostly n-nonanal and several other smaller aldehydes. The pattern of emissions suggested that vegetable oils may be precursors for these oxidized emissions. Several possible precursors, and experiments in which linseed and tung oils were tested for their secondary emission potential, are discussed. Dynamic emission rates of 2-nonenal from a residential carpet may indicate that intermediate species in the oxidation of conjugated olefins can significantly delay aldehyde emissions and act as a reservoir for these compounds. The ozone-induced emission rate of 2-nonenal, a very odorous compound, can result in odorous indoor concentrations for several years. Surface ozone reactivity, a key parameter in determining the flux of ozone to a surface, is parameterized by the reaction probability, which is simply the probability that an ozone molecule will be irreversibly consumed when it strikes a surface. In laboratory studies of two residential and two commercial carpets, the ozone reaction probability for carpet fibers, for carpet backing, and the equivalent reaction probability for whole carpet were determined. Typical reaction probability values for these materials were 10^-7, 10^-5, and 10^-5, respectively. To understand how internal surface area influences the equivalent reaction probability of whole carpet, a model of ozone diffusion into and reaction with internal carpet components was developed. This was then used to predict apparent reaction probabilities for carpet. This model was combined with a modified model of turbulent mass transfer developed by Liu et al. to predict deposition rates and indoor ozone concentrations. The model predicts that carpet should have an equivalent reaction probability of about 10^-5, matching laboratory measurements of the reaction probability. For both carpet and duct materials, surfaces become progressively quenched (aging), losing the ability to react or otherwise take up ozone. The functional form of aging was evaluated, and the reaction probability was found to follow a power function with respect to the cumulative uptake of ozone. To understand ozone aging of surfaces, several mathematical descriptions of aging based on two different mechanisms were developed. The observed functional form of aging is mimicked by a model which describes ozone diffusion with internal reaction in a solid. The fleecy nature of carpet materials, in combination with the model of ozone diffusion below a fiber surface and internal reaction, may explain the functional form and the magnitude of the power function parameters observed for ozone interactions with carpet.
    The ozone-induced aldehyde emissions measured from duct materials were combined with an indoor air quality model to show that concentrations of aldehydes indoors may approach odorous levels. Ducts are unlikely to be a significant sink for ozone, due to the low reaction probability in combination with the short residence time of air in ducts.

  11. Non-contact temperature measurement requirements for electronic materials processing

    NASA Technical Reports Server (NTRS)

    Lehoczky, S. L.; Szofran, F. R.

    1988-01-01

    The requirements for non-contact temperature measurement capabilities for electronic materials processing in space are assessed. Non-contact methods are probably incapable of sufficient accuracy for the actual absolute measurement of temperatures in most such applications but would be useful for imaging in some applications.

  12. Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel

    NASA Astrophysics Data System (ADS)

    Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung

    2017-04-01

    The purposes of this study include: (1) designing an automatic monitoring system for river dikes; and (2) developing a framework which enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return-period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike, which is located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework which accounts for the uncertainties from the return-period discharge, Manning's n, scouring depth, soil cohesion, and friction angle, and which enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return-period flood, which agrees with the historical failure data for the study reaches.
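
    As a simple illustration of combining per-mode probabilities into an overall failure probability, the snippet below assumes, for convenience, that the failure modes are independent; the numbers are placeholders, not the study's computed values.

      p_mode = {
          "overtopping": 0.02,
          "surface_erosion": 0.10,
          "mass_failure": 0.08,
          "toe_sliding_or_overturning": 0.45,
      }

      p_survive_all = 1.0
      for p in p_mode.values():
          p_survive_all *= (1.0 - p)        # survive every mode (independence assumed)
      p_overall = 1.0 - p_survive_all
      print(round(p_overall, 3))            # overall failure probability for the event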

  13. Relationships of maternal folate and vitamin B12 status during pregnancy with perinatal depression: The GUSTO study.

    PubMed

    Chong, Mary F F; Wong, Jocelyn X Y; Colega, Marjorelee; Chen, Ling-Wei; van Dam, Rob M; Tan, Chuen Seng; Lim, Ai Lin; Cai, Shirong; Broekman, Birit F P; Lee, Yung Seng; Saw, Seang Mei; Kwek, Kenneth; Godfrey, Keith M; Chong, Yap Seng; Gluckman, Peter; Meaney, Michael J; Chen, Helen

    2014-08-01

    Studies in the general population have proposed links between nutrition and depression, but less is known about the perinatal period. Depletion of nutrient reserves throughout pregnancy and delayed postpartum repletion could increase the risk of perinatal depression. We examined the relationships of plasma folate and vitamin B12 concentrations during pregnancy with perinatal depression. At 26-28 weeks of gestation, plasma folate and vitamin B12 were measured in women from the GUSTO mother-offspring cohort study in Singapore. Depressive symptoms were measured with the Edinburgh Postnatal Depression Scale (EPDS) during the same period and at 3 months postpartum. EPDS scores of ≥15 during pregnancy or ≥13 at postpartum were indicative of probable depression. Of 709 women, 7.2% (n = 51) were identified with probable antenatal depression and 10.4% (n = 74) with probable postnatal depression. Plasma folate concentrations were significantly lower in those with probable antenatal depression than in those without (mean ± SD; 27.3 ± 13.8 vs 40.4 ± 36.5 nmol/L; p = 0.011). No difference in folate concentrations was observed between those with and without probable postnatal depression. In adjusted regression models, the odds of probable antenatal depression decreased by a factor of 0.69 for every SD increase in folate (OR = 0.69 per SD increase in folate; 95% CI: 0.52, 0.94). Plasma vitamin B12 concentrations were not associated with perinatal depression. Lower plasma folate status during pregnancy was associated with antenatal depression, but not with postnatal depression. Replication in other studies is needed to determine the direction of causality between low folate and antenatal depression. NCT01174875. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Tree-average distances on certain phylogenetic networks have their weights uniquely determined.

    PubMed

    Willson, Stephen J

    2012-01-01

    A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
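
    A toy computation of the tree-average distance is sketched below for a network whose single hybrid vertex yields two displayed trees; the tree probabilities and branch-length distances are invented for illustration only.

      # (probability of displayed tree, pairwise leaf distances in that tree)
      displayed_trees = [
          (0.7, {("x", "y"): 5.0, ("x", "z"): 7.0, ("y", "z"): 4.0}),
          (0.3, {("x", "y"): 6.5, ("x", "z"): 5.5, ("y", "z"): 4.0}),
      ]

      def tree_average_distance(a, b):
          """Expected leaf-to-leaf distance over the displayed trees."""
          return sum(p * dists[(a, b)] for p, dists in displayed_trees)

      print(tree_average_distance("x", "y"))   # 0.7 * 5.0 + 0.3 * 6.5 = 5.45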

  15. Definition of the Neutrosophic Probability

    NASA Astrophysics Data System (ADS)

    Smarandache, Florentin

    2014-03-01

    Neutrosophic probability (or likelihood) [1995] is a particular case of the neutrosophic measure. It is an estimation of the chance that an event (different from indeterminacy) occurs, together with an estimation of the chance that some indeterminacy occurs, and an estimation of the chance that the event does not occur. The classical probability deals with fair dice, coins, roulettes, spinners, decks of cards and random walks, while neutrosophic probability deals with unfair or imperfect such objects and processes. For example, if we toss a regular die on an irregular surface which has cracks, then it is possible for the die to get stuck on one of its edges or vertices in a crack (an indeterminate outcome). The sample space is in this case: {1, 2, 3, 4, 5, 6, indeterminacy}. So the probability of getting, for example, 1 is less than 1/6, since there are seven outcomes. The neutrosophic probability is a generalization of the classical probability because, when the chance of indeterminacy of a stochastic process is zero, these two probabilities coincide. The neutrosophic probability that an event A occurs is NP(A) = (ch(A), ch(indetA), ch(antiA)) = (T, I, F), where T, I, F are subsets of [0,1]: T is the chance that A occurs, denoted ch(A); I is the indeterminate chance related to A, ch(indetA); and F is the chance that A does not occur, ch(antiA). So NP is a generalization of imprecise probability as well. If T, I, and F are crisp numbers then ⁻0 ≤ T + I + F ≤ 3⁺. We use the same notations (T, I, F) as in neutrosophic logic and set theory.

  16. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
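
    The estimator in question can be computed directly from the two samples. The sketch below uses the usual definition (the proportion of cross-group pairs won by group 1, with ties counted as one half), which equals the area under the empirical ROC curve; the sample values are made up.

      def prob_superiority(group1, group2):
          """A = P(X > Y) + 0.5 * P(X = Y) for independent samples X in group1, Y in group2."""
          wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
                     for a in group1 for b in group2)
          return wins / (len(group1) * len(group2))

      print(prob_superiority([5, 7, 8, 9], [4, 5, 6, 6]))   # 0.84375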

  17. Probability of Damage to Sidewalks and Curbs by Street Trees in the Tropics

    Treesearch

    John K. Francis; Bernard R. Parresol; Juana Marin de Patino

    1996-01-01

    For 75 trees each of 12 species growing along streets in San Juan, Puerto Rico and Merida, Mexico, diameter at breast height and distance to sidewalk or curb was measured and damage (cracking or raising) was evaluated. Logistic analysis was used to construct a model to predict probability of damage to sidewalk or curb. Distance to the pavement, diameter of the tree,...

  18. Cluster State Quantum Computing

    DTIC Science & Technology

    2012-12-01

    probability that the desired target gate ATar has been faithfully implemented on the computational modes given a successful measurement of the ancilla modes (Eq. (3)), since Tr(ATar† ATar) = 2^Mc for a properly normalized target gate. As we are interested ... optimization method we have developed maximizes the success probability S for a given target transformation ATar, for given ancilla resources, and for a

  19. Cluster State Quantum Computation

    DTIC Science & Technology

    2014-02-01

    information of relevance to the transformation. We define the fidelity as the probability that the desired target gate ATar has been faithfully implemented on the computational modes given a successful measurement of the ancilla modes (Eq. (3)), since Tr(ATar† ATar) = 2^Mc for a properly normalized ... photonic gates. The optimization method we have developed maximizes the success probability S for a given target transformation ATar, for given

  20. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
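
    The idea can be sketched numerically: compute the standard decision-curve net benefit at each threshold probability and average it against an assumed threshold distribution. The weight function, data, and threshold grid below are placeholders, not the estimation procedure proposed in the paper.

      import numpy as np

      def net_benefit(y_true, risk, p_t):
          """Standard net benefit at threshold p_t: TP/n - (FP/n) * p_t / (1 - p_t)."""
          treat = risk >= p_t
          n = len(y_true)
          tp = np.sum(treat & (y_true == 1)) / n
          fp = np.sum(treat & (y_true == 0)) / n
          return tp - fp * p_t / (1.0 - p_t)

      def weighted_area(y_true, risk, thresholds, weights):
          """Net benefit averaged against a (normalized) weight over thresholds."""
          nb = np.array([net_benefit(y_true, risk, t) for t in thresholds])
          w = np.asarray(weights, dtype=float)
          return float(np.sum(nb * w / w.sum()))

      rng = np.random.default_rng(1)
      y = rng.integers(0, 2, 500)
      risk = np.clip(0.5 * y + rng.normal(0.25, 0.2, 500), 0.01, 0.99)   # toy risk predictions
      ts = np.linspace(0.05, 0.5, 10)
      print(round(weighted_area(y, risk, ts, weights=np.exp(-5 * ts)), 4))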

  1. Measurement of the Bs0-Bs0 oscillation frequency.

    PubMed

    Abulencia, A; Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Ben Haim, E; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chapman, J; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, P H; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cruz, A; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cyr, D; DaRonco, S; D'Auria, S; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'Orso, M; Delli Paoli, F; Demers, S; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Di Ruzza, B; Dionisi, C; Dittmann, J R; DiTuro, P; Dörr, C; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Ebina, K; Efron, J; Ehlers, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Frisch, H J; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garcia Sciveres, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giokaris, N; Giolo, K; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, S R; Hahn, K; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hatakeyama, K; Hauser, J; Hays, C; Heijboer, A; Heinemann, B; Heinrich, J; Herndon, M; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Kang, J; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kobayashi, H; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kovalev, A; Kraan, A; Kraus, J; Kravchenko, I; 
Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Maki, T; Maksimovic, P; Malde, S; Manca, G; Margaroli, F; Marginean, R; Marino, C; Martin, A; Martin, V; Martínez, M; Maruyama, T; Mastrandrea, P; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; von der Mey, M; Miao, T; Miladinovic, N; Miles, J; Miller, R; Miller, J S; Mills, C; Milnik, M; Miquel, R; Mitra, A; Mitselmakher, G; Miyamoto, A; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Naganoma, J; Nahn, S; Nakano, I; Napier, A; Naumov, D; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Nurse, E; Ogawa, T; Oh, S H; Oh, Y D; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Rakitin, A; Rappoccio, S; Ratnikov, F; Reisert, B; Rekovic, V; van Remortel, N; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Rott, C; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Saltzberg, D; Sanchez, C; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Sjolin, J; Skiba, A; Slaughter, A J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spezziga, M; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tanimoto, N; Tecchio, M; Teng, P K; Terashi, K; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vaiciulis, A; Vallecorsa, S; Varganov, A; Vataga, E; Velev, G; Veramendi, G; Veszpremi, V; Vidal, R; Vila, I; Vilar, R; Vine, T; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; 
Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Wan, Z; Wang, S M; Warburton, A; Waschke, S; Waters, D; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhang, X; Zhou, J; Zucchelli, S

    2006-08-11

    We present the first precise measurement of the Bs0-Bs0 oscillation frequency Delta m_s. We use 1 fb-1 of data from p-pbar collisions at sqrt(s) = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. The sample contains signals of 3600 fully reconstructed hadronic Bs decays and 37,000 partially reconstructed semileptonic Bs decays. We measure the probability as a function of proper decay time that the Bs decays with the same, or opposite, flavor as the flavor at production, and we find a signal consistent with Bs0-Bs0 oscillations. The probability that random fluctuations could produce a comparable signal is 0.2%. Under the hypothesis that the signal is due to Bs0-Bs0 oscillations, we measure Delta m_s = 17.31 +0.33 -0.18 (stat) +/- 0.07 (syst) ps^-1 and determine |Vtd/Vts| = 0.208 +0.001 -0.002 (expt) +0.008 -0.006 (theor).
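    For orientation, the textbook time-dependent mixing probabilities that underlie such a measurement, neglecting the width difference and CP violation in mixing; this is standard flavor-oscillation phenomenology, not an expression taken from the paper itself:

      \[
        P_{\mathrm{same\ flavor}}(t) = \tfrac{1}{2}\,\Gamma_s\, e^{-\Gamma_s t}\,\bigl[1 + \cos(\Delta m_s\, t)\bigr],
        \qquad
        P_{\mathrm{opposite\ flavor}}(t) = \tfrac{1}{2}\,\Gamma_s\, e^{-\Gamma_s t}\,\bigl[1 - \cos(\Delta m_s\, t)\bigr].
      \]

    Their sum is the total decay-time distribution Gamma_s e^{-Gamma_s t}; the oscillation frequency Delta m_s is what the analysis extracts from the measured same- versus opposite-flavor probabilities.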

  2. Information Use Differences in Hot and Cold Risk Processing: When Does Information About Probability Count in the Columbia Card Task?

    PubMed

    Markiewicz, Łukasz; Kubińska, Elżbieta

    2015-01-01

    This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks.

  3. Information Use Differences in Hot and Cold Risk Processing: When Does Information About Probability Count in the Columbia Card Task?

    PubMed Central

    Markiewicz, Łukasz; Kubińska, Elżbieta

    2015-01-01

    Objective: This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Methods: Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Results: Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks. PMID:26635652

  4. Prediction of shock initiation thresholds and ignition probability of polymer-bonded explosives using mesoscale simulations

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Wei, Yaochi; Horie, Yasuyuki; Zhou, Min

    2018-05-01

    The design of new materials requires the establishment of macroscopic measures of material performance as functions of microstructure. Traditionally, this process has been an empirical endeavor. An approach to computationally predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs) using mesoscale simulations is developed. The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture processes responsible for the development of hotspots and damage. The specific mechanisms tracked include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to directly mimic relevant experiments for quantification of statistical variations of material behavior due to inherent material heterogeneities. The particular thresholds and ignition probabilities predicted are expressed in James-type and Walker-Wasley-type relations, leading to the establishment of explicit analytical expressions for the ignition probability as a function of loading. Specifically, the ignition thresholds corresponding to any given level of ignition probability and ignition probability maps are predicted for PBX 9404 for the loading regime of Up = 200-1200 m/s, where Up is the particle speed. The predicted results are in good agreement with available experimental measurements. A parametric study also shows that binder properties can significantly affect the macroscopic ignition behavior of PBXs. The capability to computationally predict the macroscopic engineering material response relations out of material microstructures and basic constituent and interfacial properties lends itself to the design of new materials as well as the analysis of existing materials.
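    The paper casts its predictions as James-type and Walker-Wasley-type threshold relations; those specific forms are not reproduced here. As a purely illustrative stand-in for turning go/no-go simulation outcomes into an ignition-probability curve, one might fit a logistic function of particle velocity; the data, functional form, and parameter values below are hypothetical:

      import numpy as np
      from scipy.optimize import curve_fit

      def p_ignite(up, up50, k):
          # Logistic ignition probability vs. particle velocity up (m/s);
          # up50 is the 50%-probability threshold, k the steepness.
          return 1.0 / (1.0 + np.exp(-k * (up - up50)))

      # Hypothetical go/no-go outcomes from statistically similar samples.
      up = np.array([200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200], dtype=float)
      go = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1], dtype=float)

      params, _ = curve_fit(p_ignite, up, go, p0=[600.0, 0.01])
      print("Up at 50%% ignition probability: %.0f m/s" % params[0])

    A least-squares fit like this is only a rough sketch; the actual study derives the probability map from the statistics of the mesoscale simulations themselves.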

  5. ATTITUDE FILTERING ON SO(3)

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2005-01-01

    A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.
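    A grid-based caricature of the propagate/update cycle described above, assuming the distribution has been discretized over a finite set of states; the actual filter works with a density on SO(3) x R^N, and the transition matrix here merely stands in for the Fokker-Planck propagation step:

      import numpy as np

      def propagate(p, transition):
          # Propagation between measurements: a discrete stand-in for the
          # Fokker-Planck step, p'[i] = sum_j transition[i, j] * p[j].
          return transition @ p

      def bayes_update(p, likelihood):
          # Measurement update via Bayes's formula: posterior is proportional
          # to prior times likelihood, renormalized to a probability vector.
          q = p * likelihood
          return q / q.sum()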

  6. Analysis and model on space-time characteristics of wind power output based on the measured wind speed data

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan

    2018-02-01

    Most existing studies of wind power output focus on the fluctuation of wind farm output, while the spatial self-complementarity of wind power output time series has been ignored. Existing probability models therefore cannot reflect the features of power systems incorporating wind farms. This paper analyzed the spatial self-complementarity of wind power and proposed a probability model that reflects the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. This model could provide an important reference for power system simulation incorporating wind farms.

  7. Impact of uncertainty in expected return estimation on stock price volatility

    NASA Astrophysics Data System (ADS)

    Kostanjcar, Zvonko; Jeren, Branko; Juretic, Zeljan

    2012-11-01

    We investigate the origin of volatility in financial markets by defining an analytical model for the time evolution of stock share prices. The defined model is similar to the GARCH class of models but can additionally exhibit bimodal behaviour in the supply-demand structure of the market. Moreover, it differs from existing Ising-type models. It turns out that the constructed model is a solution of a thermodynamic limit of a Gibbs probability measure when the number of traders and the number of stock shares approach infinity. The energy functional of the Gibbs probability measure is derived from the Nash equilibrium of the underlying game.
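    For reference, the generic form of a Gibbs probability measure over market configurations sigma; the paper's specific energy functional, derived from the Nash equilibrium of the underlying game, is not reproduced here:

      \[
        P_{\beta}(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z_{\beta}},
        \qquad
        Z_{\beta} = \sum_{\sigma} e^{-\beta H(\sigma)},
      \]

    where H is the energy functional and beta an inverse-temperature parameter; the thermodynamic limit mentioned above is the limit of this measure as the number of traders and shares grows without bound.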

  8. A Gaussian measure of quantum phase noise

    NASA Technical Reports Server (NTRS)

    Schleich, Wolfgang P.; Dowling, Jonathan P.

    1992-01-01

    We study the width of the semiclassical phase distribution of a quantum state in its dependence on the average number of photons (m) in this state. As a measure of phase noise, we choose the width, delta phi, of the best Gaussian approximation to the dominant peak of this probability curve. For a coherent state, this width decreases with the square root of (m), whereas for a truncated phase state it decreases linearly with increasing (m). For an optimal phase state, delta phi decreases exponentially but so does the area caught underneath the peak: all the probability is stored in the broad wings of the distribution.
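    Restating the scalings quoted above in compact form, reading "decreases with the square root" as an inverse-square-root law and "decreases linearly" as an inverse-linear law; the proportionality constants are not given in the abstract:

      \[
        \delta\phi_{\mathrm{coherent}} \propto \langle m \rangle^{-1/2},
        \qquad
        \delta\phi_{\mathrm{truncated\ phase}} \propto \langle m \rangle^{-1},
        \qquad
        \delta\phi_{\mathrm{optimal\ phase}} \propto e^{-c\langle m \rangle},
      \]

    with the caveat noted above that for the optimal phase state the area under the Gaussian peak also shrinks, since most of the probability sits in the broad wings of the distribution.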

  9. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow principles similar to the diagnostic measures used in linear regression, recast in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
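    A minimal sketch of the case-deletion quantity described above, assuming scikit-learn's LinearDiscriminantAnalysis as the classifier; the paper's graphical display itself is not reproduced, and the function and variable names are illustrative:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def posterior_shift_when_omitted(X, y, i):
          # Case-deletion diagnostic: how much the predicted class posteriors
          # of the remaining points move when observation i is left out of
          # the training set.
          keep = np.arange(len(y)) != i
          full = LinearDiscriminantAnalysis().fit(X, y).predict_proba(X[keep])
          loo = LinearDiscriminantAnalysis().fit(X[keep], y[keep]).predict_proba(X[keep])
          return np.abs(full - loo).max(axis=1)

    Plotting these per-point shifts for each omitted observation would give a display in the spirit of the one proposed.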

  10. Relationship between cutoff frequency and accuracy in time-interval photon statistics applied to oscillating signals

    NASA Astrophysics Data System (ADS)

    Rebolledo, M. A.; Martinez-Betorz, J. A.

    1989-04-01

    In this paper the accuracy in the determination of the period of an oscillating signal, when obtained from the photon statistics time-interval probability, is studied as a function of the precision (the inverse of the cutoff frequency of the photon counting system) with which time intervals are measured. The results are obtained by means of an experiment with a square-wave signal, where the Fourier or square-wave transforms of the time-interval probability are measured. It is found that for values of the frequency of the signal near the cutoff frequency the errors in the period are small.

  11. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
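    A sketch of the first stage described above, converting the MRI volume to a probability space from the user-selected acceptance and rejection pixels, assuming a simple one-dimensional Gaussian intensity model for each class; the paper's actual statistical model, the anisotropic 3D diffusion step, and the Sobolev active contour are not reproduced here:

      import numpy as np
      from scipy.stats import norm

      def tumor_probability_volume(volume, fg_samples, bg_samples):
          # volume, fg_samples, bg_samples: numpy arrays of voxel intensities.
          # Fit 1D Gaussian intensity models to the user-selected acceptance
          # (foreground) and rejection (background) pixels of one slice, then
          # map every voxel of the whole volume to a tumor probability.
          fg = norm(fg_samples.mean(), fg_samples.std() + 1e-6)
          bg = norm(bg_samples.mean(), bg_samples.std() + 1e-6)
          pf, pb = fg.pdf(volume), bg.pdf(volume)
          return pf / (pf + pb + 1e-12)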

  12. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  13. Brownian motion surviving in the unstable cubic potential and the role of Maxwell's demon

    NASA Astrophysics Data System (ADS)

    Ornigotti, Luca; Ryabov, Artem; Holubec, Viktor; Filip, Radim

    2018-03-01

    The trajectories of an overdamped particle in a highly unstable potential diverge so rapidly that the variance of position grows much faster than its mean. A description of the dynamics by moments is therefore not informative. Instead, we propose and analyze local directly measurable characteristics, which overcome this limitation. We discuss the most probable particle position (position of the maximum of the probability density) and the local uncertainty in an unstable cubic potential, V(x) ~ x^3, both in the transient regime and in the long-time limit. The maximum shifts against the acting force as a function of time and temperature. Simultaneously, the local uncertainty does not increase faster than the observable shift. In the long-time limit, the probability density naturally attains a quasistationary form. We interpret this process as a stabilization via the measurement-feedback mechanism, the Maxwell demon, which works as an entropy pump. The rules for measurement and feedback naturally arise from the basic properties of the unstable dynamics. All reported effects are inherent in any unstable system. Their detailed understanding will stimulate the development of stochastic engines and amplifiers and, later, their quantum counterparts.
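    For concreteness, the standard overdamped Langevin description of such a particle, written here in a generic textbook form with an illustrative cubic potential V(x) = a x^3; the paper's specific parametrization may differ:

      \[
        \gamma\,\dot{x}(t) = -V'(x) + \sqrt{2\gamma k_{B} T}\;\xi(t),
        \qquad
        \langle \xi(t)\,\xi(t')\rangle = \delta(t-t'),
        \qquad
        V(x) = a\,x^{3}.
      \]

    The probability density of x then obeys the corresponding Fokker-Planck equation; the "most probable position" discussed above is the maximum of that density rather than its mean, which is why moment-based descriptions fail for this potential.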

  14. Structural symmetry in evolutionary games.

    PubMed

    McAvoy, Alex; Hauert, Christoph

    2015-10-06

    In evolutionary game theory, an important measure of a mutant trait (strategy) is its ability to invade and take over an otherwise-monomorphic population. Typically, one quantifies the success of a mutant strategy via the probability that a randomly occurring mutant will fixate in the population. However, in a structured population, this fixation probability may depend on where the mutant arises. Moreover, the fixation probability is just one quantity by which one can measure the success of a mutant; fixation time, for instance, is another. We define a notion of homogeneity for evolutionary games that captures what it means for two single-mutant states, i.e. two configurations of a single mutant in an otherwise-monomorphic population, to be 'evolutionarily equivalent' in the sense that all measures of evolutionary success are the same for both configurations. Using asymmetric games, we argue that the term 'homogeneous' should apply to the evolutionary process as a whole rather than to just the population structure. For evolutionary matrix games in graph-structured populations, we give precise conditions under which the resulting process is homogeneous. Finally, we show that asymmetric matrix games can be reduced to symmetric games if the population structure possesses a sufficient degree of symmetry. © 2015 The Author(s).
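    As a point of reference for the fixation probability mentioned above, the classic well-mixed Moran-process baseline for a single mutant of constant relative fitness r in a population of size N is the textbook expression below; the paper itself treats structured populations and matrix games, where this quantity generally depends on where the mutant arises:

      \[
        \rho = \frac{1 - 1/r}{1 - 1/r^{N}},
        \qquad
        \rho \to \frac{1}{N} \ \text{as } r \to 1.
      \]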

  15. Common Mental Disorders among Occupational Groups: Contributions of the Latent Class Model

    PubMed Central

    Martins Carvalho, Fernando; de Araújo, Tânia Maria

    2016-01-01

    Background. The Self-Reporting Questionnaire (SRQ-20) is widely used for evaluating common mental disorders. However, few studies have evaluated the performance of SRQ-20 measurements in occupational groups. This study aimed to describe manifestation patterns of common mental disorder symptoms among worker populations by using latent class analysis. Methods. Data derived from 9,959 Brazilian workers, obtained from four cross-sectional studies that used similar methodology, among groups of informal workers, teachers, healthcare workers, and urban workers. Common mental disorders were measured by using the SRQ-20. Latent class analysis was performed on each database separately. Results. Three classes of symptoms were confirmed in the occupational categories investigated. In all studies, class I best met the criteria for suspicion of common mental disorders. Class II comprised workers with an intermediate probability of endorsing the items on anxiety, sadness, and decreased energy that characterize common mental disorders. Class III was composed of subgroups of workers with a low probability of responding positively to the screening questions for common mental disorders. Conclusions. Three patterns of common mental disorder symptoms were identified in the occupational groups investigated, ranging from distinctive features to low probabilities of occurrence. The SRQ-20 measurements showed stability in capturing nonpsychotic symptoms. PMID:27630999

  16. Probabilistic Evaluation of Three-Dimensional Reconstructions from X-Ray Images Spanning a Limited Angle

    PubMed Central

    Frost, Anja; Renners, Eike; Hötter, Michael; Ostermann, Jörn

    2013-01-01

    An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. Then, the reconstructed objects no longer truly represent the original. Inside the volumes, the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the object receiving the X-ray examination consists of. For each voxel, the proposed method outputs a numerical value that represents the probability of existence of a predefined material at the position of the voxel during the X-ray examination. Such a probabilistic quality measure has been lacking so far. In our experiment, falsely reconstructed areas are detected by their low probability. In exactly reconstructed areas, a high probability predominates. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction. PMID:23344378

  17. Structural symmetry in evolutionary games

    PubMed Central

    McAvoy, Alex; Hauert, Christoph

    2015-01-01

    In evolutionary game theory, an important measure of a mutant trait (strategy) is its ability to invade and take over an otherwise-monomorphic population. Typically, one quantifies the success of a mutant strategy via the probability that a randomly occurring mutant will fixate in the population. However, in a structured population, this fixation probability may depend on where the mutant arises. Moreover, the fixation probability is just one quantity by which one can measure the success of a mutant; fixation time, for instance, is another. We define a notion of homogeneity for evolutionary games that captures what it means for two single-mutant states, i.e. two configurations of a single mutant in an otherwise-monomorphic population, to be ‘evolutionarily equivalent’ in the sense that all measures of evolutionary success are the same for both configurations. Using asymmetric games, we argue that the term ‘homogeneous’ should apply to the evolutionary process as a whole rather than to just the population structure. For evolutionary matrix games in graph-structured populations, we give precise conditions under which the resulting process is homogeneous. Finally, we show that asymmetric matrix games can be reduced to symmetric games if the population structure possesses a sufficient degree of symmetry. PMID:26423436

  18. Isotropic probability measures in infinite dimensional spaces: Inverse problems/prior information/stochastic inversion

    NASA Technical Reports Server (NTRS)

    Backus, George

    1987-01-01

    Let R be the real numbers, R^n the linear space of all real n-tuples, and R^∞ the linear space of all infinite real sequences x = (x_1, x_2, ...). Let P_n : R^∞ → R^n be the projection operator with P_n(x) = (x_1, ..., x_n). Let p_∞ be a probability measure on the smallest sigma-ring of subsets of R^∞ which includes all of the cylinder sets P_n^{-1}(B_n), where B_n is an arbitrary Borel subset of R^n. Let p_n be the marginal distribution of p_∞ on R^n, so p_n(B_n) = p_∞(P_n^{-1}(B_n)) for each B_n. A measure on R^n is isotropic if it is invariant under all orthogonal transformations of R^n. All members of the set of all isotropic probability distributions on R^n are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.

  19. Causality in time-neutral cosmologies

    NASA Astrophysics Data System (ADS)

    Kent, Adrian

    1999-02-01

    Gell-Mann and Hartle (GMH) have recently considered time-neutral cosmological models in which the initial and final conditions are independently specified, and several authors have investigated experimental tests of such models. We point out here that GMH time-neutral models can allow superluminal signaling, in the sense that it can be possible for observers in those cosmologies, by detecting and exploiting regularities in the final state, to construct devices which send and receive signals between space-like separated points. In suitable cosmologies, any single superluminal message can be transmitted with probability arbitrarily close to one by the use of redundant signals. However, the outcome probabilities of quantum measurements generally depend on precisely which past and future measurements take place. As the transmission of any signal relies on quantum measurements, its transmission probability is similarly context dependent. As a result, the standard superluminal signaling paradoxes do not apply. Despite their unusual features, the models are internally consistent. These results illustrate an interesting conceptual point. The standard view of Minkowski causality is not an absolutely indispensable part of the mathematical formalism of relativistic quantum theory. It is contingent on the empirical observation that naturally occurring ensembles can be naturally pre-selected but not post-selected.

  20. Cost efficient environmental survey paths for detecting continuous tracer discharges

    NASA Astrophysics Data System (ADS)

    Alendal, G.

    2017-07-01

    Designing monitoring programs for detecting potential tracer discharges from unknown locations is challenging. The high variability of the environment may camouflage the anticipated anisotropic signal from a discharge, and there are a number of possible discharge scenarios. Monitoring operations may also be costly, constraining the number of measurements taken. Assuming that a discharge is active, and starting from a prior belief about the most likely seep location, a method that uses Bayes' theorem combined with discharge footprint predictions is used to update the probability map. Measurement locations that yield the highest reduction in the overall probability that a discharge is active can be identified. The relative cost of relocating versus taking measurements can be taken into account. Three different strategies are suggested to enable cost-efficient paths for autonomous vessels.
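    A minimal sketch of the probability-map update described above, under the simplifying assumptions that the discharge (if active) occupies exactly one cell of a gridded prior and that a footprint/transport model supplies, for each cell, the probability that a given measurement would detect a discharge located there; the names and the greedy site-selection rule are illustrative, not the paper's algorithm:

      import numpy as np

      def update_after_non_detection(p_active, prior_loc, detect_prob):
          # Bayes update after one measurement that did NOT detect anything.
          # p_active: prior probability that a discharge is active at all.
          # prior_loc: per-cell location distribution (sums to 1).
          # detect_prob[i]: probability the measurement would have detected a
          # discharge located in cell i, from a footprint prediction.
          miss = 1.0 - detect_prob
          p_miss_if_active = float(np.sum(prior_loc * miss))
          evidence = p_active * p_miss_if_active + (1.0 - p_active)
          new_p_active = p_active * p_miss_if_active / evidence
          new_prior_loc = prior_loc * miss
          new_prior_loc /= new_prior_loc.sum()
          return new_p_active, new_prior_loc

      def best_next_site(prior_loc, detect_prob_per_site):
          # Greedy choice: the candidate site whose non-detection outcome would
          # most reduce the overall probability of an active discharge, i.e.
          # the site with the largest detection chance under the prior.
          coverage = [float(np.sum(prior_loc * d)) for d in detect_prob_per_site]
          return int(np.argmax(coverage))

    Weighting the coverage term by the relocation cost of each candidate site would incorporate the cost trade-off mentioned above.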
