Schutte, Anne R.; Spencer, John P.
2009-01-01
This study tested a dynamic field theory (DFT) of spatial working memory and an associated spatial precision hypothesis (SPH). Between three and six years of age there is a qualitative shift in how children use reference axes to remember locations: 3-year-olds’ spatial recall responses are biased toward reference axes after short memory delays, whereas 6-year-olds’ responses are biased away from reference axes. According to the DFT and the SPH, quantitative improvements over development in the precision of excitatory and inhibitory working memory processes lead to this qualitative shift. Simulations of the DFT in Experiment 1 predict that improvements in precision should cause the spatial range of targets attracted toward a reference axis to narrow gradually over development with repulsion emerging and gradually increasing until responses to most targets show biases away from the axis. Results from Experiment 2 with 3- to 5-year-olds support these predictions. Simulations of the DFT in Experiment 3 quantitatively fit the empirical results and offer insights into the neural processes underlying this developmental change. PMID:19968430
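As background for readers less familiar with dynamic field models, the generic single-field equation underlying the DFT (an Amari-type neural field) is sketched below. This is not the exact multi-field model simulated in the paper, and the kernel form and symbols are illustrative; under the spatial precision hypothesis, development corresponds to narrowing the excitatory part of the interaction kernel and sharpening inhibition.

```latex
\tau\,\dot{u}(x,t) = -u(x,t) + h + S(x,t) + \int w(x - x')\, g\!\big(u(x',t)\big)\, dx',
\qquad
w(\Delta x) = c_{\text{exc}}\, e^{-\Delta x^{2}/(2\sigma_{\text{exc}}^{2})}
            - c_{\text{inh}}\, e^{-\Delta x^{2}/(2\sigma_{\text{inh}}^{2})}
```

Here u(x,t) is activation over spatial locations, h < 0 is the resting level, S(x,t) carries target and reference-axis inputs, and g is a sigmoidal output function; decreasing the excitatory width (greater precision) is the kind of quantitative change the SPH appeals to in explaining the shift from attraction toward repulsion.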
Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).
Tierney, Adam; Kraus, Nina
2014-01-01
Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.
Annual survival of Snail Kites in Florida: Radio telemetry versus capture-resighting data
Bennetts, R.E.; Dreitz, V.J.; Kitchens, W.M.; Hines, J.E.; Nichols, J.D.
1999-01-01
We estimated annual survival of Snail Kites (Rostrhamus sociabilis) in Florida using the Kaplan-Meier estimator with data from 271 radio-tagged birds over a three-year period and capture-recapture (resighting) models with data from 1,319 banded birds over a six-year period. We tested the hypothesis that survival differed among three age classes using both data sources. We tested additional hypotheses about spatial and temporal variation using a combination of data from radio telemetry and single- and multistrata capture-recapture models. Results from these data sets were similar in their indications of the sources of variation in survival, but they differed in some parameter estimates. Both data sources indicated that survival was higher for adults than for juveniles, but they did not support delineation of a subadult age class. Our data also indicated that survival differed among years and regions for juveniles but not for adults. Estimates of juvenile survival using radio telemetry data were higher than estimates using capture-recapture models for two of three years (1992 and 1993). Ancillary evidence based on censored birds indicated that some mortality of radio-tagged juveniles went undetected during those years, resulting in biased estimates. Thus, we have greater confidence in our estimates of juvenile survival using capture-recapture models. Precision of estimates reflected the number of parameters estimated and was surprisingly similar between radio telemetry and single-stratum capture-recapture models, given the substantial differences in sample sizes. Not having to estimate resighting probability likely offsets, to some degree, the smaller sample sizes from our radio telemetry data. Precision of capture-recapture models was lower using multistrata models where region-specific parameters were estimated than using single-stratum models, where spatial variation in parameters was not taken into account.
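As an illustration of the radio-telemetry side of such an analysis, the sketch below fits a Kaplan-Meier survival curve to hypothetical telemetry records (tracking durations with censoring) using the lifelines package; the data and one-year horizon are assumptions for illustration, not the Snail Kite records.

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Hypothetical radio-telemetry records: days each bird was tracked, and whether
# a mortality was detected (1) or the record was censored (0, e.g. transmitter failure).
rng = np.random.default_rng(42)
days_tracked = rng.integers(30, 366, size=60)
death_observed = (rng.random(60) < 0.25).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(durations=days_tracked, event_observed=death_observed, label="juveniles")

# Annual survival is the Kaplan-Meier survival function evaluated at day 365.
annual_survival = kmf.survival_function_at_times(365).iloc[0]
print(f"Estimated annual survival: {annual_survival:.2f}")
```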
Yeh, Su-Ling; Liao, Hsin-I
2010-10-01
The contingent orienting hypothesis (Folk, Remington, & Johnston, 1992) states that attentional capture is contingent on top-down control settings induced by task demands. Past studies supporting this hypothesis have identified three kinds of top-down control settings: for target-specific features, for the strategy to search for a singleton, and for visual features in the target display as a whole. Previously, we have found stimulus-driven capture by onset that was not contingent on the first two kinds of settings (Yeh & Liao, 2008). The current study aims to test the third kind: the displaywide contingent orienting hypothesis (Gibson & Kelsey, 1998). Specifically, we ask whether an onset stimulus can still capture attention in the spatial cueing paradigm when attentional control settings for the displaywide onset of the target are excluded by making all letters in the target display emerge from placeholders. Results show that a preceding uninformative onset cue still captured attention to its location in a stimulus-driven fashion, whereas a color cue captured attention only when it was contingent on the setting for displaywide color. These results raise doubts as to the generality of the displaywide contingent orienting hypothesis and help delineate the boundary conditions on this hypothesis. Copyright © 2010 Elsevier B.V. All rights reserved.
Measuring the Spin Correlation of Nuclear Muon Capture in Helium-3.
NASA Astrophysics Data System (ADS)
McCracken, Dorothy Jill
1996-06-01
We have completed the first measurement of the spin correlation of nuclear muon capture in ³He: μ⁻ + ³He → ν_μ + ³H. From this spin correlation, we can extract the induced pseudoscalar form factor, F_p, of the weak charged nuclear current. This form factor is not well known experimentally. If nuclear muon capture were a purely leptonic weak interaction, the current would have no pseudoscalar coupling, and therefore F_p arises from QCD contributions. Since ³He is a fairly well understood system, a precise measurement of F_p could provide a direct test of the theories which describe QCD at low energies. This experiment was performed at TRIUMF in Vancouver, BC, using a muon beam. We stopped unpolarized muons in a laser-polarized target filled with ³He and Rb vapor. The muons were captured into atomic orbitals, forming muonic ³He which was then polarized via collisions with the optically pumped Rb vapor. When polarized muons undergo nuclear capture in ³He, the total capture rate is proportional to (1 + A_v P_v cos θ), where θ is the angle between the muon polarization and the triton recoil direction, P_v is the muon vector polarization, and A_v is the vector analyzing power. The partially conserved axial current hypothesis (PCAC) predicts that A_v = 0.524 ± 0.006. Our measurement of A_v is in agreement with this prediction: A_v = 0.604 ± 0.093 (stat.) +0.112/−0.142 (syst.). This thesis will describe the design, construction, and operation of the device which simultaneously served as a polarized target and a gridded ion chamber. The ion chamber apparatus enabled us to identify recoil tritons as well as determine their direction of motion. The directional information was obtained by fitting the shapes of the pulses generated by the tritons. In addition, this thesis will describe in detail the analysis of these pulses, which resulted in a measurement of the raw forward/backward asymmetry of the triton recoil direction. This asymmetry was measured to a precision of 11.5%. With the techniques employed in this experiment, a clear path exists to obtaining a precise measurement of the induced pseudoscalar coupling of the charged weak nuclear current. Plans for a future run, in which we will improve upon these techniques, are underway.
Attentional Capture by an Unannounced Color Singleton Depends on Expectation Discrepancy
ERIC Educational Resources Information Center
Horstmann, Gernot
2005-01-01
Eight experiments examined the conditions under which a color singleton that is presented for the 1st time without prior announcement captures attention. The main hypothesis is that an unannounced singleton captures attention to the extent that it deviates from expectations. This hypothesis was tested within a visual-search paradigm in which…
Baird, Emily; Fernandez, Diana C.; Wcislo, William T.; Warrant, Eric J.
2015-01-01
Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion—a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
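To make the idea concrete, here is a simplified Wald-style check (not the test statistic proposed in the paper) of whether band size k suffices: for each coordinate, regress it on its k+1 nearest predecessors and ask whether the extra, outside-band coefficient is zero; squared t-statistics are aggregated and standardized to an approximately normal statistic under Gaussian working assumptions. Column ordering, centering, and the normal approximation are assumptions of the sketch.

```python
import numpy as np
from scipy import stats

def band_size_test(X, k):
    """H0: a banded Cholesky factor with band size k is adequate.
    Illustrative Wald-style aggregate over per-row regressions; X is n x p, centered."""
    n, p = X.shape
    z2 = []
    for j in range(k + 1, p):
        y = X[:, j]
        Z = X[:, j - k - 1:j]                       # the k+1 nearest predecessors of column j
        beta, _, _, _ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        sigma2 = resid @ resid / (n - Z.shape[1])
        cov = sigma2 * np.linalg.inv(Z.T @ Z)
        t = beta[0] / np.sqrt(cov[0, 0])            # coefficient on the lag just outside band k
        z2.append(t ** 2)
    m = len(z2)
    stat = (np.sum(z2) - m) / np.sqrt(2 * m)        # standardized sum of ~chi-square(1) terms
    pval = 1 - stats.norm.cdf(stat)                 # one-sided: large values reject band size k
    return stat, pval

# Toy usage: AR(1)-like data has a banded precision matrix with band size 1.
rng = np.random.default_rng(0)
n, p = 400, 30
X = np.zeros((n, p))
X[:, 0] = rng.normal(size=n)
for j in range(1, p):
    X[:, j] = 0.6 * X[:, j - 1] + rng.normal(size=n)
print(band_size_test(X, k=1))                       # should not reject band size 1
```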
Aging affects neural precision of speech encoding
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2012-01-01
Older adults frequently report they can hear what is said but cannot understand the meaning, especially in noise. This difficulty may arise from the inability to process rapidly changing elements of speech. Aging is accompanied by a general slowing of neural processing and decreased neural inhibition, both of which likely interfere with temporal processing in auditory and other sensory domains. Age-related reductions in inhibitory neurotransmitter levels and delayed neural recovery can contribute to decreases in the auditory system’s temporal precision. Decreased precision may lead to neural timing delays, reductions in neural response magnitude, and a disadvantage in processing the rapid acoustic changes in speech. The auditory brainstem response (ABR), a scalp-recorded electrical potential, is known for its ability to capture precise neural synchrony within subcortical auditory nuclei; therefore, we hypothesized that a loss of temporal precision results in subcortical timing delays and decreases in response consistency and magnitude. To assess this hypothesis, we recorded ABRs to the speech syllable /da/ in normal hearing younger (ages 18 to 30) and older adult humans (60 to 67). Older adults had delayed ABRs, especially in response to the rapidly changing formant transition, and greater response variability. We also found that older adults had decreased phase locking and smaller response magnitudes than younger adults. Taken together, our results support the theory that older adults have a loss of temporal precision in subcortical encoding of sound, which may account, at least in part, for their difficulties with speech perception. PMID:23055485
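Phase locking across trials is commonly quantified with an inter-trial phase coherence measure; the minimal sketch below uses synthetic trials (not the ABR data) and computes the coherence of the FFT phase at a chosen frequency.

```python
import numpy as np

def phase_locking_value(trials, fs, freq):
    """Inter-trial phase coherence at a single frequency.
    trials: array of shape (n_trials, n_samples)."""
    n_trials, n_samples = trials.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - freq))          # nearest FFT bin to the target frequency
    spectra = np.fft.rfft(trials, axis=1)[:, idx]
    phases = np.angle(spectra)
    return np.abs(np.mean(np.exp(1j * phases)))    # 0 = random phase, 1 = perfect locking

# Toy usage: 100 trials of a 100 Hz component in noise, sampled at 8 kHz.
rng = np.random.default_rng(0)
t = np.arange(0, 0.05, 1 / 8000.0)
trials = np.sin(2 * np.pi * 100 * t) + rng.normal(0, 1.0, (100, t.size))
print(phase_locking_value(trials, fs=8000, freq=100))
```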
Effects of self-relevant cues and cue valence on autobiographical memory specificity in dysphoria.
Matsumoto, Noboru; Mochizuki, Satoshi
2017-04-01
Reduced autobiographical memory specificity (rAMS) is a characteristic memory bias observed in depression. To corroborate the capture hypothesis in the CaRFAX (capture and rumination, functional avoidance, executive capacity and control) model, we investigated the effects of self-relevant cues and cue valence on rAMS using an adapted Autobiographical Memory Test conducted with a nonclinical population. Hierarchical linear modelling indicated that the main effects of depression and self-relevant cues elicited rAMS. Moreover, the three-way interaction among valence, self-relevance, and depression scores was significant. A simple slope test revealed that dysphoric participants experienced rAMS in response to highly self-relevant positive cues and low self-relevant negative cues. These results partially supported the capture hypothesis in nonclinical dysphoria. It is important to consider cue valence in future studies examining the capture hypothesis.
Lange, Nicholas D; Buttaccio, Daniel R; Davelaar, Eddy J; Thomas, Rick P
2014-02-01
Research investigating top-down capture has demonstrated a coupling of working memory content with attention and eye movements. By capitalizing on this relationship, we have developed a novel methodology, called the memory activation capture (MAC) procedure, for measuring the dynamics of working memory content supporting complex cognitive tasks (e.g., decision making, problem solving). The MAC procedure employs briefly presented visual arrays containing task-relevant information at critical points in a task. By observing which items are preferentially fixated, we gain a measure of working memory content as the task evolves through time. The efficacy of the MAC procedure was demonstrated in a dynamic hypothesis generation task in which some of its advantages over existing methods for measuring changes in the contents of working memory over time are highlighted. In two experiments, the MAC procedure was able to detect the hypothesis that was retrieved and placed into working memory. Moreover, the results from Experiment 2 suggest a two-stage process following hypothesis retrieval, whereby the hypothesis undergoes a brief period of heightened activation before entering a lower activation state in which it is maintained for output. The results of both experiments are of additional general interest, as they represent the first demonstrations of top-down capture driven by participant-established WM content retrieved from long-term memory.
Signal enhancement, not active suppression, follows the contingent capture of visual attention.
Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J
2017-02-01
Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap
NASA Astrophysics Data System (ADS)
Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.
2016-06-01
At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies enabling ions of a given m / q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.
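For context, the standard Penning-trap relations behind such a measurement are sketched below (textbook relations, not CHIP-TRAP-specific analysis): the free-space cyclotron frequency ties the measured trap frequencies to the ion's mass-to-charge ratio, and a ratio of cyclotron frequencies of two ions in the same field gives their mass ratio.

```latex
\nu_c = \frac{1}{2\pi}\,\frac{qB}{m},
\qquad
\nu_c^{2} = \nu_+^{2} + \nu_-^{2} + \nu_z^{2}
\quad\text{(invariance theorem)},
\qquad
\frac{m_1}{m_2} = \frac{q_1}{q_2}\,\frac{\nu_{c,2}}{\nu_{c,1}} .
```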
Effects of dreissenids on monitoring and management of fisheries in western Lake Erie
Stapanian, Martin A.; Kocovsky, Patrick M.
2013-01-01
Water clarity increased in nearshore areas of western Lake Erie by the early 1990s, mainly as a result of the filtering activities of dreissenid mussels (Dreissena spp.), which invaded in the mid-1980s. We hypothesized that increased water clarity would result in greater trawl avoidance and thus reduced ability to capture fish in bottom trawls during daytime compared to nighttime. We examined this hypothesis by summarizing three analyses of fish data collected in western Lake Erie. First, we used a two-tiered modeling approach on the ratio (R) of catch per hour (CPH) of age-0 yellow perch (Perca flavescens Mitchill) at night to CPH during daytime in 1961-2005. The best a priori and a posteriori models indicated a shift to higher CPH at night (R > 1) between 1990 and 1991, which corresponded to 3 years after the dreissenid invasion and when water clarity noticeably increased at nearshore sites. Secondly, we examined effects of nighttime sampling on estimates of abundance of age-2 and older yellow perch, which form the basis for recommended allowable harvest (RAH). When data from night sampling were included in models that predict abundance of age-2 yellow perch from indices of abundance of age-0 and age-1 yellow perch, predicted abundance was lower and model precision, as measured by r-squared, was higher compared to models that excluded data collected at night. Furthermore, the use of only CPH data collected at night typically resulted in lower estimates of abundance and more precise models compared to models that included CPH data collected during both daytime and nighttime. Thirdly, we used presence/absence data from paired bottom trawl samples to calculate an index of capture probability (or catchability) to determine if our ability to capture the four most common benthic species in western Lake Erie was affected by dreissenid-caused increases in water clarity. Three species of fish (white perch, Morone americana Gmelin; yellow perch; and trout-perch, Percopsis omiscomaycus Walbaum) had lower mean daytime catchability than nighttime catchability after dreissenids became established, which supported the hypothesis of greater trawl avoidance during daytime following establishment of dreissenids. Results from freshwater drum (Aplodinotus grunniens Rafinesque) were opposite those of the other three species, which may be a result of behavioral shifts due to freshwater drum feeding on dreissenid mussels. Collectively, these three studies suggest that dreissenids indirectly affected our ability to assess fish populations, which further affects estimates of fish densities and relationships between indices of abundance and true abundance.
Hardware accuracy counters for application precision and quality feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani
Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
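As a rough software analogue of such accuracy counters (an illustration, not the patented hardware mechanism), the sketch below inspects values flowing through a computation and counts the narrowest IEEE floating-point type that would represent each one exactly; a runtime could use such counts as feedback for lowering precision or gating datapath width.

```python
import numpy as np
from collections import Counter

def narrowest_exact_dtype(x):
    """Return the narrowest IEEE float type that represents x exactly (illustrative)."""
    for dt in (np.float16, np.float32):
        if float(dt(x)) == float(x):
            return np.dtype(dt).name
    return "float64"

precision_counters = Counter()
for value in [0.5, 1.0 / 3.0, 3.14159, 65504.0, 1e-8]:
    precision_counters[narrowest_exact_dtype(value)] += 1

# The resulting counts are the kind of feedback a developer or runtime could use
# to decide where reduced precision is safe.
print(precision_counters)
```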
NASA Astrophysics Data System (ADS)
Nomaguchi, Yutaka; Fujita, Kikuo
This paper proposes a design support framework, named DRIFT (Design Rationale Integration Framework of Three layers), which dynamically captures and manages hypothesis and verification in the design process. The core of DRIFT is a three-layered design process model of action, model operation and argumentation. This model integrates various design support tools and captures design operations performed on them. The action level captures the sequence of design operations. The model operation level captures the transition of design states, recording design snapshots across design tools. The argumentation level captures the process of setting problems and alternatives. Linking the three levels makes it possible to capture and manage iterative hypothesis-and-verification processes automatically and efficiently through design operations over design tools. In DRIFT, such a linkage is extracted through templates of design operations, which are derived from the patterns embedded in design tools such as Design-For-X (DFX) approaches, and design tools are integrated through an ontology-based representation of design concepts. An argumentation model, gIBIS (graphical Issue-Based Information System), is used for representing dependencies among problems and alternatives. A mechanism of TMS (Truth Maintenance System) is used for managing multiple hypothetical design stages. This paper also demonstrates a prototype implementation of DRIFT and its application to a simple design problem. Finally, it concludes with a discussion of some future issues.
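To illustrate how the three levels can be linked in data (a hypothetical illustration, not the authors' implementation), the sketch below ties gIBIS-style argumentation entries to design-state snapshots and to the concrete tool operations that produced them.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:                      # action level: a single design operation on a tool
    tool: str
    operation: str

@dataclass
class ModelState:                  # model-operation level: a snapshot of the design state
    description: str
    actions: List[Action] = field(default_factory=list)

@dataclass
class Argument:                    # argumentation level: gIBIS-style issue and positions
    issue: str
    positions: List[str] = field(default_factory=list)
    supporting_states: List[ModelState] = field(default_factory=list)

# Linking the three levels lets a tool trace an argued alternative back to the
# concrete operations that produced the corresponding design snapshot.
ops = [Action("CAD", "set wall thickness = 3 mm"), Action("FEM", "run stress analysis")]
state = ModelState("bracket v2", actions=ops)
arg = Argument("Is 3 mm thick enough?", ["yes: stress below limit"], [state])
print(arg.issue, "->", arg.supporting_states[0].actions[0].operation)
```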
Catch as Catch Can: The History of the Theory of Gravitational Capture.
ERIC Educational Resources Information Center
Osipov, Y.
1992-01-01
Traces cosmogonic history of solar system from Laplace's hypothesis of revolving gas nebulae, to Newton's two-body problem with its mathematical impossibility of gravitational capture, to the isosceles three-body problem of Schmidt and Sitnikov with its notion of partial capture, and finally to the total capture model of Alexeyev verified by the…
DOT National Transportation Integrated Search
2016-04-01
The objective of the Dynamic Interrogative Data Capture (DIDC) algorithms and software is to optimize the capture and transmission of vehicle-based data under a range of dynamically configurable messaging strategies. The key hypothesis of DIDC is tha...
Hitts' Law? A test of the relationship between information load and movement precision
NASA Technical Reports Server (NTRS)
Zaleski, M.; Moray, N.
1986-01-01
Recent technological developments have made viable a man-machine interface heavily dependent on graphics and pointing devices. This has led to new interest in classical reaction and movement time work by Human Factors specialists. Two experiments were designed and run to test the dependence of target capture time on information load (Hick's Law) and movement precision (Fitts' Law). The proposed model linearly combines Hick's and Fitts' results into a combination law which then might be called Hitts' Law. Subjects were required to react to stimuli by manipulating a joystick so as to cause a cursor to capture a target on a CRT screen. Response entropy and the relative precision of the capture movement were crossed in a factorial design, and the data obtained were found to support the model.
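A plausible rendering of the combination being tested (the exact coefficients and index-of-difficulty form used in the study are not given in this abstract, so the formulation below is illustrative): Hick's law relates choice reaction time to response entropy, Fitts' law relates movement time to an index of difficulty, and the combined capture time adds the two terms linearly.

```latex
RT = a + b \,\log_2 N, \qquad
MT = c + d \,\log_2\!\left(\frac{2D}{W}\right), \qquad
T_{\text{capture}} = k_0 + k_1 \log_2 N + k_2 \log_2\!\left(\frac{2D}{W}\right),
```

where N is the number of equally likely stimulus-response alternatives, D is the distance to the target, and W is its width.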
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Simmering, Vanessa R.; Spencer, John P.; Schutte, Anne R.
2008-01-01
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective. PMID:17716632
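A minimal one-dimensional dynamic-field simulation is sketched below (illustrative parameters, not the fitted model from these studies): a target input briefly drives a self-sustaining peak that is then maintained over a memory delay, and the width of the excitatory interaction kernel is the kind of "precision" parameter the developmental hypothesis manipulates.

```python
import numpy as np

def interaction_kernel(dx, a_exc=2.5, w_exc=3.0, a_inh=1.0, w_inh=10.0):
    # Local excitation / lateral inhibition (difference of Gaussians).
    return a_exc * np.exp(-dx**2 / (2 * w_exc**2)) - a_inh * np.exp(-dx**2 / (2 * w_inh**2))

def simulate_delay(w_exc=3.0, target_pos=40.0, n=181, steps=3000,
                   dt=1.0, tau=80.0, h=-3.0, input_amp=7.0, input_dur=300):
    x = np.linspace(-90.0, 90.0, n)                  # field over location (degrees)
    dx = x[1] - x[0]
    u = np.full(n, h)                                # activation starts at resting level
    k = interaction_kernel(x - x[n // 2], w_exc=w_exc)
    stim = input_amp * np.exp(-(x - target_pos)**2 / (2 * 3.0**2))
    for t in range(steps):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))           # sigmoidal output of the field
        lateral = np.convolve(f, k, mode="same") * dx
        s = stim if t < input_dur else 0.0           # target shown briefly, then a memory delay
        u += (dt / tau) * (-u + h + s + lateral)
    return x, u

x, u = simulate_delay(w_exc=3.0)                     # narrower w_exc = more precise tuning
print("remembered location after the delay:", round(float(x[np.argmax(u)]), 1))
```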
Why would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis.
Patel, Aniruddh D
2011-01-01
Mounting evidence suggests that musical training benefits the neural encoding of speech. This paper offers a hypothesis specifying why such benefits occur. The "OPERA" hypothesis proposes that such benefits are driven by adaptive plasticity in speech-processing networks, and that this plasticity occurs when five conditions are met. These are: (1) Overlap: there is anatomical overlap in the brain networks that process an acoustic feature used in both music and speech (e.g., waveform periodicity, amplitude envelope), (2) Precision: music places higher demands on these shared networks than does speech, in terms of the precision of processing, (3) Emotion: the musical activities that engage this network elicit strong positive emotion, (4) Repetition: the musical activities that engage this network are frequently repeated, and (5) Attention: the musical activities that engage this network are associated with focused attention. According to the OPERA hypothesis, when these conditions are met neural plasticity drives the networks in question to function with higher precision than needed for ordinary speech communication. Yet since speech shares these networks with music, speech processing benefits. The OPERA hypothesis is used to account for the observed superior subcortical encoding of speech in musically trained individuals, and to suggest mechanisms by which musical training might improve linguistic reading abilities.
Complete-arch accuracy of intraoral scanners.
Treesh, Joshua C; Liacouras, Peter C; Taft, Robert M; Brooks, Daniel I; Raiciulescu, Sorana; Ellert, Daniel O; Grant, Gerald T; Ye, Ling
2018-04-30
Intraoral scanners have shown varied results in complete-arch applications. The purpose of this in vitro study was to evaluate the complete-arch accuracy of 4 intraoral scanners based on trueness and precision measurements compared with a known reference (trueness) and with each other (precision). Four intraoral scanners were evaluated: CEREC Bluecam, CEREC Omnicam, TRIOS Color, and Carestream CS 3500. A complete-arch reference cast was created and printed using a 3-dimensional dental cast printer with photopolymer resin. The reference cast was digitized using a laboratory-based white light 3-dimensional scanner. The printed reference cast was scanned 10 times with each intraoral scanner. The digital standard tessellation language (STL) files from each scanner were then registered to the reference file and compared with differences in trueness and precision using a 3-dimensional modeling software. Additionally, scanning time was recorded for each scan performed. The Wilcoxon signed rank, Kruskal-Wallis, and Dunn tests were used to detect differences for trueness, precision, and scanning time (α=.05). Carestream CS 3500 had the lowest overall trueness and precision compared with Bluecam and TRIOS Color. The fourth scanner, Omnicam, had intermediate trueness and precision. All of the scanners tended to underestimate the size of the reference file, with exception of the Carestream CS 3500, which was more variable. Based on visual inspection of the color rendering of signed differences, the greatest amount of error tended to be in the posterior aspects of the arch, with local errors exceeding 100 μm for all scans. The single capture scanner Carestream CS 3500 had the overall longest scan times and was significantly slower than the continuous capture scanners TRIOS Color and Omnicam. Significant differences in both trueness and precision were found among the scanners. Scan times of the continuous capture scanners were faster than the single capture scanners. Published by Elsevier Inc.
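As a schematic of how trueness and precision are typically summarized once scans have been registered to the reference (a simplified illustration; real workflows operate on meshes and establish point correspondence during registration), the sketch below computes trueness as mean absolute deviation from the reference and precision as the mean pairwise disagreement among repeated scans.

```python
import numpy as np
from itertools import combinations

def trueness(deviations_from_reference):
    """Mean absolute deviation of a scan's registered surface points from the reference (micrometers)."""
    return float(np.mean(np.abs(deviations_from_reference)))

def precision(per_scan_deviations):
    """Mean pairwise disagreement between repeated scans, approximated here by comparing
    their per-point deviation profiles (assumes a common point correspondence)."""
    pairs = combinations(per_scan_deviations, 2)
    return float(np.mean([np.mean(np.abs(a - b)) for a, b in pairs]))

# Toy usage with simulated deviation profiles for 10 repeated scans of one arch.
rng = np.random.default_rng(1)
scans = [rng.normal(loc=-20.0, scale=30.0, size=5000) for _ in range(10)]   # micrometers
print(round(trueness(scans[0]), 1), round(precision(scans), 1))
```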
Nickel for your thoughts: urey and the origin of the moon.
Brush, S G
1982-09-03
The theories of Harold C. Urey (1893-1981) on the origin of the moon are discussed in relation to earlier ideas, especially George Howard Darwin's fission hypothesis. Urey's espousal of the idea that the moon had been captured by the earth and has preserved information about the earliest history of the solar system led him to advocate a manned lunar landing. Results from the Apollo missions, in particular the deficiency of siderophile elements in the lunar crust, led him to abandon the capture selenogony and tentatively adopt the fission hypothesis.
Chin, Lijin; Moran, Jonathan A; Clarke, Charles
2010-04-01
Three Bornean pitcher plant species, Nepenthes lowii, N. rajah and N. macrophylla, produce modified pitchers that 'capture' tree shrew faeces for nutritional benefit. Tree shrews (Tupaia montana) feed on exudates produced by glands on the inner surfaces of the pitcher lids and defecate into the pitchers. Here, we tested the hypothesis that pitcher geometry in these species is related to tree shrew body size by comparing the pitcher characteristics with those of five other 'typical' (arthropod-trapping) Nepenthes species. We found that only pitchers with large orifices and lids that are concave, elongated and oriented approximately at right angles to the orifice capture faeces. The distance from the tree shrews' food source (that is, the lid nectar glands) to the front of the pitcher orifice precisely matches the head plus body length of T. montana in the faeces-trapping species, and is a function of orifice size and the angle of lid reflexion. Substantial changes to nutrient acquisition strategies in carnivorous plants may occur through simple modifications to trap geometry. This extraordinary plant-animal interaction adds to a growing body of evidence that Nepenthes represents a candidate model for adaptive radiation with regard to nitrogen sequestration strategies.
Transit value capture coordination : case studies, best practices, and recommendations.
DOT National Transportation Integrated Search
2015-02-17
This study is based on the hypothesis that coordination between transit capital planners, municipal taxation authorities, and private developers and stakeholders can be a benefit to transit capital projects that choose to use value capture as a fundi...
Influence of Group Size on the Success of Wolves Hunting Bison
MacNulty, Daniel R.; Tallian, Aimee; Stahler, Daniel R.; Smith, Douglas W.
2014-01-01
An intriguing aspect of social foraging behaviour is that large groups are often no better at capturing prey than are small groups, a pattern that has been attributed to diminished cooperation (i.e., free riding) in large groups. Although this suggests the formation of large groups is unrelated to prey capture, little is known about cooperation in large groups that hunt hard-to-catch prey. Here, we used direct observations of Yellowstone wolves (Canis lupus) hunting their most formidable prey, bison (Bison bison), to test the hypothesis that large groups are more cooperative when hunting difficult prey. We quantified the relationship between capture success and wolf group size, and compared it to previously reported results for Yellowstone wolves hunting elk (Cervus elaphus), a prey that was, on average, 3 times easier to capture than bison. Whereas improvement in elk capture success levelled off at 2–6 wolves, bison capture success levelled off at 9–13 wolves with evidence that it continued to increase beyond 13 wolves. These results are consistent with the hypothesis that hunters in large groups are more cooperative when hunting more formidable prey. Improved ability to capture formidable prey could therefore promote the formation and maintenance of large predator groups, particularly among predators that specialize on such prey. PMID:25389760
ERIC Educational Resources Information Center
Cadogan, Peter
1983-01-01
Presents findings and conclusions about the origin of the moon, favoring the capture hypothesis of lunar origin. Advantage of the hypothesis is that it allows the moon to have been formed elsewhere, specifically in a hotter part of the solar nebula, accounting for chemical differences between earth and moon. (JN)
Tracking the location of visuospatial attention in a contingent capture paradigm.
Leblanc, Emilie; Prime, David J; Jolicoeur, Pierre
2008-04-01
Currently, there is considerable controversy regarding the degree to which top-down control can affect attentional capture by salient events. According to the contingent capture hypothesis, attentional capture by a salient stimulus is contingent on a match between the properties of the stimulus and top-down attentional control settings. In contrast, bottom-up saliency accounts argue that the initial capture of attention is determined solely by the relative salience of the stimulus, and the effect of top-down attentional control is limited to effects on the duration of attentional engagement on the capturing stimulus. In the present study, we tested these competing accounts by utilizing the N2pc event-related potential component to track the locus of attention during an attentional capture task. The results were completely consistent with the contingent capture hypothesis: An N2pc wave was elicited only by distractors that possessed the target-defining attribute. In a second experiment, we expanded upon this finding by exploring the effect of target-distractor similarity on the duration that attention dwells at the distractor location. In this experiment, only distractors possessing the target-defining attribute (color) captured visuospatial attention to their location and the N2pc increased in duration and in magnitude when the capture distractor also shared a second target attribute (category membership). Finally, in three additional control experiments, we replicated the finding of an N2pc generated by distractors, only if they shared the target-defining attribute. Thus, our results demonstrate that attentional control settings influence both which stimuli attract attention and to what extent they are processed.
Kerzel, Dirk; Born, Sabine; Schönhammer, Josef
2012-12-01
A salient stimulus may interrupt visual search because of attentional capture. It has been shown that attentional capture occurs with a wide, but not with a small attentional window. We tested the hypothesis that capture depends more strongly on the shape of the attentional window than on its size. Search elements were arranged in two nested rings. The ring containing the search target remained fixed, while a salient color singleton occurred either in the same or in the other ring. We observed that color singletons only disrupted search when shown in the same ring as the search target. It is important to note that, when focusing on the outer array, which presumably required a larger attentional window, singletons on the inner array did not capture attention. In contrast to the original attentional window hypothesis, our results show that attentional capture does not always occur with a large attentional window. Rather, attention can be flexibly allocated to the set of relevant stimulus locations and attentional capture is confined to the attended locations. Further experiments showed that attention was allocated to search elements that were perceptually grouped into "whole" or "Gestalt"-like objects, which prevented attentional capture from nearby locations. However, when attention was allocated to noncontiguous locations that did not form a perceptual Gestalt, nearby locations elicited attentional capture. Perceptual grouping could be based on a combination of color and position, but not on color alone. Thus, the allocation of attention to Gestalt-like objects that were jointly defined by similarity and proximity prevented attentional capture from nearby locations.
A farm-level precision land management framework based on integer programming
Li, Qi; Hu, Guiping; Jubery, Talukder Zaki; Ganapathysubramanian, Baskar
2017-01-01
Farmland management involves several planning and decision-making tasks, including seed selection and irrigation management. A farm-level precision farmland management model based on mixed integer linear programming is proposed in this study. Optimal decisions are designed for pre-season planning of crops and irrigation water allocation. The model captures the effect of the size and shape of the decision scale as well as special irrigation patterns. The authors illustrate the model with a case study of a farm in the state of California in the U.S. and show that the model can capture the impact of precision farm management on profitability. The results show that a threefold increase in annual net profit could be achieved by carefully choosing seed varieties and irrigation. Although farmers could increase profits by applying precision management to seed or irrigation alone, the profit increase is greater when precision management is applied to seed and irrigation simultaneously. The proposed model can also serve as a risk analysis tool for farmers facing seasonal irrigation water limits, as well as a quantitative tool to explore the impact of precision agriculture. PMID:28346499
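As a toy illustration of the kind of mixed integer program involved (the sets, coefficients, and use of the PuLP package are assumptions; this is not the paper's model or case-study data), the sketch below chooses one crop per plot to maximize profit under a seasonal irrigation budget.

```python
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

# Illustrative sets and coefficients (assumed, not from the paper's case study).
plots = ["P1", "P2", "P3"]
crops = ["corn", "tomato"]
profit = {("P1", "corn"): 400, ("P1", "tomato"): 650, ("P2", "corn"): 380,
          ("P2", "tomato"): 600, ("P3", "corn"): 420, ("P3", "tomato"): 500}  # $ per plot
water_need = {"corn": 2.0, "tomato": 3.5}     # acre-feet per plot
water_budget = 7.0                            # seasonal irrigation allocation

m = LpProblem("precision_farm_plan", LpMaximize)
plant = LpVariable.dicts("plant", [(p, c) for p in plots for c in crops], cat=LpBinary)

m += lpSum(profit[p, c] * plant[p, c] for p in plots for c in crops)          # maximize profit
for p in plots:                                                               # at most one crop per plot
    m += lpSum(plant[p, c] for c in crops) <= 1
m += lpSum(water_need[c] * plant[p, c] for p in plots for c in crops) <= water_budget

m.solve()
print([(p, c) for p in plots for c in crops if plant[p, c].value() == 1], value(m.objective))
```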
Touch Precision Modulates Visual Bias.
Misceo, Giovanni F; Jones, Maurice D
2018-01-01
The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
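The statistically optimal models referred to here are usually formulated as reliability-weighted (maximum-likelihood) cue combination; the standard form is sketched below, where reliability is the inverse variance of each cue, so degrading touch (a larger haptic variance) shifts weight toward vision.

```latex
\hat{S} = w_V S_V + w_H S_H, \qquad
w_V = \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_H^{2}}, \quad
w_H = 1 - w_V, \qquad
\sigma_{VH}^{2} = \frac{\sigma_V^{2}\,\sigma_H^{2}}{\sigma_V^{2} + \sigma_H^{2}} .
```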
Using VITA Service Learning Experiences to Teach Hypothesis Testing and P-Value Analysis
ERIC Educational Resources Information Center
Drougas, Anne; Harrington, Steve
2011-01-01
This paper describes a hypothesis testing project designed to capture student interest and stimulate classroom interaction and communication. Using an online survey instrument, the authors collected student demographic information and data regarding university service learning experiences. Introductory statistics students performed a series of…
Anderson, Brian A; Folk, Charles L
2012-08-01
The study of attentional capture has provided a rich context for assessing the relative influence of top-down and bottom-up factors in visual perception. Some have argued that attentional capture by a salient, irrelevant stimulus is contingent on top-down attentional set (e.g., Folk, Remington, & Johnston, Journal of Experimental Psychology: Human Perception and Performance 18:1030-1044, 1992). Others, however, have argued that capture is driven entirely by bottom-up salience and that top-down factors influence the postallocation speed of disengagement from the irrelevant stimulus (e.g., Theeuwes, Acta Psychologica 135:77-99, 2010a). In support of this speed-of-disengagement hypothesis, recent findings from the modified spatial-cuing paradigm show that cues carrying a no-go target property produce reverse, or negative, cuing effects, consistent with inhibition of the cue location from which attention has been very quickly disengaged (Belopolsky, Schreij, & Theeuwes, Perception, & Psychophysics, 72, 326-341, 2010). Across six experiments, we show that this inhibitory process can be dissociated from shifts of spatial attention and is, thus, not a reliable marker of capture. We conclude that the data are inconsistent with the predictions of the disengagement hypothesis.
Beekman, Madeleine; Doyen, Laurent; Oldroyd, Benjamin P
2005-12-01
Honey bee foragers communicate the direction and distance of both food sources and new nest sites to nest mates by means of a symbolic dance language. Interestingly, the precision by which dancers transfer directional information is negatively correlated with the distance to the advertised food source. The 'tuned-error' hypothesis suggests that colonies benefit from this imprecision as it spreads recruits out over a patch of constant size irrespective of the distance to the advertised site. An alternative to the tuned-error hypothesis is that dancers are physically incapable of dancing with great precision for nearby sources. Here we revisit the tuned-error hypothesis by studying the change in dance precision with increasing foraging distance over relatively short distances while controlling for environmental influences. We show that bees indeed increase their dance precision as foraging distance increases. However, we also show that dances performed by swarm-scouts for a nearby (30 m) nest site, where there could be no benefit to imprecision, convey either no directional information or only limited directional information. This result suggests that imprecision in dance communication is caused primarily by physical constraints on the ability of dancers to turn around quickly enough when the advertised site is nearby.
NASA Astrophysics Data System (ADS)
Tikhomirov, Georgy; Bahdanovich, Rynat; Pham, Phu
2017-09-01
Precise calculation of energy release in a nuclear reactor is necessary to obtain the correct spatial power distribution and to predict the characteristics of burned nuclear fuel. In this work, a previously developed method for calculating the contribution of neutron-capture reactions (the capture component) to the effective energy release in the core of a nuclear reactor is discussed. The method was improved and implemented in different models of the VVER-1000 reactor developed for the MCU 5 and MCNP 4 computer codes. Different models of an equivalent cell and a fuel assembly at the beginning of the fuel cycle were calculated. These models differ in geometry, fuel enrichment, and the presence of burnable absorbers. It is shown that the capture component depends on fuel enrichment and on the presence of burnable absorbers. Its value varies for different types of hot fuel assemblies from 3.35% to 3.85% of the effective energy release. The average capture component contribution to the effective energy release for typical serial fresh fuel of the VVER-1000 is 3.5%, which is about 7 MeV/fission. The method will be used in the future to estimate the dependence of the capture energy on fuel density, burn-up, etc.
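As a consistency check of the quoted numbers (assuming, as the figures imply, an effective energy release of roughly 200 MeV per fission):

```latex
E_{\text{capture}} \approx 0.035 \times E_{\text{eff}}
\approx 0.035 \times 200~\text{MeV}
\approx 7~\text{MeV per fission}.
```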
A field test for differences in condition among trapped and shot mallards
Reinecke, K.J.; Shaiffer, C.W.
1988-01-01
We tested predictions from the condition bias hypothesis (Weatherhead and Greenwood 1981) regarding the effects of sampling methods on body weights of mallards (Anas platyrhynchos) at White River National Wildlife Refuge (WRNWR), Arkansas, during 24 November-8 December 1985. Body weights of 84 mallards caught with unbaited rocket nets in a natural wetland were used as experimental controls and compared to the body weights of 70 mallards captured with baited rocket nets, 86 mallards captured with baited swim-in traps, and 130 mallards killed by hunters. We found no differences (P > 0.27) in body weight among sampling methods, but body condition (wt/wing length) of the birds killed by hunters was lower than that of trapped birds, and the power to detect differences exceeded 0.75 for differences > 50 g. The condition bias hypothesis probably applies to ducks killed by hunters but not to trapping operations when substantial (> 20 at 1 time) numbers of birds are captured.
Precision oncology: origins, optimism, and potential.
Prasad, Vinay; Fojo, Tito; Brada, Michael
2016-02-01
Imatinib, the first and arguably the best targeted therapy, became the springboard for developing drugs aimed at molecular targets deemed crucial to tumours. As this development unfolded, a revolution in the speed and cost of genetic sequencing occurred. The result--an armamentarium of drugs and an array of molecular targets--set the stage for precision oncology, a hypothesis that cancer treatment could be markedly improved if therapies were guided by a tumour's genomic alterations. Drawing lessons from the biological basis of cancer and recent empirical investigations, we take a more measured view of precision oncology's promise. Ultimately, the promise is not our concern, but the threshold at which we declare success. We review reports of precision oncology alongside those of precision diagnostics and novel radiotherapy approaches. Although confirmatory evidence is scarce, these interventions have been widely endorsed. We conclude that the current path will probably not be successful or, at a minimum, will have to undergo substantive adjustments before it can be successful. For the sake of patients with cancer, we hope one form of precision oncology will deliver on its promise. However, until confirmatory studies are completed, precision oncology remains unproven, and as such, a hypothesis in need of rigorous testing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wilkinson, Nicholas M.; Metta, Giorgio
2014-01-01
Visual scan paths exhibit complex, stochastic dynamics. Even during visual fixation, the eye is in constant motion. Fixational drift and tremor are thought to reflect fluctuations in the persistent neural activity of neural integrators in the oculomotor brainstem, which integrate sequences of transient saccadic velocity signals into a short term memory of eye position. Despite intensive research and much progress, the precise mechanisms by which oculomotor posture is maintained remain elusive. Drift exhibits a stochastic statistical profile which has been modeled using random walk formalisms. Tremor is widely dismissed as noise. Here we focus on the dynamical profile of fixational tremor, and argue that tremor may be a signal which usefully reflects the workings of oculomotor postural control. We identify signatures reminiscent of a certain flavor of transient neurodynamics; toric traveling waves which rotate around a central phase singularity. Spiral waves play an organizational role in dynamical systems at many scales throughout nature, though their potential functional role in brain activity remains a matter of educated speculation. Spiral waves have a repertoire of functionally interesting dynamical properties, including persistence, which suggest that they could in theory contribute to persistent neural activity in the oculomotor postural control system. Whilst speculative, the singularity hypothesis of oculomotor postural control implies testable predictions, and could provide the beginnings of an integrated dynamical framework for eye movements across scales. PMID:24616670
Multiscale model of a freeze–thaw process for tree sap exudation
Graf, Isabell; Ceseri, Maurizio; Stockie, John M.
2015-01-01
Sap transport in trees has long fascinated scientists, and a vast literature exists on experimental and modelling studies of trees during the growing season when large negative stem pressures are generated by transpiration from leaves. Much less attention has been paid to winter months when trees are largely dormant but nonetheless continue to exhibit interesting flow behaviour. A prime example is sap exudation, which refers to the peculiar ability of sugar maple (Acer saccharum) and related species to generate positive stem pressure while in a leafless state. Experiments demonstrate that ambient temperatures must oscillate about the freezing point before significantly heightened stem pressures are observed, but the precise causes of exudation remain unresolved. The prevailing hypothesis attributes exudation to a physical process combining freeze–thaw and osmosis, which has some support from experimental studies but remains a subject of active debate. We address this knowledge gap by developing the first mathematical model for exudation, while also introducing several essential modifications to this hypothesis. We derive a multiscale model consisting of a nonlinear system of differential equations governing phase change and transport within wood cells, coupled to a suitably homogenized equation for temperature on the macroscale. Numerical simulations yield stem pressures that are consistent with experiments and provide convincing evidence that a purely physical mechanism is capable of capturing exudation. PMID:26400199
On the Electrophysiological Evidence for the Capture of Visual Attention
ERIC Educational Resources Information Center
McDonald, John J.; Green, Jessica J.; Jannati, Ali; Di Lollo, Vincent
2013-01-01
The presence of a salient distractor interferes with visual search. According to the salience-driven selection hypothesis, this interference is because of an initial deployment of attention to the distractor. Three event-related potential (ERP) findings have been regarded as evidence for this hypothesis: (a) salient distractors were found to…
Heightened attentional capture by visual food stimuli in anorexia nervosa.
Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J
2017-08-01
The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Singer, S. Fred
A coherent account is presented here based on the hypothesis that the moon formed separately in a heliocentric orbit similar to the earth's and was later captured by the earth. The adoption of this hypothesis, together with the observed depletion of iron in the moon, sets some important constraints on the condensation and agglomeration phenomena in the primeval solar nebula that led to the formation of planetesimals, and ultimately to planets. Capture of the moon also defines a severe heating event whereby the earth's kinetic energy of rotation is largely dissipated internally by the mechanism of tidal friction. From this melting event dates the geologic, atmospheric, and oceanic history of the earth. An attempt is made to account for the unique development of the earth, especially in relation to Mars and Venus, its neighboring planets. A capture origin of the moon that employs a 'push-pull' tidal theory does not strain the laws of physics, involves a minimum of ad hoc assumptions, and has a probability that is commensurate with the evidence of the existence of a unique moon.
NASA Astrophysics Data System (ADS)
Bellerive, Nathalie
The research project hypothesis is that CO2 capture and sequestration technologies (CSC) lead to a significant decrease in global warming but increase the impacts in all of the other categories considered in the study. This is because the processes used for CO2 capture and sequestration require additional quantities of raw materials and energy. Two other objectives are described in this project. The first is the modeling of an Integrated Gasification Combined Cycle power plant, for which there is no known generic data. The second is to select appropriate assumptions regarding electricity production technologies, CO2 capture, compression and transportation by pipeline and, finally, sequestration. Life Cycle Assessment (LCA) was chosen for this research project. LCA is an exhaustive quantitative method used to evaluate the potential environmental impacts associated with a product, a service or an activity from resource extraction to waste elimination. This tool is governed by ISO 14040 through ISO 14049 and is supported by the Society of Environmental Toxicology and Chemistry (SETAC) and the United Nations Environment Program (UNEP). Two power plants were studied, the Integrated Gasification Combined Cycle (IGCC) power plant and the Natural Gas Combined Cycle (NGCC) power plant. In order to sequester CO2 in geological formations, it is necessary to extract CO2 from emission flows. For the IGCC power plant, CO2 was captured before combustion. For the NGCC power plant, the capture was done after combustion. Once the CO2 was isolated, it was compressed and directed through a transportation pipeline 1,000 km in length, running over land and under the sea. The scenario assumes that the power plant is 300 km from the shore and the sequestration platform 700 km from France's shore, in the North Sea. The IGCC power plant modeling and the data selection regarding CO2 capture and sequestration were done using primary data from industry and the Ecoinvent generic database (Version 1.2). This database was selected due to its European source. Finally, technical calculations and literature were used to complete the data inventory. This was validated by electricity-sector experts in order to increase data and modeling precision. Results were similar for the IGCC and NGCC power plants using IMPACT 2002+, an impact assessment method. Global warming potential decreased by 67% with the implementation of CO2 capture and sequestration compared with systems without CSC. Results for all other impact categories demonstrated an increase of 16% to 116% in relative proportions compared with systems without CSC. The main contributor was the additional quantity of energy required to operate the CO2 capture and compression facilities. This additional energy negatively affected the power plant's overall efficiency because of the increase in the quantity of fossil fuel that needed to be extracted and consumed. The increase in other impacts was mainly due to additional electricity, fossil fuel (for extraction, treatment and transportation) and additional emissions generated during power plant operations. A scenario analysis was done to study the sensitivity and variability of uncertain data in the modeling of a power plant. Data on power plant efficiency are the most variable and sensitive during modeling, followed by the length of the transportation pipeline and the leakage rate during CO2 sequestration.
This scenario analysis is noteworthy because the maximum-efficiency scenario with capture (with a short CO2 transportation distance and a low leakage rate) obtained better results on all impact category indicators than the minimum-efficiency scenario without capture. In other words, improvements on all category indicators were possible in the comparison between the two systems (with and without capture). (Abstract shortened by UMI.)
Spatio-temporal conditional inference and hypothesis tests for neural ensemble spiking precision
Harrison, Matthew T.; Amarasingham, Asohan; Truccolo, Wilson
2014-01-01
The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatio-temporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference, not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatio-temporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peri-stimulus time histogram (PSTH) or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable in other areas of neurostatistical analysis. PMID:25380339
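The resampling idea mentioned above (jittering spike times while preserving coarse statistics) can be illustrated with the classic binned variant. The sketch below resamples each spike uniformly within its enclosing bin, which preserves bin-level spike counts exactly; the STCI algorithms in the paper preserve stronger, instantaneous statistics, which this simple version does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def interval_jitter(spike_times, bin_width=0.025):
    """Resample each spike uniformly within its enclosing bin.

    Spike counts per bin (and hence a PSTH at bin_width resolution) are preserved
    exactly, while fine temporal structure inside each bin is destroyed.
    """
    spike_times = np.asarray(spike_times, dtype=float)
    bins = np.floor(spike_times / bin_width)                       # index of each spike's bin
    jittered = bins * bin_width + rng.uniform(0.0, bin_width, size=spike_times.size)
    return np.sort(jittered)

surrogate = interval_jitter([0.012, 0.018, 0.231, 0.233, 0.480])   # one surrogate spike train
```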
Functions of Marijuana Use in College Students
ERIC Educational Resources Information Center
Bates, Julie K.; Accordino, Michael P.; Hewes, Robert L.
2010-01-01
Hierarchical regression analysis was used to test the hypothesis that specific functional factors of marijuana use would predict past 30-day marijuana use in 425 college students more precisely than demographic variables alone. This hypothesis was confirmed. Functional factors of personal/physical enhancement as well as activity enhancement were…
ERIC Educational Resources Information Center
Burnham, Bryan R.; Neely, James H.
2008-01-01
C. L. Folk, R. W. Remington, and J. C. Johnston's (1992) contingent involuntary orienting hypothesis states that a salient visual feature will involuntarily capture attention only when the observer's attentional set includes similar features. In four experiments, when the target's relevant feature was its being an abruptly onset singleton,…
Stimulus-Driven Attentional Capture by a Static Discontinuity between Perceptual Groups
ERIC Educational Resources Information Center
Burnham, Bryan R.; Neely, James H.; Naginsky, Yelena; Thomas, Matthew
2010-01-01
After C. L. Folk, R. W. Remington, and J. C. Johnston (1992) proposed their contingent-orienting hypothesis, there has been an ongoing debate over whether purely stimulus-driven attentional capture can occur for visual events that are salient by virtue of a distinctive static property (as opposed to a dynamic property such as abrupt onset). The…
Attentional priority determines working memory precision.
Klyszejko, Zuzanna; Rahmati, Masih; Curtis, Clayton E
2014-12-01
Visual working memory is a system used to hold information actively in mind for a limited time. The number of items and the precision with which we can store information have limits that define its capacity. How much control do we have over the precision with which we store information when faced with these severe capacity limitations? Here, we tested the hypothesis that rank-ordered attentional priority determines the precision of multiple working memory representations. We conducted two psychophysical experiments that manipulated the priority of multiple items in a two-alternative forced-choice (2AFC) task with distance discrimination. In Experiment 1, we varied the probabilities with which memorized items were likely to be tested. To generalize the effects of priority beyond simple cueing, in Experiment 2, we manipulated priority by varying monetary incentives contingent upon successful memory for the items tested. Moreover, we illustrate our hypothesis using a simple model that distributes attentional resources across items with rank-ordered priorities. Indeed, we found evidence in both experiments that priority affects the precision of working memory in a monotonic fashion. Our results demonstrate that representations of priority may provide a mechanism by which resources can be allocated to increase the precision with which we encode and briefly store information. Copyright © 2014 Elsevier Ltd. All rights reserved.
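A hedged sketch of the kind of model described above: a fixed resource budget is divided across items in proportion to their priority weights, and precision scales with the allocated share. The proportional rule and the scaling constant are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def precision_by_priority(priorities, total_resource=1.0, kappa=20.0):
    """Distribute a fixed resource across items and map shares to precisions."""
    w = np.asarray(priorities, dtype=float)
    share = total_resource * w / w.sum()     # resource allocated to each item
    return kappa * share                     # precision grows with allocated resource

# e.g., cue validities of 60%, 30%, 10% used as priority weights
prec = precision_by_priority([0.6, 0.3, 0.1])
recall_error_sd = 1.0 / np.sqrt(prec)        # higher priority -> smaller recall error SD
```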
Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam
2015-07-01
Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems, including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates, while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets, but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool to incorporate into capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.
Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica
2014-06-01
Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.
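For readers unfamiliar with N-mixture models, the sketch below shows the static binomial N-mixture likelihood (Royle 2004), the building block that Dail and Madsen extend to open populations; the dynamic version and the BACI covariates used in the study are not reproduced here, and the toy counts are invented.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

# Counts y[i, t] at site i on visit t are Binomial(N_i, p), with N_i ~ Poisson(lam).
# Local abundances N_i are marginalized by summing over plausible values of N.
def neg_log_lik(params, y, n_max=100):
    lam = np.exp(params[0])                          # log link for mean abundance
    p = 1.0 / (1.0 + np.exp(-params[1]))             # logit link for detection
    n_vals = np.arange(n_max + 1)
    prior = poisson.pmf(n_vals, lam)                 # P(N = n)
    ll = 0.0
    for counts in y:                                 # one site at a time
        cond = binom.pmf(counts[:, None], n_vals[None, :], p).prod(axis=0)  # P(y | N = n)
        ll += np.log(np.dot(cond, prior) + 1e-300)
    return -ll

y = np.array([[2, 3, 1], [0, 1, 0], [4, 2, 3]])       # toy counts: 3 sites x 3 visits
fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
```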
Halstead, Brian J.; Skalos, Shannon M.; Casazza, Michael L.; Wylie, Glenn D.
2015-01-01
Detection and capture probabilities for giant gartersnakes (Thamnophis gigas) are very low, and successfully evaluating the effects of variables or experimental treatments on giant gartersnake populations will require greater detection and capture probabilities than those that had been achieved with standard trap designs. Previous research identified important trap modifications that can increase the probability of snakes entering traps and help prevent the escape of captured snakes. The purpose of this study was to quantify detection and capture probabilities obtained using the most successful modification to commercially available traps to date (2015), and examine the ability of realized detection and capture probabilities to achieve benchmark levels of precision in occupancy and capture-mark-recapture studies.
Subliminal spatial cues capture attention and strengthen between-object link.
Chou, Wei-Lun; Yeh, Su-Ling
2011-12-01
According to the spreading hypothesis of object-based attention, a subliminal cue that can successfully capture attention to a location within an object should also cause attention to spread throughout the whole cued object and lead to the same-object advantage. Instead, we propose that a subliminal cue favors shifts of attention between objects and strengthens the between-object link, which is coded primarily within the dorsal pathway that governs the visual guidance of action. By adopting the two-rectangle method and using an effective subliminal cue to compare with the classic suprathreshold cue, we found a different result pattern with suprathreshold cues than with subliminal cues. The suprathreshold cue replicated the conventional location and object effects, whereas a subliminal cue led to a different-object advantage with a facilitatory location effect and a same-object advantage with an inhibitory location effect. These results support our consciousness-dependent shifting hypothesis but not the spreading hypothesis. Copyright © 2011 Elsevier Inc. All rights reserved.
Shojaedini, Seyed Vahab; Heydari, Masoud
2014-10-01
Shape and movement features of sperm are important parameters for infertility study and treatment. In this article, a new method is introduced for characterizing sperm in microscopic videos. In this method, a hypothesis framework is first defined to distinguish sperm from other particles in the captured video. A decision about each hypothesis is then made in the following steps: selecting primary regions as sperm candidates by watershed-based segmentation, pruning false candidates across successive frames using graph-theoretic concepts, and finally confirming correct sperm using their movement trajectories. Performance of the proposed method is evaluated on real captured video of semen with a high density of sperm. The results show that the proposed method detects 97% of sperm with 5% false detections and tracks 91% of moving sperm. Furthermore, the improved characterization of sperm in the proposed algorithm does not lead to extracting more false sperm compared with existing approaches.
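The watershed-based candidate selection step can be illustrated generically. The sketch below is not the authors' pipeline; it segments bright blobs in a single grayscale frame using an Otsu threshold, a distance transform, and marker-controlled watershed from scikit-image. The graph-based pruning and trajectory confirmation steps are not shown, and the toy frame is synthetic.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def candidate_regions(frame):
    """Label bright blobs in one grayscale frame as candidate regions."""
    mask = frame > threshold_otsu(frame)               # foreground vs background
    distance = ndi.distance_transform_edt(mask)        # distance to background
    peaks = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(frame.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)    # split touching blobs

# Toy frame with two bright squares standing in for particles.
rng = np.random.default_rng(2)
frame = 0.05 * rng.random((64, 64))
frame[20:26, 20:26] = 1.0
frame[30:36, 40:46] = 1.0
labels = candidate_regions(frame)
```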
Hollingworth, Andrew; Hwang, Seongmin
2013-10-19
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection.
Liao, Hsin-I; Yeh, Su-Ling
2013-11-01
Attentional orienting can be involuntarily directed to task-irrelevant stimuli, but it remains unresolved whether such attentional capture is contingent on top-down settings or could be purely stimulus-driven. We propose that attentional capture depends on the stimulus property because transient and static features are processed differently; thus, they might be modulated differently by top-down controls. To test this hybrid account, we adopted a spatial cuing paradigm in which a noninformative onset or color cue preceded an onset or color target at various stimulus onset asynchronies (SOAs). Results showed that the onset cue captured attention regardless of target type at short, but not long, SOAs. In contrast, the color cue captured attention at short and long SOAs, but only with a color target. The overall pattern of results corroborates our hypothesis, suggesting that different mechanisms are at work for stimulus-driven capture (by onset) and contingent capture (by color). Stimulus-driven capture elicits reflexive involuntary orienting, and contingent capture elicits voluntary feature-based enhancement. (c) 2013 APA, all rights reserved
Ao, Zheng; Parasido, Erika; Rawal, Siddarth; Williams, Anthony; Schlegel, Richard; Liu, Stephen; Albanese, Chris; Cote, Richard J.; Agarwal, Ashutosh; Datar, Ram H.
2015-01-01
Stimulus responsive release of Circulating Tumor Cells (CTCs), with high recovery rates from their capture platform, is highly desirable for off-chip analyses. Here, we present a temperature responsive polymer coating method to achieve both release as well as culture of viable CTCs captured from patient blood samples. PMID:26426331
Effects of sampling conditions on DNA-based estimates of American black bear abundance
Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.
2013-01-01
DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. The capture probability for the larger of 2 mixture proportions of the population (i.e., pA or pB, depending on the value of π) was most important for predicting accuracy and precision, whereas capture probabilities of both mixture proportions (pA and pB) were important to explain variation in coverage. Based on sampling conditions similar to parameter estimates from the empirical dataset (pA = 0.30, pB = 0.05, N = 250, π = 0.15, and k = 10), predicted accuracy and precision were low (60% and 53%, respectively), whereas coverage was high (94%). Increasing pB, the capture probability for the predominate but most difficult to capture proportion of the population, was most effective to improve accuracy under those conditions. However, manipulation of other parameters may be more effective under different conditions. In general, the probabilities of obtaining accurate and precise estimates were best when p≥ 0.2. Our regression models can be used by managers to evaluate specific sampling scenarios and guide development of sampling frameworks or to assess reliability of DNA-based capture-mark-recapture studies.
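The role of capture heterogeneity can be illustrated with a simple simulation outside Program MARK. The sketch below draws capture histories from a two-mixture population (capture probabilities pA and pB in proportions pi and 1 - pi) and applies Chao's lower-bound estimator for model Mh; the parameter values echo the sampling conditions quoted above, but the estimator and simulation are illustrative, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_mh(N=250, pA=0.30, pB=0.05, pi=0.15, k=10):
    """Number of times each of N animals is caught over k occasions, with 2-mixture heterogeneity."""
    p = np.where(rng.random(N) < pi, pA, pB)            # each animal's capture probability
    captures = rng.random((N, k)) < p[:, None]          # capture-history matrix
    return captures.sum(axis=1)

def chao_mh(capture_counts):
    """Chao's lower-bound estimator for model Mh from singleton and doubleton counts."""
    counts = capture_counts[capture_counts > 0]         # animals never seen stay unobserved
    S, f1, f2 = counts.size, np.sum(counts == 1), np.sum(counts == 2)
    return S + f1 * (f1 - 1) / (2.0 * (f2 + 1))         # bias-corrected form

est = np.array([chao_mh(simulate_mh()) for _ in range(200)])
accuracy = np.mean(np.abs(est - 250) / 250 <= 0.20)     # share of estimates within 20% of true N
```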
Evaluating abundance estimate precision and the assumptions of a count-based index for small mammals
Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.
2009-01-01
Conservation and management of small mammals requires reliable knowledge of population size. We investigated precision of mark-recapture and removal abundance estimates generated from live-trapping and snap-trapping data collected at sites on Guam (n = 7), Rota (n = 4), Saipan (n = 5), and Tinian (n = 3), in the Mariana Islands. We also evaluated a common index, captures per unit effort (CPUE), as a predictor of abundance. In addition, we evaluated cost and time associated with implementing live-trapping and snap-trapping and compared species-specific capture rates of selected live- and snap-traps. For all species, mark-recapture estimates were consistently more precise than removal estimates based on coefficients of variation and 95% confidence intervals. The predictive utility of CPUE was poor but improved with increasing sampling duration. Nonetheless, modeling of sampling data revealed that underlying assumptions critical to application of an index of abundance, such as constant capture probability across space, time, and individuals, were not met. Although snap-trapping was cheaper and faster than live-trapping, the time difference was negligible when site preparation time was considered. Rattus diardii spp. captures were greatest in Haguruma live-traps (Standard Trading Co., Honolulu, HI) and Victor snap-traps (Woodstream Corporation, Lititz, PA), whereas Suncus murinus and Mus musculus captures were greatest in Sherman live-traps (H. B. Sherman Traps, Inc., Tallahassee, FL) and Museum Special snap-traps (Woodstream Corporation). Although snap-trapping and CPUE may have utility after validation against more rigorous methods, validation should occur across the full range of study conditions. Resources required for this level of validation would likely be better allocated towards implementing rigorous and robust methods.
Investigating Heritage Language and Culture Links: An Indo-Canadian Hindu Perspective
ERIC Educational Resources Information Center
Kumar, Nootan; Trofimovich, Pavel; Gatbonton, Elizabeth
2008-01-01
Although it is commonly believed that language and culture are inexorably linked, the precise nature of this relationship remains elusive. This study investigated the hypothesis that a loss in language signals a loss in culture if language is considered a central value. This hypothesis was investigated by rating the Hindi and English proficiency…
Di Girolamo, Francesco; Righetti, Pier Giorgio; Soste, Martin; Feng, Yuehan; Picotti, Paola
2013-08-26
Systems biology studies require the capability to quantify with high precision proteins spanning a broad range of abundances across multiple samples. However, the broad range of protein expression in cells often precludes the detection of low-abundance proteins. Different sample processing techniques can be applied to increase proteome coverage. Among these, combinatorial (hexa)peptide ligand libraries (CPLLs) bound to solid matrices have been used to specifically capture and detect low-abundance proteins in complex samples. To assess whether CPLL capture can be applied in systems biology studies involving the precise quantitation of proteins across a multitude of samples, we evaluated its performance across the whole range of protein abundances in Saccharomyces cerevisiae. We used selected reaction monitoring assays for a set of target proteins covering a broad abundance range to quantitatively evaluate the precision of the approach and its capability to detect low-abundance proteins. Replicated CPLL-isolates showed an average variability of ~10% in the amount of the isolated proteins. The high reproducibility of the technique was not dependent on the abundance of the protein or the amount of beads used for the capture. However, the protein-to-bead ratio affected the enrichment of specific proteins. We did not observe a normalization effect of CPLL beads on protein abundances. However, CPLLs enriched for and depleted specific sets of proteins and thus changed the abundances of proteins from a whole proteome extract. This allowed the identification of ~400 proteins otherwise undetected in an untreated sample, under the experimental conditions used. CPLL capture is thus a useful tool to increase protein identifications in proteomic experiments, but it should be coupled to the analysis of untreated samples, to maximize proteome coverage. Our data also confirms that CPLL capture is reproducible and can be confidently used in quantitative proteomic experiments. Combinatorial hexapeptide ligand libraries (CPLLs) bound to solid matrices have been proposed to specifically capture and detect low-abundance proteins in complex samples. To assess whether the CPLL capture can be confidently applied in systems biology studies involving the precise quantitation of proteins across a broad range of abundances and a multitude of samples, we evaluated its reproducibility and performance features. Using selected reaction monitoring assays for proteins covering the whole range of abundances we show that the technique is reproducible and compatible with quantitative proteomic studies. However, the protein-to-bead ratio affects the enrichment of specific proteins and CPLLs depleted specific sets of proteins from a whole proteome extract. Our results suggest that CPLL-based analyses should be coupled to the analysis of untreated samples, to maximize proteome coverage. Overall, our data confirms the applicability of CPLLs in systems biology research and guides the correct use of this technique. Copyright © 2013 Elsevier B.V. All rights reserved.
Fuchs, Isabella; Theeuwes, Jan; Ansorge, Ulrich
2013-08-01
In the present study, we tested whether subliminal abrupt-onset cues capture attention in a bottom-up or top-down controlled manner. For our tests, we varied the searched-for target-contrast polarity (i.e., dark or light targets against a gray background) over four experiments. In line with the bottom-up hypothesis, our results indicate that subliminal-onset cues capture attention independently of the searched-for target-contrast polarity (Experiment 1), and this effect is not stronger for targets that matched the searched-for target-contrast polarity (Experiment 2). In fact, even to-be-ignored cues associated with a no-go response captured attention in a salience-driven way (Experiment 3). For supraliminal cues, we found attentional capture only by cues with a matching contrast polarity, reflecting contingent capture (Experiment 4). The results point toward a specific role of subliminal abrupt onsets for attentional capture. 2013 APA, all rights reserved
Mass Measurements beyond the Major r-Process Waiting Point {sup 80}Zn
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baruah, S.; Herlert, A.; Schweikhard, L.
2008-12-31
High-precision mass measurements on neutron-rich zinc isotopes {sup 71m,72-81}Zn have been performed with the Penning trap mass spectrometer ISOLTRAP. For the first time, the mass of {sup 81}Zn has been experimentally determined. This makes {sup 80}Zn the first of the few major waiting points along the path of the astrophysical rapid neutron-capture process where neutron-separation energy and neutron-capture Q-value are determined experimentally. The astrophysical conditions required for this waiting point and its associated abundance signatures to occur in r-process models can now be mapped precisely. The measurements also confirm the robustness of the N=50 shell closure for Z=30.
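The link between the new mass value and the quantities named above follows from standard definitions: the one-neutron separation energy of 81Zn and the neutron-capture Q-value of the waiting-point nucleus 80Zn are the same mass difference.

```latex
% Standard definitions; both quantities become experimental once M(^{81}Zn) is measured.
\begin{align}
  S_n(^{81}\mathrm{Zn}) &= \left[ M(^{80}\mathrm{Zn}) + m_n - M(^{81}\mathrm{Zn}) \right] c^2, \\
  Q_{n\gamma}(^{80}\mathrm{Zn}) &= \left[ M(^{80}\mathrm{Zn}) + m_n - M(^{81}\mathrm{Zn}) \right] c^2
                                  = S_n(^{81}\mathrm{Zn}).
\end{align}
```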
Defining disease with laser precision: laser capture microdissection in gastroenterology.
Blatt, Richard; Srinivasan, Shanthi
2008-08-01
Laser capture microdissection (LCM) is an efficient and precise method for obtaining pure cell populations or specific cells of interest from a given tissue sample. LCM has been applied to animal and human gastroenterology research in analyzing the protein, DNA, and RNA from all organs of the gastrointestinal system. There are numerous potential applications for this technology in gastroenterology research, including malignancies of the esophagus, stomach, colon, biliary tract, and liver. This technology can also be used to study gastrointestinal infections, inflammatory bowel disease, pancreatitis, motility, malabsorption, and radiation enteropathy. LCM has multiple advantages when compared with conventional methods of microdissection, and this technology can be exploited to identify precursors to disease, diagnostic biomarkers, and therapeutic interventions.
Chen, Jiyun; Xu, Xiaomin; Huang, Zhimei; Luo, Yuan; Tang, Lijuan; Jiang, Jian-Hui
2018-01-02
A novel dNAD platform (BEAMing LAMP) by combining emulsion micro-reactors, single-molecule magnetic capture and on-bead loop-mediated isothermal amplification has been developed for DNA detection, which enables absolute and high-precision quantification of a target with a detection limit of 300 copies.
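The abstract does not spell out how absolute quantification is obtained; assuming the emulsion micro-reactors behave as independent partitions of a digital assay (an assumption here, not a stated detail of the platform), the copy number would follow from Poisson statistics on the fraction of positive partitions, as sketched below.

```python
import numpy as np

def digital_copies(n_positive, n_total):
    """Absolute copy estimate from a digital readout, assuming Poisson-distributed partitioning."""
    lam = -np.log(1.0 - n_positive / n_total)   # mean copies per partition
    return lam * n_total                        # total copies in the analysed volume

copies = digital_copies(n_positive=1200, n_total=20000)   # illustrative partition counts
```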
Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.
Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David
2008-04-01
A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
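The two-source idea described above can be illustrated with the Chapman-modified Lincoln-Petersen estimator, treating hair-snag sampling as the marking session and rub-tree sampling as the recapture session. The counts below are invented, and the study's actual analysis uses Program MARK and Huggins-Pledger closed-population models rather than this simple estimator.

```python
import numpy as np

def chapman_lp(n1, n2, m2):
    """Chapman-modified Lincoln-Petersen estimate and its standard error.

    n1: individuals detected in session 1 (e.g., hair snags)
    n2: individuals detected in session 2 (e.g., rub trees)
    m2: individuals detected in both sessions
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, np.sqrt(var)

n_hat, se = chapman_lp(n1=180, n2=120, m2=45)   # illustrative genotype counts
```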
Yawning, acute stressors, and arousal reduction in Nazca booby adults and nestlings.
Liang, Amy C; Grace, Jacquelyn K; Tompkins, Emily M; Anderson, David J
2015-03-01
Yawning is a familiar and phylogenetically widespread phenomenon, but no consensus exists regarding its functional significance. We tested the hypothesis that yawning communicates to others a transition from a state of physiological and/or psychological arousal (for example, due to action of a stressor) to a more relaxed state. This arousal reduction hypothesis predicts little yawning during arousal and more yawning (above baseline) during and after down-regulation of arousal. Experimental capture-restraint tests with wild adult Nazca boobies (Sula granti), a seabird, increased yawning frequency after release from restraint, but yawning was almost absent during tests. Natural maltreatment by non-parental adults also increased yawning by nestlings, but only after the maltreatment ended and the adult left. CORT (corticosterone) was a logical a priori element of the stress response affecting the stressor-yawning relationship under the arousal reduction hypothesis, and cannot be excluded as such for adults in capture-restraint tests but is apparently unimportant for nestlings being maltreated by adults. The arousal reduction hypothesis unites formerly disparate results on yawning: its socially contagious nature in some taxa, its clear pharmacological connection to the stress response, and its temporal linkage to transitions in arousal between consciousness and sleep. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Berryhill, Marian E.; Chein, Jason; Olson, Ingrid R.
2011-01-01
Portions of the posterior parietal cortex (PPC) play a role in working memory (WM) yet the precise mechanistic function of this region remains poorly understood. The "pure storage" hypothesis proposes that this region functions as a short-lived modality-specific memory store. Alternatively, the "internal attention" hypothesis proposes that the PPC…
Memory-Based Attention Capture when Multiple Items Are Maintained in Visual Working Memory
Hollingworth, Andrew; Beck, Valerie M.
2016-01-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search—an index of VWM guidance—is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when two colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. PMID:27123681
Innovation, productivity, and pricing: Capturing value from precision medicine technology in Canada.
Emery, J C Herbert; Zwicker, Jennifer D
2017-07-01
For new technology and innovation such as precision medicine to become part of the solution for the fiscal sustainability of Canadian Medicare, decision-makers need to change how services are priced rather than trying to restrain emerging technologies like precision medicine for short-term cost savings. If provincial public payers shift their thinking to be public purchasers, value considerations would direct reform of the reimbursement system to have prices that adjust with technologically driven productivity gains. This strategic shift in thinking is necessary if Canadians are to benefit from the promised benefits of innovations like precision medicine.
Role of parietal regions in episodic memory retrieval: the dual attentional processes hypothesis.
Cabeza, Roberto
2008-01-01
Although parietal cortex is frequently activated during episodic memory retrieval, damage to this region does not markedly impair episodic memory. To account for these and other findings, a new dual attentional processes (DAP) hypothesis is proposed. According to this hypothesis, dorsal parietal cortex (DPC) contributes top-down attentional processes guided by retrieval goals, whereas ventral parietal cortex (VPC) contributes bottom-up attentional processes captured by the retrieval output. Consistent with this hypothesis, DPC activity increases with retrieval effort whereas VPC activity increases with confidence in old and new responses. The DAP hypothesis can also account for the overlap of parietal activations across different cognitive domains and for opposing effects of parietal activity on encoding vs. retrieval. Finally, the DAP hypothesis explains why VPC lesions yield a memory neglect syndrome: a deficit in spontaneously reporting relevant memory details but not in accessing the same details when guided by specific questions.
Halstead, Brian J.; Wylie, Glenn D.; Casazza, Michael L.
2013-01-01
Increasing detection and capture probabilities of rare or elusive herpetofauna of conservation concern is important to inform the scientific basis for their management and recovery. The Giant Gartersnake (Thamnophis gigas) is an example of a secretive, wary, and generally difficult-to-sample species about which little is known regarding its patterns of occurrence and demography. We therefore evaluated modifications to existing traps to increase the detection and capture probabilities of the Giant Gartersnake to improve the precision with which occurrence, abundance, survival, and other demographic parameters are estimated. We found that adding a one-way valve constructed of cable ties to the small funnel opening of traps and adding hardware cloth extensions to the wide end of funnels increased capture rates of the Giant Gartersnake by 5.55 times (95% credible interval = 2.45–10.51) relative to unmodified traps. The effectiveness of these modifications was insensitive to the aquatic habitat type in which they were deployed. The snout-vent length of the smallest and largest captured snakes did not vary among trap modifications. These trap modifications are expected to increase detection and capture probabilities of the Giant Gartersnake, and show promise for increasing the precision with which demographic parameters can be estimated for this species. We anticipate that the trap modifications found effective in this study will be applicable to a variety of aquatic and semi-aquatic reptiles and amphibians and improve conservation efforts for these species.
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities-abstraction and set shifting. Latent abstraction ability was positively correlated with number of concepts learned, and latent set-shifting ability was negatively correlated with number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
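The item response theory component referred to above can be sketched with the two-parameter logistic model; the paper embeds such parameters inside Markov learning models, which is not reproduced here. The item information function shows where on the latent ability scale measurement is most precise, which is the pattern discussed in the abstract. Parameter values below are illustrative.

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function: a = discrimination, b = difficulty."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item; peaks where theta equals the item difficulty b."""
    p = irf_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta_grid = np.linspace(-3, 3, 61)
info = item_information(theta_grid, a=1.5, b=-0.75)   # measurement is most precise near theta = b
```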
Defining disease with laser precision: laser capture microdissection in gastroenterology
Blatt, Richard; Srinivasan, Shanthi
2013-01-01
Laser capture microdissection (LCM) is an efficient and precise method for obtaining pure cell populations or specific cells of interest from a given tissue sample. LCM has been applied to animal and human gastroenterology research in analyzing the protein, DNA and RNA from all organs of the gastrointestinal system. There are numerous potential applications for this technology in gastroenterology research including malignancies of the esophagus, stomach, colon, biliary tract and liver. This technology can also be used to study gastrointestinal infections, inflammatory bowel disease, pancreatitis, motility, malabsorption and radiation enteropathy. LCM has multiple advantages when compared to conventional methods of microdissection, and this technology can be exploited to identify precursors to disease, diagnostic biomarkers, and therapeutic interventions. PMID:18619446
Assessing tiger population dynamics using photographic capture-recapture sampling.
Karanth, K Ullas; Nichols, James D; Kumar, N Samba; Hines, James E
2006-11-01
Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, "robust design" capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of gamma" = gamma' = 0.10 +/- 0.069 (values are estimated mean +/- SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 +/- 0.051, and the estimated probability that a newly caught animal was a transient was tau = 0.18 +/- 0.11. During the period when the sampled area was of constant size, the estimated population size N(t) varied from 17 +/- 1.7 to 31 +/- 2.1 tigers, with a geometric mean rate of annual population change estimated as lambda = 1.03 +/- 0.020, representing a 3% annual increase. The estimated recruitment of new animals, B(t), varied from 0 +/- 3.0 to 14 +/- 2.9 tigers. Population density estimates, D, ranged from 7.33 +/- 0.8 tigers/100 km2 to 21.73 +/- 1.7 tigers/100 km2 during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole. Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.
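The geometric mean rate of annual population change quoted above is computed from the series of yearly abundance estimates; the sketch below shows the calculation on an invented series, not the study's estimates.

```python
import numpy as np

# Illustrative abundance series N_t (not the study's values).
N_t = np.array([17.0, 19.0, 18.0, 22.0, 25.0, 24.0, 28.0, 31.0])
annual_lambda = N_t[1:] / N_t[:-1]                    # realized annual rates of change
lambda_bar = np.exp(np.mean(np.log(annual_lambda)))   # geometric mean rate, equivalently (N_T/N_1)**(1/(T-1))
```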
Woodman, Geoffrey F.; Luck, Steven J.
2007-01-01
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing. PMID:17469973
High-sensitivity HLA typing by Saturated Tiling Capture Sequencing (STC-Seq).
Jiao, Yang; Li, Ran; Wu, Chao; Ding, Yibin; Liu, Yanning; Jia, Danmei; Wang, Lifeng; Xu, Xiang; Zhu, Jing; Zheng, Min; Jia, Junling
2018-01-15
Highly polymorphic human leukocyte antigen (HLA) genes are responsible for fine-tuning the adaptive immune system. High-resolution HLA typing is important for the treatment of autoimmune and infectious diseases. Additionally, it is routinely performed to identify matched donors in transplantation medicine. Although many HLA typing approaches have been developed, the complexity, low efficiency and high cost of current HLA-typing assays limit their application in population-based high-throughput HLA typing for donors, which is required for creating large-scale databases for transplantation and precision medicine. Here, we present a cost-efficient Saturated Tiling Capture Sequencing (STC-Seq) approach to capturing 14 HLA class I and II genes. The highly efficient capture (an approximately 23,000-fold enrichment) of these genes allows for simplified allele calling. Tests on five genes (HLA-A/B/C/DRB1/DQB1) from 31 human samples and 351 datasets using STC-Seq showed results that were 98% consistent with known genotypes at two-field resolution (field 1 and field 2). Additionally, STC can capture genomic DNA fragments longer than 3 kb from HLA loci, making the library compatible with third-generation sequencing. STC-Seq is a highly accurate and cost-efficient method for HLA typing that can be used to facilitate the establishment of population-based HLA databases for precision and transplantation medicine.
Dong, Haiyan; Han, Longyu; Wang, Jie; Xie, Jingjing; Gao, Yu; Xie, Fangwei; Jia, Lee
2018-05-07
Circulating tumor cells (CTCs) are known as the root cause of cancer metastasis, which accounts for 90% of cancer deaths. Owing to the rarity of blood CTCs and their microenvironmental complexity, existing biotechnology cannot precisely capture and apoptosize CTCs in vivo for cancer metastasis prevention. Here, we designed two double-strand circular aptamers aimed at simultaneously targeting the MUC1 and HER2 surface biomarkers on mesenchymal cancer cells. The circular aptamers are composed of a capture arm for binding and seizing CTCs and a circular body for resisting degradation by exonucleases. We conjugated the two circular aptamers onto the dendrimer PAMAM G4.5 (dcAp1-G-dcAp2), and the conjugate showed both significantly enhanced biostability in serum for days compared with the linear counterparts and capture specificity in red blood cells (1:10⁸) compared with the single circular aptamers. dcAp1-G-dcAp2 apoptosized the targeted cells and significantly inhibited their bioenergetic activities by lowering ΔΨm and ATP and lactate production while increasing ROS production. dcAp1-G-dcAp2 captured CTCs in mice in vivo and in patient blood. This study lays the foundation for developing multiple biostable circular aptamers and conjugating them together to precisely capture and apoptosize mesenchymal CTCs in vivo. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dalai, Tarun K.; Ravizza, Gregory E.; Peucker-Ehrenbrink, B.
2006-01-01
High-resolution records (ca. 100 kyr) of Os isotope composition (187Os/188Os) in bulk sediments from two tropical Pacific sites (ODP Sites 1218 and 1219) capture the complete Late Eocene 187Os/188Os excursion and confirm that the Late Eocene 187Os/188Os minimum, earlier reported by Ravizza and Peucker-Ehrenbrink [Earth Planet. Sci. Lett. 210 (2003) 151-165], is a global feature. Using the astronomically tuned age models available for these sites, it is suggested that the Late Eocene 187Os/188Os minimum can be placed at 34.5 ± 0.1 Ma in the marine records. In addition, two other distinct features of the 187Os/188Os excursion that are correlatable among sections are proposed as chemostratigraphic markers which can serve as age control points with a precision of ca. ±0.1 Myr. We propose a speculative hypothesis that higher cosmic dust flux in the Late Eocene may have contributed to global cooling and Early Oligocene glaciation (Oi-1) by supplying bio-essential trace elements to the oceans, thereby raising ocean productivity, enhancing burial of organic carbon and drawing down atmospheric CO2. To determine whether the hypothesis that enhanced cosmic dust flux caused the Late Eocene 187Os/188Os excursion can be tested using paired bulk-sediment and leachate Os isotope compositions, 187Os/188Os was also measured in sediment leachates. Results of analyses of leachates are inconsistent between the South Atlantic and the Pacific sites and therefore do not yield a robust test of this hypothesis. Comparison of 187Os/188Os records with high-resolution benthic foraminiferal δ18O records across the Eocene-Oligocene transition suggests that 187Os flux to the oceans decreased during the cooling and ice growth leading to the Oi-1 glaciation, whereas subsequent decay of ice sheets and deglacial weathering drove seawater 187Os/188Os to higher values. Although the precise timing and magnitude of these changes in weathering fluxes and their effects on the marine 187Os/188Os records are obscured by recovery from the Late Eocene 187Os/188Os excursion, evidence for the global influence of glaciation on the supply of Os to the ocean is robust, as it has now been documented in both Pacific and Atlantic records.
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test has been well recognized by many authors. Ideally, the null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P = {P(ω): ω ∈ Ω}, such that the alternative hypothesis Ha = {P(ω): ω ∈ Ωa} can be inferred upon rejection of the null hypothesis Ho = {P(ω): ω ∈ Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., Ho ∪ Ha is smaller than P). This not only imposes a strong non-validated assumption about the underlying true models, but also leads to superiority claims that depend on which test is used rather than on scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators to select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
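The partition requirement described in this abstract can be stated compactly. The following is an illustrative formalization in the abstract's notation (Ωo, Ωa); the explicit disjointness condition is added here for completeness and is an assumption about the intended setup.

```latex
% Partition of the model space P = {P(omega) : omega in Omega}
% required for a well-posed one-sided superiority test.
\begin{align*}
  H_o &= \{P(\omega) : \omega \in \Omega_o\}, &
  H_a &= \{P(\omega) : \omega \in \Omega_a\}, \\
  \Omega_o \cup \Omega_a &= \Omega, &
  \Omega_o \cap \Omega_a &= \emptyset,
\end{align*}
% so that rejecting H_o licenses the inference of H_a without leaving
% unmodeled scenarios in P outside H_o and H_a.
```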
A case of malignant hyperthermia captured by an anesthesia information management system.
Maile, Michael D; Patel, Rajesh A; Blum, James M; Tremper, Kevin K
2011-04-01
Many cases of malignant hyperthermia triggered by volatile anesthetic agents have been described. However, to our knowledge, there has not been a report describing the precise changes in physiologic data of a human suffering from this process. Here we describe a case of malignant hyperthermia in which monitoring information was frequently and accurately captured by an anesthesia information management system.
Jappe, Emma Christine; Kringelum, Jens; Trolle, Thomas; Nielsen, Morten
2018-02-15
Peptides that bind to and are presented by MHC class I and class II molecules collectively make up the immunopeptidome. In the context of vaccine development, an understanding of the immunopeptidome is essential, and much effort has been dedicated to its accurate and cost-effective identification. Current state-of-the-art methods mainly comprise in silico tools for predicting MHC binding, which is strongly correlated with peptide immunogenicity. However, only a small proportion of the peptides that bind to MHC molecules are, in fact, immunogenic, and substantial work has been dedicated to uncovering additional determinants of peptide immunogenicity. In this context, and in light of recent advancements in mass spectrometry (MS), the existence of immunological hotspots has been given new life, inciting the hypothesis that hotspots are associated with MHC class I peptide immunogenicity. We here introduce a precise terminology for defining these hotspots and carry out a systematic analysis of MS and in silico predicted hotspots. We find that hotspots defined from MS data are largely captured by peptide binding predictions, enabling their replication in silico. This leads us to conclude that hotspots, to a great degree, are simply a result of promiscuous HLA binding, which disproves the hypothesis that the identification of hotspots provides novel information in the context of immunogenic peptide prediction. Furthermore, our analyses demonstrate that the signal of ligand processing, although present in the MS data, has very low predictive power to discriminate between MS and in silico defined hotspots. © 2018 John Wiley & Sons Ltd.
Bian, Shengtai; Cheng, Yinuo; Shi, Guanya; Liu, Peng; Ye, Xiongying
2017-01-01
Single cell analysis has received increasing attention recently in both academia and clinics, and there is an urgent need for effective upstream cell sample preparation. Two extremely challenging tasks in cell sample preparation—high-efficiency cell enrichment and precise single cell capture—have now entered into an era full of exciting technological advances, which are mostly enabled by microfluidics. In this review, we summarize the category of technologies that provide new solutions and creative insights into the two tasks of cell manipulation, with a focus on the latest development in the recent five years by highlighting the representative works. By doing so, we aim both to outline the framework and to showcase example applications of each task. In most cases for cell enrichment, we take circulating tumor cells (CTCs) as the target cells because of their research and clinical importance in cancer. For single cell capture, we review related technologies for many kinds of target cells because the technologies are supposed to be more universal to all cells rather than CTCs. Most of the mentioned technologies can be used for both cell enrichment and precise single cell capture. Each technology has its own advantages and specific challenges, which provide opportunities for researchers in their own area. Overall, these technologies have shown great promise and now evolve into real clinical applications. PMID:28217240
Mathematical Capture of Human Data for Computer Model Building and Validation
2014-04-03
weapon. The Projectile, the VDE, and the IDE weapons had effects of financial loss for the targeted participant, while the MRAD yielded its own...for LE, Centroid and TE for the baseline and the VDE weapon conditions since p-values exceeded α. All other conditions rejected the null...hypothesis except the LE for the VDE weapon. The K-S statistics were correspondingly lower for the measures that failed to reject the null hypothesis. The CDF
Adams, C G; McGhee, P S; Schenker, J H; Gut, L J; Miller, J R
2017-08-01
This field study of codling moth, Cydia pomonella (L.), response to single versus multiple monitoring traps baited with codlemone demonstrates that precision of a given capture number is alarmingly poor when the population is held constant by releasing moths. Captures as low as zero and as high as 12 males per single trap are to be expected where the catch mode is three. Here, we demonstrate that the frequency of false negatives and overestimated positives for codling moth trapping can be substantially reduced by employing the tactic of line-trapping, where five traps were deployed 4 m apart along a row of apple trees. Codling moth traps spaced closely competed only slightly. Therefore, deploying five traps closely in a line is a sampling technique nearly as good as deploying five traps spaced widely. But line trapping offers a substantial savings in time and therefore cost when servicing aggregated versus distributed traps. As the science of pest management matures by mastering the ability to translate capture numbers into estimates of absolute pest density, it will be important to employ a tactic like line-trapping so as to shrink the troublesome variability associated with capture numbers in single traps that thwarts accurate decisions about if and when to spray. Line-trapping might similarly increase the reliability and utility of density estimates derived from capture numbers in monitoring traps for various pest and beneficial insects. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America.
Memory-based attention capture when multiple items are maintained in visual working memory.
Hollingworth, Andrew; Beck, Valerie M
2016-07-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search, an index of VWM guidance, is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when 2 colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Entanglement Equilibrium and the Einstein Equation.
Jacobson, Ted
2016-05-20
A link between the semiclassical Einstein equation and a maximal vacuum entanglement hypothesis is established. The hypothesis asserts that entanglement entropy in small geodesic balls is maximized at fixed volume in a locally maximally symmetric vacuum state of geometry and quantum fields. A qualitative argument suggests that the Einstein equation implies the validity of the hypothesis. A more precise argument shows that, for first-order variations of the local vacuum state of conformal quantum fields, the vacuum entanglement is stationary if and only if the Einstein equation holds. For nonconformal fields, the same conclusion follows modulo a conjecture about the variation of entanglement entropy.
Wentura, Dirk; Müller, Philipp; Rothermund, Klaus
2014-06-01
In a valence induction task, one color acquired positive valence by indicating the chance to win money (in the case of fast and correct responses), and a different color acquired negative valence by indicating the danger of losing money (in the case of slow or incorrect responses). In the additional-singleton trials of a visual search task, the task-irrelevant singleton color was either the positive one, the negative one, or one of two neutral colors. We found an additional-singleton effect (i.e., longer RTs with a singleton color than in the no-singleton control condition). This effect was significantly increased for the two valent colors (with no difference between them) relative to the two neutral colors (again with no difference between them). This result favors the hypothesis that the general relevance of stimuli elicits attentional capture, rather than the negativity bias hypothesis.
A Diagram Editor for Efficient Biomedical Knowledge Capture and Integration
Yu, Bohua; Jakupovic, Elvis; Wilson, Justin; Dai, Manhong; Xuan, Weijian; Mirel, Barbara; Athey, Brian; Watson, Stanley; Meng, Fan
2008-01-01
Understanding the molecular mechanisms underlying complex disorders requires the integration of data and knowledge from different sources including free text literature and various biomedical databases. To facilitate this process, we created the Biomedical Concept Diagram Editor (BCDE) to help researchers distill knowledge from data and literature and aid the process of hypothesis development. A key feature of BCDE is the ability to capture information with a simple drag-and-drop. This is a vast improvement over manual methods of knowledge and data recording and greatly increases the efficiency of the biomedical researcher. BCDE also provides a unique concept matching function to enforce consistent terminology, which enables conceptual relationships deposited by different researchers in the BCDE database to be mined and integrated for intelligible and useful results. We hope BCDE will promote the sharing and integration of knowledge from different researchers for effective hypothesis development. PMID:21347131
Furtado-Junior, I; Abrunhosa, F A; Holanda, F C A F; Tavares, M C S
2016-06-01
Fishing selectivity for the mangrove crab Ucides cordatus on the north coast of Brazil can be defined as the fisherman's ability to capture and select individuals of a certain size or sex (or a combination of these factors), which suggests an empirical selectivity. Considering this hypothesis, we calculated selectivity curves for male and female crabs using the logit function of the logistic model. The Bayesian inference consisted of obtaining the posterior distribution by applying the Markov chain Monte Carlo (MCMC) method in the R software using the OpenBUGS, BRugs, and R2WinBUGS libraries. The estimated average carapace widths at selection for males and females, compared with previous studies reporting the average carapace width at sexual maturity, allow us to confirm the hypothesis that most mature individuals are not subject to fishing pressure, thus ensuring their sustainability.
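As a rough illustration of the logistic selectivity curve described above (not the authors' R/OpenBUGS Bayesian implementation), the sketch below fits the logit model to hypothetical capture-at-width data with scipy; the example data and parameter names are assumptions.

```python
# Illustrative logistic selectivity curve: P(capture | carapace width L).
# Hypothetical data; the original study fit this model by Bayesian MCMC in R.
import numpy as np
from scipy.optimize import curve_fit

def selectivity(L, a, b):
    """Logit selectivity: probability of retention at carapace width L (mm)."""
    return 1.0 / (1.0 + np.exp(-(a + b * L)))

# Assumed example data: carapace widths (mm) and observed capture proportions.
widths = np.array([40, 45, 50, 55, 60, 65, 70, 75], dtype=float)
proportion_captured = np.array([0.02, 0.05, 0.15, 0.40, 0.70, 0.88, 0.96, 0.99])

(a_hat, b_hat), _ = curve_fit(selectivity, widths, proportion_captured, p0=(-10.0, 0.2))
L50 = -a_hat / b_hat  # width at which 50% of crabs are selected
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, L50 = {L50:.1f} mm")
```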
Targeting Ballistic Lunar Capture Trajectories Using Periodic Orbits in the Sun-Earth CRTBP
NASA Technical Reports Server (NTRS)
Cooley, D.S.; Griesemer, Paul Ricord; Ocampo, Cesar
2009-01-01
A particular periodic orbit in the Earth-Sun circular restricted three-body problem is shown to have the characteristics needed for a ballistic lunar capture transfer. An injection from a circular parking orbit into the periodic orbit serves as an initial guess for a targeting algorithm. By targeting appropriate parameters incrementally in increasingly complicated force models and using precise derivatives calculated from the state transition matrix, a reliable algorithm is produced. Ballistic lunar capture trajectories in restricted four-body systems can thus be produced in a systematic way.
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 26mAl
NASA Astrophysics Data System (ADS)
Finlay, P.; Ettenauer, S.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Andreoiu, C.; Austin, R. A. E.; Bandyopadhyay, D.; Cross, D. S.; Demand, G.; Djongolov, M.; Garrett, P. E.; Green, K. L.; Grinyer, G. F.; Hackman, G.; Leach, K. G.; Pearson, C. J.; Phillips, A. A.; Sumithrarachchi, C. S.; Triambak, S.; Williams, S. J.
2011-01-01
A high-precision half-life measurement for the superallowed β+ emitter 26mAl was performed at the TRIUMF-ISAC radioactive ion beam facility yielding T1/2 = 6346.54 ± 0.46(stat) ± 0.60(syst) ms, consistent with, but 2.5 times more precise than, the previous world average. The 26mAl half-life and ft value, 3037.53(61) s, are now the most precisely determined for any superallowed β decay. Combined with recent theoretical corrections for isospin-symmetry-breaking and radiative effects, the corrected Ft value for 26mAl, 3073.0(12) s, sets a new benchmark for the high-precision superallowed Fermi β-decay studies used to test the conserved vector current hypothesis and determine the Vud element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix.
Deep Impact Autonomous Navigation : the trials of targeting the unknown
NASA Technical Reports Server (NTRS)
Kubitschek, Daniel G.; Mastrodemos, Nickolaos; Werner, Robert A.; Kennedy, Brian M.; Synnott, Stephen P.; Null, George W.; Bhaskaran, Shyam; Riedel, Joseph E.; Vaughan, Andrew T.
2006-01-01
On July 4, 2005 at 05:44:34.2 UTC the Impactor Spacecraft (s/c) impacted comet Tempel 1 with a relative speed of 10.3 km/s capturing high-resolution images of the surface of a cometary nucleus just seconds before impact. Meanwhile, the Flyby s/c captured the impact event using both the Medium Resolution Imager (MRI) and the High Resolution Imager (HRI) and tracked the nucleus for the entire 800 sec period between impact and shield attitude transition. The objective of the Impactor s/c was to impact in an illuminated area viewable from the Flyby s/c and capture high-resolution context images of the impact site. This was accomplished by using autonomous navigation (AutoNav) algorithms and precise attitude information from the attitude determination and control subsystem (ADCS). The Flyby s/c had two primary objectives: 1) capture the impact event with the highest temporal resolution possible in order to observe the ejecta plume expansion dynamics; and 2) track the impact site for at least 800 sec to observe the crater formation and capture the highest resolution images possible of the fully developed crater. These two objectives were met by estimating the Flyby s/c trajectory relative to Tempel 1 using the same AutoNav algorithms along with precise attitude information from ADCS and independently selecting the best impact site. This paper describes the AutoNav system, what happened during the encounter with Tempel 1 and what could have happened.
THE MAXIMUM POWER PRINCIPLE: AN EMPIRICAL INVESTIGATION
The maximum power principle is a potential guide to understanding the patterns and processes of ecosystem development and sustainability. The principle predicts the selective persistence of ecosystem designs that capture a previously untapped energy source. This hypothesis was in...
Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.
2007-09-01
We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements by heterodyning intensity-modulated illumination with a gain modulation intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m has been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
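For context, range in amplitude-modulated indirect time-of-flight systems of this kind is usually recovered from the phase of the modulation envelope. The relations below are the standard textbook form, not taken from this paper; f_m and φ are our notation.

```latex
% Standard phase-based indirect time-of-flight relations (illustrative):
% range from recovered phase, and the ambiguity interval that homodyne and
% heterodyne ranging systems must resolve.
\begin{align*}
  d &= \frac{c\,\varphi}{4\pi f_m}, &
  d_{\text{amb}} &= \frac{c}{2 f_m},
\end{align*}
% where c is the speed of light, f_m the modulation frequency, and
% \varphi \in [0, 2\pi) the recovered phase of the modulation envelope.
```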
Precision medicine: what's all the fuss about?
Barker, Richard
2016-01-01
Precision medicine is now recognized globally as a major new era in medicine. It is being driven by advances in genomics and other 'omics' but also by the desire on the part of both health systems and governments to offer more targeted and cost-effective care. However, it faces a number of challenges, from the economics of developing more expensive companion diagnostics to the need to educate patients and the public on the advantages for them. New models of both R&D and care delivery are needed to capture the scientific, clinical and economic benefits of precision medicine.
Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawding, Dan; Hillson, Todd D.
2003-11-15
Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining was conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise but batch marks and lack of secondary studies made it difficult to test Jolly-Seber assumptions, necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness of fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.
In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.
Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert
2016-09-01
Quadrant impressions are commonly used as alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface whereas high-frame rate impression systems differed more in gingival areas. Triple tray impressions displayed higher local deviation at the occlusal contact areas of upper and lower jaw. Digital quadrant impression methods achieve a level of precision, comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time efficient capturing of quadrant impressions is possible. The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of restorative indications. Yet the precision differs significantly between the digital impression systems.
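A minimal sketch of how a percentile-based precision metric such as the Perc90_10 value quoted above could be computed from signed surface deviations, assuming Perc90_10 denotes the spread between the 10th and 90th percentiles of the deviations; the authors' exact definition may differ, and the data below are hypothetical.

```python
# Sketch of a percentile-based precision metric for superimposed impressions.
# Assumption: Perc90_10 is taken here as the spread between the 10th and 90th
# percentiles of the signed surface deviations.
import numpy as np

def perc90_10(signed_deviations_um: np.ndarray) -> float:
    p10, p90 = np.percentile(signed_deviations_um, [10, 90])
    return p90 - p10

# Hypothetical signed deviations (micrometres) from one pairwise superimposition.
rng = np.random.default_rng(0)
deviations = rng.normal(loc=0.0, scale=12.0, size=50_000)
print(f"Perc90_10 = {perc90_10(deviations):.1f} um")
```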
Conference on the Origin of the Moon
NASA Technical Reports Server (NTRS)
1984-01-01
Various topics relating to lunar evolution are discussed. The Moon's ancient orbital history, geophysical and geochemical constraints favoring the capture hypothesis, the site of the lunar core, chemical and petrological constraints, dynamical constraints, and mathematical models are among the topics discussed.
Consequences of electrical conductivity in an orb spider's capture web
NASA Astrophysics Data System (ADS)
Vollrath, Fritz; Edmonds, Donald
2013-12-01
The glue-coated and wet capture spiral of the orb web of the garden cross spider Araneus diadematus is suspended between the dry silk radial and web frame threads. Here, we experimentally demonstrate that the capture spiral is electrically conductive because of necks of liquid connecting the droplets even if the thread is stretched. We examine how this conductivity of the capture spiral may lead to entrapment of charged airborne particles such as pollen, spray droplets and even insects. We further describe and model how the conducting spiral will also locally distort the Earth's ambient electric field. Finally, we examine the hypothesis that such distortion could be used by potential prey to detect the presence of a web but conclude that any effect would probably be too small to allow an insect to take evasive action.
A Graph-Based Recovery and Decomposition of Swanson’s Hypothesis using Semantic Predications
Cameron, Delroy; Bodenreider, Olivier; Yalamanchili, Hima; Danh, Tu; Vallabhaneni, Sreeram; Thirunarayan, Krishnaprasad; Sheth, Amit P.; Rindflesch, Thomas C.
2014-01-01
Objectives This paper presents a methodology for recovering and decomposing Swanson’s Raynaud Syndrome–Fish Oil Hypothesis semi-automatically. The methodology leverages the semantics of assertions extracted from biomedical literature (called semantic predications) along with structured background knowledge and graph-based algorithms to semi-automatically capture the informative associations originally discovered manually by Swanson. Demonstrating that Swanson’s manually intensive techniques can be undertaken semi-automatically paves the way for fully automatic semantics-based hypothesis generation from scientific literature. Methods Semantic predications obtained from biomedical literature allow the construction of labeled directed graphs which contain various associations among concepts from the literature. By aggregating such associations into informative subgraphs, some of the relevant details originally articulated by Swanson have been uncovered. However, by leveraging background knowledge to bridge important knowledge gaps in the literature, a methodology for semi-automatically capturing the detailed associations originally explicated in natural language by Swanson has been developed. Results Our methodology not only recovered the 3 associations commonly recognized as Swanson’s Hypothesis, but also decomposed them into an additional 16 detailed associations, formulated as chains of semantic predications. Altogether, 14 out of the 19 associations that can be attributed to Swanson were retrieved using our approach. To the best of our knowledge, such an in-depth recovery and decomposition of Swanson’s Hypothesis has never been attempted. Conclusion In this work, therefore, we presented a methodology for semi-automatically recovering and decomposing Swanson’s RS-DFO Hypothesis using semantic representations and graph algorithms. Our methodology provides new insights into potential prerequisites for semantics-driven Literature-Based Discovery (LBD). These suggest that three critical aspects of LBD include: 1) the need for more expressive representations beyond Swanson’s ABC model; 2) an ability to accurately extract semantic information from text; and 3) the semantic integration of scientific literature with structured background knowledge. PMID:23026233
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, which was due to scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin decreased from 0.855 to 0.209. A simpler but less accurate absolute-accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in a scaling compensation similar to that of the surveying method or to direct wand-size compensation with a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type which has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
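The absolute-accuracy figure above is an RMSE between two coordinate sets; a minimal sketch of that computation (with hypothetical arrays, not the authors' data) is:

```python
# RMSE between marker coordinates from the motion capture system and the
# engineering-surveying (micro-triangulation) reference. Hypothetical data.
import numpy as np

def rmse(measured_xyz: np.ndarray, reference_xyz: np.ndarray) -> float:
    # Per-marker Euclidean errors, then root mean square over all markers.
    errors = np.linalg.norm(measured_xyz - reference_xyz, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

reference = np.array([[0.0, 0.0, 0.0], [1000.0, 0.0, 0.0], [0.0, 1000.0, 500.0]])  # mm
measured = reference + np.random.default_rng(1).normal(scale=0.8, size=reference.shape)
print(f"RMSE = {rmse(measured, reference):.2f} mm")
```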
A Dual-Responsive Self-Assembled Monolayer for Specific Capture and On-Demand Release of Live Cells.
Gao, Xia; Li, Qiang; Wang, Fengchao; Liu, Xuehui; Liu, Dingbin
2018-06-22
We report a dual-responsive self-assembled monolayer (SAM) on a well-defined rough gold substrate for dynamic capture and release of live cells. By incorporating an adenosine 5'-triphosphate (ATP) aptamer into the SAM, we can accurately isolate specific cell types and subsequently release captured cells at either the population or the desired-group (or even single-cell) level. On one hand, the whole SAM can be disassembled through addition of ATP solution, leading to the release of all captured cells from the supported substrate. On the other hand, desired cells can be selectively released by using near-infrared (NIR) light irradiation, with relatively high spatial and temporal precision. The proposed dual-responsive cell capture-and-release system is biologically friendly and is reusable after another round of modification, showing great usefulness in cancer diagnosis and molecular analysis.
Impact detections of temporarily captured natural satellites
NASA Astrophysics Data System (ADS)
Clark, David; Spurný, Pavel; Wiegert, Paul; Brown, Peter G.; Borovicha, Jiri; Tagliaferri, Ed; Shrbeny, Lukas
2016-10-01
Temporarily Captured Orbiters (TCOs) are Near-Earth Objects (NEOs) which make a few orbits of Earth before returning to heliocentric orbits. Only one TCO has been observed to date, 2006 RH120, captured by Earth for one year before escaping. Detailed modeling predicts capture should occur from the NEO population predominantly through the Sun-Earth L1 and L2 points, with 1% of TCOs impacting Earth and approximately 0.1% of meteoroids being TCOs. Although thousands of meteoroid orbits have been measured, none until now have conclusively exhibited TCO behaviour, largely due to difficulties in measuring initial meteoroid speed with sufficient precision. We report on a precise meteor observation of January 13, 2014 by a new generation of all-sky fireball digital camera systems operated in the Czech Republic as part of the European Fireball Network, providing the lowest natural object entry speed observed in decades long monitoring by networks world-wide. Modeling atmospheric deceleration and fragmentation yields an initial mass of ~5 kg and diameter of 15 cm, with a maximum Earth-relative velocity just over 11.0 km/s. Spectral observations prove its natural origin. Back-integration across observational uncertainties yields a 92 - 98% probability of TCO behaviour, with close lunar dynamical interaction. The capture duration varies across observational uncertainties from 48 days to 5+ years. We also report on two low-speed impacts recorded by US Government sensors, and we examine Prairie Network event PN39078 from 1965 having an extremely low entry speed of 10.9 km/s. In these cases uncertainties in measurement and origin make TCO designation uncertain.
Development of Holmium-163 electron-capture spectroscopy with transition-edge sensors
Croce, Mark Philip; Rabin, Michael W.; Mocko, Veronika; ...
2016-08-01
Calorimetric decay energy spectroscopy of electron-capture-decaying isotopes is a promising method to achieve the sensitivity required for electron neutrino mass measurement. The very low total nuclear decay energy (QEC < 3 keV) and short half-life (4570 years) of 163Ho make it attractive for high-precision electron-capture spectroscopy (ECS) near the kinematic endpoint, where the neutrino momentum goes to zero. In the ECS approach, an electron-capture-decaying isotope is embedded inside a microcalorimeter designed to capture and measure the energy of all the decay radiation except that of the escaping neutrino. We have developed a complete process for proton irradiation-based isotope production, isolation, and purification of 163Ho. We have developed transition-edge sensors for this measurement and methods for incorporating 163Ho into high-resolution microcalorimeters, and have measured the electron-capture spectrum of 163Ho. Finally, we present our work in these areas and discuss the measured spectrum and its comparison to current theory.
NASA Astrophysics Data System (ADS)
Xiao, Yan; Li, Yaoyu; Rotea, Mario A.
2016-09-01
The primary objective in below rated wind speed (Region 2) is to maximize the turbine's energy capture. Due to uncertainty, variability of turbine characteristics and lack of inexpensive but precise wind measurements, model-free control strategies that do not use wind measurements such as Extremum Seeking Control (ESC) have received significant attention. Based on a dither-demodulation scheme, ESC can maximize the wind power capture in real time despite uncertainty, variabilities and lack of accurate wind measurements. The existing work on ESC based wind turbine control focuses on power capture only. In this paper, a multi-objective extremum seeking control strategy is proposed to achieve nearly optimum wind energy capture while decreasing structural fatigue loads. The performance index of the ESC combines the rotor power and penalty terms of the standard deviations of selected fatigue load variables. Simulation studies of the proposed multi-objective ESC demonstrate that the damage-equivalent loads of tower and/or blade loads can be reduced with slight compromise in energy capture.
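The composite objective described above (rotor power combined with penalties on the standard deviations of selected load channels) can be written schematically as below; the weights and the choice of load channels are assumptions for illustration, not values from the paper.

```latex
% Schematic multi-objective ESC performance index (illustrative form only):
% mean rotor power minus weighted penalties on the standard deviations of
% selected fatigue-load channels (e.g., tower-base and blade-root moments).
\begin{equation*}
  J \;=\; \bar{P}_{\text{rotor}} \;-\; \sum_{i} w_i \,\sigma\!\left(M_i\right),
  \qquad w_i > 0,
\end{equation*}
% The dither-demodulation ESC loop then seeks the operating point that
% maximizes J online, without wind-speed measurements.
```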
Jaffe, D. E.
2014-10-03
A new measurement of the θ13 mixing angle has been obtained at the Daya Bay Reactor Neutrino Experiment via the detection of inverse beta decays tagged by neutron capture on hydrogen. The antineutrino events for hydrogen capture are distinct from those for gadolinium capture with largely different systematic uncertainties, allowing a determination independent of the gadolinium-capture result and an improvement on the precision of the θ13 measurement. With a 217-day antineutrino data set obtained with six antineutrino detectors and from six 2.9 GWth reactors, the rate deficit observed at the far hall is interpreted as sin²2θ13 = 0.083 ± 0.018 in the three-flavor oscillation model. When combined with the gadolinium-capture result from Daya Bay, we obtain sin²2θ13 = 0.089 ± 0.008 as the final result for the six-antineutrino-detector configuration of the Daya Bay experiment.
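The combination of the hydrogen-capture and gadolinium-capture results quoted above is, to first approximation, an inverse-variance weighted average. The sketch below illustrates that arithmetic using the nH value from the abstract and a purely hypothetical Gd-capture value; the real combination accounts for correlated systematics and is not reproduced here.

```python
# Illustrative inverse-variance combination of two sin^2(2*theta_13) results.
# The nH value is from the abstract; the Gd-capture value below is a
# placeholder for illustration only, and correlations are ignored.
def combine(values_and_errors):
    weights = [1.0 / err ** 2 for _, err in values_and_errors]
    mean = sum(w * v for (v, _), w in zip(values_and_errors, weights)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

nH = (0.083, 0.018)  # hydrogen-capture result (from the abstract)
gd = (0.090, 0.009)  # hypothetical gadolinium-capture input
print("combined sin^2(2theta13) = %.3f +/- %.3f" % combine([nH, gd]))
```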
Adaptation or constraint? Reference-dependent scatter in honey bee dances
Visscher, P. Kirk
2010-01-01
The waggle dance of the honey bee is used to recruit nest mates to a resource. Dancer bees, however, may indicate many directions within a single dance bout; we show that this scatter in honey bee dances is strongly dependent on the sensory modality used to determine a reference angle in the dance. Dances with a visual reference are more precise than those with a gravity reference. This finding undermines the idea that scatter is introduced into dances, which the bees could perform more precisely, in order to spread recruits out over resource patches. It also calls into question reported interspecific differences that had been interpreted as adaptations of the dance to different habitats. Our results support a non-adaptive hypothesis: that dance scatter results from sensory and performance constraints, rather than modulation of the scatter by the dancing bee. However, an alternative adaptive hypothesis cannot be ruled out. PMID:20585382
Adaptation or constraint? Reference-dependent scatter in honey bee dances.
Tanner, David A; Visscher, P Kirk
2010-07-01
The waggle dance of the honey bee is used to recruit nest mates to a resource. Dancer bees, however, may indicate many directions within a single dance bout; we show that this scatter in honey bee dances is strongly dependent on the sensory modality used to determine a reference angle in the dance. Dances with a visual reference are more precise than those with a gravity reference. This finding undermines the idea that scatter is introduced into dances, which the bees could perform more precisely, in order to spread recruits out over resource patches. It also calls into question reported interspecific differences that had been interpreted as adaptations of the dance to different habitats. Our results support a non-adaptive hypothesis: that dance scatter results from sensory and performance constraints, rather than modulation of the scatter by the dancing bee. However, an alternative adaptive hypothesis cannot be ruled out.
Kawaguchi, Yuko; Yokobori, Shin-Ichi; Hashimoto, Hirofumi; Yano, Hajime; Tabata, Makoto; Kawai, Hideyuki; Yamagishi, Akihiko
2016-05-01
The Tanpopo mission will address fundamental questions on the origin of terrestrial life. The main goal is to test the panspermia hypothesis. Panspermia is a long-standing hypothesis suggesting the interplanetary transport of microbes. Another goal is to test the possible origin of organic compounds carried from space by micrometeorites before the terrestrial origin of life. To investigate the panspermia hypothesis and the possible space origin of organic compounds, we performed space experiments at the Exposed Facility (EF) of the Japanese Experiment Module (JEM) of the International Space Station (ISS). The mission was named Tanpopo, which in Japanese means dandelion. We capture any orbiting microparticles, such as micrometeorites, space debris, and terrestrial particles carrying microbes as bioaerosols, by using blocks of silica aerogel. We also test the survival of microbial species and organic compounds in the space environment for up to 3 years. The goal of this review is to introduce an overview of the Tanpopo mission with particular emphasis on the investigation of the interplanetary transfer of microbes. The Exposed Experiment Handrail Attachment Mechanism with aluminum Capture Panels (CPs) and Exposure Panels (EPs) was exposed on the EF-JEM on May 26, 2015. The first CPs and EPs will be returned to the ground in mid-2016. Possible escape of terrestrial microbes from Earth to space will be evaluated by investigating the upper limit of terrestrial microbes by the capture experiment. Possible mechanisms for transfer of microbes over the stratosphere and an investigation of the effect of the microbial cell-aggregate size on survivability in space will also be discussed. Panspermia-Astrobiology-Low-Earth orbit. Astrobiology 16, 363-376.
[Estimating survival of thrushes: modeling capture-recapture probabilities].
Burskiî, O V
2011-01-01
The stochastic modeling technique serves as a way to correctly separate the "return rate" of marked animals into survival rate (phi) and capture probability (p). The method can readily be used with the program MARK, freely distributed through the Internet (Cooch, White, 2009). Input data for the program consist of "capture histories" of marked animals, that is, strings of ones and zeros indicating presence or absence of the individual among captures (or sightings) along the set of consecutive recapture occasions (e.g., years). The probability of any history is a product of binomial probabilities phi, p or their complements (1 - phi) and (1 - p) for each year of observation of the individual. Assigning certain values to parameters phi and p, one can predict the composition of all individual histories in the sample and assess the likelihood of the prediction. The survival parameters for different occasions and cohorts of individuals can be set either equal or different, and recapture parameters can likewise be set in different ways. There is a possibility to constrain the parameters, according to the hypothesis being tested, in the form of a specific model. Within the specified constraints, the program searches for parameter values that describe the observed composition of histories with the maximum likelihood. It computes the parameter estimates along with confidence limits and the overall model likelihood. There is a set of tools for testing the model goodness-of-fit under the assumption of equality of survival rates among individuals and independence of their fates. Other tools offer a proper selection among a possible variety of models, providing the best parity between detail and precision in describing reality. The method was applied to 20-yr recapture and resighting data series on 4 thrush species (genera Turdus, Zoothera) breeding in the Yenisei River floodplain within the middle taiga subzone. The capture probabilities were quite independent of fluctuations in observational effort while differing significantly between the species and sexes. The estimates of adult survival rate, obtained for the Siberian migratory populations, were lower than those for sedentary populations from both the tropics and intermediate latitudes with marine climate (data from Ricklefs, 1997). Two factors, the average temperature influencing birds during their annual movements, and climatic seasonality (temperature difference between summer and winter) in the breeding area, fit the latitudinal pattern of survival most closely (R2 = 0.90). Final survival of migrants reflects an adaptive life-history compromise between the use of superabundant resources in the breeding area and the avoidance of severe winter conditions.
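A minimal sketch of the capture-history probability described above (the product of binomial terms phi, p and their complements), restricted for simplicity to the detections up to and including the last capture; the function and variable names are ours, and full likelihoods in MARK/POPAN also include the probability of never being seen after the last capture.

```python
# Sketch of a CJS-style probability of a capture history, conditional on
# first release, for the detections up to and including the last capture.
def history_probability(history, phi, p):
    """history: list of 0/1 per occasion, starting with the first capture (1).
    phi[t]: survival from occasion t to t+1; p[t]: recapture prob. at t+1."""
    last = max(i for i, seen in enumerate(history) if seen)
    prob = 1.0
    for t in range(last):                 # intervals up to the last sighting
        prob *= phi[t]                    # survived the interval
        prob *= p[t] if history[t + 1] else (1.0 - p[t])
    return prob

# Example: history 1 0 1 1 with constant phi = 0.6 and p = 0.5
print(history_probability([1, 0, 1, 1], phi=[0.6] * 3, p=[0.5] * 3))
```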
Hippocampal Replay Captures the Unique Topological Structure of a Novel Environment
Wu, Xiaojing
2014-01-01
Hippocampal place-cell replay has been proposed as a fundamental mechanism of learning and memory, which might support navigational learning and planning. An important hypothesis of relevance to these proposed functions is that the information encoded in replay should reflect the topological structure of experienced environments; that is, which places in the environment are connected with which others. Here we report several attributes of replay observed in rats exploring a novel forked environment that support the hypothesis. First, we observed that overlapping replays depicting divergent trajectories through the fork recruited the same population of cells with the same firing rates to represent the common portion of the trajectories. Second, replay tended to be directional and to flip the represented direction at the fork. Third, replay-associated sharp-wave–ripple events in the local field potential exhibited substructure that mapped onto the maze topology. Thus, the spatial complexity of our recording environment was accurately captured by replay: the underlying neuronal activities reflected the bifurcating shape, and both directionality and associated ripple structure reflected the segmentation of the maze. Finally, we observed that replays occurred rapidly after small numbers of experiences. Our results suggest that hippocampal replay captures learned information about environmental topology to support a role in navigation. PMID:24806672
2015-10-01
a promising target for precision therapy, but the mechanisms leading to hypermutation, optimal methods to measure hypermutation status in the ...1 was largely completed in Year 1 and is summarized below. We published a manuscript in Nature Communications based on the work accomplished in Aim...multiplexing 24 samples per lane on a HiSeq2500. The BROCA assay uses the Agilent SureSelect enrichment system to capture the coding exons and
Precision disablement aiming system
Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott
2016-02-16
A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.
Analysis of precision and accuracy in a simple model of machine learning
NASA Astrophysics Data System (ADS)
Lee, Julian
2017-12-01
Machine learning is a procedure whereby a model of the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
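A minimal sketch of the setting described above, fitting polynomials of increasing degree to noisy training data and checking prediction error on held-out points; this is illustrative only and not the paper's exact model or data.

```python
# Polynomial regression as a toy model of learning: training error keeps
# falling with model complexity, while test (prediction) error eventually
# rises as the fit captures noise instead of the underlying function.
import numpy as np

rng = np.random.default_rng(42)
true_f = lambda x: np.sin(2 * np.pi * x)
x_train = np.sort(rng.uniform(0, 1, 15)); y_train = true_f(x_train) + rng.normal(0, 0.2, 15)
x_test = np.linspace(0, 1, 200);          y_test = true_f(x_test) + rng.normal(0, 0.2, 200)

for degree in (1, 3, 9, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_err = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree:2d}: train RMSE = {train_err:.3f}, test RMSE = {test_err:.3f}")
```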
Empathy and universal values explicated by the empathy-altruism hypothesis.
Persson, Björn N; Kajonius, Petri J
2016-01-01
Research reports that empathy is on the decline in present-day society, together with an increasing trend in self-enhancing values. Based on the empathy-altruism hypothesis, we investigated whether these constructs are interlinked by analyzing the relationships between emotional and cognitive empathy and 10 universal values. In the first study, using a middle-aged U.S. sample, the results showed that empathy was strongly and positively related to altruistic values and negatively to self-enhancing values in a pattern that aligned with the empathy-altruism hypothesis. In a second confirmation study, these findings were replicated and extended, while also controlling for the Big Five personality traits, to discount that empathy is only captured by basic personality. Only emotional empathy, not cognitive empathy, accounted for up to 18% additional variance in altruistic values, which further confirmed the emphasis on feelings, as postulated by the empathy-altruism hypothesis.
Attack of the Killer Fungus: A Hypothesis-Driven Lab Module †
Sato, Brian K.
2013-01-01
Discovery-driven experiments in undergraduate laboratory courses have been shown to increase student learning and critical thinking abilities. To this end, a lab module involving worm capture by a nematophagous fungus was developed. The goals of this module are to enhance scientific understanding of the regulation of worm capture by soil-dwelling fungi and for students to attain a set of established learning goals, including the ability to develop a testable hypothesis and search for primary literature for data analysis, among others. Students in a ten-week majors lab course completed the lab module and generated novel data as well as data that agrees with the published literature. In addition, learning gains were achieved as seen through a pre-module and post-module test, student self-assessment, class exam, and lab report. Overall, this lab module enables students to become active participants in the scientific method while contributing to the understanding of an ecologically relevant model organism. PMID:24358387
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the task of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision is proposed and verified.
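The depth-from-disparity step referred to above usually follows the standard stereo relation, with the multiple-baseline extension combining matching costs over several lens pairs; the formula below is the textbook form with our notation, not symbols taken from the paper.

```latex
% Standard depth-from-disparity relation used in (multi-)baseline matching
% between elemental images; symbols are illustrative notation.
\begin{equation*}
  Z \;=\; \frac{f\,B}{d},
\end{equation*}
% where Z is the object depth, f the (micro-)lens focal length, B the baseline
% between two elemental apertures, and d the measured disparity. Summing the
% SSD matching costs over several baselines B_k improves the precision of the
% recovered d (and hence Z).
```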
Multianode Photomultiplier Tube Alignment for the MINERvA Experiment at Fermilab
NASA Astrophysics Data System (ADS)
Bruno, Jorge
2006-10-01
The MINERvA experiment (Main INjector ExpeRiment vA) at FNAL will study neutrino-nucleon and neutrino-nucleus interactions. Light collection from the detector will be done via optical fibers using Hamamatsu H8804 64-channel photomultiplier tubes (PMTs). Each PMT channel needs to be precisely aligned with the corresponding optical fiber. The MINERvA PMT optical boxes contain precision-machined optical "cookies" which capture the 8x8 array of fibers. Each PMT-cookie pair needs to be aligned as precisely as possible. This contribution will describe the alignment setup and procedure implemented at James Madison University.
Seq-Well: portable, low-cost RNA sequencing of single cells at high throughput.
Gierahn, Todd M; Wadsworth, Marc H; Hughes, Travis K; Bryson, Bryan D; Butler, Andrew; Satija, Rahul; Fortune, Sarah; Love, J Christopher; Shalek, Alex K
2017-04-01
Single-cell RNA-seq can precisely resolve cellular states, but applying this method to low-input samples is challenging. Here, we present Seq-Well, a portable, low-cost platform for massively parallel single-cell RNA-seq. Barcoded mRNA capture beads and single cells are sealed in an array of subnanoliter wells using a semipermeable membrane, enabling efficient cell lysis and transcript capture. We use Seq-Well to profile thousands of primary human macrophages exposed to Mycobacterium tuberculosis.
NASA Astrophysics Data System (ADS)
Bracciali, Laura; Najman, Yani; Parrish, Randy; Millar, Ian; Akhter, Syed
2014-05-01
It has been proposed that the rapid exhumation and anomalously young metamorphism of the Namche Barwa eastern Himalayan syntaxis in the Plio-Pleistocene resulted from river capture of the Yarlung Tsangpo by the Brahmaputra (the "tectonic aneurysm" model; e.g. Zeitler et al., GSA Today, 2001). In order to test this hypothesis, the occurrence of river capture, and its timing, must be ascertained. Today, the Yarlung Tsangpo flows east along the Indus-Yarlung suture before taking a 180° turn at the eastern Himalayan syntaxis to flow south across the Himalaya as the Brahmaputra. Whether this river pattern results from river capture, or whether the river is antecedent to orogenesis, is much debated, yet robust constraints on the occurrence of the proposed river capture and an independent time-frame for such an event are lacking. The Yarlung Tsangpo drains the Jurassic-Paleogene Trans-Himalayan arc of the Asian plate north of the suture and the Tethyan Himalaya of the Indian plate to the south of the suture, while the Brahmaputra prior to any capture would have drained the southern Himalayan slopes composed only of Precambrian-Palaeozoic Indian crust, much of which metamorphosed to high grade during the Oligo-Miocene. Hence, the first occurrence of Trans-Himalayan arc detritus, which is distinctive of the Yarlung Tsangpo, in the Neogene palaeo-Brahmaputra deposits in the Bengal Basin, Bangladesh, is key to dating the river capture. We have applied a multi-disciplinary provenance study to these sediments and identify the earliest occurrence of detritus from the arc in the Early Miocene. Dating the time of river capture has implications both for the timing of uplift of Tibet and for models of tectonic-erosion interactions. First, whilst some workers propose an early uplift of the plateau, others propose a later, independent uplift event, at least for the east of the plateau, caused by an additional mechanism. This late uplift event has been invoked by previous workers as the cause of the river capture of the Yarlung Tsangpo by the Brahmaputra due to effective lowering of base level. If this cause-and-effect correlation is correct, this uplift event must have occurred prior to the Early Miocene. Second, these data allow us to explore the proposed interaction between the Namche Barwa syntaxial evolution and the timing of river capture. Given that we have now dated this river capture at ~18 Ma, the modelled coupling between capture and the onset of rapid exhumation (dated at Plio-Pleistocene) would need to accommodate a lag time of ~8 Ma for this hypothesis to hold true.
Oculomotor Capture by New and Unannounced Color Singletons during Visual Search.
Retell, James D; Venini, Dustin; Becker, Stefanie I
2015-07-01
The surprise capture hypothesis states that a stimulus will capture attention to the extent that it is preattentively available and deviates from task expectancies. Interestingly, it has been noted by Horstmann (Psychological Science, 13, 499-505, doi: 10.1111/1467-9280.00488, 2002; Human Perception and Performance, 31, 1039-1060, doi: 10.1037/0096-1523.31.5.1039, 2005; Psychological Research, 70, 13-25, 2006) that the time course of capture by such classes of stimuli appears distinct from that of capture by expected stimuli. Specifically, attention shifts to an unexpected stimulus are delayed relative to an expected stimulus (delayed-onset account). Across two experiments, we investigated this claim under conditions of unguided (Exp. 1) and guided (Exp. 2) search, using eye movements as the primary index of attentional selection. In both experiments, we found strong evidence of surprise capture for the first presentation of an unannounced color singleton. However, in both experiments the pattern of eye movements was not consistent with a delayed-onset account of attention capture. Rather, we observed costs associated with the unexpected stimulus only once the target had been selected. We propose an interference account of surprise capture to explain our data and argue that this account can also explain existing patterns of data in the literature.
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fun with Flukes: The Use of ICT in the Study of Larval Trematode Behaviour.
ERIC Educational Resources Information Center
Rea, J. G.; Irwin, S. W. B.
2001-01-01
Recommends a number of investigations using video-capture and two readily available, non-pathogenic larval Digeneans with contrasting life cycles and behavior. The activities support the hypothesis that parasites exhibit behavior that increases their chances of a host infection. (DDR)
Does interference competition with wolves limit the distribution and abundance of coyotes?
Berger, Kim Murray; Gese, Eric M
2007-11-01
Interference competition with wolves (Canis lupus) is hypothesized to limit the distribution and abundance of coyotes (Canis latrans), and the extirpation of wolves is often invoked to explain the expansion in coyote range throughout much of North America. We used spatial, seasonal and temporal heterogeneity in wolf distribution and abundance to test the hypothesis that interference competition with wolves limits the distribution and abundance of coyotes. From August 2001 to August 2004, we gathered data on cause-specific mortality and survival rates of coyotes captured at wolf-free and wolf-abundant sites in Grand Teton National Park (GTNP), Wyoming, USA, to determine whether mortality due to wolves is sufficient to reduce coyote densities. We examined whether spatial segregation limits the local distribution of coyotes by evaluating home-range overlap between resident coyotes and wolves, and by contrasting dispersal rates of transient coyotes captured in wolf-free and wolf-abundant areas. Finally, we analysed data on population densities of both species at three study areas across the Greater Yellowstone Ecosystem (GYE) to determine whether an inverse relationship exists between coyote and wolf densities. Although coyotes were the numerically dominant predator, across the GYE, densities varied spatially and temporally in accordance with wolf abundance. Mean coyote densities were 33% lower at wolf-abundant sites in GTNP, and densities declined 39% in Yellowstone National Park following wolf reintroduction. A strong negative relationship between coyote and wolf densities (β = -3.988, P < 0.005, r² = 0.54, n = 16), both within and across study sites, supports the hypothesis that competition with wolves limits coyote populations. Overall mortality of coyotes resulting from wolf predation was low, but wolves were responsible for 56% of transient coyote deaths (n = 5). In addition, dispersal rates of transient coyotes captured at wolf-abundant sites were 117% higher than for transients captured in wolf-free areas. Our results support the hypothesis that coyote abundance is limited by competition with wolves, and suggest that differential effects on survival and dispersal rates of transient coyotes are important mechanisms by which wolves reduce coyote densities.
Capture Cross-section Measurement of 241Am(n,γ) at J-PARC/MLF/ANNRI
NASA Astrophysics Data System (ADS)
Harada, H.; Ohta, M.; Kimura, A.; Furutaka, K.; Hirose, K.; Hara, K. Y.; Kin, T.; Kitatani, F.; Koizumi, M.; Nakamura, S.; Oshima, M.; Toh, Y.; Igashira, M.; Katabuchi, T.; Mizumoto, M.; Kino, K.; Kiyanagi, Y.; Fujii, T.; Fukutani, S.; Hori, J.; Takamiya, K.
2014-05-01
The 241Am(n,γ)242Am cross sections have been measured for neutron energies between 0.01 and 10 eV using the Accurate Neutron-Nucleus Reaction measurement Instrument (ANNRI) installed at the Materials and Life-science experimental Facility (MLF) in J-PARC. ANNRI combines the strongest pulsed neutron beam and a high-energy-resolution γ-ray spectrometer, making accurate measurements of neutron-capture cross sections possible for highly radioactive samples. From the measured cross section, the Westcott neutron-capture factor and the strengths of the first three resonances in 241Am are deduced. These results, with uncertainties of less than 0.5%, are compared with those derived from JENDL-4.0.
Improving inferences from fisheries capture-recapture studies through remote detection of PIT tags
Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Shively, Rip S.
2010-01-01
Models for capture-recapture data are commonly used in analyses of the dynamics of fish and wildlife populations, especially for estimating vital parameters such as survival. Capture-recapture methods provide more reliable inferences than other methods commonly used in fisheries studies. However, for rare or elusive fish species, parameter estimation is often hampered by small probabilities of re-encountering tagged fish when encounters are obtained through traditional sampling methods. We present a case study that demonstrates how remote antennas for passive integrated transponder (PIT) tags can increase encounter probabilities and the precision of survival estimates from capture-recapture models. Between 1999 and 2007, trammel nets were used to capture and tag over 8,400 endangered adult Lost River suckers (Deltistes luxatus) during the spawning season in Upper Klamath Lake, Oregon. Despite intensive sampling at relatively discrete spawning areas, encounter probabilities from Cormack-Jolly-Seber models were consistently low (< 0.2) and the precision of apparent annual survival estimates was poor. Beginning in 2005, remote PIT tag antennas were deployed at known spawning locations to increase the probability of re-encountering tagged fish. We compare results based only on physical recaptures with results based on both physical recaptures and remote detections to demonstrate the substantial improvement in estimates of encounter probabilities (approaching 100%) and apparent annual survival provided by the remote detections. The richer encounter histories provided robust inferences about the dynamics of annual survival and have made it possible to explore more realistic models and hypotheses about factors affecting the conservation and recovery of this endangered species. Recent advances in technology related to PIT tags have paved the way for creative implementation of large-scale tagging studies in systems where they were previously considered impracticable.
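The precision gain reported here can be conveyed with a toy simulation. The sketch below is not the study's Cormack-Jolly-Seber analysis; it simply shows, with made-up parameter values, how raising the re-encounter probability from roughly 0.2 to near 1.0 shrinks the spread of a simple survival estimate.

```python
# Toy simulation (not the study's CJS analysis): how a higher re-encounter
# probability tightens the precision of a survival estimate. Parameter values
# are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def simulate_phi_hat(n_tagged=500, phi=0.8, p=0.2, n_reps=2000):
    """Return Monte Carlo survival estimates, assuming detection probability p is known."""
    alive = rng.binomial(n_tagged, phi, size=n_reps)   # fish surviving the year
    detected = rng.binomial(alive, p)                  # survivors re-encountered
    return detected / (n_tagged * p)                   # simple moment estimator

for p in (0.2, 0.95):                                  # nets only vs. nets + PIT antennas
    est = simulate_phi_hat(p=p)
    print(f"p = {p:.2f}: mean phi-hat = {est.mean():.3f}, SE = {est.std(ddof=1):.3f}")
```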
Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.
2000-01-01
Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (± SE). Survival rates of banded and radio-marked individuals were not different (alpha-hat(S_radioed, S_banded) = log(S-hat_radioed / S-hat_banded) = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi-hat = 0.911 ± 0.020; alpha-hat(psi11, psi22) = 0.0161 ± 0.047), and recapture rates (p-hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha-hat(p_radioed, p_banded) = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] < 2%) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide adequate data for useful inference from this model, study designs should seek a minimum of 25 animals of each marking type observed (marked or observed via telemetry) in each time period and geographic stratum.
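Read as log-ratio contrasts, the quantities reported above can be restated compactly; the expressions below assume that each alpha-hat is the log of the ratio of the paired estimates, which is how the abstract defines the survival contrast.

```latex
% Restatement of the contrasts quoted above, under the stated log-ratio reading.
\[
  \hat{\alpha}\left(S_{\text{radioed}}, S_{\text{banded}}\right)
    = \log\frac{\hat{S}_{\text{radioed}}}{\hat{S}_{\text{banded}}}
    = 0.0239 \pm 0.0435,
  \qquad
  \hat{\alpha}\left(p_{\text{radioed}}, p_{\text{banded}}\right)
    = \log\frac{\hat{p}_{\text{radioed}}}{\hat{p}_{\text{banded}}}
    = 0.145 \pm 0.655 .
\]
```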
Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C
2018-03-01
Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder internal rotation, and elbow flexion. However, digital photography was only more precise than goniometry for measurements of elbow flexion. Overall digital photography shows equivalent accuracy to visual estimation and goniometry, but with higher precision than visual estimation. Copyright © 2017. Published by Elsevier B.V.
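The study's two outcome definitions translate directly into a short computation. The sketch below uses made-up goniometer readings against a single hypothetical motion-capture reference; the 10° threshold mirrors the authors' definition of clinical significance.

```python
# Minimal sketch (illustrative data, not the study's measurements) of the two
# outcome definitions used above: accuracy = difference from the motion-capture
# reference, precision = proportion of measurements within the clinical threshold.
import numpy as np

def accuracy_and_precision(measured, reference, threshold_deg=10.0):
    """Return (mean absolute error in degrees, fraction within threshold)."""
    error = np.abs(np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float))
    return error.mean(), np.mean(error <= threshold_deg)

# Hypothetical shoulder-abduction readings against a motion-capture value of 150 deg.
acc, prec = accuracy_and_precision([148, 155, 142, 151, 160], [150] * 5)
print(f"mean error = {acc:.1f} deg, precision = {prec:.0%}")
```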
USDA-ARS?s Scientific Manuscript database
Amaryllidaceae tribe Hippeastreae constitute a horticulturally valuable group of approximately 180 species of American petaloid monocots, characterized by dysploidy and polyploidy. A recent hypothesis based on ITS and chloroplast sequence data states that Hippeastreae experienced ...
Does oculomotor readiness mediate exogenous capture of visual attention?
MacLean, Gregory H; Klein, Raymond M; Hilchey, Matthew D
2015-10-01
The oculomotor readiness hypothesis makes 2 predictions: Shifts in covert attention are accompanied by preparedness to move one's eyes to the attended region, and preparedness to move one's eyes to a region in space is accompanied by a shift in covert attention to the prepared location. Both predictions have been disconfirmed using an endogenous attention task. In the 2 experiments presented here, the same 2 predictions were tested using an exogenous attention task. It was found that participants experienced covert capture without accompanying oculomotor activation and experienced oculomotor activation without accompanying covert capture. While under everyday conditions the overt and covert orienting systems may be strongly linked, apparently they can nonetheless operate with a high degree of independence from one another. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
On the Value-Dependence of Value-Driven Attentional Capture
Anderson, Brian A.; Halpern, Madeline
2017-01-01
Findings from an increasingly large number of studies have been used to argue that attentional capture can be dependent on the learned value of a stimulus, or value-driven. However, under certain circumstances attention can be biased to select stimuli that previously served as targets, independent of reward history. Value-driven attentional capture, as studied using the training phase-test phase design introduced by Anderson and colleagues, is widely presumed to reflect the combined influence of learned value and selection history. However, the degree to which attentional capture is at all dependent on value learning in this paradigm has recently been questioned. Support for value-dependence can be provided through one of two means: (1) greater attentional capture by prior targets following rewarded training than following unrewarded training, and (2) greater attentional capture by prior targets previously associated with high compared to low value. Using a variant of the original value-driven attentional capture paradigm, Sha and Jiang (2016) failed to find evidence of either, and raised criticisms regarding the adequacy of evidence provided by prior studies using this particular paradigm. To address this disparity, here we provided a stringent test of the value-dependence hypothesis using the traditional value-driven attentional capture paradigm. With a sufficiently large sample size, value-dependence was observed based on both criteria, with no evidence of attentional capture without rewards during training. Our findings support the validity of the traditional value-driven attentional capture paradigm in measuring what its name purports to measure. PMID:28176215
Using Big Data Analytics to Advance Precision Radiation Oncology.
McNutt, Todd R; Benedict, Stanley H; Low, Daniel A; Moore, Kevin; Shpitser, Ilya; Jiang, Wei; Lakshminarayanan, Pranav; Cheng, Zhi; Han, Peijin; Hui, Xuan; Nakatsugawa, Minoru; Lee, Junghoon; Moore, Joseph A; Robertson, Scott P; Shah, Veeraj; Taylor, Russ; Quon, Harry; Wong, John; DeWeese, Theodore
2018-06-01
Big clinical data analytics as a primary component of precision medicine is discussed, identifying where these emerging tools fit in the spectrum of genomics and radiomics research. A learning health system (LHS) is conceptualized that uses clinically acquired data with machine learning to advance the initiatives of precision medicine. The LHS is comprehensive and can be used for clinical decision support, discovery, and hypothesis derivation. These developing uses can positively impact the ultimate management and therapeutic course for patients. The conceptual model for each use of clinical data, however, is different, and an overview of the implications is discussed. With advancements in technologies and culture to improve the efficiency, accuracy, and breadth of measurements of the patient condition, the concept of an LHS may be realized in precision radiation therapy. Copyright © 2018 Elsevier Inc. All rights reserved.
Precision measurement of the 238Pu(n,γ) cross section
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chyzh, A.; Wu, C. Y.; Kwan, E.
2013-10-14
Here, the neutron-capture cross section for 238Pu was measured by using the Detector for Advanced Neutron Capture Experiments (DANCE) array, which is a highly segmented and highly efficient 4π γ-ray calorimeter. The neutron-capture events were recognized by the total γ-ray energy deposited in DANCE, which is equal to the reaction Q value plus the incident neutron energy. The absolute neutron-capture cross section was derived as a function of incident neutron energy from thermal to about 30 keV. The measurement of the cross section for incident neutron energies below 18 eV was performed for the first time using the direct method and does not support the most recently adopted changes in ENDF/B-VII.1, where the neutron-capture cross section was lowered by as much as a factor of ~3 in the neighborhood of 0.3 eV from the values evaluated in ENDF/B-VII.0.
Current and Future Research at DANCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jandel, M.; Baramsai, B.; Bredeweg, T. A.
2015-05-28
An overview of the current experimental program on measurements of neutron capture and neutron-induced fission at the Detector for Advanced Neutron Capture Experiments (DANCE) is presented. Three major projects are currently under way: 1) high-precision measurements of neutron-capture cross sections on uranium isotopes, 2) research aimed at studies of short-lived actinide isomer production in neutron capture on 235U, and 3) measurements of correlated data of fission observables. New projects include the development of auxiliary detectors to improve the capability of DANCE. We are building a compact, segmented NEUtron detector Array at DANCE (NEUANCE), which will be installed in the central cavity of the DANCE array. It will thus provide experimental information on prompt fission neutrons in coincidence with the prompt fission gamma rays measured by the 160 BaF2 crystals of DANCE. Additionally, unique correlated data will be obtained for neutron capture and neutron-induced fission using the DANCE-NEUANCE experimental setup in the future.
Rahman, Nafisur; Kashif, Mohammad
2010-03-01
Point and interval hypothesis tests performed to validate two simple and economical kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of chelate complexes of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of the coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of lansoprazole to Fe(III) and Zn(II) in the complexes were found to be 1:1 and 2:1, respectively. The initial-rate and fixed-time methods are adopted for determination of drug concentrations. The calibration graphs are linear in the ranges 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedures for the analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.
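For readers unfamiliar with the two kinetic read-outs, the sketch below shows, on synthetic data, how an initial-rate signal (slope of absorbance versus time at the start of the reaction) and a fixed-time signal (absorbance at a preset time) would be extracted; the curve and constants are invented for illustration.

```python
# Illustrative sketch of the two kinetic read-outs described above. The data are
# synthetic; the window length and fixed time are arbitrary choices.
import numpy as np

def initial_rate(t_s, absorbance, window=5):
    """Least-squares slope over the first `window` points (absorbance units per second)."""
    return np.polyfit(t_s[:window], absorbance[:window], 1)[0]

def fixed_time_signal(t_s, absorbance, t_fixed=60.0):
    """Absorbance interpolated at the preset measurement time."""
    return np.interp(t_fixed, t_s, absorbance)

t = np.linspace(0, 120, 25)
a = 0.004 * t / (1 + 0.01 * t)            # hypothetical colour-development curve
print(initial_rate(t, a), fixed_time_signal(t, a))
```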
Pathogenesis of Rushton bodies: A novel hypothesis.
Sarode, Gargi S; Sarode, Sachin C; Tupkari, Jagdish V; Deshmukh, Revati; Patil, Shankargouda
2016-08-01
Rushton bodies (RBs) are one of the characteristic features seen in the epithelial lining of odontogenic cysts, mainly radicular cysts, dentigerous cysts and odontogenic keratocysts. They have two different histo-morphological appearances: granular and homogeneous. Although widely investigated, the exact pathogenesis and histogenesis of RBs is still an enigma. Many hypotheses have been proposed in the literature, but none convincingly explains the two histo-morphological appearances of RBs or their association with inflammation. In the present paper, the various mechanisms proposed to date for the formation of RBs are discussed, along with a novel hypothesis. The proposed hypothesis relates mainly to inflammation and its effect on the pore size of the basement membrane of the odontogenic cystic epithelium. It explains the association of RBs with inflammation as well as the existence of the two histo-morphological appearances. It also accounts for the presence of RBs inside the lining epithelium of odontogenic cysts despite their hematogenous origin. Future studies are advocated for isolating RBs using laser capture microdissection and subsequent biochemical, histochemical and electron microscopic analysis to substantiate the proposed hypothesis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ellison, L.E.; O'Shea, T.J.; Neubaum, D.J.; Neubaum, M.A.; Pearce, R.D.; Bowen, R.A.
2007-01-01
We compared conventional capture (primarily mist nets and harp traps) and passive integrated transponder (PIT) tagging techniques for estimating capture and survival probabilities of big brown bats (Eptesicus fuscus) roosting in buildings in Fort Collins, Colorado. A total of 987 female adult and juvenile bats were captured and marked by subdermal injection of PIT tags during the summers of 2001-2005 at five maternity colonies in buildings. Openings to roosts were equipped with PIT hoop-style readers, and exit and entry of bats were passively monitored on a daily basis throughout the summers of 2002-2005. PIT readers 'recaptured' adult and juvenile females more often than conventional capture events at each roost. Estimates of annual capture probabilities for all five colonies were on average twice as high when estimated from PIT reader data (p̂ = 0.93-1.00) than when derived from conventional techniques (p̂ = 0.26-0.66), and as a consequence annual survival estimates were more precisely estimated when using PIT reader encounters. Short-term, daily capture estimates were also higher using PIT readers than conventional captures. We discuss the advantages and limitations of using PIT tags and passive encounters with hoop readers vs. conventional capture techniques for estimating these vital parameters in big brown bats. © Museum and Institute of Zoology PAS.
Measuring and correcting wobble in large-scale transmission radiography.
Rogers, Thomas W; Ollier, James; Morton, Edward J; Griffin, Lewis D
2017-01-01
Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale (∼5 m tall) of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. We consider the measurement of wobble and amelioration of the consequent loss of image precision. Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest-based method for instantaneous estimation of detector wobble; and (v) using these estimates to apply corrections to captured images of difficult scenes. We show that these methods are able to correct for 87% of the image error due to wobble, and when applied to difficult images, a significant visible improvement in the intensity-windowed image quality is observed. The method improves the precision of wobble-affected images, which should help improve detection of threats and the identification of different materials in the image.
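The instantaneous wobble estimation lends itself to a small illustration. The sketch below is a hedged stand-in for the authors' method: it trains a scikit-learn random regression forest on synthetic BPD beam-profile readings with a known wobble signal and reports the residual error; the feature construction and training data are assumptions, not the paper's pipeline.

```python
# Hedged sketch: map BPD beam-profile readings to an instantaneous wobble estimate
# with a regression forest, then the predicted offset could be subtracted per column.
# All data below are synthetic and the model is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training set: one BPD profile per scan column, plus the known wobble (pixels).
n_cols, n_sensors = 2000, 16
true_wobble = np.cumsum(rng.normal(0, 0.05, n_cols))           # slow mechanical sway
bpd_profiles = rng.normal(0, 0.1, (n_cols, n_sensors)) + true_wobble[:, None]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(bpd_profiles, true_wobble)

estimated_wobble = model.predict(bpd_profiles)                 # per-column estimate
print("residual RMS (pixels):", np.sqrt(np.mean((estimated_wobble - true_wobble) ** 2)))
```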
Wirth, Benedikt Emanuel; Wentura, Dirk
2018-04-01
Dot-probe studies usually find an attentional bias towards threatening stimuli only in anxious participants. Here, we investigated under what conditions such a bias occurs in unselected samples. According to contingent-capture theory, an irrelevant cue only captures attention if it matches an attentional control setting. Therefore, we first tested the hypothesis that an attentional control setting tuned to threat must be activated in (non-anxious) individuals. In Experiment 1, we used a dot-probe task with a manipulation of attentional control settings ('threat' set vs. control set). Surprisingly, we found an (anxiety-independent) attentional bias to angry faces that was not moderated by attentional control settings. Since we presented two stimuli (i.e., a target and a distractor) on the target screen in Experiment 1 (a necessity to realise the test of contingent capture), but most dot-probe studies employ only a single target, we conducted Experiment 2 to test the hypothesis that attentional bias in the general population is contingent on target competition. Participants performed a dot-probe task involving presentation of a stand-alone target or a target competing with a distractor. We found an (anxiety-independent) attentional bias towards angry faces in the latter but not the former condition. This suggests that attentional bias towards angry faces in unselected samples is not contingent on attentional control settings but on target competition.
Thege, Fredrik I; Lannin, Timothy B; Saha, Trisha N; Tsai, Shannon; Kochman, Michael L; Hollingsworth, Michael A; Rhim, Andrew D; Kirby, Brian J
2014-05-21
We have developed and optimized a microfluidic device platform for the capture and analysis of circulating pancreatic cells (CPCs) and pancreatic circulating tumor cells (CTCs). Our platform uses parallel anti-EpCAM and cancer-specific mucin 1 (MUC1) immunocapture in a silicon microdevice. Using a combination of anti-EpCAM and anti-MUC1 capture in a single device, we are able to achieve efficient capture while extending immunocapture beyond single marker recognition. We also have detected a known oncogenic KRAS mutation in cells spiked in whole blood using immunocapture, RNA extraction, RT-PCR and Sanger sequencing. To allow for downstream single-cell genetic analysis, intact nuclei were released from captured cells by using targeted membrane lysis. We have developed a staining protocol for clinical samples, including standard CTC markers; DAPI, cytokeratin (CK) and CD45, and a novel marker of carcinogenesis in CPCs, mucin 4 (MUC4). We have also demonstrated a semi-automated approach to image analysis and CPC identification, suitable for clinical hypothesis generation. Initial results from immunocapture of a clinical pancreatic cancer patient sample show that parallel capture may capture more of the heterogeneity of the CPC population. With this platform, we aim to develop a diagnostic biomarker for early pancreatic carcinogenesis and patient risk stratification.
Nannini, M A; Wahl, D H; Philipp, D P; Cooke, S J
2011-10-01
Several traits related to foraging behaviour were assessed in young-of-the-year produced from largemouth bass Micropterus salmoides that had been exposed to four generations of artificial selection for vulnerability to angling. As recreational angling may target foraging ability, this study tested the hypothesis that selection for vulnerability to angling would affect behaviours associated with foraging ecology and prey capture success. Fish selected for low vulnerability to angling captured more prey and attempted more captures than high vulnerability fish. The higher capture attempts, however, ultimately resulted in a lower capture success for low vulnerability fish. Low vulnerability fish also had higher prey rejection rates, marginally shorter reactive distance and were more efficient at converting prey consumed into growth than their high vulnerability counterparts. Selection due to recreational fishing has the potential to affect many aspects of the foraging ecology of the targeted population and highlights the importance of understanding evolutionary effects and how these need to be considered when managing populations. © 2011 The Authors. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
High-Precision 40Ar/39Ar dating of the Deccan Traps
NASA Astrophysics Data System (ADS)
Sprain, C. J.; Renne, P. R.; Fendley, I.; Pande, K.; Self, S.; Vanderkluysen, L.; Richards, M. A.
2017-12-01
Almost forty years ago it was first hypothesized that greenhouse gases emitted from the Deccan Traps (DT) played a role in the Cretaceous-Paleogene boundary (KPB) mass extinction (McLean 1979, 1980, 1985). At that time, this hypothesis was dismissed due to insufficient geochronology and new evidence that a bolide impact coincided with the KPB. Since then, evidence such as records of protracted extinction and climate change in the Late Cretaceous, in addition to new high-precision geochronology of the DT, has bolstered the Deccan hypothesis. Recently, many models have been produced to simulate how DT volcanism may have perturbed global ecosystems. However, modeled outcomes are largely dependent upon variables such as the amount and species of gas released and the tempo of eruptions, which are not well constrained (Self et al., 2014). To better constrain climatic models and better understand the role DT volcanism played in the KPB extinction, we developed a high-precision geochronologic framework defining the timing and tempo of DT eruptions within the Western Ghats using high-precision 40Ar/39Ar geochronology. Our new results show that the DT erupted relatively continuously starting 66.4 Ma and extending to at least 65.3 Ma with no hiatuses longer than 50 ka, invalidating the concept of three discrete eruption pulses in the Western Ghats (Chenet et al., 2007, 2009; Keller et al., 2008). Our new data further provide the first precise location of the KPB within the DT sequence and place this boundary at or near the Lonavala-Wai subgroup transition, roughly coincident with major changes in eruption frequency, flow-field volumes, and extent of crustal magma contamination. Taken together, these results suggest that a state shift occurred in the DT magmatic system around the time of the Chicxulub impact, consistent with the impact-triggering hypothesis of Richards et al. (2015). Our work further shows that over 80% of the estimated volume of the DT within the Western Ghats erupted in 600 ka; however, 70% of this volume erupted after the KPB, calling for a reassessment of the role DT volcanism played in the KPB mass extinction and subsequent recovery. It is important to note that current volume estimates are likely to change as we work to improve understanding of the distribution of chemical formations, both on and offshore.
NASA Technical Reports Server (NTRS)
Thomas, S. D.; Holst, T. L.
1985-01-01
A full-potential steady transonic wing flow solver has been modified so that freestream density and residual are captured in regions of constant velocity. This numerically precise freestream consistency is obtained by slightly altering the differencing scheme without affecting the implicit solution algorithm. The changes chiefly affect the fifteen metrics per grid point, which are computed once and stored. With this new method, the outer boundary condition is captured accurately, and the smoothness of the solution is especially improved near regions of grid discontinuity.
Warren, Jeffrey M.; Cutting, Kyle A.; Takekawa, John Y.; De La Cruz, Susan E. W.; Williams, Tony D.; Koons, David N.
2014-01-01
The decision to breed influences an individual's current and future reproduction, and the proportion of individuals that breed is an important determinant of population dynamics. Age, experience, individual quality, and environmental conditions have all been demonstrated to influence breeding propensity. To elucidate which of these factors exerts the greatest influence on breeding propensity in a temperate waterfowl, we studied female Lesser Scaup (Aythya affinis) breeding in southwestern Montana. Females were captured during the breeding seasons of 2007–2009, and breeding status was determined on the basis of (1) presence of an egg in the oviduct or (2) blood plasma vitellogenin (VTG) levels. Presence on the study site in the previous year, a proxy for adult female success, was determined with stable isotope signatures of a primary feather collected at capture. Overall, 57% of females had evidence of breeding at the time of capture; this increased to 86% for females captured on or after peak nest initiation. Capture date and size-adjusted body condition positively influenced breeding propensity, with a declining body-condition threshold through the breeding season. We did not detect an influence of age on breeding propensity. Drought conditions negatively affected breeding propensity, reducing the proportion of breeding females to 0.85 (SE = 0.05) from 0.94 (SE = 0.03) during normal-water years. A female that was present in the previous breeding season was 5% more likely to breed than a female that was not present then. The positive correlation between age and experience makes it difficult to differentiate the roles of age, experience, and individual quality in reproductive success in vertebrates. Our results indicate that individual quality, as expressed by previous success and current body condition, may be among the most important determinants of breeding propensity in female Lesser Scaup, providing further support for the individual heterogeneity hypothesis.
A Unified Framework for Street-View Panorama Stitching
Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei
2016-01-01
In this paper, we propose a unified framework to generate a pleasant and high-quality street-view panorama by stitching multiple panoramic images captured from the cameras mounted on a mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Since the input images are captured without a precisely common projection center, and the scenes contain depth differences with respect to the cameras to different extents, such images cannot be precisely aligned in geometry. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignment. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm via matching extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate the stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
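As a concrete illustration of the final stage of this pipeline, the sketch below implements a generic Laplacian pyramid blend of two pre-warped, colour-corrected images along a seam mask using OpenCV; it is not the authors' implementation, and the number of pyramid levels and the mask convention are arbitrary choices.

```python
# Generic Laplacian pyramid blending of two aligned float32 images along a seam
# mask. Inputs are assumed to be HxWx3 arrays in [0, 1]; the mask must have the
# same shape (replicate a single-channel mask across the three channels first).
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels):
    gauss = [img]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])                      # keep the coarsest Gaussian level
    return lap

def laplacian_blend(img_a, img_b, mask, levels=4):
    lap_a = build_laplacian_pyramid(img_a, levels)
    lap_b = build_laplacian_pyramid(img_b, levels)
    gauss_m = [mask]
    for _ in range(levels):
        gauss_m.append(cv2.pyrDown(gauss_m[-1]))
    blended = None
    for a, b, m in zip(reversed(lap_a), reversed(lap_b), reversed(gauss_m)):
        layer = a * m + b * (1.0 - m)          # mask selects image A where m == 1
        if blended is None:
            blended = layer
        else:
            blended = cv2.pyrUp(blended, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return np.clip(blended, 0.0, 1.0)
```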
Measuring double-electron capture with liquid xenon experiments
NASA Astrophysics Data System (ADS)
Mei, D.-M.; Marshall, I.; Wei, W.-Z.; Zhang, C.
2014-01-01
We investigate the possibilities of observing the decay mode for 124Xe in which two electrons are captured, two neutrinos are emitted, and the final daughter nucleus is in its ground state, using dark matter experiments with liquid xenon. The first upper limit of the decay half-life is calculated to be 1.66 × 10²¹ years at a 90% confidence level (C.L.), obtained with the published background data from the XENON100 experiment. Employing a known background model from the Large Underground Xenon (LUX) experiment, we predict that the detection of double-electron capture of 124Xe to the ground state of 124Te with LUX will yield approximately 115 events, assuming a half-life of 2.9 × 10²¹ years. We conclude that measuring 124Xe 2ν double-electron capture to the ground state of 124Te can be performed more precisely with the proposed LUX-Zeplin (LZ) experiment.
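For orientation only, a half-life limit of this kind usually follows from a simple counting argument. The expression below is a generic sketch, not the paper's analysis: N_atoms is the number of 124Xe atoms in the fiducial mass, t the live time, ε the detection efficiency, and S_90 the 90% C.L. upper limit on the number of signal events compatible with the observed background.

```latex
% Generic counting-experiment bound (a sketch, not the published derivation).
\[
  T_{1/2} \;>\; \frac{\ln 2 \, N_{\text{atoms}} \, \varepsilon \, t}{S_{90}} .
\]
```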
Plant root proliferation in nitrogen-rich patches confers competitive advantage
Robinson, D.; Hodge, A.; Griffiths, B. S.; Fitter, A. H.
1999-01-01
Plants respond strongly to environmental heterogeneity, particularly below ground, where spectacular root proliferations in nutrient-rich patches may occur. Such 'foraging' responses apparently maximize nutrient uptake and are now prominent in plant ecological theory. Proliferations in nitrogen-rich patches are difficult to explain adaptively, however. The high mobility of soil nitrate should limit the contribution of proliferation to N capture. Many experiments on isolated plants show only a weak relation between proliferation and N uptake. We show that N capture is associated strongly with proliferation during interspecific competition for finite, locally available, mixed N sources, precisely the conditions under which N becomes available to plants on generally infertile soils. This explains why N-induced root proliferation is an important resource-capture mechanism in N-limited plant communities and suggests that increasing proliferation by crop breeding or genetic manipulation will have a limited impact on N capture by well-fertilized monocultures.
[Estimation with the capture-recapture method of the number of economic immigrants in Mallorca].
Ramos Monserrat, M; March Cerdá, J C
2002-05-15
To estimate the number of irregular economic immigrants in Mallorca, we used the capture-recapture method, an indirect method based on contrasts of data from two or more sources. Data were obtained from the Delegación de Gobierno (police and immigration authority), Comisiones Obreras (labor union), and institutions that provide health-related services to immigrants. Individuals were identified by birth date and country of origin. The total number of economic immigrants estimated with this method was 39,392. According to the Delegación de Gobierno data, the number of regular immigrants on the date of our inquiry was 9,000. With the capture-recapture method, the number of irregular immigrants in Mallorca was therefore estimated at 30,000. The capture-recapture method can be useful for estimating the population of irregular immigrants in a given area at a given time, if sufficiently precise information on the identity of each individual can be obtained.
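The two-source version of this method reduces to the classic Lincoln-Petersen estimator. The sketch below shows the calculation with invented counts; the figures are not those of the Mallorca study.

```python
# Sketch of the two-source capture-recapture (Lincoln-Petersen) estimator on which
# this kind of study relies; the counts below are made up for illustration.
def lincoln_petersen(n1, n2, m):
    """n1, n2: individuals identified by each source; m: individuals found in both."""
    if m == 0:
        raise ValueError("No overlap between sources; the estimator is undefined.")
    return n1 * n2 / m

# Hypothetical example: 3,000 people in the police/immigration lists, 2,600 in the
# NGO/health-service lists, 200 matched by birth date and country of origin.
print(f"estimated total population: {lincoln_petersen(3000, 2600, 200):,.0f}")
```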
ERIC Educational Resources Information Center
Swan, Denise; Goswami, Usha
1997-01-01
Used picture-naming task to identify accurate/inaccurate phonological representations by dyslexic and control children; compared performance on phonological measures for words with precise/imprecise representations. Found that frequency effects in phonological tasks disappeared after considering representational quality, and that availability of…
NASA Technical Reports Server (NTRS)
Phatak, A. V.
1980-01-01
A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level, or opinion rating, of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach-to-landing task, incorporating appropriate models for the UH-1H aircraft, the environmental disturbances, and the human pilot, was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.
Impact of assay design on test performance: lessons learned from 25-hydroxyvitamin D.
Farrell, Christopher-John L; Soldo, Joshua; McWhinney, Brett; Bandodkar, Sushil; Herrmann, Markus
2014-11-01
Current automated immunoassays vary significantly in many aspects of their design. This study sought to establish if the theoretical advantages and disadvantages associated with different design formats of automated 25-hydroxyvitamin D (25-OHD) assays are translated into variations in assay performance in practice. 25-OHD was measured in 1236 samples using automated assays from Abbott, DiaSorin, Roche and Siemens. A subset of 362 samples had up to three liquid chromatography-tandem mass spectrometry 25-OHD analyses performed. 25-OHD₂ recovery, dilution recovery, human anti-animal antibody (HAAA) interference, 3-epi-25-OHD₃ cross-reactivity and precision of the automated assays were evaluated. The assay that combined release of 25-OHD with analyte capture in a single step showed the most accurate 25-OHD₂ recovery and the best dilution recovery. The use of vitamin D binding protein (DBP) as the capture moiety was associated with 25-OHD₂ under-recovery, a trend consistent with 3-epi-25-OHD₃ cross-reactivity and immunity to HAAA interference. Assays using animal-derived antibodies did not show 3-epi-25-OHD₃ cross-reactivity but were variably susceptible to HAAA interference. Not combining 25-OHD release and capture in one step and use of biotin-streptavidin interaction for solid phase separation were features of the assays with inferior accuracy for diluted samples. The assays that used a backfill assay format showed the best precision at high concentrations but this design did not guarantee precision at low 25-OHD concentrations. Variations in design among automated 25-OHD assays influence their performance characteristics. Consideration of the details of assay design is therefore important when selecting and validating new assays.
Suppression of biodynamic interference in head-tracked teleoperation
NASA Technical Reports Server (NTRS)
Lifshitz, S.; Merhav, S. J.; Grunwald, A. J.; Tucker, G. E.; Tischler, M. B.
1991-01-01
The utility of helmet-tracked sights to provide pointing commands for teleoperation of cameras, lasers, or antennas in aircraft is degraded by the presence of uncommanded, involuntary head motion, referred to as biodynamic interference. This interference limits the precision achievable in pointing tasks. The noise contributions due to biodynamic interference consist of an additive component which is correlated with aircraft vibration and an uncorrelated, nonadditive component, referred to as remnant. An experimental simulation study is described which investigated the improvements achievable in pointing and tracking precision using dynamic display shifting in the helmet-mounted display. The experiment was conducted in a six-degree-of-freedom motion-base simulator with an emulated helmet-mounted display. Highly experienced pilot subjects performed precision head-pointing tasks while manually flying a visual flight-path tracking task. Four schemes using adaptive and low-pass filtering of the head motion were evaluated to determine their effects on task performance and pilot workload in the presence of whole-body vibration characteristic of helicopter flight. The results indicate that, for tracking tasks involving continuously moving targets, improvements of up to 70 percent can be achieved in percent on-target dwelling time and of up to 35 percent in rms tracking error, with the adaptive plus low-pass filter configuration. The results with the same filter configuration for the task of capturing randomly positioned, stationary targets show an increase of up to 340 percent in the number of targets captured and an improvement of up to 24 percent in the average capture time. The adaptive plus low-pass filter combination was considered to exhibit the best overall display dynamics by each of the subjects.
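The low-pass element of such a filtering scheme is simple to sketch. The code below applies a first-order exponential low-pass filter to a synthetic head-azimuth trace containing a slow voluntary component plus a vibration-band remnant; the sample rate, cutoff, and signal are illustrative assumptions, and the study's adaptive filter is more elaborate than this.

```python
# Minimal sketch of the low-pass element of such a scheme: a first-order IIR filter
# that attenuates vibration-band head motion before it drives the pointing command.
import numpy as np

def low_pass(signal, fs_hz, cutoff_hz):
    """First-order exponential low-pass filter of a sampled head-angle signal."""
    dt = 1.0 / fs_hz
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))
    out = np.empty_like(signal, dtype=float)
    out[0] = signal[0]
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

# Hypothetical head-azimuth trace: slow voluntary pointing plus a 4 Hz vibration remnant.
t = np.arange(0, 5, 1 / 60)
head = 10 * np.sin(2 * np.pi * 0.2 * t) + 1.5 * np.sin(2 * np.pi * 4.0 * t)
smoothed = low_pass(head, fs_hz=60, cutoff_hz=1.0)
```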
Spachtholz, Philipp; Kuhbandner, Christof; Pekrun, Reinhard
2014-08-01
Research has shown that negative affect reduces working memory capacity. Commonly, this effect has been attributed to an allocation of resources to task-irrelevant thoughts, suggesting that negative affect has detrimental consequences for working memory performance. However, rather than simply being a detrimental effect, the affect-induced capacity reduction may reflect a trading of capacity for precision of stored representations. To test this hypothesis, we induced neutral or negative affect and concurrently measured the number and precision of representations stored in sensory and working memory. Compared with neutral affect, negative affect reduced the capacity of both sensory and working memory. However, in both memory systems, this decrease in capacity was accompanied by an increase in precision. These findings demonstrate that observers unintentionally trade capacity for precision as a function of affective state and indicate that negative affect can be beneficial for the quality of memories. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Norman G. Hicks; Michael A. Menzel; Joshua Laerm
1998-01-01
We compared inferred activity patterns of two syntopic rodents, Peromyscus leucopus and P. maniculatus, in western North Carolina. Activity patterns were derived from capture-frequency data obtained from Sherman live-traps equipped with digital timers following different trapping protocols. We tested the hypothesis that no...
Synaptic Tagging, Evaluation of Memories, and the Distal Reward Problem
ERIC Educational Resources Information Center
Papper, Marc; Kempter, Richard; Leibold, Christian
2011-01-01
Long-term synaptic plasticity exhibits distinct phases. The synaptic tagging hypothesis suggests an early phase in which synapses are prepared, or "tagged," for protein capture, and a late phase in which those proteins are integrated into the synapses to achieve memory consolidation. The synapse specificity of the tags is consistent with…
Fenestration: a window of opportunity for carnivorous plants.
Schaefer, H Martin; Ruxton, Graeme D
2014-01-01
A long-standing but controversial hypothesis assumes that carnivorous plants employ aggressive mimicry to increase their prey capture success. A possible mechanism is that pitcher plants use aggressive mimicry to deceive prey about the location of the pitcher's exit. Specifically, species from unrelated families sport fenestration, i.e. transparent windows on the upper surfaces of pitchers which might function to mimic the exit of the pitcher. This hypothesis has not been evaluated against alternative hypotheses predicting that fenestration functions to attract insects from afar. By manipulating fenestration, we show that it does not increase the number of Drosophila flies or of two ant species entering pitchers in Sarracenia minor nor their retention time or a pitcher's capture success. However, fenestration increased the number of Drosophila flies alighting on the pitcher compared with pitchers of the same plant without fenestration. We thus suggest that fenestration in S. minor is not an example of aggressive mimicry but rather functions in long-range attraction of prey. We highlight the need to evaluate aggressive mimicry relative to alternative concepts of plant-animal communication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodwin, J. R.; Golovko, V. V.; Iacob, V. E.
2009-10-15
We have measured the half-life of the electron-capture (EC) decay of 97Ru in a metallic environment, both at low temperature (19 K) and at room temperature. We find the half-lives at both temperatures to be the same within 0.1%. This demonstrates that a recent claim that the EC decay half-life of 7Be changes by 0.9% ± 0.2% under similar circumstances certainly cannot be generalized to other EC decays. Our results for the half-life of 97Ru, 2.8370(14) d at room temperature and 2.8382(14) d at 19 K, are consistent with, but much more precise than, previous room-temperature measurements. In addition, we have also measured the half-lives of the β⁻ emitters 103Ru and 105Rh at both temperatures, and found them also to be unchanged.
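The stated agreement can be checked directly from the two quoted values; the short calculation below compares their difference with the combined uncertainty (the numbers are taken from the abstract).

```python
# Consistency check of the two half-lives quoted above: relative difference and
# significance of the difference in units of the combined uncertainty.
import math

t_room, s_room = 2.8370, 0.0014   # days, room temperature
t_cold, s_cold = 2.8382, 0.0014   # days, 19 K

diff = t_cold - t_room
sigma = math.hypot(s_room, s_cold)
print(f"relative difference = {diff / t_room:.2%}, significance = {diff / sigma:.1f} sigma")
```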
The interplay between neuronal activity and actin dynamics mimic the setting of an LTD synaptic tag
Szabó, Eszter C.; Manguinhas, Rita; Fonseca, Rosalina
2016-01-01
Persistent forms of plasticity, such as long-term depression (LTD), are dependent on the interplay between activity-dependent synaptic tags and the capture of plasticity-related proteins. We propose that the synaptic tag represents a structural alteration that turns synapses permissive to change. We found that modulation of actin dynamics has different roles in the induction and maintenance of LTD. Inhibition of either actin depolymerisation or polymerisation blocks LTD induction, whereas only the inhibition of actin depolymerisation blocks LTD maintenance. Interestingly, we found that actin depolymerisation and CaMKII activation are involved in LTD synaptic tagging and capture. Moreover, inhibition of actin polymerisation mimics the setting of a synaptic tag, in an activity-dependent manner, allowing the expression of LTD in non-stimulated synapses. Suspending synaptic activation also restricts the time window of synaptic capture, which can be restored by inhibiting actin polymerisation. Our results support our hypothesis that modulation of the actin cytoskeleton provides an input-specific signal for synaptic protein capture. PMID:27650071
Origin of Martian Moons from Binary Asteroid Dissociation
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.; Lyons, Valerie J. (Technical Monitor)
2001-01-01
The origin of the Martian moons Deimos and Phobos is controversial. A common hypothesis for their origin is that they are captured asteroids, but the moons show no signs of having been heated by passage through a (hypothetical) thick martian atmosphere, and the mechanism by which an asteroid in solar orbit could shed sufficient orbital energy to be captured into Mars orbit has not been previously elucidated. Since the discovery by the space probe Galileo that the asteroid Ida has a moon 'Dactyl', a significant number of asteroids have been discovered to have smaller asteroids in orbit about them. The existence of asteroid moons provides a mechanism for the capture of the Martian moons (and the small moons of the outer planets). When a binary asteroid makes a close approach to a planet, tidal forces can strip the moon from the asteroid. Depending on the phasing, either or both can then be captured. Clearly, the same process can be used to explain the origin of any of the small moons in the solar system.
Understanding Variables & Hypotheses in Scientific Research.
ERIC Educational Resources Information Center
Charters, W. W., Jr.
The hypothesis is the device scientists use to translate questions, theories, or proposed explanations into a form amenable to empirical research. This edition of W. W. Charter's treatise on clear, conceptual definitions and precise operational hypotheses, which was originally developed to assist students in educational policy and management…
A closure test for time-specific capture-recapture data
Stanley, T.R.; Burnham, K.P.
1999-01-01
The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet, little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of closed-population model M(t) against the open-population Jolly-Seber model as a specific alternative. This test is chi-square, and can be decomposed into informative components that can be interpreted to determine the nature of closure violations. The test is most sensitive to permanent emigration and least sensitive to temporary emigration, and is of intermediate sensitivity to permanent or temporary immigration. This test is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.
Hui, Lanlan; Su, Yi; Ye, Tingting; Liu, Zhao; Tian, Qingchang; He, Chuanjiang; Zhao, Yueqi; Chen, Pu; Wang, Xiaojia; Han, Weidong; Luo, Yan; Wang, Ben
2018-01-10
Cancer cells metastasize and are transported in the bloodstream, easily reaching any site in the body through the blood circulation. A method designed to assess the number of circulating tumor cells (CTCs) should be validated as a clinical tool for predicting the response to therapy and monitoring the disease progression in patients with cancer. Although CTCs are detectable in many cases, they remain unavailable for clinical use because of high testing costs, tedious operation, and poor clinical relevance. Herein, we developed a regeneratable microchip for isolating CTCs, which is available for robust cell heterogeneity assays on-site without the need for a sterile environment. The ivy-like hierarchical roughened zinc oxide (ZnO) nanograss interface was synthesized and directly integrated into the microfluidic devices, enabling effective CTC capture and flexible, nontoxic CTC release during incubation in a mildly acidic solution, thus enabling cellular and molecular analyses. The microchip can be regenerated and recycled to capture CTCs with the remaining ZnO without affecting the efficiency, even after repeated cycles of cell release. Moreover, microbial infection is avoided during its storage, distribution, and even open-space usage, which ideally suits the demands of point-of-care (POC) and home testing and meets the requirements for blood examinations in undeveloped or resource-limited settings. Furthermore, the findings generated using this platform, based on a cocktail of anti-epithelial cell adhesion molecule and anti-vimentin antibodies, indicate that CTC capture was more precise and reasonable for patients with advanced cancer.
Validation of adipose lipid content as a body condition index for polar bears
McKinney, Melissa A; Atwood, Todd; Dietz, Rune; Sonne, Christian; Iverson, Sara J; Peacock, Elizabeth
2014-01-01
Body condition is a key indicator of individual and population health. Yet, there is little consensus as to the most appropriate condition index (CI), and most of the currently used CIs have not been thoroughly validated and are logistically challenging. Adipose samples from large datasets of capture biopsied, remote biopsied, and harvested polar bears were used to validate adipose lipid content as a CI via tests of accuracy, precision, sensitivity, biopsy depth, and storage conditions and comparisons to established CIs, to measures of health and to demographic and ecological parameters. The lipid content analyses of even very small biopsy samples were highly accurate and precise, but results were influenced by tissue depth at which the sample was taken. Lipid content of capture biopsies and samples from harvested adult females was correlated with established CIs and/or conformed to expected biological variation and ecological changes. However, lipid content of remote biopsies was lower than capture biopsies and harvested samples, possibly due to lipid loss during dart retrieval. Lipid content CI is a biologically relevant, relatively inexpensive and rapidly assessed CI and can be determined routinely for individuals and populations in order to infer large-scale spatial and long-term temporal trends. As it is possible to collect samples during routine harvesting or remotely using biopsy darts, monitoring and assessment of body condition can be accomplished without capture and handling procedures or noninvasively, which are methods that are preferred by local communities. However, further work is needed to apply the method to remote biopsies. PMID:24634735
Validation of a Spectral Method for Quantitative Measurement of Color in Protein Drug Solutions.
Yin, Jian; Swartz, Trevor E; Zhang, Jian; Patapoff, Thomas W; Chen, Bartolo; Marhoul, Joseph; Shih, Norman; Kabakoff, Bruce; Rahimi, Kimia
2016-01-01
A quantitative spectral method has been developed to precisely measure the color of protein solutions. In this method, a spectrophotometer is utilized for capturing the visible absorption spectrum of a protein solution, which can then be converted to color values (L*a*b*) that represent human perception of color in a quantitative three-dimensional space. These quantitative values (L*a*b*) allow for calculating the best match of a sample's color to a European Pharmacopoeia reference color solution. In order to qualify this instrument and assay for use in clinical quality control, a technical assessment was conducted to evaluate the assay suitability and precision. Setting acceptance criteria for this study required development and implementation of a unique statistical method for assessing precision in 3-dimensional space. Different instruments, cuvettes, protein solutions, and analysts were compared in this study. The instrument accuracy, repeatability, and assay precision were determined. The instrument and assay are found suitable for use in assessing color of drug substances and drug products and are comparable to the current European Pharmacopoeia visual assessment method. In the biotechnology industry, a visual assessment is the most commonly used method for color characterization, batch release, and stability testing of liquid protein drug solutions. Using this method, an analyst visually determines the color of the sample by choosing the closest match to a standard color series. This visual method can be subjective because it requires an analyst to make a judgment of the best color match between the sample and the standard color series, and it does not capture data on hue and chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we developed a quantitative spectral method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. In this study, we established a statistical method for assessing precision in 3-dimensional space and demonstrated that the quantitative spectral method is comparable with respect to precision and accuracy to the current European Pharmacopoeia visual assessment method. © PDA, Inc. 2016.
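The closest-match step described above amounts to a nearest-neighbour search in L*a*b* space. A minimal sketch of that step, assuming the sample and reference L*a*b* values have already been obtained from the spectrophotometer; the distance used here is the common CIE76 color difference, and the reference values below are placeholders, not actual European Pharmacopoeia data:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def closest_reference(sample_lab, references):
    """Return the name of the reference solution nearest the sample in L*a*b* space."""
    return min(references, key=lambda name: delta_e(sample_lab, references[name]))

# Placeholder reference colors (illustrative values only, not Ph. Eur. data).
references = {
    "B5": (92.0, -1.5, 14.0),
    "B6": (95.0, -1.0, 9.0),
    "B7": (97.0, -0.5, 5.0),
}

sample = (96.1, -0.8, 6.2)   # L*, a*, b* measured for the protein solution
print(closest_reference(sample, references))   # -> "B7"
```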
On the value-dependence of value-driven attentional capture.
Anderson, Brian A; Halpern, Madeline
2017-05-01
Findings from an increasingly large number of studies have been used to argue that attentional capture can be dependent on the learned value of a stimulus, or value-driven. However, under certain circumstances attention can be biased to select stimuli that previously served as targets, independent of reward history. Value-driven attentional capture, as studied using the training phase-test phase design introduced by Anderson and colleagues, is widely presumed to reflect the combined influence of learned value and selection history. However, the degree to which attentional capture is at all dependent on value learning in this paradigm has recently been questioned. Support for value-dependence can be provided through one of two means: (1) greater attentional capture by prior targets following rewarded training than following unrewarded training, and (2) greater attentional capture by prior targets previously associated with high compared to low value. Using a variant of the original value-driven attentional capture paradigm, Sha and Jiang (Attention, Perception, and Psychophysics, 78, 403-414, 2016) failed to find evidence of either, and raised criticisms regarding the adequacy of evidence provided by prior studies using this particular paradigm. To address this disparity, here we provided a stringent test of the value-dependence hypothesis using the traditional value-driven attentional capture paradigm. With a sufficiently large sample size, value-dependence was observed based on both criteria, with no evidence of attentional capture without rewards during training. Our findings support the validity of the traditional value-driven attentional capture paradigm in measuring what its name purports to measure.
Qualitative computer aided evaluation of dental impressions in vivo.
Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H
2006-01-01
Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision based on an established in-vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase, and two-step techniques) or on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in a randomized order. Data-sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data-points within the 99.5%-range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin, mantle surface, and occlusal surface). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were identified in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin was identified as the weak spot of impression taking.
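One plausible reading of the 99.5% deviation summary above is simply the 99.5th and 0.5th percentiles of the signed deviations between a test impression and the reference. A minimal sketch under that assumption, with synthetic deviation values standing in for the measured point-to-point distances:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic signed deviations (mm) between a test data-set and the reference impression.
deviations = rng.normal(loc=0.0, scale=0.02, size=100_000)

pos_995 = np.percentile(deviations, 99.5)   # "positive 99.5% deviation"
neg_995 = np.percentile(deviations, 0.5)    # "negative 99.5% deviation"

print(f"positive 99.5% deviation: {pos_995:.4f} mm")
print(f"negative 99.5% deviation: {neg_995:.4f} mm")
```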
Disaggregating soil erosion processes within an evolving experimental landscape
USDA-ARS?s Scientific Manuscript database
Soil-mantled landscapes subjected to rainfall, runoff events, and downstream base level adjustments will erode and evolve in time and space. Yet the precise mechanisms for soil erosion also will vary, and such variations may not be adequately captured by soil erosion prediction technology. This st...
NASA Astrophysics Data System (ADS)
Lakota, Gregory J.; Essary, Andrew; Bast, William D.; Dicaprio, Ralph; Symmes, Arthur H.; McDonald, Edward T.
2006-11-01
An underground exhibit space constructed at Chicago's Museum of Science and Industry now serves as the home of the German submarine U-505 -- the only vessel of its class captured by the United States during World War II. The careful lifting and moving of the vessel required precise coordination and meticulous reviews.
A generalised random encounter model for estimating animal density with remote sensor data.
Lucas, Tim C D; Moorcroft, Elizabeth A; Freeman, Robin; Rowcliffe, J Marcus; Jones, Kate E
2015-05-01
Wildlife monitoring technology is advancing rapidly and the use of remote sensors such as camera traps and acoustic detectors is becoming common in both the terrestrial and marine environments. Current methods to estimate abundance or density require individual recognition of animals or knowing the distance of the animal from the sensor, which is often difficult. A method without these requirements, the random encounter model (REM), has been successfully applied to estimate animal densities from count data generated from camera traps. However, count data from acoustic detectors do not fit the assumptions of the REM due to the directionality of animal signals. We developed a generalised REM (gREM) to estimate absolute animal density from count data from both camera traps and acoustic detectors. We derived the gREM for different combinations of sensor detection widths and animal signal widths (a measure of directionality). We tested the accuracy and precision of this model using simulations of different combinations of sensor detection widths and animal signal widths, number of captures and models of animal movement. We find that the gREM produces accurate estimates of absolute animal density for all combinations of sensor detection widths and animal signal widths. However, larger sensor detection and animal signal widths were found to be more precise. While the model is accurate for all capture efforts tested, the precision of the estimate increases with the number of captures. We found no effect of different animal movement models on the accuracy and precision of the gREM. We conclude that the gREM provides an effective method to estimate absolute animal densities from remote sensor count data over a range of sensor and animal signal widths. The gREM is applicable for count data obtained in both marine and terrestrial environments, visually or acoustically (e.g. big cats, sharks, birds, echolocating bats and cetaceans). As sensors such as camera traps and acoustic detectors become more ubiquitous, the gREM will be increasingly useful for monitoring unmarked animal populations across broad spatial, temporal and taxonomic scales.
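For orientation, the original camera-trap REM (Rowcliffe et al. 2008) that the gREM generalises estimates density from the trap rate, the animal's speed of movement, and the geometry of the detection zone; the gREM adds a term for the width of the animal's signal. A minimal sketch of the basic camera-trap case, with illustrative parameter values rather than values from this study:

```python
import math

def rem_density(captures, effort_days, speed_km_day, radius_km, arc_rad):
    """Random encounter model (Rowcliffe et al. 2008):
    D = (y / t) * pi / (v * r * (2 + theta)),
    where y/t is the capture rate, v the animal day range, r the detection
    radius and theta the detection arc in radians."""
    trap_rate = captures / effort_days
    return trap_rate * math.pi / (speed_km_day * radius_km * (2 + arc_rad))

# Illustrative values: 120 captures over 2,000 camera-days, day range 4 km,
# detection radius 10 m, detection arc 40 degrees.
d = rem_density(captures=120, effort_days=2000,
                speed_km_day=4.0, radius_km=0.010, arc_rad=math.radians(40))
print(f"estimated density: {d:.1f} animals per km^2")
```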
Bragança, F M; Bosch, S; Voskamp, J P; Marin-Perianu, M; Van der Zwaag, B J; Vernooij, J C M; van Weeren, P R; Back, W
2017-07-01
Inertial measurement unit (IMU) sensor-based techniques are becoming more popular in horses as a tool for objective locomotor assessment. To describe, evaluate and validate a method of stride detection and quantification at walk and trot using distal limb-mounted IMU sensors. Prospective validation study comparing IMU sensors and motion capture with force plate data. A total of seven Warmblood horses equipped with metacarpal/metatarsal IMU sensors and reflective markers for motion capture were hand walked and trotted over a force plate. Using four custom-built algorithms, hoof-on/hoof-off timings over the force plate were calculated for each trial from the IMU data. Accuracy of the computed parameters was calculated as the mean difference in milliseconds between the IMU- or motion capture-generated data and the data from the force plate, precision as the s.d. of these differences, and percentage of error as the accuracy expressed as a percentage of the force plate stance duration. Accuracy, precision and percentage of error of the best performing IMU algorithm for stance duration at walk were 28.5, 31.6 ms and 3.7% for the forelimbs and -5.5, 20.1 ms and -0.8% for the hindlimbs, respectively. At trot the best performing algorithm achieved accuracy, precision and percentage of error of -27.6/8.8 ms/-8.4% for the forelimbs and 6.3/33.5 ms/9.1% for the hindlimbs. The described algorithms have not been assessed on different surfaces. Inertial measurement unit technology can be used to determine temporal kinematic stride variables at walk and trot, justifying its use in gait and performance analysis. However, precision of the method may not be sufficient to detect all possible lameness-related changes. These data seem promising enough to warrant further research to evaluate whether this approach will be useful for appraising the majority of clinically relevant gait changes encountered in practice. © 2016 The Authors. Equine Veterinary Journal published by John Wiley & Sons Ltd on behalf of EVJ Ltd.
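The accuracy, precision, and percentage-of-error figures defined above follow directly from paired timing differences. A minimal sketch, assuming `imu_ms` and `force_plate_ms` hold matched stance durations in milliseconds (the values are synthetic, not data from the study):

```python
import numpy as np

# Synthetic matched stance durations (ms) from an IMU algorithm and the force plate.
imu_ms = np.array([760.0, 748.0, 802.0, 775.0, 790.0])
force_plate_ms = np.array([735.0, 720.0, 770.0, 745.0, 765.0])

diff = imu_ms - force_plate_ms
accuracy = diff.mean()                               # mean difference (ms)
precision = diff.std(ddof=1)                         # s.d. of the differences (ms)
pct_error = 100 * accuracy / force_plate_ms.mean()   # accuracy as % of stance duration

print(f"accuracy: {accuracy:.1f} ms, precision: {precision:.1f} ms, error: {pct_error:.1f}%")
```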
Continuous Mapping of Tunnel Walls in a Gnss-Denied Environment
NASA Astrophysics Data System (ADS)
Chapman, Michael A.; Min, Cao; Zhang, Deijin
2016-06-01
The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has grown and as the aging and subsequent deterioration of these structures have introduced structural degradations and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued with various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging or visual odometry to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and lack of salient features. The sensors employed include forward-looking high resolution digital frame cameras coupled with auxiliary light sources. In addition, a high frequency lidar system and a thermal imager are included to offer three dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless, precise tunnel maps.
Sampling designs matching species biology produce accurate and affordable abundance indices
Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff
2013-01-01
Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on global positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by the number of traps, trap location (placed randomly, systematically, or by expert opinion), and whether traps were stationary or moved between capture sessions. We began by identifying when to sample, and whether bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290
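The bias and precision (CV) criteria used to rank the designs above are straightforward to compute from simulated estimates; accuracy is shown here as root mean square error, which is one common choice rather than necessarily the study's exact definition. A minimal sketch with synthetic numbers:

```python
import numpy as np

true_n = 42                                       # simulated number of bears
estimates = np.array([36, 40, 33, 38, 35, 44, 37, 39, 34, 41], dtype=float)

bias_pct = 100 * (estimates.mean() - true_n) / true_n      # relative bias (%)
cv_pct = 100 * estimates.std(ddof=1) / estimates.mean()    # coefficient of variation (%)
rmse = np.sqrt(((estimates - true_n) ** 2).mean())         # accuracy as RMSE

print(f"bias: {bias_pct:.1f}%, CV: {cv_pct:.1f}%, RMSE: {rmse:.1f}")
```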
Ultrafast chirped optical waveform recording using referenced heterodyning and a time microscope
Bennett, Corey Vincent
2010-06-15
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Ultrafast chirped optical waveform recorder using referenced heterodyning and a time microscope
Bennett, Corey Vincent [Livermore, CA
2011-11-22
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Privacy-preserving screen capture: towards closing the loop for health IT usability.
Cooley, Joseph; Smith, Sean
2013-08-01
As information technology permeates healthcare (particularly provider-facing systems), maximizing system effectiveness requires the ability to document and analyze tricky or troublesome usage scenarios. However, real-world health IT systems are typically replete with privacy-sensitive data regarding patients, diagnoses, clinicians, and EMR user interface details; instrumentation for screen capture (capturing and recording the scenario depicted on the screen) needs to respect these privacy constraints. Furthermore, real-world health IT systems are typically composed of modules from many sources, mission-critical and often closed-source; any instrumentation for screen capture can rely neither on access to structured output nor on access to software internals. In this paper, we present a tool to help solve this problem: a system that combines keyboard video mouse (KVM) capture with automatic text redaction (and interactively selectable unredaction) to produce precise technical content that can enrich stakeholder communications and improve end-user influence on system evolution. KVM-based capture makes our system both application-independent and OS-independent because it eliminates software-interface dependencies on capture targets. Using a corpus of EMR screenshots, we present empirical measurements of redaction effectiveness and processing latency to demonstrate system performance. We discuss how these techniques can translate into instrumentation systems that improve real-world health IT deployments. Copyright © 2013 Elsevier Inc. All rights reserved.
Froese, Tom; Leavens, David A.
2014-01-01
We argue that imitation is a learning response to unintelligible actions, especially to social conventions. Various strands of evidence are converging on this conclusion, but further progress has been hampered by an outdated theory of perceptual experience. Comparative psychology continues to be premised on the doctrine that humans and non-human primates only perceive others’ physical “surface behavior,” while mental states are perceptually inaccessible. However, a growing consensus in social cognition research accepts the direct perception hypothesis: primarily we see what others aim to do; we do not infer it from their motions. Indeed, physical details are overlooked – unless the action is unintelligible. On this basis we hypothesize that apes’ propensity to copy the goal of an action, rather than its precise means, is largely dependent on its perceived intelligibility. Conversely, children copy means more often than adults and apes because, uniquely, much adult human behavior is completely unintelligible to unenculturated observers due to the pervasiveness of arbitrary social conventions, as exemplified by customs, rituals, and languages. We expect the propensity to imitate to be inversely correlated with the familiarity of cultural practices, as indexed by age and/or socio-cultural competence. The direct perception hypothesis thereby helps to parsimoniously explain the most important findings of imitation research, including children’s over-imitation and other species-typical and age-related variations. PMID:24600413
Capturing Young American Trust in National Databases
ERIC Educational Resources Information Center
Menard, Lauren A.
2011-01-01
A pattern of decreasing trusting proportions in each consecutive decade and increasing trusting proportions with age was revealed in the data. Although trust levels were lower in younger adults and the 2000s, findings did not support hypotheses of more rapidly falling trust levels or a college degree procuring less trust in the 2000s. A hypothesis of…
Auditory and visual capture during focused visual attention.
Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan
2009-10-01
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.
Comparison of water vapor from observations and models in the Asian Monsoon UTLS region
NASA Astrophysics Data System (ADS)
Singer, C. E.; Clouser, B.; Gaeta, D. C.; Moyer, E. J.
2017-12-01
As part of the StratoClim campaign in July/August 2017, the Chicago Water Isotope Spectrometer (ChiWIS) made water vapor measurements from the mid-troposphere through the lower stratosphere (to 21 km altitude). We compare in-situ measurements with remote sensing observations and model projections both to validate measurements and to evaluate the added value of high-precision in-situ sampling. Preliminary results and comparison with other StratoClim tracer measurements suggest that the UTLS region is highly structured, beyond what models or satellite instruments can capture, and that ChiWIS accurately captures these variations.
ERIC Educational Resources Information Center
Fouse, Adam S.
2013-01-01
Advances in technology now make it possible to capture detailed multimodal data about real-world everyday activity. Researchers have taken advantage of these advances to address questions about activity in more systematic and precise ways. Along with exciting opportunities to record data in ways that were not possible before, there are also…
Putting Parameters in Their Proper Place
ERIC Educational Resources Information Center
Montrul, Silvina; Yoon, James
2009-01-01
Seeing the logical problem of second language acquisition as that of primarily selecting and re-assembling bundles of features anew, Lardiere proposes to dispense with the deductive learning approach and its broad range of consequences subsumed under the concept of parameters. While we agree that feature assembly captures more precisely the…
Hämmerle, Martin; Höfle, Bernhard
2014-01-01
3D geodata play an increasingly important role in precision agriculture, e.g., for modeling in-field variations of grain crop features such as height or biomass. A common data capturing method is LiDAR, which often requires expensive equipment and produces large datasets. This study contributes to the improvement of 3D geodata capturing efficiency by assessing the effect of reduced scanning resolution on crop surface models (CSMs). The analysis is based on high-end LiDAR point clouds of grain crop fields of different varieties (rye and wheat) and nitrogen fertilization stages (100%, 50%, 10%). Lower scanning resolutions are simulated by keeping every n-th laser beam with increasing step widths n. For each iteration step, high-resolution CSMs (0.01 m2 cells) are derived and assessed regarding their coverage relative to a seamless CSM derived from the original point cloud, standard deviation of elevation and mean elevation. Reducing the resolution to, e.g., 25% still leads to a coverage of >90% and a mean CSM elevation of >96% of measured crop height. CSM types (maximum elevation or 90th-percentile elevation) react differently to reduced scanning resolutions in different crops (variety, density). The results can help to assess the trade-off between CSM quality and minimum requirements regarding equipment and capturing set-up. PMID:25521383
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
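A simplified, forced-fusion version of the Bayesian account described above treats the reported auditory location as a reliability-weighted average of the auditory and visual cues; the authors' full model additionally weights this fusion by the prior probability that the two cues share a common source, which is omitted here. A minimal sketch under that simplification, with illustrative parameter values:

```python
def fused_estimate(aud_deg, vis_deg, sigma_aud, sigma_vis):
    """Reliability-weighted (inverse-variance) combination of auditory and visual cues."""
    w_aud = 1 / sigma_aud ** 2
    w_vis = 1 / sigma_vis ** 2
    return (w_aud * aud_deg + w_vis * vis_deg) / (w_aud + w_vis)

# Auditory target at 10 deg, visual target at 4 deg; vision is far more precise,
# so the fused percept is pulled ("captured") toward the visual location.
print(fused_estimate(aud_deg=10.0, vis_deg=4.0, sigma_aud=8.0, sigma_vis=2.0))  # ~4.35 deg
```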
Estimation of sex-specific survival from capture-recapture data when sex is not always known
Nichols, J.D.; Kendall, W.L.; Hines, J.E.; Spendelow, J.A.
2004-01-01
Many animals lack obvious sexual dimorphism, making assignment of sex difficult even for observed or captured animals. For many such species it is possible to assign sex with certainty only at some occasions; for example, when they exhibit certain types of behavior. A common approach to handling this situation in capture-recapture studies has been to group capture histories into those of animals eventually identified as male and female and those for which sex was never known. Because group membership is dependent on the number of occasions at which an animal was caught or observed (known sex animals, on average, will have been observed at more occasions than unknown-sex animals), survival estimates for known-sex animals will be positively biased, and those for unknown animals will be negatively biased. In this paper, we develop capture-recapture models that incorporate sex ratio and sex assignment parameters that permit unbiased estimation in the face of this sampling problem. We demonstrate the magnitude of bias in the traditional capture-recapture approach to this sampling problem, and we explore properties of estimators from other ad hoc approaches. The model is then applied to capture-recapture data for adult Roseate Terns (Sterna dougallii) at Falkner Island, Connecticut, 1993-2002. Sex ratio among adults in this population favors females, and we tested the hypothesis that this population showed sex-specific differences in adult survival. Evidence was provided for higher survival of adult females than males, as predicted. We recommend use of this modeling approach for future capture-recapture studies in which sex cannot always be assigned to captured or observed animals. We also place this problem in the more general context of uncertainty in state classification in multistate capture-recapture models.
Initial conditions for cosmological perturbations
NASA Astrophysics Data System (ADS)
Ashtekar, Abhay; Gupt, Brajesh
2017-02-01
Penrose proposed that the big bang singularity should be constrained by requiring that the Weyl curvature vanishes there. The idea behind this past hypothesis is attractive because it constrains the initial conditions for the universe in geometric terms and is not confined to a specific early universe paradigm. However, the precise statement of Penrose’s hypothesis is tied to classical space-times and furthermore restricts only the gravitational degrees of freedom. These are encapsulated only in the tensor modes of the commonly used cosmological perturbation theory. Drawing inspiration from the underlying idea, we propose a quantum generalization of Penrose’s hypothesis using the Planck regime in place of the big bang, and simultaneously incorporating tensor as well as scalar modes. Initial conditions selected by this generalization constrain the universe to be as homogeneous and isotropic in the Planck regime as permitted by the Heisenberg uncertainty relations.
Evaluating single-pass catch as a tool for identifying spatial pattern in fish distribution
Bateman, Douglas S.; Gresswell, Robert E.; Torgersen, Christian E.
2005-01-01
We evaluate the efficacy of single-pass electrofishing without blocknets as a tool for collecting spatially continuous fish distribution data in headwater streams. We compare spatial patterns in abundance, sampling effort, and length-frequency distributions from single-pass sampling of coastal cutthroat trout (Oncorhynchus clarki clarki) to data obtained from a more precise multiple-pass removal electrofishing method in two mid-sized (500–1000 ha) forested watersheds in western Oregon. Abundance estimates from single- and multiple-pass removal electrofishing were positively correlated in both watersheds, r = 0.99 and 0.86. There were no significant trends in capture probabilities at the watershed scale (P > 0.05). Moreover, among-sample variation in fish abundance was higher than within-sample error in both streams indicating that increased precision of unit-scale abundance estimates would provide less information on patterns of abundance than increasing the fraction of habitat units sampled. In the two watersheds, respectively, single-pass electrofishing captured 78 and 74% of the estimated population of cutthroat trout with 7 and 10% of the effort. At the scale of intermediate-sized watersheds, single-pass electrofishing exhibited a sufficient level of precision to be effective in detecting spatial patterns of cutthroat trout abundance and may be a useful tool for providing the context for investigating fish-habitat relationships at multiple scales.
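For context, the multiple-pass removal estimates used as the benchmark above build on the standard removal logic; the simplest two-pass case (the Seber-LeCren estimator) already shows how abundance and capture probability are obtained from declining catches. This is a minimal illustration with made-up catch counts, not the estimator configuration used in the study:

```python
def two_pass_removal(c1, c2):
    """Two-pass removal estimator: N_hat = c1^2 / (c1 - c2), p_hat = (c1 - c2) / c1.
    Valid only when the second-pass catch is smaller than the first."""
    if c2 >= c1:
        raise ValueError("removal estimator requires c1 > c2")
    n_hat = c1 ** 2 / (c1 - c2)
    p_hat = (c1 - c2) / c1
    return n_hat, p_hat

n_hat, p_hat = two_pass_removal(c1=46, c2=14)   # made-up catches for one habitat unit
print(f"estimated abundance: {n_hat:.1f}, capture probability: {p_hat:.2f}")
print(f"a single-pass catch of 46 would index {100 * 46 / n_hat:.0f}% of the estimate")
```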
Combining multistate capture-recapture data with tag recoveries to estimate demographic parameters
Kendall, W.L.; Conn, P.B.; Hines, J.E.
2006-01-01
Matrix population models that allow an animal to occupy more than one state over time are important tools for population and evolutionary ecologists. Definition of state can vary, including location for metapopulation models and breeding state for life history models. For populations whose members can be marked and subsequently re-encountered, multistate mark-recapture models are available to estimate the survival and transition probabilities needed to construct population models. Multistate models have proved extremely useful in this context, but they often require a substantial amount of data and restrict estimation of transition probabilities to those areas or states subjected to formal sampling effort. At the same time, for many species, there are considerable tag recovery data provided by the public that could be modeled in order to increase precision and to extend inference to a greater number of areas or states. Here we present a statistical model for combining multistate capture-recapture data (e.g., from a breeding ground study) with multistate tag recovery data (e.g., from wintering grounds). We use this method to analyze data from a study of Canada Geese (Branta canadensis) in the Atlantic Flyway of North America. Our analysis produced marginal improvement in precision, due to relatively few recoveries, but we demonstrate how precision could be further improved with increases in the probability that a retrieved tag is reported.
Multiplatform Mobile Laser Scanning: Usability and Performance
Kukko, Antero; Kaartinen, Harri; Hyyppä, Juha; Chen, Yuwei
2012-01-01
Mobile laser scanning is an emerging technology capable of capturing three-dimensional data from surrounding objects. With state-of-the-art sensors, the achieved point clouds capture object details with good accuracy and precision. Many of the applications involve civil engineering in urban areas, as well as traffic and other urban planning, all of which serve to make 3D city modeling probably the fastest growing market segment in this field. This article outlines multiplatform mobile laser scanning solutions such as vehicle- and trolley-operated urban area data acquisition, and boat-mounted equipment for fluvial environments. Moreover, we introduce a novel backpack version of mobile laser scanning equipment for surveying applications in the field of natural sciences where the requirements include precision and mobility in variable terrain conditions. In addition to presenting a technical description of the systems, we discuss the performance of the solutions in the light of various applications in the fields of urban mapping and modeling, fluvial geomorphology, snow-cover characterization, precision agriculture, and in monitoring the effects of climate change on permafrost landforms. The data performance of the mobile laser scanning approach is described by the results of an evaluation of the ROAMER on a permanent MLS test field. Furthermore, an in situ accuracy assessment using a field of spherical 3D targets for the newly-introduced Akhka backpack system is conducted and reported on.
New instrumentation for precise (n,γ) measurements at ILL Grenoble
NASA Astrophysics Data System (ADS)
Urban, W.; Jentschel, M.; Märkisch, B.; Materna, Th; Bernards, Ch; Drescher, C.; Fransen, Ch; Jolie, J.; Köster, U.; Mutti, P.; Rzaca-Urban, T.; Simpson, G. S.
2013-03-01
An array of eight Ge detectors for coincidence measurements of γ rays from neutron-capture reactions has been constructed at the PF1B cold-neutron facility of the Institut Laue-Langevin. The detectors, arranged in one plane at 45° intervals, can be used for angular correlation measurements. The neutron collimation line of the setup provides a neutron beam of 12 mm in diameter and a capture flux of about 10⁸/(s × cm²) at the target position, with a negligible neutron halo. With this setup, up to 10⁹ γγ and up to 10⁸ triple-γ coincidence events have been collected in a one-day measurement. Precise energy and efficiency calibrations up to 10 MeV are easily performed with ²⁷Al(n,γ)²⁸Al and ³⁵Cl(n,γ)³⁶Cl reactions. Test measurements have shown that neutron binding energies can be determined with an accuracy down to a few eV and angular correlation coefficients can be measured with a precision down to the percent level. The triggerless data collected with a digital electronics and acquisition system allow half-lives of excited levels in the nano- to microsecond range to be determined. The high resolving power of double- and triple-γ time coincidences allows significant improvements of the excitation schemes reported in previous (n,γ) works and complements high-resolution γ-energy measurements at the double-crystal Bragg spectrometer GAMS of the ILL.
A precise laboratory goniometer system to collect spectral BRDF data of materials
NASA Astrophysics Data System (ADS)
Jiao, Guangping; Jiao, Ziti; Wang, Jie; Zhang, Hu; Dong, Yadong
2014-11-01
This paper presents a precise laboratory goniometer system to quickly collect the bidirectional reflectance distribution function (BRDF) of typical materials such as soil, canopy and artificial materials in the laboratory. The system consists of the goniometer, an SVC HR1024 spectroradiometer, and a xenon long-arc lamp as the light source. The innovative cantilever slab reduces the shadow cast by the goniometer in the principal plane. The geometric precision of the footprint centre is better than ±4 cm in most azimuth directions, and the angle-controlling accuracy is better than 0.5°. The light source is stable, with a 0.8% decrease in irradiance over 3 hours, but its large areal heterogeneity increases the data-processing effort required to capture accurate BRDF. First measurements were taken of soil at a resolution of 15° in zenith and 30° in azimuth, with a maximum view zenith angle of ±50°. Additional observations were taken in the hot-spot direction. The system takes about 40 minutes to complete all measurements. A Spectralon panel is measured at the beginning and end of the whole period. A simple interactive interface on the computer automatically controls all goniometer operations and data processing. The laboratory experiments on a soil layer and a grass lawn show that the goniometer can capture the multi-angle variation of the BRDF.
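The role of the Spectralon panel measurements mentioned above is to convert raw target readings into reflectance factors. A minimal sketch of that standard normalisation step, assuming bracketing panel readings and a known panel calibration factor; the band set and digital numbers below are illustrative, not data from this system:

```python
import numpy as np

def reflectance_factor(target_dn, panel_dn, panel_reflectance=0.99):
    """Reflectance factor of the target relative to a near-Lambertian reference panel."""
    return panel_reflectance * target_dn / panel_dn

wavelengths = np.array([450, 550, 650, 850])             # nm (illustrative bands)
target_dn = np.array([1200.0, 2100.0, 2500.0, 5200.0])   # raw spectroradiometer readings
panel_dn = np.array([9000.0, 9500.0, 9700.0, 9900.0])

brf = reflectance_factor(target_dn, panel_dn)
for wl, r in zip(wavelengths, brf):
    print(f"{wl} nm: reflectance factor {r:.3f}")
```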
Operationalizing biodiversity for conservation planning.
Sarkar, Sahotra; Margules, Chris
2002-07-01
Biodiversity has acquired such a general meaning that people now find it difficult to pin down a precise sense for planning and policy-making aimed at biodiversity conservation. Because biodiversity is rooted in place, the task of conserving biodiversity should target places for conservation action; and because all places contain biodiversity, but not all places can be targeted for action, places have to be prioritized. What is needed for this is a measure of the extent to which biodiversity varies from place to place. We do not need a precise measure of biodiversity to prioritize places. Relative estimates of similarity or difference can be derived using partial measures, or what have come to be called biodiversity surrogates. Biodiversity surrogates are supposed to stand in for general biodiversity in planning applications. We distinguish between true surrogates, those that might truly stand in for general biodiversity, and estimator surrogates, which have true surrogates as their target variable. For example, species richness has traditionally been the estimator surrogate for the true surrogate, species diversity. But species richness does not capture the differences in composition between places; the essence of biodiversity. Another measure, called complementarity, explicitly captures the differences between places as we iterate the process of place prioritization, starting with an initial place. The relative concept of biodiversity built into the definition of complementarity has the level of precision needed to undertake conservation planning.
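The iterative prioritisation by complementarity described above is typically implemented as a greedy, set-cover-style selection: at each step, pick the place that adds the most surrogates not yet represented. A minimal sketch on a toy surrogate-by-place table (the places and surrogates are invented for illustration):

```python
def prioritize_by_complementarity(places):
    """Greedy place prioritisation: repeatedly select the place contributing the
    largest number of not-yet-represented surrogates (its complementarity)."""
    represented, order = set(), []
    remaining = dict(places)
    while remaining:
        best = max(remaining, key=lambda p: len(remaining[p] - represented))
        if not remaining[best] - represented:   # nothing new left to add
            break
        order.append(best)
        represented |= remaining.pop(best)
    return order, represented

# Toy data: surrogates (e.g., species) recorded at each candidate place.
places = {
    "A": {"sp1", "sp2", "sp3"},
    "B": {"sp3", "sp4"},
    "C": {"sp4", "sp5", "sp6"},
    "D": {"sp1", "sp6"},
}
order, covered = prioritize_by_complementarity(places)
print(order)   # -> ['A', 'C']; B and D add nothing new once A and C are chosen
```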
Neutron Capture Measurements on 97Mo with the DANCE Array
NASA Astrophysics Data System (ADS)
Walker, Carrie L.
Neutron capture is a process that is crucial to understanding nucleosynthesis, reactors, and nuclear weapons. Precise knowledge of neutron capture cross-sections and level densities is necessary in order to model these high-flux environments. High-confidence spin and parity assignments for neutron resonances are of critical importance to this end. For nuclei in the A=100 mass region, the p-wave neutron strength function is at a maximum, and the s-wave strength function is at a minimum, producing up to six possible Jπ combinations. Parity determination becomes important to assigning spins in this mass region, and the large number of spin groups adds complexity to the problem. In this work, spins and parities for 97Mo resonances are assigned, and best-fit models for the photon strength function and level density are determined. The neutron capture cross-section for 97Mo is also determined, as are resonance parameters for neutron energies ranging from 16 eV to 2 keV.
Cui, Haijun; Wang, Binshuai; Wang, Wenshuo; Hao, Yuwei; Liu, Chuanyong; Song, Kai; Zhang, Shudong; Wang, Shutao
2018-06-13
Developing low-cost and highly efficient nanobiochips is important for liquid biopsies, real-time monitoring, and precision medicine. By in situ growth of silica nanowires on a commercial frosted slide, we develop a biochip for effective circulating tumor cell (CTC) detection after modification with an epithelial cell adhesion molecule antibody (anti-EpCAM). The biochip shows specificity and a high capture efficiency of 85.4 ± 8.3% for the prostate cancer cell line (PC-3). The microsized frosted slides and silica nanowires enhance the efficiency of capturing EpCAM-positive cells through synergistic topographic interactions, and the capture efficiency of the biochip increased with increasing silica nanowire length on the frosted slide. The biochip shows that micro/nanocomposite structures improve the capture efficiency for PC-3 cells by more than 70% relative to a plain slide. Furthermore, the nanobiochip has been successfully applied to identify CTCs from whole blood specimens of prostate cancer patients. Thus, this frosted slide-based biochip may provide a cheap and effective way of clinically monitoring CTCs.
Chabaud, Mélanie; Heuzé, Mélina L.; Bretou, Marine; Vargas, Pablo; Maiuri, Paolo; Solanes, Paola; Maurin, Mathieu; Terriac, Emmanuel; Le Berre, Maël; Lankar, Danielle; Piolot, Tristan; Adelstein, Robert S.; Zhang, Yingfan; Sixt, Michael; Jacobelli, Jordan; Bénichou, Olivier; Voituriez, Raphaël; Piel, Matthieu; Lennon-Duménil, Ana-Maria
2015-01-01
The immune response relies on the migration of leukocytes and on their ability to stop in precise anatomical locations to fulfil their task. How leukocyte migration and function are coordinated is unknown. Here we show that in immature dendritic cells, which patrol their environment by engulfing extracellular material, cell migration and antigen capture are antagonistic. This antagonism results from transient enrichment of myosin IIA at the cell front, which disrupts the back-to-front gradient of the motor protein, slowing down locomotion but promoting antigen capture. We further highlight that myosin IIA enrichment at the cell front requires the MHC class II-associated invariant chain (Ii). Thus, by controlling myosin IIA localization, Ii imposes on dendritic cells an intermittent antigen capture behaviour that might facilitate environment patrolling. We propose that the requirement for myosin II in both cell migration and specific cell functions may provide a general mechanism for their coordination in time and space. PMID:26109323
Developmental differences in masked form priming are not driven by vocabulary growth.
Bhide, Adeetee; Schlaggar, Bradley L; Barnes, Kelly Anne
2014-01-01
As children develop into skilled readers, they are able to more quickly and accurately distinguish between words with similar visual forms (i.e., they develop precise lexical representations). The masked form priming lexical decision task is used to test the precision of lexical representations. In this paradigm, a prime (which differs by one letter from the target) is briefly flashed before the target is presented. Participants make a lexical decision to the target. Primes can facilitate reaction time by partially activating the lexical entry for the target. If a prime is unable to facilitate reaction time, it is assumed that participants have a precise orthographic representation of the target and thus the prime is not a close enough match to activate its lexical entry. Previous developmental work has shown that children and adults' lexical decision times are facilitated by form primes preceding words from small neighborhoods (i.e., very few words can be formed by changing one letter in the original word; low N words), but only children are facilitated by form primes preceding words from large neighborhoods (high N words). It has been hypothesized that written vocabulary growth drives the increase in the precision of the orthographic representations; children may not know all of the neighbors of the high N words, making the words effectively low N for them. We tested this hypothesis by (1) equating the effective orthographic neighborhood size of the targets for children and adults and (2) testing whether age or vocabulary size was a better predictor of the extent of form priming. We found priming differences even when controlling for effective neighborhood size. Furthermore, age was a better predictor of form priming effects than was vocabulary size. Our findings provide no support for the hypothesis that growth in written vocabulary size gives rise to more precise lexical representations. We propose that the development of spelling ability may be a more important factor.
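The neighborhood-size (N) manipulation central to this study counts the words that can be formed by changing exactly one letter of the target (Coltheart's N). A minimal sketch against a toy lexicon; a real analysis would use a full word-frequency database rather than the handful of words shown here:

```python
def orthographic_n(word, lexicon):
    """Coltheart's N: number of same-length words differing by exactly one letter."""
    def one_letter_apart(a, b):
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
    return sum(one_letter_apart(word, w) for w in lexicon if w != word)

lexicon = {"cat", "cot", "cut", "car", "can", "bat", "hat", "dog", "dot"}
print(orthographic_n("cat", lexicon))   # 6 (high N): cot, cut, car, can, bat, hat
print(orthographic_n("dog", lexicon))   # 1 (low N): dot
```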
Memory Inhibition, Aging, and the Executive Deficit Hypothesis
ERIC Educational Resources Information Center
Ortega, Almudena; Gomez-Ariza, Carlos J.; Roman, Patricia; Bajo, M. Teresa
2012-01-01
Although memory inhibition seems to underlie retrieval-induced forgetting (RIF), there is some controversy about the precise nature of this effect. Because normal RIF is observed in people with deficits in executive control (i.e., older adults), some have proposed that an automatic-like inhibitory process is responsible for the effect. On the…
ERIC Educational Resources Information Center
Faraone, Stephen V.
2012-01-01
Objective: An earlier meta-analysis of pediatric clinical trials indicated that lisdexamfetamine dimesylate (LDX) had a greater effect size than other stimulant medications. This work tested the hypothesis that the apparent increased efficacy was artifactual. Method: The authors assessed two potential artifacts: an unusually high precision of…
Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling
ERIC Educational Resources Information Center
Banjanovic, Erin S.; Osborne, Jason W.
2016-01-01
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
Interval timing in genetically modified mice: a simple paradigm
Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.
2009-01-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
Interval timing in genetically modified mice: a simple paradigm.
Balci, F; Papachristos, E B; Gallistel, C R; Brunner, D; Gibson, J; Shumyatsky, G P
2008-04-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using d-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently.
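The accuracy and precision measures defined in these abstracts (the median and interquartile interval of the switch-latency distribution) are simple to extract from raw latencies. A minimal sketch with synthetic switch latencies rather than data from the screen itself:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic switch latencies (s) for a 3 s short-latency and 9 s long-latency station.
switch_latencies = rng.normal(loc=6.0, scale=1.2, size=500).clip(3.0, 9.0)

q25, median, q75 = np.percentile(switch_latencies, [25, 50, 75])
accuracy = median          # timing accuracy: central tendency of the switch point
precision = q75 - q25      # timing precision: interquartile interval

print(f"accuracy (median switch latency): {accuracy:.2f} s")
print(f"precision (interquartile interval): {precision:.2f} s")
```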
Working memory recall precision is a more sensitive index than span.
Zokaei, Nahid; Burnett Heyes, Stephanie; Gorgoraptis, Nikos; Budhdeo, Sanjay; Husain, Masud
2015-09-01
Delayed adjustment tasks have recently been developed to examine working memory (WM) precision, that is, the resolution with which items maintained in memory are recalled. However, despite their emerging use in experimental studies of healthy people, evaluation of patient populations is sparse. We first investigated the validity of adjustment tasks, comparing precision with classical span measures of memory across the lifespan in 114 people. Second, we asked whether precision measures can potentially provide a more sensitive measure of WM than traditional span measures. Specifically, we tested this hypothesis by examining WM in a group with early, untreated Parkinson's disease (PD) and its modulation by subsequent treatment with dopaminergic medication. Span measures correlated with precision across the lifespan: in children, young, and elderly participants. However, they failed to detect changes in WM in PD patients, either pre- or post-treatment initiation. By contrast, recall precision was sensitive enough to pick up such changes. PD patients pre-medication were significantly impaired compared to controls, but improved significantly after 3 months of being established on dopaminergic medication. These findings suggest that precision methods might provide a sensitive means to investigate WM and its modulation by interventions in clinical populations. © 2014 The Authors. Journal of Neuropsychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.
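For context, recall precision in delayed adjustment tasks is commonly operationalized as the reciprocal of the circular standard deviation of the response errors; whether this exact definition was used here is an assumption. A small illustrative sketch with simulated errors (the von Mises parameters are arbitrary):

import numpy as np
from scipy.stats import circstd

# Hypothetical response errors (radians) from a delayed orientation-adjustment task.
errors = np.random.default_rng(1).vonmises(mu=0.0, kappa=8.0, size=200)

# One common definition: precision = 1 / circular SD of the adjustment errors.
precision = 1.0 / circstd(errors, high=np.pi, low=-np.pi)
print(f"recall precision = {precision:.2f} rad^-1")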
The Paradox of Abstraction: Precision Versus Concreteness.
Iliev, Rumen; Axelrod, Robert
2017-06-01
We introduce a novel measure of abstractness based on the amount of information of a concept computed from its position in a semantic taxonomy. We refer to this measure as precision. We propose two alternative ways to measure precision, one based on the path length from a concept to the root of the taxonomic tree, and another one based on the number of direct and indirect descendants. Since more information implies greater processing load, we hypothesize that nouns higher in precision will have a processing disadvantage in a lexical decision task. We contrast precision to concreteness, a common measure of abstractness based on the proportion of sensory-based information associated with a concept. Since concreteness facilitates cognitive processing, we predict that while both concreteness and precision are measures of abstractness, they will have opposite effects on performance. In two studies we found empirical support for our hypothesis. Precision and concreteness had opposite effects on latency and accuracy in a lexical decision task, and these opposite effects were observable while controlling for word length, word frequency, affective content and semantic diversity. Our results support the view that the organization of concepts includes amodal semantic structures which are independent of sensory information. They also suggest that we should distinguish between sensory-based and amount-of-information-based abstractness.
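The two proposed precision measures can be illustrated on a toy hypernym tree; the node names, tree shape, and dictionary-based implementation below are invented for illustration and are not the taxonomy or code used in the paper.

# Minimal sketch of the two taxonomy-based "precision" quantities: depth to the root
# and number of direct plus indirect descendants.
toy_taxonomy = {            # child -> parent
    "entity": None,
    "organism": "entity",
    "animal": "organism",
    "dog": "animal",
    "poodle": "dog",
}
children = {}
for child, parent in toy_taxonomy.items():
    children.setdefault(parent, []).append(child)

def depth(concept):
    """Path length from the concept up to the root of the taxonomy."""
    d = 0
    while toy_taxonomy[concept] is not None:
        concept = toy_taxonomy[concept]
        d += 1
    return d

def descendant_count(concept):
    """Number of direct and indirect descendants of the concept."""
    stack, count = list(children.get(concept, [])), 0
    while stack:
        node = stack.pop()
        count += 1
        stack.extend(children.get(node, []))
    return count

for c in ("entity", "animal", "poodle"):
    print(c, depth(c), descendant_count(c))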
Spin-orbit evolution of Mercury revisited
NASA Astrophysics Data System (ADS)
Noyelles, Benoît; Frouard, Julien; Makarov, Valeri V.; Efroimsky, Michael
2014-10-01
Although it is accepted that the significant eccentricity of Mercury (0.206) favours entrapment into the 3:2 spin-orbit resonance, the questions of how and when the capture took place remain open. A recent work by Makarov (Makarov, V.V. [2012]. Astrophys. J., 752, 73) has proven that trapping into this state is certain for eccentricities larger than 0.2, provided we use a realistic tidal model based on the Darwin-Kaula expansion of the tidal torque. While in Ibid. a Mercury-like planet had its eccentricity fixed, we take into account its evolution. To that end, a family of possible histories of the eccentricity is generated, based on synthetic time evolution consistent with the expected statistics of the distribution of eccentricity. We employ a model of tidal friction, which takes into account both the rheology and self-gravitation of the planet. As opposed to the commonly used constant time lag (CTL) and constant phase lag (CPL) models, the physics-based tidal model dramatically changes the statistics of the possible final spin states. First, we discover that after only one encounter with the spin-orbit 3:2 resonance this resonance becomes the most probable end-state. Second, if a capture into this (or any other) resonance takes place, the capture becomes final, several crossings of the same state being forbidden by our model. Third, within our model the trapping of Mercury happens much faster than previously believed: for most histories, 10-20 Myr are sufficient. Fourth, even a weak laminar friction between the solid mantle and a molten core would most likely result in a capture in the 2:1 or even higher resonance, which is confirmed both semi-analytically and by limited numerical simulations. So the principal novelty of our paper is that the 3:2 end-state is more ancient than the same end-state obtained when the constant time lag model is employed. The swift capture justifies our treatment of Mercury as a homogeneous, unstratified body whose liquid core had not yet formed by the time of trapping. We also provide a critical analysis of the hypothesis by Wieczorek et al. (Wieczorek, M.A., Correia, A.C.M., Le Feuvre, M., Laskar, J., Rambaux, N. [2012]. Nat. Geosci., 5, 18-21) that the early Mercury might have been retrograde, whereafter it synchronised its spin and then accelerated it to the 3:2 resonance. Accurate processing of the available data on cratering does not support that hypothesis, while the employment of a realistic rheology invalidates a key element of the hypothesis, an intermediate pseudosynchronous state needed to spin-up to the 3:2 resonance.
Revolution of Alzheimer Precision Neurology Passageway of Systems Biology and Neurophysiology.
Hampel, Harald; Toschi, Nicola; Babiloni, Claudio; Baldacci, Filippo; Black, Keith L; Bokde, Arun L W; Bun, René S; Cacciola, Francesco; Cavedo, Enrica; Chiesa, Patrizia A; Colliot, Olivier; Coman, Cristina-Maria; Dubois, Bruno; Duggento, Andrea; Durrleman, Stanley; Ferretti, Maria-Teresa; George, Nathalie; Genthon, Remy; Habert, Marie-Odile; Herholz, Karl; Koronyo, Yosef; Koronyo-Hamaoui, Maya; Lamari, Foudil; Langevin, Todd; Lehéricy, Stéphane; Lorenceau, Jean; Neri, Christian; Nisticò, Robert; Nyasse-Messene, Francis; Ritchie, Craig; Rossi, Simone; Santarnecchi, Emiliano; Sporns, Olaf; Verdooner, Steven R; Vergallo, Andrea; Villain, Nicolas; Younesi, Erfan; Garaci, Francesco; Lista, Simone
2018-03-16
The Precision Neurology development process implements systems theory with system biology and neurophysiology in a parallel, bidirectional research path: a combined hypothesis-driven investigation of systems dysfunction within distinct molecular, cellular, and large-scale neural network systems in both animal models as well as through tests for the usefulness of these candidate dynamic systems biomarkers in different diseases and subgroups at different stages of pathophysiological progression. This translational research path is paralleled by an "omics"-based, hypothesis-free, exploratory research pathway, which will collect multimodal data from progressing asymptomatic, preclinical, and clinical neurodegenerative disease (ND) populations, within the wide continuous biological and clinical spectrum of ND, applying high-throughput and high-content technologies combined with powerful computational and statistical modeling tools, aimed at identifying novel dysfunctional systems and predictive marker signatures associated with ND. The goals are to identify common biological denominators or differentiating classifiers across the continuum of ND during detectable stages of pathophysiological progression, characterize systems-based intermediate endophenotypes, validate multi-modal novel diagnostic systems biomarkers, and advance clinical intervention trial designs by utilizing systems-based intermediate endophenotypes and candidate surrogate markers. Achieving these goals is key to the ultimate development of early and effective individualized treatment of ND, such as Alzheimer's disease. The Alzheimer Precision Medicine Initiative (APMI) and cohort program (APMI-CP), as well as the Paris based core of the Sorbonne University Clinical Research Group "Alzheimer Precision Medicine" (GRC-APM) were recently launched to facilitate the passageway from conventional clinical diagnostic and drug development toward breakthrough innovation based on the investigation of the comprehensive biological nature of aging individuals. The APMI movement is gaining momentum to systematically apply both systems neurophysiology and systems biology in exploratory translational neuroscience research on ND.
Revolution of Alzheimer Precision Neurology: Passageway of Systems Biology and Neurophysiology
Hampel, Harald; Toschi, Nicola; Babiloni, Claudio; Baldacci, Filippo; Black, Keith L.; Bokde, Arun L.W.; Bun, René S.; Cacciola, Francesco; Cavedo, Enrica; Chiesa, Patrizia A.; Colliot, Olivier; Coman, Cristina-Maria; Dubois, Bruno; Duggento, Andrea; Durrleman, Stanley; Ferretti, Maria-Teresa; George, Nathalie; Genthon, Remy; Habert, Marie-Odile; Herholz, Karl; Koronyo, Yosef; Koronyo-Hamaoui, Maya; Lamari, Foudil; Langevin, Todd; Lehéricy, Stéphane; Lorenceau, Jean; Neri, Christian; Nisticò, Robert; Nyasse-Messene, Francis; Ritchie, Craig; Rossi, Simone; Santarnecchi, Emiliano; Sporns, Olaf; Verdooner, Steven R.; Vergallo, Andrea; Villain, Nicolas; Younesi, Erfan; Garaci, Francesco; Lista, Simone
2018-01-01
The Precision Neurology development process implements systems theory with system biology and neurophysiology in a parallel, bidirectional research path: a combined hypothesis-driven investigation of systems dysfunction within distinct molecular, cellular and large-scale neural network systems in both animal models as well as through tests for the usefulness of these candidate dynamic systems biomarkers in different diseases and subgroups at different stages of pathophysiological progression. This translational research path is paralleled by an “omics”-based, hypothesis-free, exploratory research pathway, which will collect multimodal data from progressing asymptomatic, preclinical and clinical neurodegenerative disease (ND) populations, within the wide continuous biological and clinical spectrum of ND, applying high-throughput and high-content technologies combined with powerful computational and statistical modeling tools, aimed at identifying novel dysfunctional systems and predictive marker signatures associated with ND. The goals are to identify common biological denominators or differentiating classifiers across the continuum of ND during detectable stages of pathophysiological progression, characterize systems-based intermediate endophenotypes, validate multi-modal novel diagnostic systems biomarkers, and advance clinical intervention trial designs by utilizing systems-based intermediate endophenotypes and candidate surrogate markers. Achieving these goals is key to the ultimate development of early and effective individualized treatment of ND, such as Alzheimer’s disease (AD). The Alzheimer Precision Medicine Initiative (APMI) and cohort program (APMI-CP), as well as the Paris based core of the Sorbonne University Clinical Research Group “Alzheimer Precision Medicine” (GRC-APM) were recently launched to facilitate the passageway from conventional clinical diagnostic and drug development towards breakthrough innovation based on the investigation of the comprehensive biological nature of aging individuals. The APMI movement is gaining momentum to systematically apply both systems neurophysiology and systems biology in exploratory translational neuroscience research on ND. PMID:29562524
Origin of the moon - The collision hypothesis
NASA Technical Reports Server (NTRS)
Stevenson, D. J.
1987-01-01
Theoretical models of lunar origin involving one or more collisions between the earth and other large sun-orbiting bodies are examined in a critical review. Ten basic propositions of the collision hypothesis (CH) are listed; observational data on mass and angular momentum, bulk chemistry, volatile depletion, trace elements, primordial high temperatures, and orbital evolution are summarized; and the basic tenets of alternative models (fission, capture, and coformation) are reviewed. Consideration is given to the thermodynamics of large impacts, rheological and dynamical problems, numerical simulations based on the CH, disk evolution models, and the chemical implications of the CH. It is concluded that the sound arguments and evidence supporting the CH are not (yet) sufficient to rule out other hypotheses.
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun
2015-04-01
Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
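As a rough illustration of the precision-recall trade-off the authors combine into an F-measure, the sketch below computes overlap-based precision and recall for a single candidate region against a reference region. This is a simplified pixel-overlap variant; the paper's measures are defined region-by-region over full image partitions, so treat the functions and toy masks here as assumptions.

import numpy as np

def overlap_precision_recall(segmentation, reference):
    """Simplified overlap-based precision/recall for two binary masks."""
    seg = segmentation.astype(bool)
    ref = reference.astype(bool)
    overlap = np.logical_and(seg, ref).sum()
    return overlap / seg.sum(), overlap / ref.sum()

def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall."""
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Toy single-object example: the candidate segment over-grows the reference object,
# so precision drops while recall stays perfect.
reference = np.zeros((10, 10), dtype=int); reference[2:6, 2:6] = 1
segmentation = np.zeros((10, 10), dtype=int); segmentation[2:8, 2:8] = 1
p, r = overlap_precision_recall(segmentation, reference)
print(p, r, f_measure(p, r))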
ERIC Educational Resources Information Center
Udden, Julia; Ingvar, Martin; Hagoort, Peter; Petersson, Karl M.
2012-01-01
A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study…
Skill Acquisition: Compilation of Weak-Method Problem Solutions.
1985-08-12
difference largely disappears by the fourth day when they are still working with Perverse EMACS. Compared to Day 1 on EMACS, there is a large positive ... This reinforces the idea that production representation captures significant features of our procedural knowledge and that differences between ... memory load is certainly consistent with the working memory plus production system hypothesis. Immediate Feedback: The importance of immediate
Therese M. Poland; J. H. Borden; A. J. Stock; L. J. Chong
1998-01-01
We tested the hypothesis that green leaf volatiles (GLVs) disrupt the response of spruce beetles, Dendroctonus rufipennis Kirby, and western pine beetles, Dendroctonus brevicomis LeConte, to attractant-baited traps. Two green leaf aldehydes, hexanal and (E)-2-hexenal, reduced the number of spruce beetles captured...
NASA Astrophysics Data System (ADS)
Fomin, Nadia
2012-03-01
The NPDGamma experiment aims to measure the parity-odd correlation between the neutron spin and the direction of the emitted photon in neutron-proton capture. A parity-violating asymmetry (to be measured to 10^-8) from this process can be directly related to the strength of the hadronic weak interaction between nucleons. As part of the commissioning runs on the Fundamental Neutron Physics beamline at the Spallation Neutron Source at ORNL, the gamma-ray asymmetry from the parity-violating capture of cold neutrons on 35Cl was measured, primarily to check for systematic effects and false asymmetries. The current precision from existing world measurements on this asymmetry is at the level of 10^-6 and we believe we can improve it. The analysis methodology as well as preliminary results will be presented.
Genetics and recent human evolution.
Templeton, Alan R
2007-07-01
Starting with "mitochondrial Eve" in 1987, genetics has played an increasingly important role in studies of the last two million years of human evolution. It initially appeared that genetic data resolved the basic models of recent human evolution in favor of the "out-of-Africa replacement" hypothesis in which anatomically modern humans evolved in Africa about 150,000 years ago, started to spread throughout the world about 100,000 years ago, and subsequently drove to complete genetic extinction (replacement) all other human populations in Eurasia. Unfortunately, many of the genetic studies on recent human evolution have suffered from scientific flaws, including misrepresenting the models of recent human evolution, focusing upon hypothesis compatibility rather than hypothesis testing, committing the ecological fallacy, and failing to consider a broader array of alternative hypotheses. Once these flaws are corrected, there is actually little genetic support for the out-of-Africa replacement hypothesis. Indeed, when genetic data are used in a hypothesis-testing framework, the out-of-Africa replacement hypothesis is strongly rejected. The model of recent human evolution that emerges from a statistical hypothesis-testing framework does not correspond to any of the traditional models of human evolution, but it is compatible with fossil and archaeological data. These studies also reveal that any one gene or DNA region captures only a small part of human evolutionary history, so multilocus studies are essential. As more and more loci became available, genetics will undoubtedly offer additional insights and resolutions of human evolution.
Effects of peanut stand uniformity and herbicide regime on weed management and yield
USDA-ARS?s Scientific Manuscript database
Crop stand directly affects ability of any crop to compete with weeds. To capture this form of cultural weed control, final crop stands need to be uniform. Peanut stands are frequently non-uniform, despite the use of precision vacuum planters. Trials were conducted from 2009 through 2011 in Tifto...
Automated mosaicking of sub-canopy video incorporating ancillary data
E. Kee; N.E. Clark; A.L. Abbott
2002-01-01
This work investigates the process of mosaicking overlapping video frames of individual tree stems in sub-canopy scenes captured with a portable multisensor instrument. The robust commercial computer vision systems that are in use today typically rely on precisely controlled conditions. Inconsistent lighting as well as image distortion caused by varying interior and...
A new shock-capturing numerical scheme for ideal hydrodynamics
NASA Astrophysics Data System (ADS)
Fecková, Z.; Tomášik, B.
2015-05-01
We present a new algorithm for solving ideal relativistic hydrodynamics based on Godunov method with an exact solution of Riemann problem for an arbitrary equation of state. Standard numerical tests are executed, such as the sound wave propagation and the shock tube problem. Low numerical viscosity and high precision are attained with proper discretization.
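To make the Godunov-type idea concrete, here is a compact finite-volume sketch for the classical (non-relativistic) Euler equations on the Sod shock-tube test, using an HLL approximate flux rather than the paper's exact relativistic Riemann solver; resolution, CFL number, and boundary treatment are illustrative choices.

import numpy as np

GAMMA = 1.4  # ideal-gas adiabatic index

def primitives(U):
    """Conserved [rho, rho*u, E] -> primitive (rho, u, p)."""
    rho = U[0]; u = U[1] / rho
    p = (GAMMA - 1.0) * (U[2] - 0.5 * rho * u**2)
    return rho, u, p

def flux(U):
    rho, u, p = primitives(U)
    return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

def hll_flux(UL, UR):
    """HLL approximate Riemann flux at one cell interface."""
    rhoL, uL, pL = primitives(UL); rhoR, uR, pR = primitives(UR)
    cL = np.sqrt(GAMMA * pL / rhoL); cR = np.sqrt(GAMMA * pR / rhoR)
    sL = min(uL - cL, uR - cR); sR = max(uL + cL, uR + cR)
    if sL >= 0.0:
        return flux(UL)
    if sR <= 0.0:
        return flux(UR)
    return (sR * flux(UL) - sL * flux(UR) + sL * sR * (UR - UL)) / (sR - sL)

def sod_shock_tube(ncells=200, t_end=0.2, cfl=0.5):
    dx = 1.0 / ncells
    U = np.zeros((3, ncells))
    left = np.arange(ncells) < ncells // 2
    rho = np.where(left, 1.0, 0.125); p = np.where(left, 1.0, 0.1)
    U[0] = rho; U[1] = 0.0; U[2] = p / (GAMMA - 1.0)   # zero initial velocity
    t = 0.0
    while t < t_end:
        rho, u, p = primitives(U)
        dt = cfl * dx / np.max(np.abs(u) + np.sqrt(GAMMA * p / rho))
        dt = min(dt, t_end - t)
        Upad = np.concatenate([U[:, :1], U, U[:, -1:]], axis=1)  # outflow boundaries
        F = np.array([hll_flux(Upad[:, i], Upad[:, i + 1])
                      for i in range(ncells + 1)]).T
        U -= dt / dx * (F[:, 1:] - F[:, :-1])
        t += dt
    return primitives(U)

rho, u, p = sod_shock_tube()
print(rho[::40])  # coarse density profile after the diaphragm "breaks"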
Urban Desolation and Symbolic Denigration in the Hyperghetto
ERIC Educational Resources Information Center
Wacquant, Loic
2010-01-01
The scene of urban desolation and social despair captured by the cover picture of 63rd Street, one of the ghostly thoroughfares transecting Chicago's collapsing black ghetto at century's close, invites everyone to reflect on the link between the built environment, social structure, and collective psychology. More precisely, it points to the need…
NASA Astrophysics Data System (ADS)
Zieschang, H. E.; Sievers, A.
1994-08-01
With the mathematical basis for the precise analysis of developmental processes in plants, the patterns of growth in phototropic and gravitropic responses have become better understood. A detailed temporal and spatial quantification of a growth process is an important tool for evaluating hypotheses about the underlying physiological mechanisms. Studies of growth rates and curvature show that the original Cholodny-Went hypothesis cannot explain the complex growth patterns during tropic responses of shoots and roots. In addition, regulating factors other than the lateral redistribution of hormones must be taken into account. Electrophysiological studies on roots led to a modification of the Cholodny-Went hypothesis in that redistributions of bioelectrical activities are observed.
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Brown, J. W.; Lewis, J. L.
1982-01-01
An enhanced proximity sensor and display system was developed at the Jet Propulsion Laboratory (JPL) and tested on the full-scale Space Shuttle Remote Manipulator at the Johnson Space Center (JSC) Manipulator Development Facility (MDF). The sensor system, integrated with a four-claw end effector, measures range error up to 6 inches, and pitch and yaw alignment errors within ±15 deg., and displays error data on both graphic and numeric displays. The errors are referenced to the end effector control axes through appropriate data processing by a dedicated microcomputer acting on the sensor data in real time. Both display boxes contain a green lamp which indicates whether the combination of range, pitch and yaw errors will assure a successful grapple. More than 200 test runs were completed in early 1980 by three operators at JSC for grasping static and capturing slowly moving targets. The tests have indicated that the use of graphic/numeric displays of proximity sensor information improves precision control of grasp/capture range by more than a factor of two for both static and dynamic grapple conditions.
Recent Results from the Daya Bay Reactor Neutrino Experiment
NASA Astrophysics Data System (ADS)
Huang, En-Chuan
2016-11-01
The Daya Bay Reactor Neutrino Experiment is designed to precisely measure the mixing parameter sin^2(2θ_13) via relative measurements with eight functionally identical antineutrino detectors (ADs). In 2012, Daya Bay first measured a non-zero sin^2(2θ_13) value with a significance larger than 5σ with the first six ADs. With the installation of two new ADs to complete the full configuration, Daya Bay has continued to increase statistics and lower systematic uncertainties for better precision on sin^2(2θ_13) and for the exploration of other physics topics. In this proceeding, the latest analysis results for sin^2(2θ_13) and |Δm^2_ee|, including a measurement made with neutron capture on gadolinium and an independent measurement made with neutron capture on hydrogen, are presented. The latest results of the search for a sterile neutrino in the mass splitting range of 10^-3 eV^2 < |Δm^2_41| < 0.3 eV^2 and the absolute measurement of the rate and energy spectrum of reactor antineutrinos will also be presented.
Ma, Xingyi; Sim, Sang Jun
2013-03-21
Even though DNA-based nanosensors have been demonstrated for quantitative detection of analytes and diseases, hybridization events have never been numerically investigated for further understanding of DNA mediated interactions. Here, we developed a nanoscale platform with well-designed capture and detection gold nanoprobes to precisely evaluate the hybridization events. The capture gold nanoprobes were mono-laid on glass and the detection probes were fabricated via a novel competitive conjugation method. The two kinds of probes combined in a suitable orientation following the hybridization with the target. We found that hybridization efficiency was markedly dependent on electrostatic interactions between DNA strands, which can be tailored by adjusting the salt concentration of the incubation solution. Due to the much lower stability of the double helix formed by mismatches, the hybridization efficiencies of single mismatched (MMT) and perfectly matched DNA (PMT) were different. Therefore, we obtained an optimized salt concentration that allowed for discrimination of MMT from PMT without stringent control of temperature or pH. The results indicated this to be an ultrasensitive and precise nanosensor for the diagnosis of genetic diseases.
G-DOC Plus - an integrative bioinformatics platform for precision medicine.
Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha
2016-04-30
G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical BIG DATA including gene expression arrays, NGS and medical images so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples; or at the level of population, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation; biomarker discovery and multi-omics analysis, to explore somatic mutations and cancer MRI images; as well as for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research. G-DOC Plus is available at: https://gdoc.georgetown.edu .
Enriching plausible new hypothesis generation in PubMed.
Baek, Seung Han; Lee, Dahee; Kim, Minjoo; Lee, Jong Ho; Song, Min
2017-01-01
Most earlier studies in the field of literature-based discovery have adopted Swanson's ABC model that links pieces of knowledge entailed in disjoint literatures. However, the issue concerning their practicability remains to be solved since most of them did not deal with the context surrounding the discovered associations and were usually not accompanied by clinical confirmation. In this study, we aim to propose a method that expands and elaborates the existing hypothesis by advanced text mining techniques for capturing contexts. We extend the ABC model to allow for multiple B terms with various biological types. We were able to concretize a specific, metabolite-related hypothesis with abundant contextual information by using the proposed method. Starting from explaining the relationship between lactosylceramide and arterial stiffness, the hypothesis was extended to suggest a potential pathway consisting of lactosylceramide, nitric oxide, malondialdehyde, and arterial stiffness. The experiment by domain experts showed that it is clinically valid. The proposed method is designed to provide plausible candidates of the concretized hypothesis, which are based on extracted heterogeneous entities and detailed relation information, along with a reliable ranking criterion. Statistical tests collaboratively conducted with biomedical experts provide the validity and practical usefulness of the method, unlike previous studies. Applied to other cases, the proposed method should help biologists support an existing hypothesis and follow the logical process within it.
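The classical single-B version of Swanson's ABC model that the authors extend can be sketched as a co-occurrence scan over disjoint literatures. The toy abstracts below echo the lactosylceramide/arterial-stiffness example but are invented, and the simple product score is a placeholder for the paper's entity- and relation-aware ranking.

from collections import Counter

abstracts = [
    "lactosylceramide increases oxidative stress and nitric oxide depletion",
    "nitric oxide availability modulates arterial stiffness in hypertension",
    "lactosylceramide exposure raises malondialdehyde in endothelial cells",
    "arterial stiffness correlates with malondialdehyde levels",
    "malondialdehyde is a marker of oxidative stress",
]
a_term, c_term = "lactosylceramide", "arterial stiffness"
candidate_b = ["nitric oxide", "malondialdehyde", "oxidative stress"]

def cooccurs(term1, term2, text):
    return term1 in text and term2 in text

# Score each B term by how often it co-occurs with A and with C across the corpus.
scores = Counter()
for b in candidate_b:
    a_b = sum(cooccurs(a_term, b, t) for t in abstracts)
    b_c = sum(cooccurs(b, c_term, t) for t in abstracts)
    if a_b and b_c:                      # keep only B terms bridging A and C
        scores[b] = a_b * b_c

for b, score in scores.most_common():
    print(b, score)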
3D reconstruction optimization using imagery captured by unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bassie, Abby L.; Meacham, Sean; Young, David; Turnage, Gray; Moorhead, Robert J.
2017-05-01
Because unmanned air vehicles (UAVs) are emerging as an indispensable image acquisition platform in precision agriculture, it is vitally important that researchers understand how to optimize UAV camera payloads for analysis of surveyed areas. In this study, imagery captured by a Nikon RGB camera attached to a Precision Hawk Lancaster was used to survey an agricultural field from six different altitudes ranging from 45.72 m (150 ft.) to 121.92 m (400 ft.). After collecting imagery, two different software packages (MeshLab and AgiSoft) were used to measure predetermined reference objects within six three-dimensional (3-D) point clouds (one per altitude scenario). In-silico measurements were then compared to actual reference object measurements, as recorded with a tape measure. Deviations of in-silico measurements from actual measurements were recorded as Δx, Δy, and Δz. The average measurement deviation in each coordinate direction was then calculated for each of the six flight scenarios. Results from MeshLab vs. AgiSoft offered insight into the effectiveness of GPS-defined point cloud scaling in comparison to user-defined point cloud scaling. In three of the six flight scenarios flown, MeshLab's 3D imaging software (user-defined scale) was able to measure object dimensions from 50.8 to 76.2 cm (20-30 inches) with greater than 93% accuracy. The largest average deviation in any flight scenario from actual measurements was 14.77 cm (5.82 in.). Analysis of the point clouds in AgiSoft (GPS-defined scale) yielded even smaller Δx, Δy, and Δz than the MeshLab measurements in over 75% of the flight scenarios. The precisions of these results are satisfactory in a wide variety of precision agriculture applications focused on differentiating and identifying objects using remote imagery.
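The per-axis deviation summary described above amounts to simple array arithmetic. A minimal sketch with placeholder measurements (the numbers are not from the study):

import numpy as np

# Hypothetical tape-measure vs. in-silico lengths (cm) for two reference objects,
# one row per object, columns x, y, z.
measured = np.array([[50.8, 61.0, 30.5], [76.2, 40.6, 25.4]])
in_silico = np.array([[52.1, 59.4, 31.9], [73.0, 42.2, 24.1]])

deviation = in_silico - measured                     # Δx, Δy, Δz per object
avg_abs_dev = np.abs(deviation).mean(axis=0)         # average deviation per axis
accuracy = 100 * (1 - np.abs(deviation) / measured)  # percent accuracy per measurement
print(avg_abs_dev, accuracy.round(1))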
Scent Lure Effect on Camera-Trap Based Leopard Density Estimates
Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke
2016-01-01
Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100 km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100 km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816
High-precision half-life determination for the superallowed β+ emitter Ga62
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Finlay, P.; Svensson, C. E.; Ball, G. C.; Leslie, J. R.; Austin, R. A. E.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Hackman, G.; Hyland, B.; Kanungo, R.; Leach, K. G.; Mattoon, C. M.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Ressler, J. J.; Sarazin, F.; Savajols, H.; Schumaker, M. A.; Wong, J.
2008-01-01
The half-life of the superallowed β+ emitter Ga62 has been measured at TRIUMF's Isotope Separator and Accelerator facility using a fast-tape-transport system and 4π continuous-flow gas proportional counter to detect the positrons from the decay of Ga62 to the daughter Zn62. The result, T1/2=116.100±0.025 ms, represents the most precise measurement to date (0.022%) for any superallowed β-decay half-life. When combined with six previous measurements of the Ga62 half-life, a new world average of T1/2=116.121±0.021 ms is obtained. This new half-life measurement results in a 20% improvement in the precision of the Ga62 superallowed ft value while reducing its mean by 0.9σ to ft=3074.3(12) s. The impact of this half-life measurement on precision tests of the CVC hypothesis and isospin symmetry breaking corrections for A⩾62 superallowed decays is discussed.
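Combining independent half-life measurements into a world average is typically done with an inverse-variance weighted mean. The sketch below assumes that procedure; only the first value (this work's 116.100 ± 0.025 ms) comes from the abstract, and the other six entries are placeholders, so the printed average will not reproduce the quoted 116.121 ± 0.021 ms.

import numpy as np

# First entry: the measurement reported above; remaining values are invented stand-ins
# for the six previous Ga-62 half-life measurements.
t_half = np.array([116.100, 116.19, 116.09, 116.01, 116.18, 116.05, 116.12])  # ms
sigma  = np.array([0.025,   0.12,   0.10,   0.15,   0.09,   0.08,   0.23])    # ms

weights = 1.0 / sigma**2
world_average = np.sum(weights * t_half) / np.sum(weights)
average_error = 1.0 / np.sqrt(np.sum(weights))
print(f"{world_average:.3f} +/- {average_error:.3f} ms")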
Unexpected arousal modulates the influence of sensory noise on confidence
Allen, Micah; Frank, Darya; Schwarzkopf, D Samuel; Fardo, Francesca; Winston, Joel S; Hauser, Tobias U; Rees, Geraint
2016-01-01
Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states. DOI: http://dx.doi.org/10.7554/eLife.18103.001 PMID:27776633
Nichols, James D.; Hines, James E.; Pollock, Kenneth H.; Hinz, Robert L.; Link, William A.
1994-01-01
The proportion of animals in a population that breeds is an important determinant of population growth rate. Usual estimates of this quantity from field sampling data assume that the probability of appearing in the capture or count statistic is the same for animals that do and do not breed. A similar assumption is required by most existing methods used to test ecologically interesting hypotheses about reproductive costs using field sampling data. However, in many field sampling situations breeding and nonbreeding animals are likely to exhibit different probabilities of being seen or caught. In this paper, we propose the use of multistate capture-recapture models for these estimation and testing problems. This methodology permits a formal test of the hypothesis of equal capture/sighting probabilities for breeding and nonbreeding individuals. Two estimators of breeding proportion (and associated standard errors) are presented, one for the case of equal capture probabilities and one for the case of unequal capture probabilities. The multistate modeling framework also yields formal tests of hypotheses about reproductive costs to future reproduction or survival or both fitness components. The general methodology is illustrated using capture-recapture data on female meadow voles, Microtus pennsylvanicus. Resulting estimates of the proportion of reproductively active females showed strong seasonal variation, as expected, with low breeding proportions in midwinter. We found no evidence of reproductive costs extracted in subsequent survival or reproduction. We believe that this methodological framework has wide application to problems in animal ecology concerning breeding proportions and phenotypic reproductive costs.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
Xu, Hongwei; Dong, Biao; Xiao, Qiaoqin; Sun, Xueke; Zhang, Xinran; Lyu, Jiekai; Yang, Yudan; Xu, Lin; Bai, Xue; Zhang, Shuang; Song, Hongwei
2017-09-13
Artificial fractal structures have attracted considerable scientific interest in circulating tumor cell (CTC) detection and capture, which plays a pivotal role in the diagnosis and prognosis of cancer. Herein, we designed a bionic TiO2 inverse opal photonic crystal (IOPC) structure for highly efficient immunocapture of CTCs by a combination of magnetic Fe3O4@C6@silane nanoparticles with anti-EpCAM (anti-epithelial cell adhesion molecule) and a microchannel structure. The porous structure and dimensions of the TiO2 IOPC can be precisely controlled for mimicking cellular components, and the anti-EpCAM antibody was further modified on the IOPC interface by conjugation with polydopamine (PDA). The improvement in CTC capture efficiency reaches a surprising factor of 20 for the IOPC interface compared to that on flat glass, suggesting that the IOPCs are responsible for the dramatic enhancement of the capture efficiency of MCF-7 cells. The IOPC substrate with a pore size of 415 nm leads to an optimal CTC capture efficiency of 92% at 1 mL/h. Besides the cell affinity, IOPCs also have the advantage of a light-scattering property which can enhance the excitation and emission light of fluorescence labels, facilitating the real-time monitoring of CTC capture. The IOPC-based platform demonstrates excellent performance in CTC capture, which will take an important step toward specific recognition of disease-related rare cells.
Face Value: Towards Robust Estimates of Snow Leopard Densities.
Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip
2015-01-01
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.
Face Value: Towards Robust Estimates of Snow Leopard Densities
2015-01-01
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682
Laser capture microdissection: Arcturus(XT) infrared capture and UV cutting methods.
Gallagher, Rosa I; Blakely, Steven R; Liotta, Lance A; Espina, Virginia
2012-01-01
Laser capture microdissection (LCM) is a technique that allows the precise procurement of enriched cell populations from a heterogeneous tissue under direct microscopic visualization. LCM can be used to harvest the cells of interest directly or can be used to isolate specific cells by ablating the unwanted cells, resulting in histologically enriched cell populations. The fundamental components of laser microdissection technology are (a) visualization of the cells of interest via microscopy, (b) transfer of laser energy to a thermolabile polymer with either the formation of a polymer-cell composite (capture method) or transfer of laser energy via an ultraviolet laser to photovolatize a region of tissue (cutting method), and (c) removal of cells of interest from the heterogeneous tissue section. Laser energy supplied by LCM instruments can be infrared (810 nm) or ultraviolet (355 nm). Infrared lasers melt thermolabile polymers for cell capture, whereas ultraviolet lasers ablate cells for either removal of unwanted cells or excision of a defined area of cells. LCM technology is applicable to an array of applications including mass spectrometry, DNA genotyping and loss-of-heterozygosity analysis, RNA transcript profiling, cDNA library generation, proteomics discovery, and signal kinase pathway profiling. This chapter describes the unique features of the Arcturus(XT) laser capture microdissection instrument, which incorporates both infrared capture and ultraviolet cutting technology in one instrument, using a proteomic downstream assay as a model.
Lamm, Ayelet T; Stadler, Michael R; Zhang, Huibin; Gent, Jonathan I; Fire, Andrew Z
2011-02-01
We have used a combination of three high-throughput RNA capture and sequencing methods to refine and augment the transcriptome map of a well-studied genetic model, Caenorhabditis elegans. The three methods include a standard (non-directional) library preparation protocol relying on cDNA priming and foldback that has been used in several previous studies for transcriptome characterization in this species, and two directional protocols, one involving direct capture of single-stranded RNA fragments and one involving circular-template PCR (CircLigase). We find that each RNA-seq approach shows specific limitations and biases, with the application of multiple methods providing a more complete map than was obtained from any single method. Of particular note in the analysis were substantial advantages of CircLigase-based and ssRNA-based capture for defining sequences and structures of the precise 5' ends (which were lost using the double-strand cDNA capture method). Of the three methods, ssRNA capture was most effective in defining sequences to the poly(A) junction. Using data sets from a spectrum of C. elegans strains and stages and the UCSC Genome Browser, we provide a series of tools, which facilitate rapid visualization and assignment of gene structures.
Keehn, Brandon; Nair, Aarti; Lincoln, Alan J; Townsend, Jeanne; Müller, Ralph-Axel
2016-02-01
For individuals with autism spectrum disorder (ASD), salient behaviorally-relevant information often fails to capture attention, while subtle behaviorally-irrelevant details commonly induce a state of distraction. The present study used functional magnetic resonance imaging (fMRI) to investigate the neurocognitive networks underlying attentional capture in sixteen high-functioning children and adolescents with ASD and twenty-one typically developing (TD) individuals. Participants completed a rapid serial visual presentation paradigm designed to investigate activation of attentional networks to behaviorally-relevant targets and contingent attention capture by task-irrelevant distractors. In individuals with ASD, target stimuli failed to trigger bottom-up activation of the ventral attentional network and the cerebellum. Additionally, the ASD group showed no differences in behavior or occipital activation associated with contingent attentional capture. Rather, results suggest that to-be-ignored distractors that shared either task-relevant or irrelevant features captured attention in ASD. Results indicate that individuals with ASD may be under-reactive to behaviorally-relevant stimuli, unable to filter irrelevant information, and that both top-down and bottom-up attention networks function atypically in ASD. Lastly, deficits in target-related processing were associated with autism symptomatology, providing further support for the hypothesis that non-social attentional processes and their neurofunctional underpinnings may play a significant role in the development of sociocommunicative impairments in ASD. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
A High-Precision Branching-Ratio Measurement for the Superallowed β+ Emitter 74Rb
NASA Astrophysics Data System (ADS)
Dunlop, R.; Chagnon-Lessard, S.; Finlay, P.; Garrett, P. E.; Hadinia, B.; Leach, K. G.; Svensson, C. E.; Wong, J.; Ball, G.; Garnsworthy, A. B.; Glister, J.; Hackman, G.; Tardiff, E. R.; Triambak, S.; Williams, S. J.; Leslie, J. R.; Andreoiu, C.; Chester, A.; Cross, D.; Starosta, K.; Yates, S. W.; Zganjar, E. F.
2013-03-01
Precision measurements of superallowed Fermi beta decay allow for tests of Cabibbo-Kobayashi-Maskawa (CKM) matrix unitarity, the conserved vector current hypothesis, and the magnitude of isospin-symmetry-breaking effects in nuclei. A high-precision measurement of the branching ratio for the β+ decay of 74Rb has been performed at the Isotope Separator and ACcelerator (ISAC) facility at TRIUMF. The 8π spectrometer, an array of 20 close-packed HPGe detectors, was used to detect gamma rays emitted following the decay of 74Rb. PACES, an array of 5 Si(Li) detectors, was used to detect emitted conversion electrons, while SCEPTAR, an array of plastic scintillators, was used to detect emitted beta particles. A total of 51 γ rays have been identified following the decay of 21 excited states in the daughter nucleus 74Kr.
Neutron activation analyses and half-life measurements at the usgs triga reactor
NASA Astrophysics Data System (ADS)
Larson, Robert E.
Neutron activation of materials followed by gamma spectroscopy using high-purity germanium detectors is an effective method for making measurements of nuclear beta-decay half-lives and for detecting trace amounts of elements present in materials. This research explores applications of neutron activation analysis (NAA) in two parts. Part 1: High Precision Methods for Measuring Decay Half-Lives (Chapters 1 through 8). Part one develops research methods and data analysis techniques for making high-precision measurements of nuclear beta-decay half-lives. The change in the electron capture half-life of 51Cr in pure chromium versus chromium mixed in a gold lattice structure is explored, and the 97Ru electron capture decay half-life is compared for ruthenium in a pure crystal versus ruthenium in a rutile oxide state, RuO2. In addition, the beta-minus decay half-life of 71mZn is measured and compared with new high-precision findings. Density Functional Theory is used to explain the measured magnitude of changes in electron capture half-life arising from changes in the surrounding lattice electron configuration. Part 2: Debris Collection Nuclear Diagnostic at the National Ignition Facility (Chapters 9 through 11). Part two explores the design and development of a solid debris collector for use as a diagnostic tool at the National Ignition Facility (NIF). NAA measurements are performed on NIF post-shot debris collected on witness plates in the NIF chamber. In this application NAA is used to detect and quantify trace amounts of gold from the hohlraum and germanium from the pellet present in the debris collected after a NIF shot. The design of a solid debris collector based on material x-ray ablation properties is given, and calculations are done to predict performance and results for the collection and measurement of trace amounts of gold and germanium from dissociated hohlraum debris.
A High-Resolution, Three-Dimensional Model of Jupiter's Great Red Spot
NASA Technical Reports Server (NTRS)
Cho, James Y.-K.; delaTorreJuarez, Manuel; Ingersoll, Andrew P.; Dritschel, David G.
2001-01-01
The turbulent flow at the periphery of the Great Red Spot (GRS) contains many fine-scale filamentary structures, while the more quiescent core, bounded by a narrow high-velocity ring, exhibits organized, possibly counterrotating, motion. Past studies have neither been able to capture this complexity nor adequately study the effect of vertical stratification L_R(ζ) on the GRS. We present results from a series of high-resolution, three-dimensional simulations that advect the dynamical tracer, potential vorticity. The detailed flow is successfully captured with a characteristic value of L_R ≈ 2000 km, independent of the precise vertical stratification profile.
NASA Astrophysics Data System (ADS)
Merhej, M.; Honegger, T.; Bassani, F.; Baron, T.; Peyrade, D.; Drouin, D.; Salem, B.
2018-01-01
The assembly of semiconductor nanowires with nanoscale precision is crucial for their integration into functional systems. In this work, we propose a novel method to experimentally determine the real part of the Clausius-Mossotti factor (CMF) of silicon and silicon-germanium nanowires. This CMF is quantified from the nanowire velocities in a pure dielectrophoretic regime. This approach, combined with a study of the connected-nanowire alignment yield, has led to an evaluation of the capture frequency. In addition, we also present the morphology of nanowire assemblies obtained by dielectrophoresis over a wide frequency range of AC electric fields.
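For orientation, the real part of the Clausius-Mossotti factor sets the sign and strength of the dielectrophoretic force and hence the capture frequency. The sketch below uses the isotropic spherical form of the factor with illustrative material parameters; actual nanowires require shape-dependent depolarization factors, and none of the numbers are taken from the paper.

import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor_sphere(eps_p, sig_p, eps_m, sig_m, freq):
    """Complex Clausius-Mossotti factor for a homogeneous sphere.
    Elongated particles replace the factor 2 with depolarization-dependent terms;
    this isotropic form only illustrates the frequency trend."""
    w = 2 * np.pi * freq
    ep = eps_p * EPS0 - 1j * sig_p / w   # complex permittivity of the particle
    em = eps_m * EPS0 - 1j * sig_m / w   # complex permittivity of the medium
    return (ep - em) / (ep + 2 * em)

# Illustrative parameters: silicon-like particle suspended in deionized water.
freqs = np.logspace(3, 8, 6)             # 1 kHz .. 100 MHz
K = cm_factor_sphere(eps_p=11.7, sig_p=1e-2, eps_m=78.0, sig_m=2e-4, freq=freqs)
for f, k in zip(freqs, K.real):
    print(f"{f:10.0f} Hz  Re(CM) = {k:+.2f}")   # sign flip marks the crossover frequency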
NASA Astrophysics Data System (ADS)
Mazurowski, Maciej A.; Tourassi, Georgia D.
2011-03-01
In this study we investigate the hypothesis that there exist patterns in erroneous assessment of BI-RADS image features among radiology trainees when performing diagnostic interpretation of mammograms. We also investigate whether these error-making patterns can be captured by individual user models. To test our hypothesis we propose a user modeling algorithm that uses the previous readings of a trainee to identify whether certain BI-RADS feature values (e.g. "spiculated" value for "margin" feature) are associated with higher than usual likelihood that the feature will be assessed incorrectly. In our experiments we used readings of 3 radiology residents and 7 breast imaging experts for 33 breast masses for the following BI-RADS features: parenchyma density, mass margin, mass shape and mass density. The expert readings were considered as the gold standard. Rule-based individual user models were developed and tested using the leave-one-out cross-validation scheme. Our experimental evaluation showed that the individual user models are accurate in identifying cases for which errors are more likely to be made. The user models captured regularities in error making for all 3 residents. This finding supports our hypothesis about the existence of individual error-making patterns in assessment of mammographic image features using the BI-RADS lexicon. Explicit user models identifying the weaknesses of each resident could be of great use when developing and adapting a personalized training plan to meet the resident's individual needs. Such an approach fits well with the framework of adaptive computer-aided educational systems in mammography we have proposed before.
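A minimal sketch of the kind of rule-based user model and leave-one-out evaluation described above, assuming a simple "flag feature values whose past error rate exceeds a threshold" rule; the reading records, threshold, and scoring below are invented and not the algorithm's actual details.

# Toy reading records for one trainee: (feature, value, trainee_correct).
readings = [
    ("margin", "spiculated", False), ("margin", "spiculated", False),
    ("margin", "circumscribed", True), ("margin", "circumscribed", True),
    ("shape", "irregular", False), ("shape", "round", True),
    ("shape", "round", True), ("margin", "spiculated", True),
]

def predict_error(train, feature, value, threshold=0.5):
    """Rule: flag a feature value if the trainee's past error rate on it exceeds threshold."""
    errs, total = 0, 0
    for f, v, correct in train:
        if (f, v) == (feature, value):
            total += 1
            errs += not correct
    return total > 0 and errs / total > threshold

# Leave-one-out cross-validation of the rule-based user model.
hits = 0
for i, (f, v, correct) in enumerate(readings):
    train = readings[:i] + readings[i + 1:]
    predicted_error = predict_error(train, f, v)
    hits += predicted_error == (not correct)
print(f"LOO accuracy: {hits}/{len(readings)}")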
Muon capture on light isotopes in Double Chooz
NASA Astrophysics Data System (ADS)
Strait, M.; Double Chooz Collaboration
2017-09-01
Using the Double Chooz reactor neutrino detector, we have measured the products of µ⁻ capture on 12C, 13C, 14N and 16O. Over a period of 490 days, we collected 2.3 × 10^6 stopping cosmic µ⁻, of which 1.8 × 10^5 captured on these nuclei in the inner detector. The resulting isotopes were tagged using prompt neutron emission (when applicable), the subsequent beta decays, and, in some cases, β-delayed neutrons. Production of these βn isotopes, primarily 9Li, which are ν̄e backgrounds, was found at a significance of 5.5σ. The probability of 9Li production per capture on natC is (2.4 ± 0.9(stat) ± 0.1(syst)) × 10^-4. We have made the most precise measurement of the rate of 12C(µ⁻, ν)12B to date, 6.57 (+0.11/-0.21) × 10^3 s^-1, or (17.35 +0.35/-0.59)% of nuclear captures. By tagging excited states emitting gammas, the ground-state transition rate to 12B is found to be 5.68 (+0.14/-0.23) × 10^3 s^-1.
PROSPECT - A precision oscillation and spectrum experiment
NASA Astrophysics Data System (ADS)
Langford, T. J.; PROSPECT Collaboration
2015-08-01
Segmented antineutrino detectors placed near a compact research reactor provide an excellent opportunity to probe short-baseline neutrino oscillations and precisely measure the reactor antineutrino spectrum. Close proximity to a reactor combined with minimal overburden yield a high background environment that must be managed through shielding and detector technology. PROSPECT is a new experimental effort to detect reactor antineutrinos from the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory, managed by UT Battelle for the U.S. Department of Energy. The detector will use novel lithium-loaded liquid scintillator capable of neutron/gamma pulse shape discrimination and neutron capture tagging. These enhancements improve the ability to identify neutrino inverse-beta decays (IBD) and reject background events in analysis. Results from these efforts will be covered along with their implications for an oscillation search and a precision spectrum measurement.
NASA Technical Reports Server (NTRS)
Schroeder, J. A.; Merrick, V. K.
1990-01-01
Several control and display concepts were evaluated on a variable-stability helicopter prior to future evaluations on a modified Harrier. The control and display concepts had been developed to enable precise hover maneuvers, station keeping, and vertical landings in simulated zero-visibility conditions and had been evaluated extensively in previous piloted simulations. Flight evaluations early in the program revealed several inadequacies in the display drive laws that were later corrected using an alternative design approach that integrated the control and display characteristics with the desired guidance law. While hooded, three pilots performed landing-pad captures followed by vertical landings with attitude-rate, attitude, and translation-velocity-command control systems. The latter control system incorporated a modified version of state-rate-feedback implicit-model following. Precise landings within 2 ft of the desired touchdown point were achieved.
Computing a Comprehensible Model for Spam Filtering
NASA Astrophysics Data System (ADS)
Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael
In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
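For reference, the four evaluation measures named above can be computed from a confusion matrix as in this minimal sketch (label conventions are assumed; this is not the code used in the paper):

```python
# Minimal sketch of the four measures for a spam/ham classifier (assumed labels).
def evaluate(y_true, y_pred, positive="spam"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"precision": precision, "recall": recall, "F1": f1, "accuracy": accuracy}

print(evaluate(["spam", "ham", "spam", "ham"], ["spam", "spam", "spam", "ham"]))
```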
Influence of dreissenid mussels on catchability of benthic fishes in bottom trawls
Kocovsky, Patrick M.; Stapanian, Martin A.
2011-01-01
Inferring trends in true abundance of fish populations from catch per unit effort data requires either the knowledge of capture probability or the assumption that it is constant, both of which are unlikely contingencies. We developed and validated an index of catchability (a proxy measure for capture probability) from a long-term data set describing nearshore waters of western Lake Erie, and we used the index to test the hypothesis that catchability of four abundant benthic species captured in bottom trawls changed after the invasion of dreissenid mussels. We estimated daytime and nighttime catchability for 1972–1990 (predreissenid period) and 1991–2009 (dreissenid period); we then tested for differences between nighttime and daytime catchability in the predreissenid and dreissenid periods and the nighttime–daytime differential in catchability during the dreissenid period. We also tested relationships between Secchi depth and the catchability index via linear regression. Catchability indices for white perch Morone americana, yellow perch Perca flavescens, and trout-perch Percopsis omiscomaycus did not differ between daytime and nighttime during the predreissenid period. After establishment of dreissenids, all three of these species had lower daytime catchability than nighttime catchability and had positive nighttime–daytime differentials, indicating a shift toward higher nighttime catchability relative to daytime catchability. Changes in catchability indices for freshwater drum Aplodinotus grunniens were opposite the changes observed for the other three species, possibly because the freshwater drum is the only species that actively feeds on dreissenids. Catchability indices were negatively related to water clarity (Secchi depth) for three of the species. Our results are consistent with the hypothesis that catchability of the four most common benthic fish species captured in bottom trawls within nearshore waters of western Lake Erie changed after the dreissenid invasion because of increased water clarity and increased visibility, which led to greater daytime trawl avoidance.
NASA Astrophysics Data System (ADS)
Žáček, P.; Wolf, M.
2017-10-01
This paper contains a necessary modification of Bessel's equations for the axial cometary syndyne. This correction provides accurate values of molecular acceleration in a cometary tail and precise values of the decay constants for radiating molecules and of their lifetimes. As a consequence, the hypothesis of predissociation of the molecules appears to be unnecessary.
NASA Astrophysics Data System (ADS)
Kihara, Naoto; Odaka, Hidefumi; Kuboyama, Daiki; Onoshima, Daisuke; Ishikawa, Kenji; Baba, Yoshinobu; Hori, Masaru
2018-03-01
Although membrane filters are indispensable in biochemical analysis fields, most methods for through-hole fabrication are complex and inefficient. We developed a simple method of fabricating poly(ethylene terephthalate) (PET) membrane filters with a precise arrangement of through-holes for the isolation of circulating tumor cells (CTCs) based on their size. By photolithography and dry etching, 380,000 densely packed through-holes, each 7 µm in diameter, were made to cover a circular area 13 mm in diameter. Device fabrication for the size-based capture of rare cells in blood, such as CTCs, is realized in this study.
Mechanically based generative laws of morphogenesis
NASA Astrophysics Data System (ADS)
Beloussov, Lev V.
2008-03-01
A deep (although at first glance naïve) question that may be asked of embryonic development is why, during this process, quite definite and accurately reproduced successions of precise and complicated shapes take place, or why, in several cases, the result of development is highly precise in spite of extensive variability of the intermediate stages. This problem can be attacked in two different ways. One of them, so far only slightly employed, is to formulate robust macroscopic generative laws from which the observed successions of shapes could be derived. The other, which dominates in modern embryology, regards development as a succession of highly precise 'micropatterns', each of them arising due to the action of specific factors that have, as a rule, nothing in common with each other. We argue that the latter view contradicts a great bulk of firmly established data and gives no satisfactory answers to the main problems of development. Therefore we intend to follow the first way. In doing this, we regard developing embryos as self-organized systems transpierced by feedbacks, among which we pay special attention to those linked with mechanical stresses (MS). We formulate a hypothesis of so-called MS hyper-restoration as a common basis for the developmentally important feedback loops. We present a number of examples confirming this hypothesis and use it to reconstruct prolonged chains of developmental events. Finally, we discuss the application of the same set of assumptions to the first steps of egg development and to the internal differentiation of embryonic cells.
ERIC Educational Resources Information Center
Petitto, Laura Ann; Holowka, Siobhan; Sergio, Lauren E.; Levy, Bronna; Ostry, David J.
2004-01-01
The ''ba, ba, ba'' sound universal to babies' babbling around 7 months captures scientific attention because it provides insights into the mechanisms underlying language acquisition and vestiges of its evolutionary origins. Yet the prevailing mystery is what is the biological basis of babbling, with one hypothesis being that it is a non-linguistic…
Capturing the Full Potential of the Synthetic Theater Operations Research Model (STORM)
2014-09-01
1988-04-01
cooperated and coordinated their activities in absolute precision created by total mental telepathy. Although XIX Tactical Air Command and Third Army did...capture of the Romanian oil fields and increased production of synthetic oil, Germany produced enough oil to meet her military needs. By 1944, the
A test of the social cohesion hypothesis: interactive female marmots remain at home.
Blumstein, Daniel T; Wey, Tina W; Tang, Karisa
2009-08-22
Individuals frequently leave home before reaching reproductive age, but the proximate causes of natal dispersal remain relatively unknown. The social cohesion hypothesis predicts that individuals who engage in more (affiliative) interactions are less likely to disperse. Despite the intuitive nature of this hypothesis, support is both limited and equivocal. We used formal social network analyses to quantify precisely both direct and indirect measures of social cohesion in yellow-bellied marmots. Because approximately 50 per cent of female yearlings disperse, we expected that social relationships and network measures of cohesion would predict dispersal. By contrast, because most male yearlings disperse, we expected that social relationships and cohesion would play a less important role. We found that female yearlings that interacted with more individuals, and those that were more socially embedded in their groups, were less likely to disperse. For males, social interactions were relatively unimportant determinants of dispersal. This is the first strong support for the social cohesion hypothesis and suggests that the specific nature of social relationships, not simply the number of affiliative relationships, may influence the propensity to disperse.
A test of the social cohesion hypothesis: interactive female marmots remain at home
Blumstein, Daniel T.; Wey, Tina W.; Tang, Karisa
2009-01-01
Individuals frequently leave home before reaching reproductive age, but the proximate causes of natal dispersal remain relatively unknown. The social cohesion hypothesis predicts that individuals who engage in more (affiliative) interactions are less likely to disperse. Despite the intuitive nature of this hypothesis, support is both limited and equivocal. We used formal social network analyses to quantify precisely both direct and indirect measures of social cohesion in yellow-bellied marmots. Because approximately 50 per cent of female yearlings disperse, we expected that social relationships and network measures of cohesion would predict dispersal. By contrast, because most male yearlings disperse, we expected that social relationships and cohesion would play a less important role. We found that female yearlings that interacted with more individuals, and those that were more socially embedded in their groups, were less likely to disperse. For males, social interactions were relatively unimportant determinants of dispersal. This is the first strong support for the social cohesion hypothesis and suggests that the specific nature of social relationships, not simply the number of affiliative relationships, may influence the propensity to disperse. PMID:19493901
Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.
Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen
2017-11-23
Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic characteristics of softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, each measuring one component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.
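The paper's decoupling method is not spelled out in the abstract; one plausible, simplified reading is a linear calibration in which each sensor signal is modeled as a mixture of the five component motions, fitted by least squares and then inverted. The sketch below illustrates that idea only; the linearity assumption and array shapes are mine, not the authors'.

```python
# Illustrative linear decoupling (an assumption, not the published method).
import numpy as np

def calibrate(joint_angles, sensor_readings):
    """joint_angles, sensor_readings: arrays of shape (n_samples, 5).
    Fits M so that each reading vector is approximately M @ angle vector."""
    X, *_ = np.linalg.lstsq(joint_angles, sensor_readings, rcond=None)
    return X.T                      # mixing matrix M (5 x 5)

def decouple(M, reading):
    """Recover the five component motions from one coupled 5-sensor reading."""
    return np.linalg.solve(M, reading)
```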
DENSITY: software for analysing capture-recapture data from passive detector arrays
Efford, M.G.; Dawson, D.K.; Robbins, C.S.
2004-01-01
A general computer-intensive method is described for fitting spatial detection functions to capture-recapture data from arrays of passive detectors such as live traps and mist nets. The method is used to estimate the population density of 10 species of breeding birds sampled by mist-netting in deciduous forest at Patuxent Research Refuge, Laurel, Maryland, U.S.A., from 1961 to 1972. Total density (9.9 ± 0.6 ha⁻¹, mean ± SE) appeared to decline over time (slope −0.41 ± 0.15 ha⁻¹ y⁻¹). The mean precision of annual estimates for all 10 species pooled was acceptable (CV(D) = 14%). Spatial analysis of closed-population capture-recapture data highlighted deficiencies in non-spatial methodologies. For example, effective trapping area cannot be assumed constant when detection probability is variable. Simulation may be used to evaluate alternative designs for mist net arrays where density estimation is a study goal.
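As a toy illustration of why effective trapping area depends on detection parameters, the sketch below evaluates a half-normal spatial detection function (one common choice in this literature) and integrates it over the plane; the parameter values are invented and the actual DENSITY fit is far more general.

```python
# Toy half-normal detection function and the effective area it implies.
import numpy as np

def half_normal(d, g0=0.2, sigma=30.0):
    """Detection probability at distance d (m) from a detector; g0, sigma invented."""
    return g0 * np.exp(-d**2 / (2 * sigma**2))

def effective_area(g0=0.2, sigma=30.0, r_max=500.0, dr=0.5):
    """Numerically integrate g(d) over the plane: A_eff = integral of g(d)*2*pi*d dd."""
    r = np.arange(0.0, r_max, dr)
    return np.sum(half_normal(r, g0, sigma) * 2.0 * np.pi * r * dr)

print(effective_area())   # m^2; changes whenever g0 or sigma changes
```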
Compact and controlled microfluidic mixing and biological particle capture
NASA Astrophysics Data System (ADS)
Ballard, Matthew; Owen, Drew; Mills, Zachary Grant; Hesketh, Peter J.; Alexeev, Alexander
2016-11-01
We use three-dimensional simulations and experiments to develop a multifunctional microfluidic device that performs rapid and controllable microfluidic mixing and specific particle capture. Our device uses a compact microfluidic channel decorated with magnetic features. A rotating magnetic field precisely controls individual magnetic microbeads orbiting around the features, enabling effective continuous-flow mixing of fluid streams over a compact mixing region. We use computer simulations to elucidate the underlying physical mechanisms that lead to effective mixing and compare them with experimental mixing results. We study the effect of various system parameters on microfluidic mixing to design an efficient micromixer. We also experimentally and numerically demonstrate that orbiting microbeads can effectively capture particles transported by the fluid, which has major implications in pre-concentration and detection of biological particles including various cells and bacteria, with applications in areas such as point-of-care diagnostics, biohazard detection, and food safety. Support from NSF and USDA is gratefully acknowledged.
Clark, J H D; Armstrong, D S; Gorringe, T P; Hasinoff, M D; King, P M; Stocki, T J; Tripathi, S; Wright, D H; Zolnierczuk, P A
2006-02-24
We report a measurement of the ortho-para transition rate in the pμp molecule. The experiment was conducted at TRIUMF via the measurement of the time dependence of the 5.2 MeV neutrons from muon capture in liquid hydrogen. The measurement yielded an ortho-para rate Λ_op = (11.1 ± 1.7 +0.9/−0.6) × 10⁴ s⁻¹, which is substantially larger than the earlier result of Bardin et al. The result has striking implications for the proton's induced pseudoscalar coupling g_p, changing the value of g_p obtained from the most precise ordinary muon capture measurement from 10.6 ± 2.7 to 0.8 ± 2.8, and from the sole radiative muon capture measurement from 12.2 ± 1.1 to 10.6 ± 1.2, bringing the latter result closer to theoretical predictions.
The gait standard deviation, a single measure of kinematic variability.
Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren
2016-05-01
Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on a combination of the waveforms' standard deviations. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. A large number of strides was captured for each child: on average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject (stride-to-stride) variability with the between-subject variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. As a comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio of stride-to-stride to normative pattern variability was smaller in kinematic variables (the smallest for pelvic tilt, 28%) than in kinetic and spatio-temporal variables (the largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity.
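A minimal sketch of a GaitSD-style statistic and of a bootstrap estimate of its precision as a function of the number of strides, assuming the combination is a root-mean-square of point-wise standard deviations; the published definition and procedure may differ in detail.

```python
# Sketch of a pooled waveform-SD statistic and a bootstrap of its precision.
import numpy as np

def gait_sd(strides):
    """strides: array (n_strides, n_points); SD at each gait-cycle point, then RMS."""
    point_sd = strides.std(axis=0, ddof=1)
    return float(np.sqrt(np.mean(point_sd**2)))

def bootstrap_cv(strides, n_strides, n_boot=1000, seed=0):
    """Relative precision (coefficient of variation) of GaitSD from n_strides strides."""
    rng = np.random.default_rng(seed)
    estimates = np.array([gait_sd(strides[rng.choice(len(strides), n_strides)])
                          for _ in range(n_boot)])
    return estimates.std() / estimates.mean()
```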
Fast and precise dense grid size measurement method based on coaxial dual optical imaging system
NASA Astrophysics Data System (ADS)
Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei
2015-10-01
Test sieves with dense grid structure are widely used in many fields, and accurate grid size calibration is critical for successful grading analysis and test sieving. But traditional calibration methods suffer from low measurement efficiency and too small a sample of measured grids, which can lead to quality judgment risk. Here, a fast and precise test sieve inspection method is presented. Firstly, a coaxial imaging system with low- and high-magnification optical probes is designed to capture the grid images of the test sieve. Then, a scaling ratio between the low- and high-magnification probes is obtained from the corresponding grids in the captured images. With this, all grid dimensions in the low-magnification image can be obtained by measuring a few corresponding grids in the high-magnification image with high accuracy. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method can measure test sieves more efficiently than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within only 60 seconds, and it can measure grid sizes ranging from 20 μm to 5 mm precisely. In short, the presented method can calibrate the grid size of a test sieve automatically with high efficiency and accuracy, so that surface evaluation based on statistical methods can be effectively implemented and the quality judgment made more reliable.
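A worked sketch of the scaling-ratio step described above, with invented numbers: a few grids measured accurately at high magnification calibrate a µm-per-pixel ratio that is then applied to every grid in the low-magnification image (units and values are assumptions).

```python
# Invented numbers illustrating the scaling-ratio calibration.
def scaling_ratio(high_mag_sizes_um, low_mag_sizes_px):
    """Mean um-per-pixel ratio from grids visible in both probes (assumed units)."""
    pairs = list(zip(high_mag_sizes_um, low_mag_sizes_px))
    return sum(h / l for h, l in pairs) / len(pairs)

ratio = scaling_ratio([100.2, 99.8, 100.5], [50.1, 49.9, 50.2])    # ~2 um per pixel
all_grids_um = [px * ratio for px in [50.0, 50.3, 49.8]]           # every low-mag grid
print(ratio, all_grids_um)
```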
Minimalist approach to the classification of symmetry protected topological phases
NASA Astrophysics Data System (ADS)
Xiong, Zhaoxi
A number of proposals with differing predictions (e.g. group cohomology, cobordisms, group supercohomology, spin cobordisms, etc.) have been made for the classification of symmetry protected topological (SPT) phases. Here we treat various proposals on equal footing and present rigorous, general results that are independent of which proposal is correct. We do so by formulating a minimalist Generalized Cohomology Hypothesis, which is satisfied by existing proposals and captures essential aspects of SPT classification. From this Hypothesis alone, formulas relating classifications in different dimensions and/or protected by different symmetry groups are derived. Our formalism is expected to work for fermionic as well as bosonic phases, Floquet as well as stationary phases, and spatial as well as on-site symmetries.
Bacteria facilitate prey retention by the pitcher plant Darlingtonia californica
2016-01-01
Bacteria are hypothesized to provide a variety of beneficial functions to plants. Many carnivorous pitcher plants, for example, rely on bacteria for digestion of captured prey. This bacterial community may also be responsible for the low surface tensions commonly observed in pitcher plant digestive fluids, which might facilitate prey capture. I tested this hypothesis by comparing the physical properties of natural pitcher fluid from the pitcher plant Darlingtonia californica and cultured ‘artificial’ pitcher fluids and tested these fluids' prey retention capabilities. I found that cultures of pitcher leaves' bacterial communities had similar physical properties to raw pitcher fluids. These properties facilitated the retention of insects by both fluids and hint at a previously undescribed class of plant–microbe interaction. PMID:27881762
The use of light in prey capture by the tropical pitcher plant Nepenthes aristolochioides.
Moran, Jonathan A; Clarke, Charles; Gowen, Brent E
2012-08-01
Nepenthes pitcher plants deploy tube-shaped pitchers to catch invertebrate prey; those of Nepenthes aristolochioides possess an unusual translucent dome. The hypothesis was tested that N. aristolochioides pitchers operate as light traps, by quantifying prey capture under three shade treatments. Flies are red-blind, with visual sensitivity maxima in the UV, blue, and green wavebands. Red celluloid filters were used to reduce the transmission of these wavebands into the interior of the pitchers. Those that were shaded at the rear showed a 3-fold reduction in Drosophila caught, relative to either unshaded control pitchers, or pitchers that were shaded at the front. Thus, light transmitted through the translucent dome is a fundamental component of N. aristolochioides' trapping mechanism.
Bacteria facilitate prey retention by the pitcher plant Darlingtonia californica.
Armitage, David W
2016-11-01
Bacteria are hypothesized to provide a variety of beneficial functions to plants. Many carnivorous pitcher plants, for example, rely on bacteria for digestion of captured prey. This bacterial community may also be responsible for the low surface tensions commonly observed in pitcher plant digestive fluids, which might facilitate prey capture. I tested this hypothesis by comparing the physical properties of natural pitcher fluid from the pitcher plant Darlingtonia californica and cultured 'artificial' pitcher fluids and tested these fluids' prey retention capabilities. I found that cultures of pitcher leaves' bacterial communities had similar physical properties to raw pitcher fluids. These properties facilitated the retention of insects by both fluids and hint at a previously undescribed class of plant-microbe interaction.
The use of light in prey capture by the tropical pitcher plant Nepenthes aristolochioides
Moran, Jonathan A.; Clarke, Charles; Gowen, Brent E.
2012-01-01
Nepenthes pitcher plants deploy tube-shaped pitchers to catch invertebrate prey; those of Nepenthes aristolochioides possess an unusual translucent dome. The hypothesis was tested that N. aristolochioides pitchers operate as light traps, by quantifying prey capture under three shade treatments. Flies are red-blind, with visual sensitivity maxima in the UV, blue, and green wavebands. Red celluloid filters were used to reduce the transmission of these wavebands into the interior of the pitchers. Those that were shaded at the rear showed a 3-fold reduction in Drosophila caught, relative to either unshaded control pitchers, or pitchers that were shaded at the front. Thus, light transmitted through the translucent dome is a fundamental component of N. aristolochioides' trapping mechanism. PMID:22836498
Toward Precision Healthcare: Context and Mathematical Challenges
Colijn, Caroline; Jones, Nick; Johnston, Iain G.; Yaliraki, Sophia; Barahona, Mauricio
2017-01-01
Precision medicine refers to the idea of delivering the right treatment to the right patient at the right time, usually with a focus on a data-centered approach to this task. In this perspective piece, we use the term “precision healthcare” to describe the development of precision approaches that bridge from the individual to the population, taking advantage of individual-level data, but also taking the social context into account. These problems give rise to a broad spectrum of technical, scientific, policy, ethical and social challenges, and new mathematical techniques will be required to meet them. To ensure that the science underpinning “precision” is robust, interpretable and well-suited to meet the policy, ethical and social questions that such approaches raise, the mathematical methods for data analysis should be transparent, robust, and able to adapt to errors and uncertainties. In particular, precision methodologies should capture the complexity of data, yet produce tractable descriptions at the relevant resolution while preserving intelligibility and traceability, so that they can be used by practitioners to aid decision-making. Through several case studies in this domain of precision healthcare, we argue that this vision requires the development of new mathematical frameworks, both in modeling and in data analysis and interpretation. PMID:28377724
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg
NASA Astrophysics Data System (ADS)
Dunlop, Michelle
2017-09-01
High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
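For illustration, the sketch below shows the standard inverse-variance weighted average with a scale-factor inflation of the kind referred to above, in which discrepant measurements inflate the uncertainty by sqrt(chi2/(N−1)); the two input values are placeholders, not the actual 22Mg measurements.

```python
# Placeholder values only; illustrates weighted averaging with a scale factor.
import math

def weighted_average(values, errors):
    w = [1.0 / e**2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = math.sqrt(1.0 / sum(w))
    chi2 = sum(((v - mean) / e)**2 for v, e in zip(values, errors))
    scale = max(1.0, math.sqrt(chi2 / (len(values) - 1)))   # inflate if discrepant
    return mean, err * scale, scale

print(weighted_average([3.8755, 3.8795], [0.0012, 0.0016]))  # made-up half-lives (s)
```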
Current and Future Research at DANCE
NASA Astrophysics Data System (ADS)
Jandel, M.; Baramsai, B.; Bredeweg, T. A.; Couture, A.; Hayes, A.; Kawano, T.; Mosby, S.; Rusev, G.; Stetcu, I.; Taddeucci, T. N.; Talou, P.; Ullmann, J. L.; Walker, C. L.; Wilhelmy, J. B.
2015-05-01
An overview of the current experimental program on measurements of neutron capture and neutron-induced fission at the DANCE facility is presented.
Nanomaterial-based Microfluidic Chips for the Capture and Detection of Circulating Tumor Cells.
Sun, Duanping; Chen, Zuanguang; Wu, Minhao; Zhang, Yuanqing
2017-01-01
Circulating tumor cells (CTCs), a type of cancer cell that spreads from primary or metastatic tumors into the bloodstream, can lead to a new fatal metastasis. As a new type of liquid biopsy, CTCs have become a subject of intense pursuit, and detection of CTCs offers the possibility of early diagnosis of cancers, earlier evaluation of chemotherapeutic efficacy and cancer recurrence, and the choice of anti-cancer drugs to which an individual patient is sensitive. The fundamental challenges of capturing and characterizing CTCs are the extremely low number of CTCs in the blood and the intrinsic heterogeneity of CTCs. A series of microfluidic devices have been proposed for the analysis of CTCs with automation capability, precise flow behaviors, and significant advantages over conventional larger-scale systems. This review aims to provide in-depth insights into CTC analysis, including various nanomaterial-based microfluidic chips for the capture and detection of CTCs based on the specific biochemical and physical properties of CTCs. The current developmental trends and promising research directions in the establishment of microfluidic chips for the capture and detection of CTCs are also discussed.
Grasp posture alters visual processing biases near the hands
Thomas, Laura E.
2015-01-01
Observers experience biases in visual processing for objects within easy reach of their hands that may assist them in evaluating items that are candidates for action. I investigated the hypothesis that hand postures affording different types of actions differentially bias vision. Across three experiments, participants performed global motion detection and global form perception tasks while their hands were positioned a) near the display in a posture affording a power grasp, b) near the display in a posture affording a precision grasp, or c) in their laps. Although the power grasp posture facilitated performance on the motion task, the precision grasp posture instead facilitated performance on the form task. These results suggest that the visual system weights processing based on an observer’s current affordances for specific actions: fast and forceful power grasps enhance temporal sensitivity, while detail-oriented precision grasps enhance spatial sensitivity. PMID:25862545
Contingent capture of involuntary visual attention interferes with detection of auditory stimuli
Kamke, Marc R.; Harris, Jill
2014-01-01
The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945
Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.
Kamke, Marc R; Harris, Jill
2014-01-01
The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.
Contingent capture effects in temporal order judgments.
Born, Sabine; Kerzel, Dirk; Pratt, Jay
2015-08-01
The contingent attentional capture hypothesis proposes that visual stimuli that do not possess characteristics relevant for the current task will not capture attention, irrespective of their bottom-up saliency. Typically, contingent capture is tested in a spatial cuing paradigm, comparing manual reaction times (RTs) across different conditions. However, attention may act through several mechanisms and RTs may not be ideal to disentangle those different components. In 3 experiments, we examined whether color singleton cues provoke cuing effects in temporal order judgments (TOJs) and whether they would be contingent on attentional control sets. Experiment 1 showed that color singleton cues indeed produce cuing effects in TOJs, even in a cluttered and dynamic target display containing multiple heterogeneous distractors. In Experiment 2, consistent with contingent capture, we observed reliable cuing effects only when the singleton cue matched participants' current attentional control set. Experiment 3 suggests that a sensory interaction account of the differences found in Experiment 2 is unlikely. Our results help to discern the attentional components that may play a role in contingent capture. Further, we discuss a number of other effects (e.g., reversed cuing effects) that are found in RTs, but so far have not been reported in TOJs. Those differences suggest that RTs are influenced by a multitude of mechanisms; however, not all of these mechanisms may affect TOJs. We conclude by highlighting how the study of attentional capture in TOJs provides valuable insights for the attention literature, but also for studies concerned with the perceived timing between stimuli.
Suppression of overt attentional capture by salient-but-irrelevant color singletons.
Gaspelin, Nicholas; Leonard, Carly J; Luck, Steven J
2017-01-01
For more than 2 decades, researchers have debated the nature of cognitive control in the guidance of visual attention. Stimulus-driven theories claim that salient stimuli automatically capture attention, whereas goal-driven theories propose that an individual's attentional control settings determine whether salient stimuli capture attention. In the current study, we tested a hybrid account called the signal suppression hypothesis, which claims that all stimuli automatically generate a salience signal but that this signal can be actively suppressed by top-down attentional mechanisms. Previous behavioral and electrophysiological research has shown that participants can suppress covert shifts of attention to salient-but-irrelevant color singletons. In this study, we used eye-tracking methods to determine whether participants can also suppress overt shifts of attention to irrelevant singletons. We found that under conditions that promote active suppression of the irrelevant singletons, overt attention was less likely to be directed toward the salient distractors than toward nonsalient distractors. This result provides direct evidence that people can suppress salient-but-irrelevant singletons below baseline levels.
NASA Astrophysics Data System (ADS)
Calderon, Christopher P.; Weiss, Lucien E.; Moerner, W. E.
2014-05-01
Experimental advances have improved the two- (2D) and three-dimensional (3D) spatial resolution that can be extracted from in vivo single-molecule measurements. This enables researchers to quantitatively infer the magnitude and directionality of forces experienced by biomolecules in their native environment. Situations where such force information is relevant range from mitosis to directed transport of protein cargo along cytoskeletal structures. Models commonly applied to quantify single-molecule dynamics assume that effective forces and velocity in the x, y (or x, y, z) directions are statistically independent, but this assumption is physically unrealistic in many situations. We present a hypothesis testing approach capable of determining if there is evidence of statistical dependence between positional coordinates in experimentally measured trajectories; if the hypothesis of independence between spatial coordinates is rejected, then a new model accounting for 2D (3D) interactions can and should be considered. Our hypothesis testing technique is robust, meaning it can detect interactions, even if the noise statistics are not well captured by the model. The approach is demonstrated on control simulations and on experimental data (directed transport of intraflagellar transport protein 88 homolog in the primary cilium).
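A much-simplified sketch in the spirit of such a test (not the authors' estimator): check whether the x and y displacement increments of a measured 2D trajectory are correlated, using a permutation test. Real trajectories would call for the more careful likelihood-based treatment described in the paper.

```python
# Simplified permutation test for dependence between x and y increments.
import numpy as np

def increment_dependence_test(x, y, n_perm=5000, seed=1):
    rng = np.random.default_rng(seed)
    dx, dy = np.diff(x), np.diff(y)
    observed = abs(np.corrcoef(dx, dy)[0, 1])
    null = np.array([abs(np.corrcoef(dx, rng.permutation(dy))[0, 1])
                     for _ in range(n_perm)])
    p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p_value     # small p-value: reject coordinate independence
```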
Systematic R-matrix analysis of the ¹³C(p,γ)¹⁴N capture reaction
NASA Astrophysics Data System (ADS)
Chakraborty, Suprita; deBoer, Richard; Mukherjee, Avijit; Roy, Subinit
2015-04-01
Background: The proton capture reaction ¹³C(p,γ)¹⁴N is an important reaction in the CNO cycle during hydrogen burning in stars with mass greater than the mass of the Sun. It also occurs in astrophysical sites such as red giant stars: the asymptotic giant branch (AGB) stars. The low energy astrophysical S factor of this reaction is dominated by a resonance state at an excitation energy of around 8.06 MeV (Jπ = 1⁻, T = 1) in ¹⁴N. The other significant contributions come from the low energy tail of the broad resonance with Jπ = 0⁻, T = 1 at an excitation of 8.78 MeV and from the direct capture process. Purpose: Measurements of the low energy astrophysical S factor of the radiative capture reaction ¹³C(p,γ)¹⁴N reported extrapolated values of S(0) that differ by about 30%. Subsequent R-matrix analysis and potential model calculations also yielded significantly different values for S(0). The present work intends to look into the discrepancy through a detailed R-matrix analysis with emphasis on the associated uncertainties. Method: A systematic reanalysis of the available decay data following capture to the Jπ = 1⁻, T = 1 resonance state of ¹⁴N around 8.06 MeV excitation was performed within the framework of the R-matrix method. A simultaneous analysis of the ¹³C(p,p₀) data, measured over a similar energy range, was carried out with the capture data. The data for the ground state decay of the broad resonance state (Jπ = 0⁻, T = 1) around 8.78 MeV excitation were included as well. The external capture model, along with background poles to simulate the internal capture contribution, was used to estimate the direct capture contribution. The asymptotic normalization constants (ANCs) for all states were extracted from the capture data. The multichannel, multilevel R-matrix code AZURE2 was used for the calculation. Results: The values of the astrophysical S factor at zero relative energy resulting from the present analysis are found to be consistent within the error bars for the two sets of capture data used. However, it is found from the fits to the elastic scattering data that the position of the Jπ = 1⁻, T = 1 resonance state is uncertain by about 0.6 keV, preferring an excitation energy value of 8.062 MeV. Also, the extracted ANC values for the states of ¹⁴N corroborate the values from transfer reaction studies. The reaction rates from the present calculation are about 10-15% lower than the values of the NACRE II compilation but compare well with those from NACRE I. Conclusion: The precise energy of the Jπ = 1⁻, T = 1 resonance level around 8.06 MeV in ¹⁴N must be determined. Further precision measurements around and below 100 keV are necessary to reduce the uncertainty in the S-factor value at zero relative energy.
Multiple hypothesis tracking for the cyber domain
NASA Astrophysics Data System (ADS)
Schwoegler, Stefan; Blackman, Sam; Holsopple, Jared; Hirsch, Michael J.
2011-09-01
This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to domain-agnostic tracking of entities from non-kinematic constraints, such as those imposed by cyber attacks in a potentially dense false alarm background. MHT is widely recognized as the premier method to avoid corrupting tracks with spurious data in the kinematic domain, but it has not been extensively applied to other problem domains. The traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-association pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis Extensible Tracking Architecture (MHETA). In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that while no tracker is perfect, by applying MHETA-INFERD, advanced non-kinematic tracks can be captured in an automated way, performing better than non-MHT approaches and decreasing analyst response time to cyber threats.
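As a small illustration of the "gating" step listed above under track maintenance, the sketch below applies a standard chi-square gate on the Mahalanobis distance between a measurement and a track's predicted measurement; this reflects common MHT practice, not MHETA or INFERD internals.

```python
# Standard chi-square gating on the Mahalanobis distance (illustration only).
import numpy as np
from scipy.stats import chi2

def in_gate(z, z_pred, S, prob=0.99):
    """z: measurement; z_pred: predicted measurement; S: innovation covariance."""
    nu = z - z_pred
    d2 = float(nu @ np.linalg.solve(S, nu))     # squared Mahalanobis distance
    return d2 <= chi2.ppf(prob, df=len(z))      # keep the association hypothesis?
```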
Big Data’s Role in Precision Public Health
Dolley, Shawn
2018-01-01
Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091
Big Data's Role in Precision Public Health.
Dolley, Shawn
2018-01-01
Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.
USDA-ARS?s Scientific Manuscript database
The accuracy and precision of the Horsfall-Barratt (H-B) scale have been questioned, and some of the psychophysical laws on which it is based have been found to be inappropriate. It has not been demonstrated whether use of the H-B scale systematically affects the outcome of hypothesis testing. A simulation mode...
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
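To make concrete the kind of computation being accelerated, here is a plain (non-accelerated) max-statistic permutation test that controls the family-wise error rate across many simultaneous comparisons; the data layout is assumed and the importance-sampling speed-up developed in the paper is not shown.

```python
# Plain max-statistic permutation test (no importance-sampling acceleration).
import numpy as np

def max_t_permutation(cond_a, cond_b, n_perm=2000, seed=2):
    """cond_a, cond_b: arrays (n_trials, n_units) of spike counts per condition."""
    rng = np.random.default_rng(seed)
    data = np.vstack([cond_a, cond_b])
    n_a = len(cond_a)
    observed = np.abs(cond_a.mean(0) - cond_b.mean(0))
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(data))
        diff = data[perm[:n_a]].mean(0) - data[perm[n_a:]].mean(0)
        max_null[i] = np.abs(diff).max()
    # Family-wise-error-corrected p-value for each unit.
    return (1 + (max_null[:, None] >= observed).sum(0)) / (n_perm + 1)
```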
Computational Nosology and Precision Psychiatry
Redish, A. David; Gordon, Joshua A.
2017-01-01
This article provides an illustrative treatment of psychiatric morbidity that offers an alternative to the standard nosological model in psychiatry. It considers what would happen if we treated diagnostic categories not as causes of signs and symptoms, but as diagnostic consequences of psychopathology and pathophysiology. This reformulation (of the standard nosological model) opens the door to a more natural description of how patients present—and of their likely responses to therapeutic interventions. In brief, we describe a model that generates symptoms, signs, and diagnostic outcomes from latent psychopathological states. In turn, psychopathology is caused by pathophysiological processes that are perturbed by (etiological) causes such as predisposing factors, life events, and therapeutic interventions. The key advantages of this nosological formulation include (i) the formal integration of diagnostic (e.g., DSM) categories and latent psychopathological constructs (e.g., the dimensions of the Research Domain Criteria); (ii) the provision of a hypothesis or model space that accommodates formal, evidence-based hypothesis testing (using Bayesian model comparison); and (iii) the ability to predict therapeutic responses (using a posterior predictive density), as in precision medicine. These and other advantages are largely promissory at present: The purpose of this article is to show what might be possible, through the use of idealized simulations. PMID:29400354
Rare targets are less susceptible to attention capture once detection has begun.
Hon, Nicholas; Ng, Gavin; Chan, Gerald
2016-04-01
Rare or low probability targets are detected more slowly and/or less accurately than higher probability counterparts. Various proposals have implicated perceptual and response-based processes in this deficit. Recent evidence, however, suggests that it is attentional in nature, with low probability targets requiring more attentional resources than high probability ones to detect. This difference in attentional requirements, in turn, suggests the possibility that low and high probability targets may have different susceptibilities to attention capture, which is also known to be resource-dependent. Supporting this hypothesis, we found that, once attentional resources have begun to be engaged by detection processes, low, but not high, probability targets have a reduced susceptibility to capture. Our findings speak to several issues. First, they indicate that the likelihood of attention capture occurring when a given task-relevant stimulus is being processed is dependent, to some extent, on how said stimulus is represented within mental task sets. Second, they provide added support for the idea that the behavioural deficit associated with low probability targets is attention-based. Finally, the current data point to reduced top-down biasing of target templates as a likely mechanism underlying the attentional locus of the deficit in question.
Distinguishing Provenance Equivalence of Earth Science Data
NASA Technical Reports Server (NTRS)
Tilmes, Curt; Yesha, Ye; Halem, M.
2010-01-01
Reproducibility of scientific research relies on accurate and precise citation of data and of the provenance of that data. Earth science data are often the result of applying complex data transformation and analysis workflows to vast quantities of data. Provenance information of data processing is used for a variety of purposes, including understanding the process and auditing, as well as reproducibility. Certain provenance information is essential for producing scientifically equivalent data. Capturing and representing that provenance information and assigning identifiers suitable for precisely distinguishing data granules and datasets is needed for accurate comparisons. This paper discusses scientific equivalence and the provenance essential for scientific reproducibility. We use the example of an operational earth science data processing system to illustrate the application of the technique of cascading digital signatures or hash chains to precisely identify sets of granules and as provenance equivalence identifiers to distinguish data made in an equivalent manner.
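A minimal sketch of the cascading-hash idea described above: each granule identifier digests the granule's content, its processing step, and the identifiers of its inputs, so equivalently produced data share identifiers. The function and field names are hypothetical, not the operational system's API.

```python
# Hypothetical helper: a granule identifier built as a cascading hash.
import hashlib

def granule_id(content: bytes, processing_step: str, input_ids: list) -> str:
    h = hashlib.sha256()
    h.update(processing_step.encode())
    for parent in sorted(input_ids):       # provenance of inputs cascades in
        h.update(parent.encode())
    h.update(content)
    return h.hexdigest()

raw = granule_id(b"<level-0 instrument data>", "L0 ingest", [])
l1b = granule_id(b"<calibrated radiances>", "calibration v2.1", [raw])
print(raw[:16], l1b[:16])
```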
Zhang, Boyang; Huang, Kunlun; Zhu, Liye; Luo, Yunbo; Xu, Wentao
2017-07-01
In this review, we introduce a new concept, precision toxicology: the mode of action of chemical- or drug-induced toxicity can be sensitively and specifically investigated by isolating a small group of cells or even a single cell with typical phenotype of interest followed by a single cell sequencing-based analysis. Precision toxicology can contribute to the better detection of subtle intracellular changes in response to exogenous substrates, and thus help researchers find solutions to control or relieve the toxicological effects that are serious threats to human health. We give examples for single cell isolation and recommend laser capture microdissection for in vivo studies and flow cytometric sorting for in vitro studies. In addition, we introduce the procedures for single cell sequencing and describe the expected application of these techniques to toxicological evaluations and mechanism exploration, which we believe will become a trend in toxicology.
In situ characterization of the brain-microdevice interface using Device Capture Histology
Woolley, Andrew J.; Desai, Himanshi A.; Steckbeck, Mitchell A.; Patel, Neil K.; Otto, Kevin J.
2011-01-01
Accurate assessment of brain-implantable microdevice bio-integration remains a formidable challenge. Prevailing histological methods require device extraction prior to tissue processing, often disrupting and removing the tissue of interest which had been surrounding the device. The Device-Capture Histology method, presented here, overcomes many limitations of the conventional Device-Explant Histology method by collecting the device and surrounding tissue intact for subsequent labeling. With the implant remaining in situ, accurate and precise imaging of the morphologically preserved tissue at the brain/microdevice interface can then be collected and quantified. First, this article presents the Device-Capture Histology method for obtaining and processing the intact, undisturbed microdevice-tissue interface, and imaging it using fluorescent labeling and confocal microscopy. Second, this article gives examples of how to quantify features found in the captured peridevice tissue. We also share histological data capturing 1) the impact of microdevice implantation on tissue, 2) the effects of an experimental anti-inflammatory coating, 3) a dense grouping of cell nuclei encapsulating a long-term implant, and 4) atypical oligodendrocyte organization neighboring a long-term implant. Data sets collected using the Device-Capture Histology method are presented to demonstrate the significant advantages of processing the intact microdevice-tissue interface, and to underscore the utility of the method in understanding the effects of brain-implantable microdevices on nearby tissue. PMID:21802446
Parametric design and gridding through relational geometry
NASA Technical Reports Server (NTRS)
Letcher, John S., Jr.; Shook, D. Michael
1995-01-01
Relational Geometric Synthesis (RGS) is a new logical framework for building up precise definitions of complex geometric models from points, curves, surfaces and solids. RGS achieves unprecedented design flexibility by supporting a rich variety of useful curve and surface entities. During the design process, many qualitative and quantitative relationships between elementary objects may be captured and retained in a data structure equivalent to a directed graph, such that they can be utilized for automatically updating the complete model geometry following changes in the shape or location of an underlying object. Capture of relationships enables many new possibilities for parametric variations and optimization. Examples are given of panelization applications for submarines, sailing yachts, offshore structures, and propellers.
Calvin A. Farris; Ellis Q. Margolis; John A. Kupfer
2008-01-01
We compared the spatial characteristics of fire severity patches within individual fire "runs" (contiguous polygons burned during a given day) resulting from a 72,000 ha fire in central Idaho in 1994. Our hypothesis was that patch characteristics of four fire severity classes (high, moderate, low, and unburned), as captured by five landscape metrics, would...
Fear no colors? Observer clothing color influences lizard escape behavior
Drury, Jonathan P.; Blumstein, Daniel T.; Pauly, Gregory B.
2017-01-01
Animals often view humans as predators, leading to alterations in their behavior. Even nuanced aspects of human activity like clothing color affect animal behavior, but we lack an understanding of when and where such effects will occur. The species confidence hypothesis posits that birds are attracted to colors found on their bodies and repelled by non-body colors. Here, we extend this hypothesis taxonomically and conceptually to test whether this pattern is applicable in a non-avian reptile and to suggest that species should respond less fearfully to their sexually-selected signaling color. Responses to clothing color could also be impacted by habituation to humans, so we examine whether behavior varied between areas with low and high human activity. We quantified the effects of four T-shirt colors on flight initiation distances (FID) and on the ease of capture in western fence lizards (Sceloporus occidentalis), and we accounted for detectability against the background environment. We found no differences in lizard behavior between sites. However, lizards tolerated the closest approaches and were most likely to be captured when approached with the T-shirt that resembled their sexually-selected signaling color. Because changes in individual behavior affect fitness, choice of clothing color by people, including tourists, hikers, and researchers, could impact wildlife populations and research outcomes. PMID:28792983
Kinematic parameters of signed verbs.
Malaia, Evie; Wilbur, Ronnie B; Milkovic, Marina
2013-10-01
Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion to distinguish specific semantic properties of verb classes in production (Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension (Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are motivated by the event visibility hypothesis of Wilbur (2003), who proposed that such use of kinematic features should be universal to sign languages (SLs) owing to the grammaticalization of physics and geometry for linguistic purposes. A prior motion capture study by Malaia and Wilbur (2012a) lent support to the event visibility hypothesis in ASL, but quantitative data from other SLs have been lacking to test the generalization to other languages. The authors investigated the kinematic parameters of predicates in Croatian Sign Language (Hrvatskom Znakovnom Jeziku [HZJ]). Kinematic features of verb signs were affected both by the event structure of the predicate (semantics) and by phrase position within the sentence (prosody). The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first crosslinguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.
Andrés, Pilar; Parmentier, Fabrice B R; Escera, Carles
2006-01-01
The aim of this study was to examine the effects of aging on the involuntary capture of attention by irrelevant sounds (distraction) and the use of these sounds as warning cues (alertness) in an oddball paradigm. We compared the performance of older and younger participants on a well-characterized auditory-visual distraction task. Based on the dissociations observed in aging between attentional processes sustained by the anterior and posterior attentional networks, our prediction was that distraction by irrelevant novel sounds would be stronger in older adults than in young adults while both groups would be equally able to use sound as an alert to prepare for upcoming stimuli. The results confirmed both predictions: there was a larger distraction effect in the older participants, but the alert effect was equivalent in both groups. These results give support to the frontal hypothesis of aging [Raz, N. (2000). Aging of the brain and its impact on cognitive performance: integration of structural and functional finding. In F.I.M. Craik & T.A. Salthouse (Eds.) Handbook of aging and cognition (pp. 1-90). Mahwah, NJ: Erlbaum; West, R. (1996). An application of prefrontal cortex function theory to cognitive aging. Psychological Bulletin, 120, 272-292].
Fear no colors? Observer clothing color influences lizard escape behavior.
Putman, Breanna J; Drury, Jonathan P; Blumstein, Daniel T; Pauly, Gregory B
2017-01-01
Animals often view humans as predators, leading to alterations in their behavior. Even nuanced aspects of human activity like clothing color affect animal behavior, but we lack an understanding of when and where such effects will occur. The species confidence hypothesis posits that birds are attracted to colors found on their bodies and repelled by non-body colors. Here, we extend this hypothesis taxonomically and conceptually to test whether this pattern is applicable in a non-avian reptile and to suggest that species should respond less fearfully to their sexually-selected signaling color. Responses to clothing color could also be impacted by habituation to humans, so we examine whether behavior varied between areas with low and high human activity. We quantified the effects of four T-shirt colors on flight initiation distances (FID) and on the ease of capture in western fence lizards (Sceloporus occidentalis), and we accounted for detectability against the background environment. We found no differences in lizard behavior between sites. However, lizards tolerated the closest approaches and were most likely to be captured when approached with the T-shirt that resembled their sexually-selected signaling color. Because changes in individual behavior affect fitness, choice of clothing color by people, including tourists, hikers, and researchers, could impact wildlife populations and research outcomes.
Target Discovery for Precision Medicine Using High-Throughput Genome Engineering.
Guo, Xinyi; Chitale, Poonam; Sanjana, Neville E
2017-01-01
Over the past few years, programmable RNA-guided nucleases such as the CRISPR/Cas9 system have ushered in a new era of precision genome editing in diverse model systems and in human cells. Functional screens using large libraries of RNA guides can interrogate a large hypothesis space to pinpoint particular genes and genetic elements involved in fundamental biological processes and disease-relevant phenotypes. Here, we review recent high-throughput CRISPR screens (e.g. loss-of-function, gain-of-function, and targeting noncoding elements) and highlight their potential for uncovering novel therapeutic targets, such as those involved in cancer resistance to small-molecule drugs and immunotherapies, tumor evolution, infectious disease, inborn genetic disorders, and other therapeutic challenges.
Ackerman, David M; Wang, Jing; Wendel, Joseph H; Liu, Da-Jiang; Pruski, Marek; Evans, James W
2011-03-21
We analyze the spatiotemporal behavior of species concentrations in a diffusion-mediated conversion reaction which occurs at catalytic sites within linear pores of nanometer diameter. Diffusion within the pores is subject to a strict single-file (no passing) constraint. Both transient and steady-state behavior is precisely characterized by kinetic Monte Carlo simulations of a spatially discrete lattice-gas model for this reaction-diffusion process considering various distributions of catalytic sites. Exact hierarchical master equations can also be developed for this model. Their analysis, after application of mean-field type truncation approximations, produces discrete reaction-diffusion type equations (mf-RDE). For slowly varying concentrations, we further develop coarse-grained continuum hydrodynamic reaction-diffusion equations (h-RDE) incorporating a precise treatment of single-file diffusion in this multispecies system. The h-RDE successfully describe nontrivial aspects of transient behavior, in contrast to the mf-RDE, and also correctly capture unreactive steady-state behavior in the pore interior. However, steady-state reactivity, which is localized near the pore ends when those regions are catalytic, is controlled by fluctuations not incorporated into the hydrodynamic treatment. The mf-RDE partly capture these fluctuation effects, but cannot describe scaling behavior of the reactivity.
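To make the lattice-gas picture above more concrete, the following is a toy kinetic Monte Carlo sketch of single-file reaction-diffusion in a linear pore: particles hop only into empty neighbor sites (no passing), A converts to B at catalytic sites near the pore ends, and the pore openings exchange with an external reservoir of A. The pore length, rates, and catalytic-site distribution are illustrative choices, not the authors' model parameters.

```python
# Toy single-file reaction-diffusion KMC: A -> B at catalytic sites in a 1D pore.
# Illustrative parameters only; not the authors' full lattice-gas model.
import numpy as np

rng = np.random.default_rng(42)
L = 50                                   # pore length (sites)
catalytic = np.zeros(L, dtype=bool)
catalytic[:5] = True                     # catalytic sites near one pore end
catalytic[-5:] = True                    # ... and the other
k_react = 0.1                            # conversion probability per attempt for A on a catalytic site
fill = 0.8                               # reservoir filling probability at the pore ends

pore = np.zeros(L, dtype=int)            # 0 = empty, 1 = A, 2 = B

for step in range(200_000):
    i = int(rng.integers(L))
    # exchange with the external reservoir of A at the pore ends
    if i in (0, L - 1) and pore[i] == 0 and rng.random() < fill:
        pore[i] = 1
        continue
    if pore[i] == 0:
        continue
    # reaction attempt: A -> B only on catalytic sites
    if pore[i] == 1 and catalytic[i] and rng.random() < k_react:
        pore[i] = 2
        continue
    # single-file hop: move to a randomly chosen neighbor only if it is empty
    j = i + int(rng.choice((-1, 1)))
    if j < 0 or j >= L:                  # hopping out of the pore (desorption at the ends)
        pore[i] = 0
    elif pore[j] == 0:
        pore[j], pore[i] = pore[i], 0

print("A fraction:", np.mean(pore == 1), " B fraction:", np.mean(pore == 2))
```

Because passing is forbidden, product B generated near the ends blocks the pore interior, which is the kind of fluctuation-dominated, localized reactivity the abstract describes.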
NASA Astrophysics Data System (ADS)
Jones, Bernard J. T.
2017-04-01
Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.
Improvement of Stand Jig Sealer and Its Increased Production Capacity
NASA Astrophysics Data System (ADS)
Soebandrija, K. E. N.; Astuti, S. W. D.
2014-03-01
This paper aims to show that an improved Stand Jig Sealer can meet the target cycle time and thereby increase productivity. Prior research, from classic works such as Quesnay (1766) and Solow (1957) to more recent studies such as Reikard (2011), is reviewed and elaborated. The research is then narrowed to the automotive industry and to the associated software tools, SPSS and Structural Equation Modeling (SEM). The analysis is based on working-time calculations, reinforced by hypothesis tests in SPSS Version 19 and by parameters of production efficiency, productivity, and financial investment. The results show that the cycle time target of ≤ 80 seconds was achieved after the stand jig sealer improvement. The SPSS calculations comprise the following: the one-sided hypothesis test rejects Ho: μ ≥ 80 seconds; correlation rs = 0.84; regression y = 0.159 + 0.642x; validity R table = 0.4438; reliability (Cronbach's alpha) = 0.885 > 0.70; independence (chi-square) Asymp. Sig = 0.028 < 0.05; 95% efficiency; an 11% productivity increase; and the financial analysis (NPV 2,340,596 > 0, PI 2.04 > 1, IRR 45.56% > i = 12.68%, PP = 1.86). These results support the hypothesis and the paper's objective of demonstrating that the Stand Jig Sealer improvement achieves the cycle time target and, in turn, increases the production capacity of PT. Astra Daihatsu Motor.
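As an illustration of the one-sided test reported above (rejecting Ho: μ ≥ 80 seconds), the sketch below runs the same kind of test in Python on hypothetical post-improvement cycle times; the data and the scipy call are illustrative stand-ins, not the authors' SPSS analysis.

```python
# Minimal sketch of a one-sided cycle-time test of H0: mu >= 80 s,
# using hypothetical post-improvement cycle-time measurements in seconds.
import numpy as np
from scipy import stats

cycle_times = np.array([78.2, 79.1, 77.5, 78.8, 79.4, 76.9, 78.0, 79.6, 77.8, 78.5])  # illustrative data

# One-sample t-test with alternative='less' tests H1: mu < 80 against H0: mu >= 80
# (the 'alternative' keyword requires a reasonably recent scipy).
t_stat, p_value = stats.ttest_1samp(cycle_times, popmean=80.0, alternative='less')

alpha = 0.05
print(f"t = {t_stat:.3f}, one-sided p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: mean cycle time is below the 80 s target.")
else:
    print("Fail to reject H0: no evidence the mean cycle time is below 80 s.")
```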
Understanding the Role of P Values and Hypothesis Tests in Clinical Research.
Mark, Daniel B; Lee, Kerry L; Harrell, Frank E
2016-12-01
P values and hypothesis testing methods are frequently misused in clinical research. Much of this misuse appears to be owing to the widespread, mistaken belief that they provide simple, reliable, and objective triage tools for separating the true and important from the untrue or unimportant. The primary focus in interpreting therapeutic clinical research data should be on the treatment ("oomph") effect, a metaphorical force that moves patients given an effective treatment to a different clinical state relative to their control counterparts. This effect is assessed using 2 complementary types of statistical measures calculated from the data, namely, effect magnitude or size and precision of the effect size. In a randomized trial, effect size is often summarized using constructs, such as odds ratios, hazard ratios, relative risks, or adverse event rate differences. How large a treatment effect has to be to be consequential is a matter for clinical judgment. The precision of the effect size (conceptually related to the amount of spread in the data) is usually addressed with confidence intervals. P values (significance tests) were first proposed as an informal heuristic to help assess how "unexpected" the observed effect size was if the true state of nature was no effect or no difference. Hypothesis testing was a modification of the significance test approach that envisioned controlling the false-positive rate of study results over many (hypothetical) repetitions of the experiment of interest. Both can be helpful but, by themselves, provide only a tunnel vision perspective on study results that ignores the clinical effects the study was conducted to measure.
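As a concrete illustration of the distinction drawn above between effect magnitude, precision, and significance, the sketch below computes a risk difference, a relative risk, a Wald 95% confidence interval, and a chi-square p-value for a hypothetical 2x2 trial. The counts are invented for illustration and do not come from the article.

```python
# Hypothetical 2x2 randomized-trial counts (not from the article): events / totals per arm.
import numpy as np
from scipy import stats

events_trt, n_trt = 30, 200     # treatment arm
events_ctl, n_ctl = 50, 200     # control arm

p1, p0 = events_trt / n_trt, events_ctl / n_ctl

# Effect size: absolute risk difference and relative risk.
risk_diff = p1 - p0
rel_risk = p1 / p0

# Precision of the risk difference: Wald 95% confidence interval.
se = np.sqrt(p1 * (1 - p1) / n_trt + p0 * (1 - p0) / n_ctl)
ci_low, ci_high = risk_diff - 1.96 * se, risk_diff + 1.96 * se

# Significance test: two-sided p-value from a chi-square test of the 2x2 table.
table = [[events_trt, n_trt - events_trt], [events_ctl, n_ctl - events_ctl]]
chi2, p_value, _, _ = stats.chi2_contingency(table, correction=False)

print(f"risk difference = {risk_diff:.3f}  (95% CI {ci_low:.3f} to {ci_high:.3f})")
print(f"relative risk   = {rel_risk:.2f},  p = {p_value:.4f}")
```

The first two outputs carry the clinical message (how big the effect is and how precisely it is known); the p-value alone carries neither.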
Buser, Thaddaeus J; Sidlauskas, Brian L; Summers, Adam P
2018-05-01
We contrast 2D vs. 3D landmark-based geometric morphometrics in the fish subfamily Oligocottinae by using 3D landmarks from CT-generated models and comparing the morphospace of the 3D landmarks to one based on 2D landmarks from images. The 2D and 3D shape variables capture common patterns across taxa, such that the pairwise Procrustes distances among taxa correspond and the trends captured by principal component analysis are similar in the xy plane. We use the two sets of landmarks to test several ecomorphological hypotheses from the literature. Both 2D and 3D data reject the hypothesis that head shape correlates significantly with the depth at which a species is commonly found. However, in taxa where shape variation in the z-axis is high, the 2D shape variables show sufficiently strong distortion to influence the outcome of the hypothesis tests regarding the relationship between mouth size and feeding ecology. Only the 3D data support previous studies which showed that large mouth sizes correlate positively with high percentages of elusive prey in the diet. When used to test for morphological divergence, 3D data show no evidence of divergence, while 2D data show that one clade of oligocottines has diverged from all others. This clade shows the greatest degree of z-axis body depth within Oligocottinae, and we conclude that the inability of the 2D approach to capture this lateral body depth causes the incongruence between 2D and 3D analyses. Anat Rec, 301:806-818, 2018. © 2017 Wiley Periodicals, Inc.
Yee, Wee L.
2014-01-01
Seasonal distributions of the western cherry fruit fly, Rhagoletis indifferens Curran (Diptera: Tephritidae), in sweet cherry (Prunus avium (L.) L.) (major host), black hawthorn (occasional developmental host) (Crataegus douglasii Lindley), and other trees were determined in a ponderosa pine ecosystem in Washington state, USA. The hypothesis that most fly dispersal from cherry trees occurs after fruit senesce or drop was tested, with emphasis on movement to black hawthorn trees. Sweet cherry fruit developed earlier than black hawthorn, bitter cherry (common host), choke cherry, and apple fruit. Flies were usually captured first in sweet cherry trees but were caught in bitter cherry and other trees throughout the season. Peak fly capture periods in sweet cherry began around the same time or slightly earlier than in other trees. However, peak fly capture periods in black hawthorn and other nonsweet cherry trees continued after peak periods in sweet cherry ended, or relative fly numbers within sweet cherry declined more quickly than those within other trees. Larvae were reared from sweet and bitter cherry but not black hawthorn fruit. Results provide partial support for the hypothesis in that although R. indifferens commonly disperses from sweet cherry trees with fruit, it could disperse more, or more flies are retained in nonsweet cherry trees after than before sweet cherries drop. This could allow opportunities for the flies to use other fruit for larval development. Although R. indifferens infestation in black hawthorn was not detected, early season fly dispersal to this and other trees and fly presence in bitter cherry could make fly management in sweet cherry difficult. PMID:25527581
Almaguer-Melian, William; Bergado-Rosado, Jorge; Pavón-Fuentes, Nancy; Alberti-Amador, Esteban; Mercerón-Martínez, Daymara; Frey, Julietta U
2012-01-17
Novelty processing can transform short-term into long-term memory. We propose that this memory-reinforcing effect of novelty could be explained by mechanisms outlined in the "synaptic tagging hypothesis." Initial short-term memory is sustained by a transient plasticity change at activated synapses and sets synaptic tags. These tags are later able to capture and process the plasticity-related proteins (PRPs), which are required to transform a short-term synaptic change into a long-term one. Novelty is involved in inducing the synthesis of PRPs [Moncada D, et al. (2011) Proc Natl Acad Sci USA 108:12937-12936], which are then captured by the tagged synapses, consolidating memory. In contrast to novelty, stress can impair learning, memory, and synaptic plasticity. Here, we address questions as to whether novelty-induced PRPs are able to prevent the loss of memory caused by stress and if the latter would not interact with the tag-setting process. We used water-maze (WM) training as a spatial learning paradigm to test our hypothesis. Stress was induced by a strong foot shock (FS; 5 × 1 mA, 2 s) applied 5 min after WM training. Our data show that FS reduced long-term but not short-term memory in the WM paradigm. This negative effect on memory consolidation was time- and training-dependent. Interestingly, novelty exposure prevented the stress-induced memory loss of the spatial task and increased BDNF and Arc expression. This rescuing effect was blocked by anisomycin, suggesting that WM-tagged synapses were not reset by FS and were thus able to capture the novelty-induced PRPs, re-establishing FS-impaired long-term memory.
Yee, Wee L
2014-01-01
Seasonal distributions of the western cherry fruit fly, Rhagoletis indifferens Curran (Diptera: Tephritidae), in sweet cherry (Prunus avium (L.) L.) (major host), black hawthorn (occasional developmental host) (Crataegus douglasii Lindley), and other trees were determined in a ponderosa pine ecosystem in Washington state, USA. The hypothesis that most fly dispersal from cherry trees occurs after fruit senesce or drop was tested, with emphasis on movement to black hawthorn trees. Sweet cherry fruit developed earlier than black hawthorn, bitter cherry (common host), choke cherry, and apple fruit. Flies were usually captured first in sweet cherry trees but were caught in bitter cherry and other trees throughout the season. Peak fly capture periods in sweet cherry began around the same time or slightly earlier than in other trees. However, peak fly capture periods in black hawthorn and other nonsweet cherry trees continued after peak periods in sweet cherry ended, or relative fly numbers within sweet cherry declined more quickly than those within other trees. Larvae were reared from sweet and bitter cherry but not black hawthorn fruit. Results provide partial support for the hypothesis in that although R. indifferens commonly disperses from sweet cherry trees with fruit, it could disperse more, or more flies are retained in nonsweet cherry trees after than before sweet cherries drop. This could allow opportunities for the flies to use other fruit for larval development. Although R. indifferens infestation in black hawthorn was not detected, early season fly dispersal to this and other trees and fly presence in bitter cherry could make fly management in sweet cherry difficult. Published by Oxford University Press on behalf of the Entomological Society of America 2014. This work is written by a US Government employee and is in the public domain in the US.
Root System Water Consumption Pattern Identification on Time Series Data
Figueroa, Manuel; Pope, Christopher
2017-01-01
In agriculture, soil and meteorological sensors are used along low power networks to capture data, which allows for optimal resource usage and minimizing environmental impact. This study uses time series analysis methods for outliers’ detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm averaging a precision of 0.872 on the testing sets, vastly improving the current system’s 0.348 precision. PMID:28621739
Root System Water Consumption Pattern Identification on Time Series Data.
Figueroa, Manuel; Pope, Christopher
2017-06-16
In agriculture, soil and meteorological sensors are used along low power networks to capture data, which allows for optimal resource usage and minimizing environmental impact. This study uses time series analysis methods for outliers' detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm averaging a precision of 0.872 on the testing sets, vastly improving the current system's 0.348 precision.
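The precision values reported above (0.872 versus 0.348) are the usual detection precision, true positives divided by all detections. A minimal sketch with invented event labels (not the study's data) shows the computation.

```python
# Detection precision = true positives / all detections, on hypothetical event labels.
def precision(true_events, detected_events):
    """Both arguments are sets of time stamps (or indices) of irrigation/consumption events."""
    detected = set(detected_events)
    true_pos = len(detected & set(true_events))
    return true_pos / len(detected) if detected else 0.0

# Illustrative example: 7 detections, 5 of which correspond to real events.
true_events = {10, 42, 97, 130, 181, 240}
detected    = {10, 42, 55, 97, 130, 181, 300}
print(f"precision = {precision(true_events, detected):.3f}")   # 5/7 ≈ 0.714
```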
Determination of the mass of globular cluster X-ray sources
NASA Technical Reports Server (NTRS)
Grindlay, J. E.; Hertz, P.; Steiner, J. E.; Murray, S. S.; Lightman, A. P.
1984-01-01
The precise positions of the luminous X-ray sources in eight globular clusters have been measured with the Einstein X-Ray Observatory. When combined with similarly precise measurements of the dynamical centers and core radii of the globular clusters, the distribution of the X-ray source mass is determined to be in the range 0.9-1.9 solar mass. The X-ray source positions and the detailed optical studies indicate that (1) the sources are probably all of similar mass, (2) the gravitational potentials in these high-central density clusters are relatively smooth and isothermal, and (3) the X-ray sources are compact binaries and are probably formed by tidal capture.
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.
Micro- and nanoengineering for stem cell biology: the promise with a caution.
Kshitiz; Kim, Deok-Ho; Beebe, David J; Levchenko, Andre
2011-08-01
Current techniques used in stem cell research only crudely mimic the physiological complexity of the stem cell niches. Recent advances in the field of micro- and nanoengineering have brought an array of in vitro cell culture models that have enabled development of novel, highly precise and standardized tools that capture physiological details in a single platform, with greater control, consistency, and throughput. In this review, we describe the micro- and nanotechnology-driven modern toolkit for stem cell biologists to design novel experiments in more physiological microenvironments with increased precision and standardization, and caution them against potential challenges that the modern technologies might present. Copyright © 2011 Elsevier Ltd. All rights reserved.
Research on the method of precise alignment technology of atmospheric laser communication
NASA Astrophysics Data System (ADS)
Chen, Wen-jian; Gao, Wei; Duan, Yuan-yuan; Ma, Shi-wei; Chen, Jian
2016-10-01
Atmospheric laser communication uses a laser as the carrier to transmit voice, data, and image information through the atmosphere. Because of its high reliability, strong anti-interference ability, and easy installation, it has great potential and room for development in the communications field. In establishing a communication link, the capture, targeting, and tracking of the communication signal are the key technologies. This paper introduces a method for targeting the signal spot in atmospheric laser communication in which the analog signals are added and subtracted directly and then normalized to obtain the target azimuth information, which drives the servo system to achieve precise alignment and tracking.
Kim, Huiyong; Hwang, Sung June; Lee, Kwang Soon
2015-02-03
Among various CO2 capture processes, the aqueous amine-based absorption process is considered the most promising for near-term deployment. However, the performance evaluation of newly developed solvents still requires complex and time-consuming procedures, such as pilot plant tests or the development of a rigorous simulator. Absence of accurate and simple calculation methods for the energy performance at an early stage of process development has lengthened and increased expense of the development of economically feasible CO2 capture processes. In this paper, a novel but simple method to reliably calculate the regeneration energy in a standard amine-based carbon capture process is proposed. Careful examination of stripper behaviors and exploitation of energy balance equations around the stripper allowed for calculation of the regeneration energy using only vapor-liquid equilibrium and caloric data. Reliability of the proposed method was confirmed by comparing to rigorous simulations for two well-known solvents, monoethanolamine (MEA) and piperazine (PZ). The proposed method can predict the regeneration energy at various operating conditions with greater simplicity, greater speed, and higher accuracy than those proposed in previous studies. This enables faster and more precise screening of various solvents and faster optimization of process variables and can eventually accelerate the development of economically deployable CO2 capture processes.
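The abstract does not reproduce its working equations, so the sketch below shows only the widely used back-of-the-envelope decomposition of reboiler duty into sensible-heat, stripping-steam, and desorption terms, with illustrative MEA-like parameter values. It is a rough stand-in for orientation, not the authors' VLE-based calculation.

```python
# Simplified three-term estimate of regeneration energy per kg of captured CO2
# (sensible heating of solvent + stripping steam + heat of CO2 desorption).
# NOT the authors' VLE-based method; all parameter values are illustrative MEA-like numbers.

def regeneration_energy(lean_loading, rich_loading, wt_amine=0.30,
                        cp_solvent=3.7e-3, dT_approach=10.0,
                        dH_desorption=1.9, steam_ratio=0.5, dH_vap_water=2.26):
    """Rough reboiler duty in MJ per kg CO2 captured.

    lean_loading, rich_loading : mol CO2 per mol amine
    cp_solvent                 : MJ/(kg K), solvent heat capacity
    dT_approach                : K, temperature rise not recovered in the cross exchanger
    dH_desorption              : MJ/kg CO2, heat of CO2 desorption
    steam_ratio                : kg stripping steam per kg CO2 leaving the stripper top
    dH_vap_water               : MJ/kg, latent heat of water
    """
    M_CO2, M_AMINE = 0.044, 0.061                       # kg/mol
    delta_loading = rich_loading - lean_loading
    co2_per_kg_solvent = delta_loading * (wt_amine / M_AMINE) * M_CO2
    solvent_per_kg_co2 = 1.0 / co2_per_kg_solvent

    q_sensible = solvent_per_kg_co2 * cp_solvent * dT_approach
    q_steam = steam_ratio * dH_vap_water
    q_desorption = dH_desorption
    return q_sensible + q_steam + q_desorption

print(f"{regeneration_energy(0.25, 0.50):.2f} MJ per kg CO2")  # roughly 3-4 MJ/kg for MEA-like values
```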
Temporal changes and sexual differences in spatial distribution of Burbot in Lake Erie
Stapanian, Martin A.; Witzel, Larry D.; Cook, Andy
2013-01-01
We used GIS mapping techniques to examine capture data for Burbot Lota lota from annual gill-net surveys in Canadian waters of Lake Erie during late August and September 1994–2011. Adult males were captured over a larger area (3–17% for ≥20% maximum yearly catch [MYC]) than adult females. More males than females were caught in the gill nets in 14 of the 15 study years. Collectively, these results support a hypothesis of greater activity by adult males during summer, when Burbot are actively feeding. The area of capture contracted by more than 60% (for ≥20% MYC) for both sexes during the time period, which is consistent with the documented decrease of the Burbot population in the lake. The sex ratio (females: males) varied over the time series but declined steadily from 0.97 in 2001 to 0.59 in 2011. The overlap in the capture areas of adult males and females was scale dependent. The depth distribution at which adult Burbot were caught did not change over the time series, and there was no difference in the median depths (about 30 m) at which adult male and female Burbot were caught. The last results are consistent with the Burbot's reliance on coldwater habitats. Additional research is recommended, including telemetry to describe daily and seasonal movements and assessment of gender bias in active and passive capture gear.
Rhythmic arm movements are less affected than discrete ones after a stroke.
Leconte, Patricia; Orban de Xivry, Jean-Jacques; Stoquart, Gaëtan; Lejeune, Thierry; Ronsse, Renaud
2016-06-01
Recent reports indicate that rhythmic and discrete upper-limb movements are two different motor primitives which recruit, at least partially, distinct neural circuitries. In particular, rhythmic movements recruit a smaller cortical network than discrete movements. The goal of this paper is to compare the levels of disability in performing rhythmic and discrete movements after a stroke. More precisely, we tested the hypothesis that rhythmic movements should be less affected than discrete ones, because they recruit neural circuitries that are less likely to be damaged by the stroke. Eleven stroke patients and eleven age-matched control subjects performed discrete and rhythmic movements using an end-effector robot (REAplan). The rhythmic movement condition was performed with and without visual targets to further decrease cortical recruitment. Movement kinematics was analyzed through specific metrics, capturing the degree of smoothness and harmonicity. We reported three main observations: (1) the movement smoothness of the paretic arm was more severely degraded for discrete movements than rhythmic movements; (2) most of the patients performed rhythmic movements with a lower harmonicity than controls; and (3) visually guided rhythmic movements were more altered than non-visually guided rhythmic movements. These results suggest a hierarchy in the levels of impairment: Discrete movements are more affected than rhythmic ones, which are more affected if they are visually guided. These results are a new illustration that discrete and rhythmic movements are two fundamental primitives in upper-limb movements. Moreover, this hierarchy of impairment opens new post-stroke rehabilitation perspectives.
Naraghi, Safa; Mutsvangwa, Tinashe; Goliath, René; Rangaka, Molebogeng X; Douglas, Tania S
2018-05-08
The tuberculin skin test is the most widely used method for detecting latent tuberculosis infection in adults and active tuberculosis in children. We present the development of a mobile-phone based screening tool for measuring the tuberculin skin test induration. The tool makes use of a mobile application developed on the Android platform to capture images of an induration, and photogrammetric reconstruction using Agisoft PhotoScan to reconstruct the induration in 3D, followed by 3D measurement of the induration with the aid of functions from the Python programming language. The system enables capture of images by the person being screened for latent tuberculosis infection. Measurement precision was tested using a 3D printed induration. Real-world use of the tool was simulated by application to a set of mock skin indurations, created by a make-up artist, and the performance of the tool was evaluated. The usability of the application was assessed with the aid of a questionnaire completed by participants. The tool was found to measure the 3D printed induration with greater precision than the current ruler and pen method, as indicated by the lower standard deviation produced (0.3 mm versus 1.1 mm in the literature). There was high correlation between manual and algorithm measurement of mock skin indurations. The height of the skin induration and the definition of its margins were found to influence the accuracy of 3D reconstruction and therefore the measurement error, under simulated real-world conditions. Based on assessment of the user experience in capturing images, a simplified user interface would benefit wide-spread implementation. The mobile application shows good agreement with direct measurement. It provides an alternative method for measuring tuberculin skin test indurations and may remove the need for an in-person follow-up visit after test administration, thus improving latent tuberculosis infection screening throughput. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chu, Zhongyi; Ma, Ye; Hou, Yueyang; Wang, Fengwen
2017-02-01
This paper presents a novel identification method for the intact inertial parameters of an unknown object in space captured by a manipulator in a space robotic system. With strong dynamic and kinematic coupling existing in the robotic system, the inertial parameter identification of the unknown object is essential for the ideal control strategy based on changes in the attitude and trajectory of the space robot via capturing operations. Conventional studies merely refer to the principle and theory of identification, and an error analysis process of identification is deficient for a practical scenario. To solve this issue, an analysis of the effect of errors on identification is illustrated first, and the accumulation of measurement or estimation errors causing poor identification precision is demonstrated. Meanwhile, a modified identification equation incorporating the contact force, as well as the force/torque of the end-effector, is proposed to weaken the accumulation of errors and improve the identification accuracy. Furthermore, considering a severe disturbance condition caused by various measured noises, the hybrid immune algorithm, Recursive Least Squares and Affine Projection Sign Algorithm (RLS-APSA), is employed to decode the modified identification equation to ensure a stable identification property. Finally, to verify the validity of the proposed identification method, the co-simulation of ADAMS-MATLAB is implemented by multi-degree of freedom models of a space robotic system, and the numerical results show a precise and stable identification performance, which is able to guarantee the execution of aerospace operations and prevent failed control strategies.
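The paper's hybrid RLS-APSA scheme is not reproduced here; as a minimal sketch of the recursive least squares half of it, the code below runs plain RLS on a generic linear model y = φᵀθ + noise, the form to which such identification equations are typically reduced, with synthetic regressors and hypothetical parameter values.

```python
# Minimal recursive least squares (RLS) sketch for a generic linear identification
# problem y_k = phi_k^T theta + noise. Only the plain RLS part of the paper's hybrid
# RLS-APSA scheme, with synthetic data and hypothetical "inertial" parameters.
import numpy as np

rng = np.random.default_rng(0)
n_params, n_samples = 4, 500
theta_true = np.array([12.0, 0.3, -0.1, 0.8])      # hypothetical parameters to recover

theta_hat = np.zeros(n_params)
P = np.eye(n_params) * 1e3                          # large initial covariance
lam = 0.99                                          # forgetting factor

for _ in range(n_samples):
    phi = rng.normal(size=n_params)                 # regressor built from measured motion
    y = phi @ theta_true + rng.normal(scale=0.05)   # noisy force/torque measurement
    # standard RLS update
    K = P @ phi / (lam + phi @ P @ phi)
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = (P - np.outer(K, phi @ P)) / lam

print("estimated parameters:", np.round(theta_hat, 3))
```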
Long term observation of low altitude atmosphere by high precision polarization lidar
NASA Astrophysics Data System (ADS)
Shiina, Tatsuo; Noguchi, Kazuo; Fukuchi, Tetsuo
2011-11-01
Prediction of weather disasters such as heavy rain and lightning strikes is highly desirable, and continuous monitoring of the low-altitude atmosphere is important for such prediction. Weather disasters often develop through steep, localized changes that conventional meteorological equipment cannot capture or warn of quickly. We have developed a near-range lidar to capture and analyze the low-altitude atmosphere. In this study, a high-precision polarization lidar was developed to observe the low-altitude atmosphere. The lidar has a polarization extinction ratio of >30 dB, allowing it to detect small polarization changes in the atmosphere. Changes of polarization in the atmosphere allow detection of the depolarization effect and the Faraday effect, which are caused by ice crystals and lightning discharge, respectively. Because the lidar optics are of the "inline" type, in which the transmitter and receiver share common optics, near-range echoes can be observed with a narrow field of view. Long-term observation was carried out at a low elevation angle, with the aim of monitoring the low-altitude atmosphere below the cloud base and capturing its spatial distribution and convection processes. From the viewpoint of polarization, the flow of ice crystals and changes in aerosol concentration are monitored. The observations have continued through cloudy and rainy days, and thunderclouds are also a target. This report explains the system specification to clarify its potential and aims, and presents several observation data sets, including the long-term observations, together with a polarization analysis.
On the origin of Triton and Pluto
NASA Technical Reports Server (NTRS)
Mckinnon, W. B.
1984-01-01
Lyttleton's (1936) hypothesis that Triton and Pluto originated as adjacent prograde satellites of Neptune is evaluated, and it is shown that with the presently accepted masses of Triton and Pluto-Charon, the momentum and energy exchange required to set Triton on a retrograde orbit is impossible. The Pluto-Charon system could not have acquired its present angular momentum state during an ejection event unless a physical collision was involved, which is quite unlikely. The simplest hypothesis is that Triton and Pluto are independent representatives of large outer solar system planetesimals. Triton is simply captured, with spectacular consequences that include runaway melting of interior ices and release to the surface of clathrated CH4, CO, and N2. Condensed remnants of this protoatmosphere could account for features in Triton's unique spectrum.
On the Origin of Triton and Pluto
NASA Technical Reports Server (NTRS)
Mckinnon, W. B.
1985-01-01
Lyttleton's (1936) hypothesis that Triton and Pluto originated as adjacent prograde satellites of Neptune is evaluated, and it is shown that with the presently accepted masses of Triton and Pluto-Charon, the momentum and energy exchange required to set Triton on a retrograde orbit is impossible. The Pluto-Charon system could not have acquired its present angular momentum state during an ejection event unless a physical collision was involved, which is quite unlikely. The simplest hypothesis is that Triton and Pluto are independent representatives of large outer solar system planetesimals. Triton is simply captured, with spectacular consequences that include runaway melting of interior ices and release to the surface of clathrated CH4, CO, and N2. Condensed remnants of this protoatmosphere could account for features in Triton's unique spectrum.
Sasikala, Wilbee D; Mukherjee, Arnab
2012-10-11
DNA intercalation, a biophysical process of enormous clinical significance, has surprisingly eluded molecular understanding for several decades. With an appropriate configurational restraint (to prevent dissociation) in all-atom metadynamics simulations, we capture the free energy surface of direct intercalation from the minor groove-bound state for the first time, using the anticancer agent proflavine. The mechanism along the minimum free energy path reveals that intercalation happens through a minimum base-stacking-penalty pathway in which nonstacking parameters (Twist→Slide/Shift) change first, followed by base stacking parameters (Buckle/Roll→Rise). This mechanism defies the natural fluctuation hypothesis and provides molecular evidence for the drug-induced cavity formation hypothesis. The thermodynamic origin of the barrier is found to be a combination of entropy and desolvation energy.
Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank
2016-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank
2017-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
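For readers unfamiliar with the models compared in the two records above, the sketch below fits the standard two-component "guess plus memory" mixture (a uniform guessing component plus a von Mises component centered on the target) to synthetic recall errors by maximum likelihood. It works in radians on the full circle and is only a schematic stand-in for the discrete-capacity and variable-precision models actually fitted by the authors.

```python
# Standard discrete-capacity ("guess + memory") mixture model for recall errors:
# with probability g the response is a uniform guess, otherwise it is von Mises-distributed
# around the target. Fitted by maximum likelihood to synthetic data (not the authors' code).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Synthetic recall errors in radians on (-pi, pi]: 70% remembered, 30% guesses.
n = 1000
errors = np.where(rng.random(n) < 0.7,
                  stats.vonmises.rvs(kappa=8.0, size=n, random_state=rng),
                  rng.uniform(-np.pi, np.pi, size=n))

def neg_log_like(params):
    guess_rate, kappa = params
    dens = (guess_rate / (2 * np.pi)
            + (1 - guess_rate) * stats.vonmises.pdf(errors, kappa))
    return -np.sum(np.log(dens))

res = optimize.minimize(neg_log_like, x0=[0.2, 5.0],
                        bounds=[(1e-3, 1 - 1e-3), (0.1, 100.0)])
guess_rate, kappa = res.x
print(f"guess rate ≈ {guess_rate:.2f}, memory precision kappa ≈ {kappa:.1f}")
```

A variable-precision alternative would let kappa itself vary across items and trials; the point made in the abstract is that stimulus-specific variation in kappa (e.g., the oblique effect) must be modeled before the two accounts can be compared fairly.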
Precision Measurement of the Beryllium-7 Solar Neutrino Interaction Rate in Borexino
NASA Astrophysics Data System (ADS)
Saldanha, Richard Nigel
Solar neutrinos, since their first detection nearly forty years ago, have revealed valuable information regarding the source of energy production in the Sun, and have demonstrated that neutrino oscillations are well described by the Large Mixing Angle (LMA) oscillation parameters with matter interactions due to the Mikheyev-Smirnov-Wolfenstein (MSW) effect. This thesis presents a precision measurement of the 7Be solar neutrino interaction rate within Borexino, an underground liquid scintillator detector that is designed to measure solar neutrino interactions through neutrino-electron elastic scattering. The thesis includes a detailed description of the analysis techniques developed and used for this measurement as well as an evaluation of the relevant systematic uncertainties that affect the precision of the result. The rate of neutrino-electron elastic scattering from 0.862 MeV 7Be neutrinos is determined to be 45.4 +/- 1.6 (stat) +/- 1.5 (sys) counts/day/100 ton. Due to extensive detector calibrations and improved analysis methods, the systematic uncertainty in the interaction rate has been reduced by more than a factor of two from the previous evaluation. In the no-oscillation hypothesis, the interaction rate corresponds to a 0.862 MeV 7Be electron neutrino flux of (2.75 +/- 0.13) x 10^9 cm^-2 sec^-1. Including the predicted neutrino flux from the Standard Solar Model yields an electron neutrino survival probability of Pee = 0.51 +/- 0.07 and rules out the no-oscillation hypothesis at 5.1 sigma. The LMA-MSW neutrino oscillation model predicts a transition in the solar Pee value between low (< 1 MeV) and high (> 10 MeV) energies which has not yet been experimentally confirmed. This result, in conjunction with the Standard Solar Model, represents the most precise measurement of the electron neutrino survival probability for solar neutrinos at sub-MeV energies.
Practical scheme for optimal measurement in quantum interferometric devices
NASA Astrophysics Data System (ADS)
Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide
2003-06-01
We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.
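The Kennedy-type receiver discussed above was originally proposed for binary coherent-state discrimination. As a rough illustration of how close a Kennedy (nulling) receiver comes to the quantum-optimal (Helstrom) bound in that communications setting, the sketch below evaluates the standard textbook error probabilities for distinguishing |α⟩ from |−α⟩; it is not a reproduction of the paper's interferometric Neyman-Pearson calculation.

```python
# Textbook error probabilities for discriminating the binary coherent states |alpha> vs |-alpha>:
# the Helstrom (quantum-optimal) bound and the ideal Kennedy nulling receiver.
# These standard expressions only illustrate the kind of ultimate bound the abstract refers to.
import numpy as np

def helstrom_error(alpha):
    overlap_sq = np.exp(-4.0 * alpha**2)            # |<alpha|-alpha>|^2
    return 0.5 * (1.0 - np.sqrt(1.0 - overlap_sq))

def kennedy_error(alpha):
    # Displace by +alpha so the hypotheses become |2 alpha> and |0>, then photon-count;
    # an error occurs only when |2 alpha> yields zero photons (equal priors assumed).
    return 0.5 * np.exp(-4.0 * alpha**2)

for alpha in (0.3, 0.5, 1.0):
    print(f"|alpha| = {alpha}: Helstrom {helstrom_error(alpha):.3e}, "
          f"Kennedy {kennedy_error(alpha):.3e}")
```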
Complex high affinity interactions occur between MHCI and superantigens
NASA Technical Reports Server (NTRS)
Chapes, S. K.; Herpich, A. R.; Spooner, B. S. (Principal Investigator)
1998-01-01
Staphylococcal enterotoxins A and C1 (SEA or SEC1) bound to major histocompatibility complex class I (MHCI) molecules with high affinity (binding constants ranging from 1.1 microM to 79 nM). SEA and SEC1 directly bound MHCI molecules that had been captured by monoclonal antibodies specific for H-2Kk, H-2Dk, or both. In addition, MHCI-specific antibodies inhibited the binding of SEC1 to LM929 cells and SEA competitively inhibited SEC1 binding, indicating that the superantigens bound to MHCI on the cell surface. The affinity and number of superantigen binding sites differed depending on whether MHCI was expressed in the membrane of LM929 cells or whether it was captured. These data support the hypothesis that MHCI molecules can serve as superantigen receptors.
The Lunar Cataclysm and How LRO Can Help Test It
NASA Technical Reports Server (NTRS)
Cohen, Barbara A.
2009-01-01
One of the important outstanding goals of lunar science is understanding the bombardment history of the Moon and calibrating the impact flux curve for extrapolation to the Earth and other terrestrial planets. The "terminal lunar cataclysm," a brief but intense period of bombardment about 3.9 billion years ago, is of particular scientific interest. Radiometric dating of lunar impact-melt rocks forms the backbone of the lunar cataclysm hypothesis. A histogram of precise age determinations of impact-melt rocks shows the characteristics of the classic formulation of the lunar cataclysm hypothesis: a sharp peak at 3.9 Ga, a steep decline after 3.9 Ga perhaps only 20-200 Myr long, and few rocks of impact origin prior to 4.0 Ga.
Butterfill, Stephen A
2015-11-01
What evidence could bear on questions about whether humans ever perceptually experience any of another's mental states, and how might those questions be made precise enough to test experimentally? This paper focusses on emotions and their expression. It is proposed that research on perceptual experiences of physical properties provides one model for thinking about what evidence concerning expressions of emotion might reveal about perceptual experiences of others' mental states. This proposal motivates consideration of the hypothesis that categorical perception of expressions of emotion occurs, can be facilitated by information about agents' emotions, and gives rise to phenomenal expectations. It is argued that the truth of this hypothesis would support a modest version of the claim that humans sometimes perceptually experience some of another's mental states. Much available evidence is consistent with, but insufficient to establish, the truth of the hypothesis. We are probably not yet in a position to know whether humans ever perceptually experience others' mental states. Copyright © 2015 Elsevier Inc. All rights reserved.
Isotopic Resonance Hypothesis: Experimental Verification by Escherichia coli Growth Measurements
NASA Astrophysics Data System (ADS)
Xie, Xueshu; Zubarev, Roman A.
2015-03-01
Isotopic composition of reactants affects the rates of chemical and biochemical reactions. As a rule, enrichment of heavy stable isotopes leads to progressively slower reactions. But the recent isotopic resonance hypothesis suggests that the dependence of the reaction rate upon the enrichment degree is not monotonic. Instead, at some "resonance" isotopic compositions the kinetics is enhanced, while at "off-resonance" compositions the same reactions progress more slowly. To test the predictions of this hypothesis for the elements C, H, N and O, we designed a precise (standard error ±0.05%) experiment that measures the parameters of bacterial growth in minimal media with varying isotopic composition. A number of predicted resonance conditions were tested, with significant enhancements in kinetics discovered at these conditions. The combined statistics extremely strongly supports the validity of the isotopic resonance phenomenon (p << 10^-15). This phenomenon has numerous implications for origin-of-life studies and astrobiology, and possible applications in agriculture, biotechnology, medicine, chemistry and other areas.
NASA Astrophysics Data System (ADS)
von Freyberg, Jana; Studer, Bjørn; Kirchner, James W.
2017-03-01
High-frequency measurements of solutes and isotopes (18O and 2H) in rainfall and streamflow can shed important light on catchment flow pathways and travel times, but the workload and sample storage artifacts involved in collecting, transporting, and analyzing thousands of bottled samples severely constrain catchment studies in which conventional sampling methods are employed. However, recent developments towards more compact and robust analyzers have now made it possible to measure chemistry and water isotopes in the field at sub-hourly frequencies over extended periods. Here, we present laboratory and field tests of a membrane-vaporization continuous water sampler coupled to a cavity ring-down spectrometer for real-time measurements of δ18O and δ2H combined with a dual-channel ion chromatograph (IC) for the synchronous analysis of major cations and anions. The precision of the isotope analyzer was typically better than 0.03 ‰ for δ18O and 0.17 ‰ for δ2H in 10 min average readings taken at intervals of 30 min. Carryover effects were less than 1.2 % between isotopically contrasting water samples for 30 min sampling intervals, and instrument drift could be corrected through periodic analysis of secondary reference standards. The precision of the ion chromatograph was typically ~0.1-1 ppm or better, with relative standard deviations of ~1 % or better for most major ions in stream water, which is sufficient to detect subtle biogeochemical signals in catchment runoff. We installed the coupled isotope analyzer/IC system in an uninsulated hut next to a stream of a small catchment and analyzed stream water and precipitation samples every 30 min over 28 days. These high-frequency measurements facilitated a detailed comparison of event-water fractions via endmember mixing analysis with both chemical and isotope tracers. For two events with relatively dry antecedent moisture conditions, the event-water fractions were < 21 % based on isotope tracers but were significantly overestimated (40 to 82 %) by the chemical tracers. These observations, coupled with the storm-to-storm patterns in precipitation isotope inputs and the associated stream water isotope response, led to a conceptual hypothesis for runoff generation in the catchment. Under this hypothesis, the pre-event water that is mobilized by precipitation events may, depending on antecedent moisture conditions, be significantly shallower, younger, and less mineralized than the deeper, older water that feeds baseflow and thus defines the pre-event endmember used in hydrograph separation. This proof-of-concept study illustrates the potential advantages of capturing isotopic and hydrochemical behavior at a high frequency over extended periods that span multiple hydrologic events.
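The event-water fractions discussed above come from standard two-component endmember mixing; a minimal sketch with illustrative tracer values (not the measured ones) shows how isotope- and chemistry-based separations can disagree in the way the abstract describes.

```python
# Two-component hydrograph separation: f_event = (C_stream - C_pre) / (C_event - C_pre).
# All tracer values below are illustrative only, not the study's measurements.
def event_water_fraction(c_stream, c_pre_event, c_event):
    return (c_stream - c_pre_event) / (c_event - c_pre_event)

# delta-18O example (per mil): pre-event (baseflow) endmember, event (rain) endmember, stream sample.
f_iso = event_water_fraction(c_stream=-10.2, c_pre_event=-10.8, c_event=-6.5)

# The same separation with a chemical tracer (e.g. a major cation in ppm) can give a larger
# fraction if the mobilized pre-event water is less mineralized than the baseflow endmember.
f_chem = event_water_fraction(c_stream=19.0, c_pre_event=35.0, c_event=3.0)

print(f"event-water fraction: {f_iso:.0%} (isotope), {f_chem:.0%} (chemical)")
```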
[Contemporary cognitive theories about developmental dyscalculia].
Castro-Cañizares, D; Estévez-Pérez, N; Reigosa-Crespo, V
This review analyzes the current theories describing the cognitive mechanisms underlying developmental dyscalculia. The four most researched hypotheses concerning the cognitive deficits related to developmental dyscalculia, as well as the experimental evidence supporting or refuting them, are presented. The first hypothesis states that developmental dyscalculia is a consequence of domain-general cognitive deficits. The second hypothesis suggests that it is due to a failure in the development of specialized brain systems dedicated to numerosity processing. The third hypothesis asserts that the disorder is caused by a deficit in accessing quantity representations through numerical symbols. The last hypothesis states that developmental dyscalculia appears as a consequence of impairments in a generalized magnitude system dedicated to the processing of continuous and discrete magnitudes. None of the hypotheses has been proven more plausible than the rest; relevant issues raised by them need to be revisited and answered in the light of new experimental designs. In recent years the understanding of the cognitive disorders involved in developmental dyscalculia has increased remarkably, but it remains insufficient. Additional research is required in order to achieve a comprehensive cognitive model of numerical processing development and its disorders, which will improve diagnostic precision and the effectiveness of intervention strategies for developmental dyscalculia.
High-precision mass measurements for the rp-process at JYFLTRAP
NASA Astrophysics Data System (ADS)
Canete, Laetitia; Eronen, Tommi; Jokinen, Ari; Kankainen, Anu; Moore, Ian D.; Nesterenko, Dimitry; Rinta-Antila, Sami
2018-01-01
The double Penning trap JYFLTRAP at the University of Jyväskylä has been successfully used to achieve high-precision mass measurements of nuclei involved in the rapid proton-capture (rp) process. A precise mass measurement of 31Cl is essential to estimate the waiting point condition of 30S in the rp-process occurring in type I x-ray bursts (XRBs). The mass excess of 31Cl measured at JYFLTRAP, -7034.7(3.4) keV, is 15 times more precise than the value given in the Atomic Mass Evaluation 2012. The proton separation energy Sp determined from the new mass-excess value confirmed that 30S is a waiting point, with a lower temperature limit of 0.44 GK. The mass of 52Co affects both the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions. The measured mass-excess value, -34 331.6(6.6) keV, is 30 times more precise than the value given in AME2012. The Q values for the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions are now known with high precision, 1418(11) keV and 2588(26) keV respectively. The results show that 52Co is more proton bound and 53Ni less proton bound than what was expected from the extrapolated values.
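The Sp and Q values quoted above follow from simple mass-excess bookkeeping, Sp(31Cl) = ME(30S) + ME(1H) − ME(31Cl) and Q(p,γ) = ME(target) + ME(1H) − ME(product). The sketch below uses the ME(31Cl) quoted in the abstract and the standard proton mass excess, but the ME(30S) value is an approximate literature number included only for illustration.

```python
# Proton separation energy from mass excesses: Sp(31Cl) = ME(30S) + ME(1H) - ME(31Cl).
# ME(31Cl) is the JYFLTRAP value quoted above; ME(1H) is the standard value;
# ME(30S) is an approximate literature value, used here only for illustration.
ME_1H   = 7288.97      # keV
ME_31Cl = -7034.7      # keV (this work, per the abstract)
ME_30S  = -14063.0     # keV (approximate, illustrative)

Sp_31Cl = ME_30S + ME_1H - ME_31Cl
print(f"Sp(31Cl) ≈ {Sp_31Cl:.0f} keV")   # a few hundred keV, i.e. 31Cl is only weakly proton bound

# The (p,gamma) Q values quoted for 52Co follow from the same bookkeeping.
def q_pgamma(me_target, me_product, me_proton=ME_1H):
    return me_target + me_proton - me_product
```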
The potamochemical symphony: new progress in the high-frequency acquisition of stream chemical data
NASA Astrophysics Data System (ADS)
Floury, Paul; Gaillardet, Jérôme; Gayer, Eric; Bouchez, Julien; Tallec, Gaëlle; Ansart, Patrick; Koch, Frédéric; Gorge, Caroline; Blanchouin, Arnaud; Roubaty, Jean-Louis
2017-12-01
Our understanding of hydrological and chemical processes at the catchment scale is limited by our capacity to record the full breadth of the information carried by river chemistry, both in terms of sampling frequency and precision. Here, we present a proof-of-concept study of a "lab in the field" called the River Lab (RL), based on the idea of permanently installing a suite of laboratory instruments in the field next to a river. Housed in a small shed, this set of instruments performs analyses at a frequency of one every 40 min for major dissolved species (Na+, K+, Mg2+, Ca2+, Cl-, SO42-, NO3-) through continuous sampling and filtration of the river water using automated ion chromatographs. The RL was deployed in the Orgeval Critical Zone Observatory, France, for over a year of continuous analyses. Results show that the RL is able to capture long-term fine chemical variations with no drift and a precision significantly better than conventionally achieved in the laboratory (up to ±0.5 % for all major species for over a day and up to 1.7 % over 2 months). The RL is able to capture the abrupt changes in dissolved species concentrations during a typical 6-day rain event, as well as daily oscillations during a hydrological low-flow period of summer drought. Using the measured signals as a benchmark, we numerically assess the effects of a lower sampling frequency (typical of conventional field sampling campaigns) and of a lower precision (typically reached in the laboratory) on the hydrochemical signal. The high-resolution, high-precision measurements made possible by the RL open new perspectives for understanding critical zone hydro-bio-geochemical cycles. Finally, the RL also offers a solution for management agencies to monitor water quality in quasi-real time.
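The numerical assessment of lower sampling frequencies mentioned above can be illustrated with a small sketch: a synthetic high-frequency concentration record (a diurnal cycle plus one storm pulse, purely illustrative) is subsampled at coarser intervals, linearly interpolated back to full resolution, and compared against the original series.

```python
# Sketch of a subsampling experiment: degrade a high-frequency concentration record to a
# coarser sampling interval and measure what is lost. The synthetic series is illustrative only.
import numpy as np

t = np.arange(0, 28 * 24 * 60, 40) / 60.0                 # 28 days sampled every 40 min, in hours
conc = (5.0 + 0.3 * np.sin(2 * np.pi * t / 24.0)          # daily oscillation
        + 2.0 * np.exp(-0.5 * ((t - 300.0) / 6.0) ** 2))  # a ~6 h storm response around day 12.5

for every in (3, 12, 36):                                  # 2 h, 8 h, 24 h sampling
    sub = conc[::every]
    # reconstruct the full-resolution series by linear interpolation of the subsample
    recon = np.interp(t, t[::every], sub)
    rmse = np.sqrt(np.mean((recon - conc) ** 2))
    print(f"sampling every {every * 40:>4d} min: RMSE = {rmse:.3f}, "
          f"peak error = {np.max(np.abs(recon - conc)):.3f}")
```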
Ultrasonic sensor and method of use
Condreva, Kenneth J.
2001-01-01
An ultrasonic sensor system and method of use for measuring transit time through a liquid sample, using one ultrasonic transducer coupled to a precision time interval counter. The timing circuit captures changes in transit time, representing small changes in the velocity of sound transmitted, over necessarily small time intervals (nanoseconds) and uses the transit time changes to identify the presence of non-conforming constituents in the sample.
High-precision branching ratio measurement for the superallowed β+ emitter Ga62
NASA Astrophysics Data System (ADS)
Finlay, P.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Towner, I. S.; Austin, R. A. E.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Kanungo, R.; Leach, K. G.; Mattoon, C. M.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Ressler, J. J.; Sarazin, F.; Savajols, H.; Schumaker, M. A.; Wong, J.
2008-08-01
A high-precision branching ratio measurement for the superallowed β+ decay of Ga62 was performed at the Isotope Separator and Accelerator (ISAC) radioactive ion beam facility. The 8π spectrometer, an array of 20 high-purity germanium detectors, was employed to detect the γ rays emitted following Gamow-Teller and nonanalog Fermi β+ decays of Ga62, and the SCEPTAR plastic scintillator array was used to detect the emitted β particles. Thirty γ rays were identified following Ga62 decay, establishing the superallowed branching ratio to be 99.858(8)%. Combined with the world-average half-life and a recent high-precision Q-value measurement for Ga62, this branching ratio yields an ft value of 3074.3±1.1 s, making the Ga62 ft value among the most precisely determined superallowed ft values. Comparison between the superallowed ft value determined in this work and the world-average corrected ℱt value allows the large nuclear-structure-dependent correction for Ga62 decay to be determined experimentally from the CVC hypothesis to better than 7% of its own value, the most precise experimental determination for any superallowed emitter. These results provide a benchmark for the refinement of the theoretical description of isospin-symmetry breaking in A⩾62 superallowed decays.
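As background (standard definitions from the superallowed-decay literature, not text from this record; the electron-capture fraction P_EC is included for completeness and its notation is assumed here), the quoted quantities are related by:

```latex
% t is the partial half-life of the superallowed branch; BR is the branching ratio
% and P_EC the small electron-capture fraction (standard notation, assumed here).
t = \frac{t_{1/2}\,(1 + P_{EC})}{BR},
\qquad
\mathcal{F}t \equiv ft\,(1+\delta_{R}')(1+\delta_{NS}-\delta_{C}),
```

where δ'_R is the transition-dependent radiative correction and δ_NS and δ_C are the nuclear-structure-dependent and isospin-symmetry-breaking corrections.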
Simmons, Michael; Singhal, Ayush; Lu, Zhiyong
2018-01-01
The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text — found in biomedical publications and clinical notes — is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine. PMID:27807747
Simmons, Michael; Singhal, Ayush; Lu, Zhiyong
2016-01-01
The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next-generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text, found in biomedical publications and clinical notes, is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine.
Beyond precision surgery: Molecularly motivated precision care for gastric cancer.
Choi, Y Y; Cheong, J-H
2017-05-01
Gastric cancer is one of the leading causes of cancer-related deaths worldwide. Despite the high disease prevalence, gastric cancer research has not gained much attention. Recently, genome-scale technology has made it possible to explore the characteristics of gastric cancer at the molecular level. Accordingly, gastric cancer can be classified into molecular subtypes that convey more detailed information about the tumor than histopathological characteristics do, and these subtypes are associated with clinical outcomes. Furthermore, this molecular knowledge helps to identify new actionable targets and develop novel therapeutic strategies. To advance the concept of precision patient care in the clinic, patient-derived xenograft (PDX) models have recently been developed. PDX models not only represent histology and genomic features, but also predict responsiveness to investigational drugs in patient tumors. Molecularly curated PDX cohorts will be instrumental in hypothesis generation, biomarker discovery, and drug screening and testing in proof-of-concept preclinical trials for precision therapy. In the era of precision medicine, molecularly tailored therapeutic strategies should be individualized for cancer patients. To improve the overall clinical outcome, a multimodal approach is indispensable for advanced cancer patients. Careful, oncological principle-based surgery, combined with a molecularly guided multidisciplinary approach, will open new horizons in surgical oncology. Copyright © 2017. Published by Elsevier Ltd.
Terminal spacecraft rendezvous and capture with LASSO model predictive control
NASA Astrophysics Data System (ADS)
Hartley, Edward N.; Gallieri, Marco; Maciejowski, Jan M.
2013-11-01
The recently investigated ℓasso model predictive control (MPC) is applied to the terminal phase of a spacecraft rendezvous and capture mission. The interaction between the cost function and the treatment of minimum impulse bit is also investigated. The propellant consumption with ℓasso MPC for the considered scenario is noticeably less than with a conventional quadratic cost and control actions are sparser in time. Propellant consumption and sparsity are competitive with those achieved using a zone-based ℓ1 cost function, whilst requiring fewer decision variables in the optimisation problem than the latter. The ℓasso MPC is demonstrated to meet tighter specifications on control precision and also avoids the risk of undesirable behaviours often associated with pure ℓ1 stage costs.
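To make the ℓ1-penalized stage cost concrete, here is a toy sketch in the spirit of lasso MPC, using a double-integrator stand-in for the rendezvous dynamics and CVXPY for the optimization; the horizon, weights, thrust bound, and dynamics are illustrative assumptions, not the paper's formulation:

```python
import cvxpy as cp
import numpy as np

# Illustrative single-axis double-integrator dynamics (NOT the paper's rendezvous model)
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 20                                    # prediction horizon (assumed)
x0 = np.array([100.0, -1.0])              # relative position [m] and velocity [m/s]

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
Q = np.diag([1.0, 10.0])                  # state weighting (assumed)
lam = 5.0                                 # weight on the l1 (propellant-like) term

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    # Quadratic penalty on the state plus an l1 penalty on thrust promotes sparse firings
    cost += cp.quad_form(x[:, k + 1], Q) + lam * cp.norm1(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 0.5]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("thrust profile (note the zeros):", np.round(u.value, 3))
```

Increasing `lam` drives more of the planned thrusts exactly to zero, which is the sparsity-in-time behaviour the abstract describes.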
Barnard, D R; Knue, G J; Dickerson, C Z; Bernier, U R; Kline, D L
2011-06-01
Capture rates of insectary-reared female Aedes albopictus (Skuse), Anopheles quadrimaculatus Say, Culex nigripalpus Theobald, Culex quinquefasciatus Say and Aedes triseriatus (Say) in CDC-type light traps (LT) supplemented with CO2 and using the human landing (HL) collection method were observed in matched-pair experiments in outdoor screened enclosures. Mosquito responses were compared on a catch-per-unit-effort basis using regression analysis with LT and HL as the dependent and independent variables, respectively. The average number of mosquitoes captured in 1 min by LT over a 24-h period was significantly related to the average number captured in 1 min by HL only for Cx. nigripalpus and Cx. quinquefasciatus. Patterns of diel activity indicated by a comparison of the mean response to LT and HL at eight different times in a 24-h period were not superposable for any species. The capture rate efficiency of LT when compared with HL was ≤15% for all mosquitoes except Cx. quinquefasciatus (43%). Statistical models of the relationship between mosquito responses to each collection method indicate that, except for Ae. albopictus, LT and HL capture rates are significantly related only during certain times of the diel period. Estimates of mosquito activity based on observations made between sunset and sunrise were most precise in this regard for An. quadrimaculatus and Cx. nigripalpus, as were those between sunrise and sunset for Cx. quinquefasciatus and Ae. triseriatus.
Laser Capture Microdissection for Protein and NanoString RNA analysis
Golubeva, Yelena; Salcedo, Rosalba; Mueller, Claudius; Liotta, Lance A.; Espina, Virginia
2013-01-01
Laser capture microdissection (LCM) allows the precise procurement of enriched cell populations from a heterogeneous tissue, or live cell culture, under direct microscopic visualization. Histologically enriched cell populations can be procured by harvesting cells of interest directly, or isolating specific cells by ablating unwanted cells. The basic components of laser microdissection technology are a) visualization of cells via light microscopy, b) transfer of laser energy to a thermolabile polymer with either the formation of a polymer-cell composite (capture method) or transfer of laser energy via an ultraviolet laser to photovolatize a region of tissue (cutting method), and c) removal of cells of interest from the heterogeneous tissue section. The capture and cutting methods (instruments) for laser microdissection differ in the manner by which cells of interest are removed from the heterogeneous sample. Laser energy in the capture method is infrared (810nm), while in the cutting mode the laser is ultraviolet (355nm). Infrared lasers melt a thermolabile polymer that adheres to the cells of interest, whereas ultraviolet lasers ablate cells for either removal of unwanted cells or excision of a defined area of cells. LCM technology is applicable to an array of applications including mass spectrometry, DNA genotyping and loss-of-heterozygosity analysis, RNA transcript profiling, cDNA library generation, proteomics discovery, and signal kinase pathway profiling. This chapter describes laser capture microdissection using an ArcturusXT instrument for protein LCM sample analysis, and using a mmi CellCut Plus® instrument for RNA analysis via NanoString technology. PMID:23027006
Purkayastha, Sagar N; Byrne, Michael D; O'Malley, Marcia K
2012-01-01
Gaming controllers are attractive devices for research due to their onboard sensing capabilities and low cost. However, a proper quantitative analysis of their suitability for use in motion capture, rehabilitation, and as input devices for teleoperation and gesture recognition has yet to be conducted. In this paper, a detailed analysis of the sensors of two of these controllers, the Nintendo Wiimote and the Sony Playstation 3 Sixaxis, is presented. The acceleration and angular velocity data from the sensors of these controllers were compared and correlated with computed acceleration and angular velocity data derived from a high-resolution encoder. The results show high correlation between the sensor data from the controllers and the computed data derived from the position data of the encoder. From these results, it can be inferred that the Wiimote is more consistent and better suited for motion capture applications and as an input device than the Sixaxis. The applications of the findings are discussed with respect to potential research ventures.
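A minimal sketch of the comparison described above, assuming a synchronized controller rate-gyro trace and encoder angle record; the signals, sampling rate, and noise level are synthetic placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                                  # Hz, assumed common sampling rate
t = np.arange(0, 10, 1 / fs)

# Hypothetical recordings: encoder angle (rad) and the controller's gyro output (rad/s)
encoder_angle = np.sin(2 * np.pi * 0.5 * t)
controller_gyro = 2 * np.pi * 0.5 * np.cos(2 * np.pi * 0.5 * t) + rng.normal(0, 0.05, t.size)

# Differentiate the encoder position to get a reference angular velocity,
# then correlate it with the controller's rate-gyro signal.
ref_angular_velocity = np.gradient(encoder_angle, 1 / fs)
r = np.corrcoef(ref_angular_velocity, controller_gyro)[0, 1]
print(f"Pearson correlation with encoder-derived rate: {r:.3f}")
```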
Improving size estimates of open animal populations by incorporating information on age
Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.
2003-01-01
Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
NASA Astrophysics Data System (ADS)
Sathnur, Ashwini
2017-04-01
Validation of the existing remote sensing products. Both ground-based and space-based instruments are available for remote sensing of volcanic eruptions. Sunlight falling on the volcanic area is reflected, together with the image of that area, to the satellite; the satellite captures the emitted spectrum of the image and from it computes the occurrence of a volcanic eruption. The computation derives the presence of sulphur dioxide and volcanic ash in the emitted spectrum, and the temperature of the volcanic region is also measured. If these inputs indicate the possibility of an eruption, the data are captured manually by the system for further use in hazard mitigation. The instrument is particularly important for capturing the volcanogenic signal, and this capture should be carried out at the time of day when the reflected image spectra are best available. Capturing data at night is not advisable, since the sunlight spectrum is then at its minimum; with no sunlight to be reflected from the volcanic region, the emitted light spectrum is barely captured and the data would be misinterpreted. Adequate area coverage of the spectrometer is also mandatory, so that the right area is captured and the occurrence of an eruption can be derived precisely; the larger the spatial resolution, the more of the geographic region is captured but the less precise the data, leading to missing details. Ideal qualities for the remote sensing instrument are: a minimum of false positives, cost-free data availability, minimal bandwidth problems, and a rapid communication system. Validation and requirements of the new remote sensing products. The qualities of the existing products would also be present in the new products; in addition, newly devised qualities are required to build an advanced remote sensing instrument: (1) enlarging the spatial resolution so that the volcanic plumes from an early eruption are captured, which requires better video and camera facilities on the instrument; (2) capturing traces of carbon, carbonic acid, and water vapour, along with the existing capture of sulphur dioxide and volcanic ash; (3) an additional module that provides forecasting of volcanic eruptions several months in advance, creating mechanisms for early solutions to the problem of mitigating volcanic hazards; and (4) additional features enabling the automatic transfer of forecast eruptions to the disaster relief operations team.
This transfer of information is to be performed automatically, without any request from the relief operations team for the predicted forecast, so that the information is received at the right time and the possibility of errors during hazard management is eliminated.
An automated framework for hypotheses generation using literature.
Abedi, Vida; Zand, Ramin; Yeasin, Mohammed; Faisal, Fazle Elahi
2012-08-29
In biomedicine, exploratory studies and hypothesis generation often begin with researching existing literature to identify a set of factors and their association with diseases, phenotypes, or biological processes. Many scientists are overwhelmed by the sheer volume of literature on a disease when they plan to generate a new hypothesis or study a biological phenomenon. The situation is even worse for junior investigators, who often find it difficult to formulate new hypotheses or, more importantly, to corroborate whether their hypothesis is consistent with existing literature. It is a daunting task to keep abreast of so much being published and also remember all combinations of direct and indirect associations. Fortunately, there is a growing trend of using literature mining and knowledge discovery tools in biomedical research. However, there is still a large gap between the huge amount of effort and resources invested in disease research and the little effort in harvesting the published knowledge. The proposed hypothesis generation framework (HGF) finds "crisp semantic associations" among entities of interest, which is a step towards bridging such gaps. The proposed HGF shares end goals similar to those of SWAN but is more holistic in nature; it was designed and implemented using scalable and efficient computational models of disease-disease interaction. The integration of mapping ontologies with latent semantic analysis is critical in capturing domain-specific direct and indirect "crisp" associations, and in making assertions about entities (such as disease X is associated with a set of factors Z). Pilot studies were performed using two diseases. A comparative analysis of the computed "associations" and "assertions" with curated expert knowledge was performed to validate the results. It was observed that the HGF is able to capture "crisp" direct and indirect associations and provide knowledge discovery on demand. The proposed framework is fast, efficient, and robust in generating new hypotheses to identify factors associated with a disease. A fully integrated Web service application is being developed for wide dissemination of the HGF. A large-scale study by domain experts and associated researchers is underway to validate the associations and assertions computed by the HGF.
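A minimal sketch of the latent-semantic-analysis step that underpins this kind of association scoring, using scikit-learn for illustration; the toy corpus, the term list, and the cosine-similarity scoring are assumptions standing in for the HGF's actual ontology-aware pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for retrieved abstracts (hypothetical sentences, not real literature)
docs = [
    "oxidative stress is associated with stroke risk in patients with hypertension",
    "inflammation markers predict stroke outcome",
    "dietary sodium raises blood pressure and hypertension",
    "hypertension is a major risk factor for stroke",
]
terms = ["stroke", "hypertension", "inflammation", "sodium"]   # entities of interest (assumed)

tfidf = TfidfVectorizer(vocabulary=terms)
X = tfidf.fit_transform(docs)                  # document-term matrix
names = list(tfidf.get_feature_names_out())

# Latent semantic analysis: project term vectors into a low-rank space
lsa = TruncatedSVD(n_components=2, random_state=0)
term_vectors = lsa.fit_transform(X.T)

# Cosine similarity in the latent space as a crude direct/indirect association score
sim = cosine_similarity(term_vectors)
i = names.index("stroke")
print({t: round(float(s), 2) for t, s in zip(names, sim[i])})
```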
Habitat corridors function as both drift fences and movement conduits for dispersing flies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, Joanna H.; Levey Douglas J.; Hogsette, Jerome A.
2005-03-30
Corridors connect otherwise isolated habitat patches and can direct movement of animals among such patches. In eight experimental landscapes, we tested two hypotheses of how corridors might affect dispersal behavior. The Traditional Corridor hypothesis posits that animals preferentially leave patches via corridors, following them into adjacent patches. The Drift Fence hypothesis posits that animals dispersing through matrix habitat are diverted into patches with corridors because they follow corridors when encountered. House flies (Musca domestica L.), a species that prefers the habitat of our patches and corridors, were released in a central patch (100 × 100 m) and recaptured in peripheral patches that were or were not connected by a corridor. Flies were captured more frequently in connected than unconnected patches, thereby supporting the Traditional Corridor hypothesis. The Drift Fence hypothesis was also supported, as flies were captured more frequently in unconnected patches with blind (dead end) corridors than in unconnected patches of equal area without blind corridors. A second experiment tested whether these results might be dependent on the type of patch-matrix boundary encountered by dispersing flies and whether edge-following behavior might be the mechanism underlying the observed corridor effect in the first experiment. We recorded dispersal patterns of flies released along forest edges with dense undergrowth in the forest ("closed" edges) and along edges with little forest understory ("open" edges). Flies were less likely to cross and more likely to follow closed edges than open edges, indicating that when patch and corridor edges are pronounced, edge-following behavior of flies may direct them along corridors into connected patches. Because edges in the first experiment were open, these results also suggest that corridor effects for flies in that experiment would have been even stronger if the edges around the source patches and corridors had been more closed. Taken together, our results suggest that corridors can affect dispersal of organisms in unappreciated ways (i.e., as drift fences) and that edge type can alter dispersal behavior.
Social learning spreads knowledge about dangerous humans among American crows.
Cornell, Heather N; Marzluff, John M; Pecoraro, Shannon
2012-02-07
Individuals face evolutionary trade-offs between the acquisition of costly but accurate information gained firsthand and the use of inexpensive but possibly less reliable social information. American crows (Corvus brachyrhynchos) use both sources of information to learn the facial features of a dangerous person. We exposed wild crows to a novel 'dangerous face' by wearing a unique mask as we trapped, banded and released 7-15 birds at five study sites near Seattle, WA, USA. An immediate scolding response to the dangerous mask after trapping by previously captured crows demonstrates individual learning, while an immediate response by crows that were not captured probably represents conditioning to the trapping scene by the mob of birds that assembled during the capture. Later recognition of dangerous masks by lone crows that were never captured is consistent with horizontal social learning. Independent scolding by young crows, whose parents had conditioned them to scold the dangerous mask, demonstrates vertical social learning. Crows that directly experienced trapping later discriminated among dangerous and neutral masks more precisely than did crows that learned through social means. Learning enabled scolding to double in frequency and spread at least 1.2 km from the place of origin over a 5 year period at one site.
Estimation of population size using open capture-recapture models
McDonald, T.L.; Amstrup, Steven C.
2001-01-01
One of the most important needs for wildlife managers is an accurate estimate of population size. Yet, for many species, including most marine species and large mammals, accurate and precise estimation of numbers is one of the most difficult of all research challenges. Open-population capture-recapture models have proven useful in many situations to estimate survival probabilities but typically have not been used to estimate population size. We show that open-population models can be used to estimate population size by developing a Horvitz-Thompson-type estimate of population size and an estimator of its variance. Our population size estimate keys on the probability of capture at each trap occasion and therefore is quite general and can be made a function of external covariates measured during the study. Here we define the estimator and investigate its bias, variance, and variance estimator via computer simulation. Computer simulations make extensive use of real data taken from a study of polar bears (Ursus maritimus) in the Beaufort Sea. The population size estimator is shown to be useful because it was negligibly biased in all situations studied. The variance estimator is shown to be useful in all situations, but caution is warranted in cases of extreme capture heterogeneity.
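A minimal sketch of a Horvitz-Thompson-type abundance estimate of the general form described (a sum of reciprocal capture probabilities); the capture probabilities below are simulated placeholders rather than values fitted from the polar bear data, and the paper's variance estimator is not reproduced:

```python
import numpy as np

def horvitz_thompson_N(capture_probs):
    """Horvitz-Thompson-type abundance estimate: one estimated capture
    probability per animal actually caught on a given occasion."""
    p = np.asarray(capture_probs, dtype=float)
    return np.sum(1.0 / p)

# Illustrative: 40 captured animals whose modeled capture probabilities vary with covariates
p_hat = np.random.default_rng(1).uniform(0.2, 0.6, size=40)
print(f"estimated population size: {horvitz_thompson_N(p_hat):.0f}")
```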
Precision electron-beam polarimetry at 1 GeV using diamond microstrip detectors
Narayan, A.; Jones, D.; Cornejo, J. C.; ...
2016-02-16
We report on the highest precision yet achieved in the measurement of the polarization of a low-energy, O(1 GeV), continuous-wave (CW) electron beam, accomplished using a new polarimeter based on electron-photon scattering, in Hall C at Jefferson Lab. A number of technical innovations were necessary, including a novel method for precise control of the laser polarization in a cavity and a novel diamond microstrip detector that was able to capture most of the spectrum of scattered electrons. The data analysis technique exploited track finding, the high granularity of the detector, and its large acceptance. The polarization of the 180 μA, 1.16 GeV electron beam was measured with a statistical precision of <1% per hour and a systematic uncertainty of 0.59%. This exceeds the level of precision required by the Qweak experiment, a measurement of the weak vector charge of the proton. Proposed future low-energy experiments require polarization uncertainty <0.4%, and this result represents an important demonstration of that possibility. This measurement is the first use of diamond detectors for particle tracking in an experiment. As a result, it demonstrates the stable operation of a diamond-based tracking detector in a high radiation environment for two years.
Toward 1-mm depth precision with a solid state full-field range imaging system
NASA Astrophysics Data System (ADS)
Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.
2006-02-01
Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. This system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine phase, and hence distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision on the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
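A minimal sketch of the per-pixel phase-to-range step in this kind of heterodyne (beat-frequency) imaging; the frame rate, the synthetic frame stack, and the sign and phase-offset conventions are illustrative assumptions, not the actual system parameters:

```python
import numpy as np

c = 3.0e8          # m/s
f_mod = 10e6       # Hz, illumination/intensifier modulation frequency
f_beat = 1.0       # Hz, heterodyne difference frequency
fps = 29.0         # camera frame rate (assumed value, not the system's)

def phase_to_range(frames):
    """frames: (n_frames, height, width) stack of the beating intensifier output."""
    n = frames.shape[0]
    t = np.arange(n) / fps
    # Correlate each pixel's intensity time series with the 1 Hz beat to get its phase
    ref = np.exp(-2j * np.pi * f_beat * t)
    z = np.tensordot(ref, frames, axes=(0, 0))          # complex per-pixel amplitude
    phase = np.mod(np.angle(z), 2 * np.pi)
    # AMCW round-trip relation; unambiguous range is c / (2 f_mod) = 15 m at 10 MHz
    return c * phase / (4 * np.pi * f_mod)

frames = np.random.rand(290, 4, 4)                       # placeholder data, ~10 beat cycles
print(phase_to_range(frames).shape)
```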
Gale, Maggie; Ball, Linden J
2012-04-01
Hypothesis-testing performance on Wason's (Quarterly Journal of Experimental Psychology 12:129-140, 1960) 2-4-6 task is typically poor, with only around 20% of participants announcing the to-be-discovered "ascending numbers" rule on their first attempt. Enhanced solution rates can, however, readily be observed with dual-goal (DG) task variants requiring the discovery of two complementary rules, one labeled "DAX" (the standard "ascending numbers" rule) and the other labeled "MED" ("any other number triples"). Two DG experiments are reported in which we manipulated the usefulness of a presented MED exemplar, where usefulness denotes cues that can establish a helpful "contrast class" that can stand in opposition to the presented 2-4-6 DAX exemplar. The usefulness of MED exemplars had a striking facilitatory effect on DAX rule discovery, which supports the importance of contrast-class information in hypothesis testing. A third experiment ruled out the possibility that the useful MED triple seeded the correct rule from the outset and obviated any need for hypothesis testing. We propose that an extension of Oaksford and Chater's (European Journal of Cognitive Psychology 6:149-169, 1994) iterative counterfactual model can neatly capture the mechanisms by which DG facilitation arises.
Leivada, Evelina; Kambanaros, Maria; Grohmann, Kleanthes K.
2017-01-01
Grammatical markers are not uniformly impaired across speakers of different languages, even when speakers share a diagnosis and the marker in question is grammaticalized in a similar way in these languages. The aim of this work is to demarcate, from a cross-linguistic perspective, the linguistic phenotype of three genetically heterogeneous developmental disorders: specific language impairment, Down syndrome, and autism spectrum disorder. After a systematic review of linguistic profiles targeting mainly English-, Greek-, Catalan-, and Spanish-speaking populations with developmental disorders (n = 880), shared loci of impairment are identified and certain domains of grammar are shown to be more vulnerable than others. The distribution of impaired loci is captured by the Locus Preservation Hypothesis which suggests that specific parts of the language faculty are immune to impairment across developmental disorders. Through the Locus Preservation Hypothesis, a classical chicken and egg question can be addressed: Do poor conceptual resources and memory limitations result in an atypical grammar or does a grammatical breakdown lead to conceptual and memory limitations? Overall, certain morphological markers reveal themselves as highly susceptible to impairment, while syntactic operations are preserved, granting support to the first scenario. The origin of resilient syntax is explained from a phylogenetic perspective in connection to the “syntax-before-phonology” hypothesis. PMID:29081756
Can Outer Hair Cells Actively Pump Fluid into the Tunnel of Corti?
NASA Astrophysics Data System (ADS)
Zagadou, Brissi Franck; Mountain, David C.
2011-11-01
Non-classical models of the cochlear traveling wave have been introduced in an attempt to capture the unique features of the cochlear amplifier (CA). These models include multiple modes of longitudinal coupling. In one approach, it is hypothesized that two wave modes can add their energies to create amplification such as that desired in the CA. The tunnel of Corti (ToC) was later used to represent the second wave mode for the proposed traveling wave amplifier model, and was incorporated in a multi-compartment cochlea model. The results led to the hypothesis that the CA functions as a fluid pump. However, this hypothesis must be consistent with the anatomical structure of the organ of Corti (OC). The fluid must pass between the outer pillar cells before reaching the ToC, and the ToC fluid and the underlying basilar membrane must constitute an appropriate waveguide. We have analyzed an anatomically based 3D finite element model of the ToC of the gerbil. Our results demonstrate that the OC structure is consistent with the hypothesis.
Age Mediation of Frontoparietal Activation during Visual Feature Search
Madden, David J.; Parks, Emily L.; Davis, Simon W.; Diaz, Michele T.; Potter, Guy G.; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto
2014-01-01
Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19 – 29 years of age) and 21 older adults (60 – 87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. PMID:25102420
The same-location cost is unrelated to attentional settings: an object-updating account.
Carmel, Tomer; Lamy, Dominique
2014-08-01
What mechanisms allow us to ignore salient yet irrelevant visual information has been a matter of intense debate. According to the contingent-capture hypothesis, such information is filtered out, whereas according to the salience-based account, it captures attention automatically. Several recent studies have reported a same-location cost that appears to fit neither of these accounts. These showed that responses may actually be slower when the target appears at the location just occupied by an irrelevant singleton distractor. Here, we investigated the mechanisms underlying this same-location cost. Our findings show that the same-location cost is unrelated to automatic attentional capture or strategic setting of attentional priorities, and therefore invalidate the feature-based inhibition and fast attentional disengagement accounts of this effect. In addition, we show that the cost is wiped out when the cue and target are not perceived as parts of the same object. We interpret these findings as indicating that the same-location cost has been previously misinterpreted by both bottom-up and top-down theories of attentional capture. We propose that it is better understood as a consequence of object updating, namely, as the cost of updating the information stored about an object when this object changes across time.
Engineering design knowledge recycling in near-real-time
NASA Technical Reports Server (NTRS)
Leifer, Larry; Baya, Vinod; Toye, George; Baudin, Catherine; Underwood, Jody Gevins
1994-01-01
It is hypothesized that the capture and reuse of machine-readable design records is cost-beneficial. This informal engineering notebook design knowledge can be used to model the artifact and the design process. Design rationale is, in part, preserved and available for examination. Redesign cycle time is significantly reduced (Baya et al., 1992). These factors contribute to making it less costly to capture and reuse knowledge than to recreate comparable knowledge (current practice). To test the hypothesis, we have focused on validation of the concept and tools in two 'real design' projects this past year: (1) a short (8-month) turnaround project for NASA life science bioreactor researchers was done by a team of three mechanical engineering graduate students at Stanford University (in a class, ME210abc 'Mechatronic Systems Design and Methodology', taught by one of the authors, Leifer); and (2) a long-range (8- to 20-year) international consortium project for NASA's Space Science program (STEP: satellite test of the equivalence principle). Design knowledge capture was supported this year by assigning the use of a Team-Design PowerBook. Design records were cataloged in near-real time. These records were used to qualitatively model the artifact design as it evolved. Dedal, an 'intelligent librarian' developed at NASA-ARC, was used to navigate and retrieve captured knowledge for reuse.
Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle
NASA Technical Reports Server (NTRS)
VanEepoel, John; Thienel, Julie; Sanner, Robert M.
2006-01-01
In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.
Three-dimensional particle tracking via tunable color-encoded multiplexing.
Duocastella, Martí; Theriault, Christian; Arnold, Craig B
2016-03-01
We present a novel 3D tracking approach capable of locating single particles with nanometric precision over wide axial ranges. Our method uses a fast acousto-optic liquid lens implemented in a bright field microscope to multiplex light based on color into different and selectable focal planes. By separating the red, green, and blue channels from an image captured with a color camera, information from up to three focal planes can be retrieved. Multiplane information from the particle diffraction rings enables precisely locating and tracking individual objects up to an axial range about 5 times larger than conventional single-plane approaches. We apply our method to the 3D visualization of the well-known coffee-stain phenomenon in evaporating water droplets.
An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test on merit functions that measure each track as a possible target track. Its accuracy depends on the precision of the distribution of the merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis, which holds for most methods such as multiple hypothesis tracking (MHT). However, the merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure this correlation, the exact distribution can hardly be estimated. If the merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it diverges under propagation. Hence, in this paper, we propose a novel estimation-of-distribution method based on copulas, by which the distribution can be estimated precisely: the error is less than 1 percent and does not propagate. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
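A minimal Gaussian-copula sketch of the general idea (coupling correlated, non-Gaussian merit functions so that a detection threshold can be read off the joint distribution); the correlation value, the Gamma marginals, and the max-merit test statistic are illustrative assumptions, not the paper's construction:

```python
import numpy as np
from scipy import stats

# Gaussian-copula construction for two correlated, non-Gaussian merit functions
rho = 0.6                                        # assumed dependence strength
cov = np.array([[1.0, rho], [rho, 1.0]])
z = stats.multivariate_normal(mean=[0, 0], cov=cov).rvs(size=100_000, random_state=0)
u = stats.norm.cdf(z)                            # correlated uniforms = the copula

# Plug the uniforms into arbitrary marginal distributions (Gamma chosen for illustration)
m1 = stats.gamma(a=2.0).ppf(u[:, 0])
m2 = stats.gamma(a=3.0).ppf(u[:, 1])

# The joint samples keep the chosen marginals while carrying the copula dependence,
# so a false-alarm threshold can be set from the joint distribution directly.
threshold = np.quantile(np.maximum(m1, m2), 0.999)
print(f"0.1% false-alarm threshold on the max-merit statistic: {threshold:.2f}")
```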
NASA Astrophysics Data System (ADS)
Fox, J.; Saksena, S.; Spencer, J.; Finucane, M.; Sultana, N.
2012-12-01
Our overarching hypothesis is that new risks, in this case the H5N1 strain of avian influenza, emerge during transitions between stages of development. Moreover, these risks are not coincidental but occur precisely because of the in-between nature of the coupled human-natural system at the point when things are neither traditional nor modern but resemble the state of chaos, release, and reorganization. We are testing this hypothesis in Vietnam using demographic, social, economic, and environmental data collected in national censuses and analyzed at commune and district levels to identify communes and districts that are traditional, modern, and transitional (peri-urban). Using data from the 2006 agricultural census that capture both the changing nature of the built environment (types of sanitation systems) and the loss of and diversification of agricultural systems (percent of households whose major source of income is from agriculture, and percent of land under agriculture, forests, and aquaculture), together with a normalized difference vegetation index (NDVI) from 2006 Landsat images, we created a national-scale urbanicity map for Vietnam. Field work in the summer of 2011 showed this map to be an accurate (approximately 85%) approximation of traditional (rural), transitional (periurban), and modern (urban) communes. Preliminary results suggest that over 7% of the country's land area and roughly 15% of its population resides in periurban neighborhoods, and that these areas do have a statistically significant greater incidence of avian influenza, as measured in chicken deaths, than traditional and modern communes (Table 1). Transitional neighborhoods such as these force planners to ask two questions. To what extent does the dichotomy of urban/rural make sense in the context of Vietnam, when large areas and parts of the population are caught between the two? Second, how can planners and policy makers effectively provide basic public goods and services in these contexts? (Table 1 caption: classification of places in Vietnam based on agricultural income, sanitation, and land under agriculture, forests, and aquaculture, together with NDVI, for which denser vegetation canopies have higher values, and Simpson's diversity index, where 0 indicates no diversity and 1 indicates infinite diversity.)
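A minimal sketch of the NDVI computation referenced above (a standard index); the band arrays are random placeholders rather than the study's Landsat scenes, and the band names follow the usual red/near-infrared convention:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index; denser canopies give higher values."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small constant avoids division by zero

# Placeholder reflectance arrays standing in for Landsat near-infrared and red bands
nir = np.random.rand(100, 100)
red = np.random.rand(100, 100)
print(f"mean NDVI of the placeholder scene: {ndvi(nir, red).mean():.3f}")
```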
Imprecision in the Era of Precision Medicine in Non-Small Cell Lung Cancer
Sundar, Raghav; Chénard-Poirier, Maxime; Collins, Dearbhaile Catherine; Yap, Timothy A.
2017-01-01
Over the past decade, major advances have been made in the management of advanced non-small cell lung cancer (NSCLC). There has been a particular focus on the identification and targeting of putative driver aberrations, which has propelled NSCLC to the forefront of precision medicine. Several novel molecularly targeted agents have now achieved regulatory approval, while many others are currently in late-phase clinical trial testing. These antitumor therapies have significantly impacted the clinical outcomes of advanced NSCLC and provided patients with much hope for the future. Despite this, multiple deficiencies still exist in our knowledge of this complex disease, and further research is urgently required to overcome these critical issues. This review traces the path undertaken by the different therapeutics assessed in NSCLC and the impact of precision medicine in this disease. We also discuss the areas of “imprecision” that still exist in NSCLC and the modern hypothesis-testing studies being conducted to address these key challenges. PMID:28443282
From the Nano- to the Macroscale - Bridging Scales for the Moving Contact Line Problem
NASA Astrophysics Data System (ADS)
Nold, Andreas; Sibley, David; Goddard, Benjamin; Kalliadasis, Serafim; Complex Multiscale Systems Team
2016-11-01
The moving contact line problem remains an unsolved fundamental problem in fluid mechanics. At the heart of the problem is its multiscale nature: a nanoscale region close to the solid boundary where the continuum hypothesis breaks down, must be resolved before effective macroscale parameters such as contact line friction and slip can be obtained. To capture nanoscale properties very close to the contact line and to establish a link to the macroscale behaviour, we employ classical density-functional theory (DFT), in combination with extended Navier-Stokes-like equations. Using simple models for viscosity and slip at the wall, we compare our computations with the Molecular Kinetic Theory, by extracting the contact line friction, depending on the imposed temperature of the fluid. A key fluid property captured by DFT is the fluid layering at the wall-fluid interface, which has a large effect on the shearing properties of a fluid. To capture this crucial property, we propose an anisotropic model for the viscosity, which also allows us to scrutinize the effect of fluid layering on contact line friction.
On the Role of the Ventromedial Prefrontal Cortex in Self-Processing: The Valuation Hypothesis
D’Argembeau, Arnaud
2013-01-01
With the development of functional neuroimaging, important progress has been made in identifying the brain regions involved in self-related processing. One of the most consistent findings has been that the ventromedial prefrontal cortex (vMPFC) is activated when people contemplate various aspects of themselves and their life, such as their traits, experiences, preferences, abilities, and goals. Recent evidence suggests that this region may not support the act of self-reflection per se, but its precise function in self-processing remains unclear. In this article, I examine the hypothesis that the vMPFC may contribute to assigning personal value or significance to self-related contents: stimuli and mental representations that refer or relate to the self tend to be assigned unique value or significance, and the function of the vMPFC may be precisely to evaluate or represent such significance. Although relatively few studies to date have directly tested this hypothesis, several lines of evidence converge to suggest that vMPFC activity during self-processing depends on the personal significance of self-related contents. First, increasing psychological distance from self-representations leads to decreased activation in the vMPFC. Second, the magnitude of vMPFC activation increases linearly with the personal importance attributed to self-representations. Third, the activity of the vMPFC is modulated by individual differences in the interest placed on self-reflection. Finally, the evidence shows that the vMPFC responds to outer aspects of self that have high personal value, such as possessions and close others. By assigning personal value to self-related contents, the vMPFC may play an important role in the construction, stabilization, and modification of self-representations, and ultimately in guiding our choices and decisions. PMID:23847521
NASA Astrophysics Data System (ADS)
Hu, Ye; Peng, Yang; Lin, Kevin; Shen, Haifa; Brousseau, Louis C., III; Sakamoto, Jason; Sun, Tong; Ferrari, Mauro
2011-02-01
Phosphorylated peptides and proteins play an important role in normal cellular activities, e.g., gene expression, mitosis, differentiation, proliferation, and apoptosis, as well as tumor initiation, progression and metastasis. However, technical hurdles hinder the use of common fractionation methods to capture phosphopeptides from complex biological fluids such as human sera. Herein, we present the development of a dual strategy material that offers enhanced capture of low molecular weight phosphoproteins: mesoporous silica thin films with precisely engineered pore sizes that sterically select for molecular size combined with chemically selective surface modifications (i.e. Ga3+, Ti4+ and Zr4+) that target phosphoproteins. These materials provide high reproducibility (CV = 18%) and increase the stability of the captured proteins by excluding degrading enzymes, such as trypsin. The chemical and physical properties of the composite mesoporous thin films were characterized by X-ray diffraction, transmission electron microscopy, X-ray photoelectron spectroscopy, energy dispersive X-ray spectroscopy and ellipsometry. Using mass spectroscopy and biostatistics analysis, the enrichment efficiency of different metal ions immobilized on mesoporous silica chips was investigated. The novel technology reported provides a platform capable of efficiently profiling the serum proteome for biomarker discovery, forensic sampling, and routine diagnostic applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c0nr00720j
Flavor network and the principles of food pairing
NASA Astrophysics Data System (ADS)
Ahn, Yong-Yeol; Ahnert, Sebastian; Bagrow, James; Barabasi, Albert-Laszlo
2011-03-01
We construct and investigate a flavor network capturing the chemical similarity between culinary ingredients. We find that Western cuisines have a statistically significant tendency to use ingredient pairs that share many flavor compounds, in line with the food pairing hypothesis used by some chefs and molecular gastronomists. By contrast, East Asian cuisines tend to avoid compound-sharing ingredients. We identify key ingredients in each cuisine that help us to explore the differences and similarities between regional cuisines.
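A minimal sketch of how such a network can be assembled, with edge weights given by the number of shared flavor compounds; the ingredients and compound sets below are toy placeholders, not the study's data:

```python
from itertools import combinations

# Toy flavor-compound sets (hypothetical ingredients and compounds)
compounds = {
    "tomato":    {"furaneol", "hexanal", "linalool"},
    "parmesan":  {"furaneol", "butanoic_acid"},
    "soy_sauce": {"maltol", "furaneol"},
    "scallion":  {"allyl_sulfide"},
}

# Edge weight = number of shared flavor compounds between two ingredients
edges = {
    (a, b): len(compounds[a] & compounds[b])
    for a, b in combinations(compounds, 2)
    if compounds[a] & compounds[b]
}
print(edges)   # pairs with no shared compounds get no edge
```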
Swimming Speed of Larval Snail Does Not Correlate with Size and Ciliary Beat Frequency
Chan, Kit Yu Karen; Jiang, Houshuo; Padilla, Dianna K.
2013-01-01
Many marine invertebrates have planktonic larvae with cilia used for both propulsion and capturing of food particles. Hence, changes in ciliary activity have implications for larval nutrition and ability to navigate the water column, which in turn affect survival and dispersal. Using high-speed high-resolution microvideography, we examined the relationship between swimming speed, velar arrangements, and ciliary beat frequency of freely swimming veliger larvae of the gastropod Crepidula fornicata over the course of larval development. Average swimming speed was greatest 6 days post hatching, suggesting a reduction in swimming speed towards settlement. At a given age, veliger larvae have highly variable speeds (0.8–4 body lengths s−1) that are independent of shell size. Contrary to the hypothesis that an increase in ciliary beat frequency increases work done, and therefore speed, there was no significant correlation between swimming speed and ciliary beat frequency. Instead, there are significant correlations between swimming speed and visible area of the velar lobe, and distance between centroids of velum and larval shell. These observations suggest an alternative hypothesis that, instead of modifying ciliary beat frequency, larval C. fornicata modify swimming through adjustment of velum extension or orientation. The ability to adjust velum position could influence particle capture efficiency and fluid disturbance and help promote survival in the plankton. PMID:24367554
Nurse-patient assignment models considering patient acuity metrics and nurses' perceived workload.
Sir, Mustafa Y; Dundar, Bayram; Barker Steege, Linsey M; Pasupathy, Kalyan S
2015-06-01
Patient classification systems (PCSs) are commonly used in nursing units to assess how many nursing care hours are needed to care for patients. These systems then provide staffing and nurse-patient assignment recommendations for a given patient census based on these acuity scores. Our hypothesis is that such systems do not accurately capture workload and we conduct an experiment to test this hypothesis. Specifically, we conducted a survey study to capture nurses' perception of workload in an inpatient unit. Forty five nurses from oncology and surgery units completed the survey and rated the impact of patient acuity indicators on their perceived workload using a six-point Likert scale. These ratings were used to calculate a workload score for an individual nurse given a set of patient acuity indicators. The approach offers optimization models (prescriptive analytics), which use patient acuity indicators from a commercial PCS as well as a survey-based nurse workload score. The models assign patients to nurses in a balanced manner by distributing acuity scores from the PCS and survey-based perceived workload. Numerical results suggest that the proposed nurse-patient assignment models achieve a balanced assignment and lower overall survey-based perceived workload compared to the assignment based solely on acuity scores from the PCS. This results in an improvement of perceived workload that is upwards of five percent. Copyright © 2015 Elsevier Inc. All rights reserved.
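A minimal greedy sketch of balanced nurse-patient assignment in the spirit described above (the paper itself uses optimization models, not this heuristic); the patients, the two-nurse roster, the blending weight alpha, and all scores are illustrative assumptions:

```python
# Each patient carries a PCS acuity score and a survey-based perceived-workload score.
patients = [
    {"id": 1, "acuity": 4.0, "perceived": 3.2},
    {"id": 2, "acuity": 2.5, "perceived": 4.1},
    {"id": 3, "acuity": 3.0, "perceived": 2.0},
    {"id": 4, "acuity": 5.0, "perceived": 4.8},
    {"id": 5, "acuity": 1.5, "perceived": 1.0},
]
nurses = {"A": [], "B": []}
alpha = 0.5   # relative weight of the two workload measures (hypothetical)

def load(assigned):
    """Blended workload of one nurse's current assignment."""
    return sum(alpha * p["acuity"] + (1 - alpha) * p["perceived"] for p in assigned)

# Assign heaviest patients first, each to the currently least-loaded nurse
for p in sorted(patients, key=lambda p: -(alpha * p["acuity"] + (1 - alpha) * p["perceived"])):
    least_loaded = min(nurses, key=lambda n: load(nurses[n]))
    nurses[least_loaded].append(p)

for n, plist in nurses.items():
    print(n, [p["id"] for p in plist], round(load(plist), 2))
```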
Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...
2015-10-06
Differential species responses to atmospheric CO 2 concentration (C a) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated C a (eC a) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eC a by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model.more » The model makes several novel predictions for the impact of eC a on plant community composition. Using resource use theory, the model predicts that eC a is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eC a may increase community evenness. Collectively, both theories suggest that eC a will favor coexistence and hence that species diversity should increase with eC a. Our theoretical analysis leads to a novel hypothesis for the impact of eC a on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eC a experiments.« less
Way-finding in displaced clock-shifted bees proves bees use a cognitive map.
Cheeseman, James F; Millar, Craig D; Greggers, Uwe; Lehmann, Konstantin; Pawley, Matthew D M; Gallistel, Charles R; Warman, Guy R; Menzel, Randolf
2014-06-17
Mammals navigate by means of a metric cognitive map. Insects, most notably bees and ants, are also impressive navigators. The question whether they, too, have a metric cognitive map is important to cognitive science and neuroscience. Experimentally captured and displaced bees often depart from the release site in the compass direction they were bent on before their capture, even though this no longer heads them toward their goal. When they discover their error, however, the bees set off more or less directly toward their goal. This ability to orient toward a goal from an arbitrary point in the familiar environment is evidence that they have an integrated metric map of the experienced environment. We report a test of an alternative hypothesis, which is that all the bees have in memory is a collection of snapshots that enable them to recognize different landmarks and, associated with each such snapshot, a sun-compass-referenced home vector derived from dead reckoning done before and after previous visits to the landmark. We show that a large shift in the sun-compass rapidly induced by general anesthesia does not alter the accuracy or speed of the homeward-oriented flight made after the bees discover the error in their initial postrelease flight. This result rules out the sun-referenced home-vector hypothesis, further strengthening the now extensive evidence for a metric cognitive map in bees.
Way-finding in displaced clock-shifted bees proves bees use a cognitive map
Cheeseman, James F.; Millar, Craig D.; Greggers, Uwe; Lehmann, Konstantin; Pawley, Matthew D. M.; Gallistel, Charles R.; Warman, Guy R.; Menzel, Randolf
2014-01-01
Mammals navigate by means of a metric cognitive map. Insects, most notably bees and ants, are also impressive navigators. The question whether they, too, have a metric cognitive map is important to cognitive science and neuroscience. Experimentally captured and displaced bees often depart from the release site in the compass direction they were bent on before their capture, even though this no longer heads them toward their goal. When they discover their error, however, the bees set off more or less directly toward their goal. This ability to orient toward a goal from an arbitrary point in the familiar environment is evidence that they have an integrated metric map of the experienced environment. We report a test of an alternative hypothesis, which is that all the bees have in memory is a collection of snapshots that enable them to recognize different landmarks and, associated with each such snapshot, a sun-compass–referenced home vector derived from dead reckoning done before and after previous visits to the landmark. We show that a large shift in the sun-compass rapidly induced by general anesthesia does not alter the accuracy or speed of the homeward-oriented flight made after the bees discover the error in their initial postrelease flight. This result rules out the sun-referenced home-vector hypothesis, further strengthening the now extensive evidence for a metric cognitive map in bees. PMID:24889633
Privacy versus autonomy: a tradeoff model for smart home monitoring technologies.
Townsend, Daphne; Knoefel, Frank; Goubran, Rafik
2011-01-01
Smart homes are proposed as a new location for the delivery of healthcare services. They provide healthcare monitoring and communication services by using integrated sensor network technologies. We validate a hypothesis regarding older adults' adoption of home monitoring technologies by conducting a literature review of articles studying older adults' attitudes and perceptions of sensor technologies. Using the current literature to support the hypothesis, this paper applies the tradeoff model to decisions about sensor acceptance. Older adults are willing to trade privacy (by accepting a monitoring technology) for autonomy. As the information captured by the sensor becomes more intrusive and the infringement on privacy increases, sensors are accepted if the loss in privacy is traded for autonomy. Even video cameras, the most intrusive sensor type, were accepted in exchange for the greatest gain in autonomy: remaining in one's own home.
Debats, Nienke B.; Kingma, Idsart; Beek, Peter J.; Smeets, Jeroen B. J.
2012-01-01
How does the magnitude of the exploration force influence the precision of haptic perceptual estimates? To address this question, we examined the perceptual precision for moment of inertia (i.e., an object's “angular mass”) under different force conditions, using the Weber fraction to quantify perceptual precision. Participants rotated a rod around a fixed axis and judged its moment of inertia in a two-alternative forced-choice task. We instructed different levels of exploration force, thereby manipulating the magnitude of both the exploration force and the angular acceleration. These are the two signals that are needed by the nervous system to estimate moment of inertia. Importantly, one can assume that the absolute noise on both signals increases with an increase in the signals' magnitudes, while the relative noise (i.e., noise/signal) decreases with an increase in signal magnitude. We examined how the perceptual precision for moment of inertia was affected by this neural noise. In a first experiment we found that a low exploration force caused a higher Weber fraction (22%) than a high exploration force (13%), which suggested that the perceptual precision was constrained by the relative noise. This hypothesis was supported by the result of a second experiment, in which we found that the relationship between exploration force and Weber fraction had a similar shape as the theoretical relationship between signal magnitude and relative noise. The present study thus demonstrated that the amount of force used to explore an object can profoundly influence the precision by which its properties are perceived. PMID:23028437
High-precision half-life measurements of the T = 1/2 mirror β decays ¹⁷F and ³³Cl
NASA Astrophysics Data System (ADS)
Grinyer, J.; Grinyer, G. F.; Babo, M.; Bouzomita, H.; Chauveau, P.; Delahaye, P.; Dubois, M.; Frigot, R.; Jardin, P.; Leboucher, C.; Maunoury, L.; Seiffert, C.; Thomas, J. C.; Traykov, E.
2015-10-01
Background: Measurements of the ft values for T = 1/2 mirror β+ decays offer a method to test the conserved vector current hypothesis and to determine Vud, the up-down matrix element of the Cabibbo-Kobayashi-Maskawa matrix. In most mirror decays used for these tests, uncertainties in the ft values are dominated by the uncertainties in the half-lives. Purpose: Two precision half-life measurements were performed for the T = 1/2 β+ emitters ¹⁷F and ³³Cl, in order to eliminate the half-life as the leading source of uncertainty in their ft values. Method: Half-lives of ¹⁷F and ³³Cl were determined using β counting of implanted radioactive ion beam samples on a moving tape transport system at the Système de Production d'Ions Radioactifs Accélérés en Ligne low-energy identification station at the Grand Accélérateur National d'Ions Lourds. Results: The ¹⁷F half-life result, 64.347(35) s, precise to ±0.05%, is a factor of 5 more precise than the previous world average. The half-life of ³³Cl was determined to be 2.5038(22) s. The current precision of ±0.09% is nearly a factor of 2 more precise than the previous world average. Conclusions: The precision achieved during the present measurements implies that the half-life no longer dominates the uncertainty of the ft values for both T = 1/2 mirror decays ¹⁷F and ³³Cl.
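For readers unfamiliar with how a half-life is extracted from β-counting data, the following is a generic, illustrative fit of an exponential decay plus flat background; the bin width, background model, and simulated counts are assumptions and do not reproduce the actual ¹⁷F/³³Cl analysis.

```python
# Illustrative sketch only: fitting an exponential decay plus a constant
# background to binned beta-counting data to extract a half-life. All data
# below are simulated for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, t_half, bkg):
    return n0 * np.exp(-np.log(2) * t / t_half) + bkg

rng = np.random.default_rng(1)
t = np.arange(0, 600, 5.0)                          # seconds, 5 s bins (assumed)
true_rate = decay(t, n0=5000.0, t_half=64.35, bkg=20.0)
counts = rng.poisson(true_rate).astype(float)       # simulated counts per bin

popt, pcov = curve_fit(decay, t, counts,
                       p0=[4000.0, 60.0, 10.0],
                       sigma=np.sqrt(np.maximum(counts, 1.0)),
                       absolute_sigma=True)
t_half, t_half_err = popt[1], np.sqrt(pcov[1][1])
print(f"fitted half-life: {t_half:.3f} +/- {t_half_err:.3f} s")
```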
From quantum transitions to electronic motions
NASA Astrophysics Data System (ADS)
Krausz, Ferenc
2017-01-01
Laser spectroscopy and chronoscopy permit precision measurement of quantum transitions and capture of atomic-scale dynamics, respectively. Frequency- and time-domain metrology ranks among the supreme laser disciplines in fundamental science. For decades, these fields evolved independently, without interaction and synergy between them. This changed profoundly with control over the position of the equidistant frequency spikes of a mode-locked laser oscillator. With the self-referencing technique invented by Theodor Hänsch, the frequency comb can be coherently linked to microwaves and used for precision measurements of energy differences between quantum states. The resultant optical frequency synthesis has revolutionized precision spectroscopy. Locking the comb lines to the resonator round-trip frequency by the same approach has given rise to laser pulses with controlled field oscillations. This article reviews, from a personal perspective, how the bridge between frequency- and time-resolved metrology emerged at the turn of the millennium and how synthesized several-cycle laser fields have been instrumental in establishing the basic tools and techniques of attosecond science.
NASA Astrophysics Data System (ADS)
Gorringe, T. P.; Hertzog, D. W.
2015-09-01
The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ/μp, lepton mass ratio mμ/me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.
ERIC Educational Resources Information Center
Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.
2017-01-01
Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…
Discrete mathematics, formal methods, the Z schema and the software life cycle
NASA Technical Reports Server (NTRS)
Bown, Rodney L.
1991-01-01
The proper role and scope for the use of discrete mathematics and formal methods in support of engineering the security and integrity of components within deployed computer systems are discussed. It is proposed that the Z schema can be used as the specification language to capture the precise definition of system and component interfaces. This can be accomplished with an object oriented development paradigm.
Iles, William J.D.; Barrett, Craig F.; Smith, Selena Y.; Specht, Chelsea D.
2016-01-01
The Zingiberales are an iconic order of monocotyledonous plants comprising eight families with distinctive and diverse floral morphologies and representing an important ecological element of tropical and subtropical forests. While the eight families are demonstrated to be monophyletic, phylogenetic relationships among these families remain unresolved. Neither combined morphological and molecular studies nor recent attempts to resolve family relationships using sequence data from whole plastomes have resulted in a well-supported, family-level phylogenetic hypothesis of relationships. Here we approach this challenge by leveraging the complete genome of one member of the order, Musa acuminata, together with transcriptome information from each of the other seven families to design a set of nuclear loci that can be enriched from highly divergent taxa with a single array-based capture of indexed genomic DNA. A total of 494 exons from 418 nuclear genes were captured for 53 ingroup taxa. The entire plastid genome was also captured for the same 53 taxa. Of the total genes captured, 308 nuclear and 68 plastid genes were used for phylogenetic estimation. The concatenated plastid and nuclear dataset supports the position of Musaceae as sister to the remaining seven families. Moreover, the combined dataset recovers known intra- and inter-family phylogenetic relationships with generally high bootstrap support. This is a flexible and cost-effective method that gives the broader plant biology community a tool for generating phylogenomic-scale sequence data in non-model systems at varying evolutionary depths. PMID:26819846
Sass, Chodon; Iles, William J D; Barrett, Craig F; Smith, Selena Y; Specht, Chelsea D
2016-01-01
The Zingiberales are an iconic order of monocotyledonous plants comprising eight families with distinctive and diverse floral morphologies and representing an important ecological element of tropical and subtropical forests. While the eight families are demonstrated to be monophyletic, phylogenetic relationships among these families remain unresolved. Neither combined morphological and molecular studies nor recent attempts to resolve family relationships using sequence data from whole plastomes have resulted in a well-supported, family-level phylogenetic hypothesis of relationships. Here we approach this challenge by leveraging the complete genome of one member of the order, Musa acuminata, together with transcriptome information from each of the other seven families to design a set of nuclear loci that can be enriched from highly divergent taxa with a single array-based capture of indexed genomic DNA. A total of 494 exons from 418 nuclear genes were captured for 53 ingroup taxa. The entire plastid genome was also captured for the same 53 taxa. Of the total genes captured, 308 nuclear and 68 plastid genes were used for phylogenetic estimation. The concatenated plastid and nuclear dataset supports the position of Musaceae as sister to the remaining seven families. Moreover, the combined dataset recovers known intra- and inter-family phylogenetic relationships with generally high bootstrap support. This is a flexible and cost-effective method that gives the broader plant biology community a tool for generating phylogenomic-scale sequence data in non-model systems at varying evolutionary depths.
Chae, Jungseok; An, Sangmin; Ramer, Georg; ...
2017-08-03
The atomic force microscope (AFM) offers a rich observation window on the nanoscale, yet many dynamic phenomena are too fast and too weak for direct AFM detection. Integrated cavity-optomechanics is revolutionizing micromechanical sensing; however, it has not yet impacted AFM. Here, we make a groundbreaking advance by fabricating picogram-scale probes integrated with photonic resonators to realize functional AFM detection that achieves high temporal resolution (<10 ns) and picometer vertical displacement uncertainty simultaneously. The ability to capture fast events with high precision is leveraged to measure the thermal conductivity (η), for the first time, concurrently with chemical composition at the nanoscale in photothermal induced resonance experiments. The intrinsic η of individual metal–organic-framework microcrystals, not measurable by macroscale techniques, is obtained with a small measurement uncertainty (8%). The improved sensitivity (50×) increases the measurement throughput 2500-fold and enables chemical composition measurement of molecular-monolayer-thin samples. In conclusion, our paradigm-shifting photonic readout for small probes breaks the common trade-off between AFM measurement precision and the ability to capture transient events, thus transforming the ability to observe nanoscale dynamics in materials.
[Reliability of iWitness photogrammetry in maxillofacial application].
Jiang, Chengcheng; Song, Qinggao; He, Wei; Chen, Shang; Hong, Tao
2015-06-01
This study aims to test the accuracy and precision of iWitness photogrammetry for measuring the facial tissues of a mannequin head. Under ideal circumstances, 3D landmark coordinates were repeatedly obtained from a mannequin head using the iWitness photogrammetric system with different parameters to examine the precision of the system. The differences between the 3D data and the true distance values of the mannequin head were computed. Operator error of the 3D system in non-zoom and zoom modes was 0.20 mm and 0.09 mm, respectively, and the difference was significant (P < 0.05). Image capture error of the 3D system was 0.283 mm, with no significant difference compared with the same group of images (P > 0.05). Error of the 3D system with recalibration was 0.251 mm, and the difference was not statistically significant compared with the image capture error (P > 0.05). Good congruence was observed between means derived from the 3D photos and direct anthropometry, with differences ranging from -0.4 mm to +0.4 mm. This study provides further evidence of the high reliability of iWitness photogrammetry for several craniofacial measurements, including landmarks and inter-landmark distances. The evaluated system can be recommended for the evaluation and documentation of the facial surface.
Fast estimation of space-robots inertia parameters: A modular mathematical formulation
NASA Astrophysics Data System (ADS)
Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher
2016-10-01
This work aims to propose a new technique that considerably improves the time and precision needed to identify the "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal", or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations with the associated IPs into a "Modular Set" of matrices instead of a single matrix representing the overall system dynamics. The devised Modular Matrix Set will then facilitate the estimation process. It provides a conjugate linear model in the mass and inertia terms. The new formulation is, therefore, well suited for "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center-of-mass location becomes important. Extensive case studies reveal that estimation time is drastically reduced, which in turn paves the way to acquiring better results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chae, Jungseok; An, Sangmin; Ramer, Georg
The atomic force microscope (AFM) offers a rich observation window on the nanoscale, yet many dynamic phenomena are too fast and too weak for direct AFM detection. Integrated cavity-optomechanics is revolutionizing micromechanical sensing; however, it has not yet impacted AFM. Here, we make a groundbreaking advance by fabricating picogram-scale probes integrated with photonic resonators to realize functional AFM detection that achieves high temporal resolution (<10 ns) and picometer vertical displacement uncertainty simultaneously. The ability to capture fast events with high precision is leveraged to measure the thermal conductivity (η), for the first time, concurrently with chemical composition at the nanoscale in photothermal induced resonance experiments. The intrinsic η of individual metal–organic-framework microcrystals, not measurable by macroscale techniques, is obtained with a small measurement uncertainty (8%). The improved sensitivity (50×) increases the measurement throughput 2500-fold and enables chemical composition measurement of molecular-monolayer-thin samples. In conclusion, our paradigm-shifting photonic readout for small probes breaks the common trade-off between AFM measurement precision and the ability to capture transient events, thus transforming the ability to observe nanoscale dynamics in materials.
Virtual Environments Using Video Capture for Social Phobia with Psychosis
White, Richard; Clarke, Timothy; Turner, Ruth; Fowler, David
2013-01-01
A novel virtual environment (VE) system was developed and used as an adjunct to cognitive behavior therapy (CBT) with six socially anxious patients recovering from psychosis. The novel aspect of the VE system is that it uses video capture so the patients can see a life-size projection of themselves interacting with a specially scripted and digitally edited filmed environment played in real time on a screen in front of them. Within-session process outcomes (subjective units of distress and belief ratings on individual behavioral experiments), as well as patient feedback, generated the hypothesis that this type of virtual environment can potentially add value to CBT by helping patients understand the role of avoidance and safety behaviors in the maintenance of social anxiety and paranoia and by boosting their confidence to carry out “real-life” behavioral experiments. PMID:23659722
Effects of tag loss on direct estimates of population growth rate
Rotella, J.J.; Hines, J.E.
2005-01-01
The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is either (1) the same for all marked animals in the population or (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).
Large numbers hypothesis. II - Electromagnetic radiation
NASA Technical Reports Server (NTRS)
Adams, P. J.
1983-01-01
This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t to the 1/4, precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form νR = const.
The Universal Plausibility Metric (UPM) & Principle (UPP).
Abel, David L
2009-12-03
Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric ξ is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (ξ < 1).
Dairy farmers with larger herd sizes adopt more precision dairy technologies.
Gargiulo, J I; Eastwood, C R; Garcia, S C; Lyons, N A
2018-06-01
An increase in the average herd size on Australian dairy farms has also increased the labor and animal management pressure on farmers, thus potentially encouraging the adoption of precision technologies for enhanced management control. A survey was undertaken in 2015 in Australia to identify the relationship between herd size, current precision technology adoption, and perception of the future of precision technologies. Additionally, differences between farmers and service providers in relation to perception of future precision technology adoption were also investigated. Responses from 199 dairy farmers and 102 service providers were collected between May and August 2015 via an anonymous Internet-based questionnaire. Of the 199 dairy farmer responses, 10.4% corresponded to farms that had fewer than 150 cows, 37.7% had 151 to 300 cows, 35.5% had 301 to 500 cows, 6.0% had 501 to 700 cows, and 10.4% had more than 700 cows. The results showed that farmers with more than 500 cows adopted between 2 and 5 times more specific precision technologies, such as automatic cup removers, automatic milk plant wash systems, electronic cow identification systems, and herd management software, when compared with smaller farms. Only minor differences were detected in perception of the future of precision technologies between either herd sizes or farmers and service providers. In particular, service providers expected a higher adoption of automatic milking and walk-over weighing systems than farmers. Currently, the adoption of precision technology has mostly been of the type that reduces labor needs; however, respondents indicated that by 2025 the adoption of data-capturing technology for monitoring farm system parameters would increase. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Development and Validation of GC-ECD Method for the Determination of Metamitron in Soil
Tandon, Shishir; Kumar, Satyendra; Sand, N. K.
2015-01-01
This paper aims at developing and validating a convenient, rapid, and sensitive method for the estimation of metamitron in soil samples. Determination and quantification were carried out by gas chromatography on a microcapillary column with an electron capture detector (ECD). The compound was extracted from soil using methanol and cleaned up by C-18 SPE. After optimization, the method was validated by evaluating the analytical curves, linearity, limits of detection and quantification, precision (repeatability and intermediate precision), and accuracy (recovery). Recovery values ranged from 89 to 93.5% within 0.05-2.0 µg L−1, with an average RSD of 1.80%. The precision (repeatability) ranged from 1.7034 to 1.9144% and intermediate precision from 1.5685 to 2.1323%. The retention time was 6.3 minutes, and the minimum detectable and quantifiable limits were 0.02 ng mL−1 and 0.05 ng g−1, respectively. Good linearity (R² = 0.998) of the calibration curves was obtained over the range from 0.05 to 2.0 µg L−1. The results indicate that the developed method is rapid and easy to perform, making it applicable for analysis in large pesticide monitoring programmes. PMID:25733978
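The recovery and precision figures quoted above follow from standard validation arithmetic; the sketch below shows that arithmetic applied to hypothetical replicate measurements (not the paper's raw data).

```python
# Sketch of the standard validation arithmetic behind "recovery" and
# "RSD (precision)"; the fortification level and replicate values below
# are hypothetical.
import statistics

def recovery_percent(measured, spiked):
    return [100.0 * m / spiked for m in measured]

def rsd_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

spiked_level = 0.50                                  # ug/L, assumed spike level
replicates = [0.46, 0.47, 0.45, 0.465, 0.455]        # measured concentrations (invented)

rec = recovery_percent(replicates, spiked_level)
print(f"mean recovery: {statistics.mean(rec):.1f}%")
print(f"repeatability (RSD): {rsd_percent(replicates):.2f}%")
```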
NASA Astrophysics Data System (ADS)
Kawaguchi, Yuko; Sugino, Tomohiro; Yang, Yinjie; Yoshimura, Yoshitaka; Tsuji, Takashi; Kobayashi, Kensei; Tabata, Makoto; Hashimoto, Hirofumi; Mita, Hajime; Imai, Eiichi; Kawai, Hideyuki; Okudaira, Kyoko; Hasegawa, Sunao; Yamashita, Masamichi; Yano, Hajime; Yokobori, Shin-Ichi; Yamagishi, Akihiko
Terrestrial life may be lofted into outer space by volcanic eruption, meteorological impacts, and so on. Microbes have been collected at high altitudes, up to 70 km, since 1936 [1]. We have also isolated microbes at altitudes up to 35 km using an airplane and balloons [2, 3]. Two of these isolates are new species, one of which shows higher UV tolerance than Deinococcus radiodurans [2, 3]. On the other hand, there is a hypothesis on the origin of terrestrial life called panspermia [4, 5], in which terrestrial life is thought to have come from space (or from astronomical bodies other than Earth). This hypothesis suggests that life may migrate between Earth and other planets. If microbes were found at the altitude of low Earth orbit (400 km), it would support the possibility of interplanetary migration of terrestrial life. We proposed the Tanpopo mission to examine the interplanetary migration of microbes and organic compounds on the Japanese Experiment Module (JEM) of the International Space Station (ISS). We will capture micro-particles, including microbes and micrometeoroids, at the altitude of the ISS orbit (400 km) with ultra-low-density aerogel exposed to space for a given period of time. After retrieving the aerogel, we will investigate the captured micro-particles and tracks by microbiological, organic chemical, and mineralogical analyses. Captured particles will be analyzed after the initial curation of the aerogel and tracks. Particles potentially containing microbes will be used for PCR amplification of the small subunit (ss) rRNA gene followed by DNA sequencing. Comparison between the determined sequences and known ss rRNA gene sequences of terrestrial organisms will suggest the origin and properties of the organisms. The density of microbes at the ISS altitude might be quite low, and the number of microbial cells on each captured particle may be quite limited. Therefore, it is necessary to establish an effective PCR procedure for very small amounts of DNA template in the presence of other materials such as clay and aerogel. We will report the current status of the PCR identification of microbes from test samples, examining PCR conditions for amplifying the ss rRNA gene from very small numbers of cells and very low concentrations of genomic DNA, with and without clay and aerogel. References: [1] Y. Yang et al. (2009) Biol. Sci. in Space, 23, 151-163. [2] Y. Yang et al. (2009) Int. J. Syst. Evol. Microbiol., 59, 1862-1866. [3] Y. Yang et al. (2010) Int. J. Syst. Evol. Microbiol., (in press). [4] S. Arrhenius (1908) Worlds in the Making: The Evolution of the Universe. Harper and Brothers Publishers. [5] F. Crick (1981) Life Itself. Simon and Schuster.
Apfelbeck, Beate; Helm, Barbara; Illera, Juan Carlos; Mortega, Kim G; Smiddy, Patrick; Evans, Neil P
2017-05-22
Latitudinal variation in avian life histories falls along a slow-fast pace of life continuum: tropical species produce small clutches, but have a high survival probability, while in temperate species the opposite pattern is found. This study investigated whether differential investment into reproduction and survival of tropical and temperate species is paralleled by differences in the secretion of the vertebrate hormone corticosterone (CORT). Depending on circulating concentrations, CORT can both act as a metabolic (low to medium levels) and a stress hormone (high levels) and, thereby, influence reproductive decisions. Baseline and stress-induced CORT was measured across sequential stages of the breeding season in males and females of closely related taxa of stonechats (Saxicola spp) from a wide distribution area. We compared stonechats from 13 sites, representing Canary Islands, European temperate and East African tropical areas. Stonechats are highly seasonal breeders at all these sites, but vary between tropical and temperate regions with regard to reproductive investment and presumably also survival. In accordance with life-history theory, during parental stages, post-capture (baseline) CORT was overall lower in tropical than in temperate stonechats. However, during mating stages, tropical males had elevated post-capture (baseline) CORT concentrations, which did not differ from those of temperate males. Female and male mates of a pair showed correlated levels of post-capture CORT when sampled after simulated territorial intrusions. In contrast to the hypothesis that species with low reproduction and high annual survival should be more risk-sensitive, tropical stonechats had lower stress-induced CORT concentrations than temperate stonechats. We also found relatively high post-capture (baseline) and stress-induced CORT concentrations, in slow-paced Canary Islands stonechats. Our data support and refine the view that baseline CORT facilitates energetically demanding activities in males and females and reflects investment into reproduction. Low parental workload was associated with lower post-capture (baseline) CORT as expected for a slow pace of life in tropical species. On a finer resolution, however, this tropical-temperate contrast did not generally hold. Post-capture (baseline) CORT was higher during mating stages in particular in tropical males, possibly to support the energetic needs of mate-guarding. Counter to predictions based on life history theory, our data do not confirm the hypothesis that long-lived tropical populations have higher stress-induced CORT concentrations than short-lived temperate populations. Instead, in the predator-rich tropical environments of African stonechats, a dampened stress response during parental stages may increase survival probabilities of young. Overall our data further support an association between life history and baseline CORT, but challenge the role of stress-induced CORT as a mediator of tropical-temperate variation in life history.
Statistical inference for capture-recapture experiments
Pollock, Kenneth H.; Nichols, James D.; Brownie, Cavell; Hines, James E.
1990-01-01
This monograph presents a detailed, practical exposition on the design, analysis, and interpretation of capture-recapture studies. The Lincoln-Petersen model (Chapter 2) and the closed population models (Chapter 3) are presented only briefly because these models have been covered in detail elsewhere. The Jolly-Seber open population model, which is central to the monograph, is covered in detail in Chapter 4. In Chapter 5 we consider the "enumeration" or "calendar of captures" approach, which is widely used by mammalogists and other vertebrate ecologists. We strongly recommend that it be abandoned in favor of analyses based on the Jolly-Seber model. We consider 2 restricted versions of the Jolly-Seber model. We believe the first of these, which allows losses (mortality or emigration) but not additions (births or immigration), is likely to be useful in practice. Another series of restrictive models requires the assumptions of a constant survival rate or a constant survival rate and a constant capture rate for the duration of the study. Detailed examples are given that illustrate the usefulness of these restrictions. There often can be a substantial gain in precision over Jolly-Seber estimates. In Chapter 5 we also consider 2 generalizations of the Jolly-Seber model. The temporary trap response model allows newly marked animals to have different survival and capture rates for 1 period. The other generalization is the cohort Jolly-Seber model. Ideally all animals would be marked as young, and age effects considered by using the Jolly-Seber model on each cohort separately. In Chapter 6 we present a detailed description of an age-dependent Jolly-Seber model, which can be used when 2 or more identifiable age classes are marked. In Chapter 7 we present a detailed description of the "robust" design. Under this design each primary period contains several secondary sampling periods. We propose an estimation procedure based on closed and open population models that allows for heterogeneity and trap response of capture rates (hence the name robust design). We begin by considering just 1 age class and then extend to 2 age classes. When there are 2 age classes it is possible to distinguish immigrants and births. In Chapter 8 we give a detailed discussion of the design of capture-recapture studies. First, capture-recapture is compared to other possible sampling procedures. Next, the design of capture-recapture studies to minimize assumption violations is considered. Finally, we consider the precision of parameter estimates and present figures on proportional standard errors for a variety of initial parameter values to aid the biologist about to plan a study. A new program, JOLLY, has been written to accompany the material on the Jolly-Seber model (Chapter 4) and its extensions (Chapter 5). Another new program, JOLLYAGE, has been written for a special case of the age-dependent model (Chapter 6) where there are only 2 age classes. In Chapter 9 a brief description of the different versions of the 2 programs is given. Chapter 10 gives a brief description of some alternative approaches that were not considered in this monograph. We believe that an excellent overall view of capture-recapture models may be obtained by reading the monograph by White et al. (1982) emphasizing closed models and then reading this monograph where we concentrate on open models. The important recent monograph by Burnham et al. (1987) could then be read if there were interest in the comparison of different populations.
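As a reminder of the simplest estimator touched on above (the Lincoln-Petersen model of Chapter 2), the following sketch computes the standard Chapman bias-corrected two-sample abundance estimate with an approximate variance; the counts are made up for illustration.

```python
# The classical two-sample Lincoln-Petersen abundance estimator in Chapman's
# bias-corrected form, with the standard approximate variance; a minimal
# sketch using invented counts.
def chapman_estimate(n1, n2, m2):
    """n1 animals marked in sample 1, n2 caught in sample 2, m2 marked recaptures."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var

if __name__ == "__main__":
    n_hat, var = chapman_estimate(n1=120, n2=150, m2=30)
    print(f"abundance estimate: {n_hat:.0f} (SE ~ {var ** 0.5:.0f})")
```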
Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D
2014-04-01
The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
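A toy numerical sketch of the central correction, not the authors' full joint likelihood: a naive tag-recovery harvest estimate is divided by a telemetry-based tagging-to-season survival probability, which removes the negative bias caused by pre-season mortality. Complete reporting of reward tags and all example numbers are assumptions.

```python
# Toy sketch of the core idea behind the joint estimator described above:
# correct a naive tag-recovery harvest rate by the known-fate (telemetry)
# estimate of survival from tagging to the hunting season.
def naive_harvest_rate(tags_released, tags_recovered):
    return tags_recovered / tags_released

def corrected_harvest_rate(tags_released, tags_recovered, s_tag_to_season):
    # Only animals still alive at the season opener are exposed to harvest.
    return tags_recovered / (tags_released * s_tag_to_season)

if __name__ == "__main__":
    released, recovered = 100, 18
    s0 = 0.90        # hypothetical tagging-to-season survival from telemetry
    print(f"naive:     {naive_harvest_rate(released, recovered):.3f}")
    print(f"corrected: {corrected_harvest_rate(released, recovered, s0):.3f}")
```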
Jung, R.E.; Droege, S.; Sauer, J.R.; Landy, R.B.
2000-01-01
In response to concerns about amphibian declines, a study evaluating and validating amphibian monitoring techniques was initiated in Shenandoah and Big Bend National Parks in the spring of 1998. We evaluate precision, bias, and efficiency of several sampling methods for terrestrial and streamside salamanders in Shenandoah National Park and assess salamander abundance in relation to environmental variables, notably soil and water pH. Terrestrial salamanders, primarily redback salamanders (Plethodon cinereus), were sampled by searching under cover objects during the day in square plots (10 to 35 m²). We compared population indices (mean daily and total counts) with adjusted population estimates from capture-recapture. Analyses suggested that the proportion of salamanders detected (p) during sampling varied among plots, necessitating the use of adjusted population estimates. However, adjusted population estimates were less precise than population indices, and may not be efficient in relating salamander populations to environmental variables. In future sampling, strategic use of capture-recapture to verify consistency of p's among sites may be a reasonable compromise between the possibility of bias in estimation of population size and deficiencies due to inefficiency associated with the estimation of p. The streamside two-lined salamander (Eurycea bislineata) was surveyed using four methods: leaf litter refugia bags, 1-m² quadrats, 50 × 1 m visual encounter transects, and electric shocking. Comparison of survey methods at nine streams revealed congruent patterns of abundance among sites, suggesting that relative bias among the methods is similar, and that choice of survey method should be based on precision and logistical efficiency. Redback and two-lined salamander abundance were not significantly related to soil or water pH, respectively.
Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.
2014-01-01
The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
NASA Astrophysics Data System (ADS)
Vasilopoulos, G.; Leyland, J.; Nield, J. M.
2016-12-01
Plants function as large-scale, flexible obstacles that exert additional drag on water flows, affecting local-scale turbulence and the structure of the boundary layer. Hence, vegetation plays a significant role in controlling surface water flows and modulating geomorphic change. This makes it an important, but often under-considered, component when undertaking flood or erosion control actions, or designing river restoration strategies. Vegetative drag varies depending on flow conditions and the associated vegetation structure and temporary reconfiguration of the plant. Whilst several approaches have been developed to describe this relationship, they have been limited due to the difficulty of accurately and precisely characterising the vegetation itself, especially when it is submerged in flow. In practice, vegetative drag is commonly expressed through bulk parameters that are typically derived from lookup tables. Terrestrial Laser Scanning (TLS) has the ability to capture the surface of in situ objects as 3D point clouds, at high resolution (mm), precision and accuracy, even when submerged in water. This allows for the development of workflows capable of quantifying vegetation structure in 3D from dense TLS point cloud data. A physical modelling experiment investigated the impact of a series of structurally variable plants on flow at three different velocities. Acoustic Doppler Velocimetry (ADV) was employed to measure the velocity field, and the corresponding fluvial drag of the vegetation was estimated using a bulk roughness function calculated from precise measurements of the water surface slope. Simultaneously, through-water TLS was employed to capture snapshots of plant deformation and distinguish plant structure during flow, using a porosity approach. Although plant type is important, we find a good relationship between plant structure, drag and adjustments of the velocity field.
Alvarez, Stéphanie; Timler, Carl J.; Michalscheck, Mirja; Paas, Wim; Descheemaeker, Katrien; Tittonell, Pablo; Andersson, Jens A.; Groot, Jeroen C. J.
2018-01-01
Creating typologies is a way to summarize the large heterogeneity of smallholder farming systems into a few farm types. Various methods exist, commonly using statistical analysis, to create these typologies. We demonstrate that the methodological decisions on data collection, variable selection, data-reduction and clustering techniques can bear a large impact on the typology results. We illustrate the effects of analysing the diversity from different angles, using different typology objectives and different hypotheses, on typology creation by using an example from Zambia’s Eastern Province. Five separate typologies were created with principal component analysis (PCA) and hierarchical clustering analysis (HCA), based on three different expert-informed hypotheses. The greatest overlap between typologies was observed for the larger, wealthier farm types but for the remainder of the farms there were no clear overlaps between typologies. Based on these results, we argue that the typology development should be guided by a hypothesis on the local agriculture features and the drivers and mechanisms of differentiation among farming systems, such as biophysical and socio-economic conditions. That hypothesis is based both on the typology objective and on prior expert knowledge and theories of the farm diversity in the study area. We present a methodological framework that aims to integrate participatory and statistical methods for hypothesis-based typology construction. This is an iterative process whereby the results of the statistical analysis are compared with the reality of the target population as hypothesized by the local experts. Using a well-defined hypothesis and the presented methodological framework, which consolidates the hypothesis through local expert knowledge for the creation of typologies, warrants development of less subjective and more contextualized quantitative farm typologies. PMID:29763422
Modeling the vestibulo-ocular reflex of the squirrel monkey during eccentric rotation and roll tilt
NASA Technical Reports Server (NTRS)
Merfeld, D. M.; Paloski, W. H. (Principal Investigator)
1995-01-01
Model simulations of the squirrel monkey vestibulo-ocular reflex (VOR) are presented for two motion paradigms: constant velocity eccentric rotation and roll tilt about a naso-occipital axis. The model represents the implementation of three hypotheses: the "internal model" hypothesis, the "gravito-inertial force (GIF) resolution" hypothesis, and the "compensatory VOR" hypothesis. The internal model hypothesis is based on the idea that the nervous system knows the dynamics of the sensory systems and implements this knowledge as an internal dynamic model. The GIF resolution hypothesis is based on the idea that the nervous system knows that gravity minus linear acceleration equals GIF and implements this knowledge by resolving the otolith measurement of GIF into central estimates of gravity and linear acceleration, such that the central estimate of gravity minus the central estimate of acceleration equals the otolith measurement of GIF. The compensatory VOR hypothesis is based on the idea that the VOR compensates for the central estimates of angular velocity and linear velocity, which sum in a near-linear manner. During constant velocity eccentric rotation, the model correctly predicts that: (1) the peak horizontal response is greater while "facing-motion" than with "back-to-motion"; (2) the axis of eye rotation shifts toward alignment with GIF; and (3) a continuous vertical response, slow phase downward, exists prior to deceleration. The model also correctly predicts that a torsional response during the roll rotation is the only velocity response observed during roll rotations about a naso-occipital axis. The success of this model in predicting the observed experimental responses suggests that the model captures the essence of the complex sensory interactions engendered by eccentric rotation and roll tilt.
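Written out in symbols, the GIF resolution hypothesis described above amounts to the following pair of relations (the notation is chosen here for illustration: f is the gravito-inertial force sensed by the otoliths, g is gravity, a is linear acceleration, and hats denote the central estimates):

```latex
% The GIF-resolution constraint from the abstract: the otoliths transduce the
% gravito-inertial force f, and the central estimates (hats) of gravity and
% linear acceleration are resolved so that they satisfy the same relation.
\[
  \mathbf{f} \;=\; \mathbf{g} - \mathbf{a},
  \qquad
  \hat{\mathbf{g}} - \hat{\mathbf{a}} \;=\; \mathbf{f}.
\]
```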
Alvarez, Stéphanie; Timler, Carl J; Michalscheck, Mirja; Paas, Wim; Descheemaeker, Katrien; Tittonell, Pablo; Andersson, Jens A; Groot, Jeroen C J
2018-01-01
Creating typologies is a way to summarize the large heterogeneity of smallholder farming systems into a few farm types. Various methods exist, commonly using statistical analysis, to create these typologies. We demonstrate that the methodological decisions on data collection, variable selection, data-reduction and clustering techniques can bear a large impact on the typology results. We illustrate the effects of analysing the diversity from different angles, using different typology objectives and different hypotheses, on typology creation by using an example from Zambia's Eastern Province. Five separate typologies were created with principal component analysis (PCA) and hierarchical clustering analysis (HCA), based on three different expert-informed hypotheses. The greatest overlap between typologies was observed for the larger, wealthier farm types but for the remainder of the farms there were no clear overlaps between typologies. Based on these results, we argue that the typology development should be guided by a hypothesis on the local agriculture features and the drivers and mechanisms of differentiation among farming systems, such as biophysical and socio-economic conditions. That hypothesis is based both on the typology objective and on prior expert knowledge and theories of the farm diversity in the study area. We present a methodological framework that aims to integrate participatory and statistical methods for hypothesis-based typology construction. This is an iterative process whereby the results of the statistical analysis are compared with the reality of the target population as hypothesized by the local experts. Using a well-defined hypothesis and the presented methodological framework, which consolidates the hypothesis through local expert knowledge for the creation of typologies, warrants development of less subjective and more contextualized quantitative farm typologies.
Band Excitation for Scanning Probe Microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jesse, Stephen
2017-01-02
The Band Excitation (BE) technique for scanning probe microscopy uses a precisely determined waveform that contains specific frequencies to excite the cantilever or sample in an atomic force microscope, extracting more information, and more reliable information, from a sample. There are a myriad of details and complexities associated with implementing the BE technique, so there is a need for a user-friendly interface that gives typical microscopists access to this methodology. This software enables users of atomic force microscopes to easily build complex band-excitation waveforms, set up the microscope scanning conditions, configure the input and output electronics to generate the waveform as a voltage signal and capture the response of the system, perform analysis on the captured response, and display the results of the measurement.
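A minimal sketch of what building a band-excitation waveform can look like in practice: flat spectral amplitude inside a chosen frequency band with randomized phases, synthesized by an inverse FFT. The band edges, sample rate, duration, and phase scheme are illustrative assumptions, not the defaults of the software described above.

```python
# Sketch of constructing a band-excitation waveform: unit spectral amplitude
# inside a chosen band, random phases to keep the time-domain peak manageable,
# synthesized with an inverse real FFT. Parameters are illustrative only.
import numpy as np

def band_excitation_waveform(f_lo, f_hi, fs, duration, seed=0):
    n = int(fs * duration)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, band.sum())
    spectrum[band] = np.exp(1j * phases)          # unit amplitude within the band
    wave = np.fft.irfft(spectrum, n=n)
    return wave / np.max(np.abs(wave))            # normalize before sending to the DAC

if __name__ == "__main__":
    w = band_excitation_waveform(f_lo=250e3, f_hi=350e3, fs=4e6, duration=4e-3)
    print(w.shape, float(w.max()), float(w.min()))
```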
The Origins of Ethnolinguistic Diversity
Michalopoulos, Stelios
2013-01-01
This study explores the determinants of ethnolinguistic diversity within as well as across countries shedding light on its geographic origins. The empirical analysis conducted across countries, virtual countries and pairs of contiguous regions establishes that geographic variability, captured by variation in regional land quality and elevation, is a fundamental determinant of contemporary linguistic diversity. The findings are consistent with the proposed hypothesis that differences in land endowments gave rise to location-specific human capital, leading to the formation of localized ethnicities. PMID:25258434
STOP using just GO: a multi-ontology hypothesis generation tool for high throughput experimentation
2013-01-01
Background: Gene Ontology (GO) enrichment analysis remains one of the most common methods for hypothesis generation from high-throughput datasets. However, we believe that researchers strive to test other hypotheses that fall outside of GO. Here, we developed and evaluated a tool for hypothesis generation from gene or protein lists using ontological concepts present in manually curated text that describes those genes and proteins. Results: As a consequence, we have developed the method Statistical Tracking of Ontological Phrases (STOP), which expands the realm of testable hypotheses in gene set enrichment analyses by integrating automated annotations of genes to terms from over 200 biomedical ontologies. While not as precise as manually curated terms, we find that the additional enriched concepts have value when coupled with traditional enrichment analyses using curated terms. Conclusion: Multiple ontologies have been developed for gene and protein annotation; by using a dataset of both manually curated GO terms and automatically recognized concepts from curated text, we can expand the realm of hypotheses that can be discovered. The web application STOP is available at http://mooneygroup.org/stop/. PMID:23409969
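The abstract does not state which test statistic STOP uses; a common choice in term-enrichment analyses is the one-sided hypergeometric (Fisher) test, sketched below under that assumption with invented counts.

```python
# One-sided hypergeometric enrichment test, a common (assumed, not stated)
# choice for term-enrichment analyses; counts are invented for illustration.
from scipy.stats import hypergeom

def enrichment_p(n_annotated_in_list, list_size, n_annotated_in_background, background_size):
    # P(X >= observed) when drawing `list_size` genes without replacement from a
    # background containing `n_annotated_in_background` annotated genes.
    return hypergeom.sf(n_annotated_in_list - 1, background_size,
                        n_annotated_in_background, list_size)

if __name__ == "__main__":
    p = enrichment_p(n_annotated_in_list=12, list_size=200,
                     n_annotated_in_background=150, background_size=20000)
    print(f"enrichment p-value: {p:.3g}")
```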
NASA Astrophysics Data System (ADS)
Atanasov, D.; Ascher, P.; Blaum, K.; Cakirli, R. B.; Cocolios, T. E.; George, S.; Goriely, S.; Herfurth, F.; Janka, H.-T.; Just, O.; Kowalska, M.; Kreim, S.; Kisler, D.; Litvinov, Yu. A.; Lunney, D.; Manea, V.; Neidherr, D.; Rosenbusch, M.; Schweikhard, L.; Welker, A.; Wienholtz, F.; Wolf, R. N.; Zuber, K.
2015-12-01
Masses adjacent to the classical waiting-point nuclide ¹³⁰Cd have been measured by using the Penning-trap spectrometer ISOLTRAP at ISOLDE/CERN. We find a significant deviation of over 400 keV from earlier values evaluated by using nuclear beta-decay data. The new measurements show the reduction of the N = 82 shell gap below the doubly magic ¹³²Sn. The nucleosynthesis associated with the ejected wind from type-II supernovae as well as from compact object binary mergers is studied by using state-of-the-art hydrodynamic simulations. We find a consistent and direct impact of the newly measured masses on the calculated abundances in the A = 128-132 region and a reduction of the uncertainties from the precision mass input data.
Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
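To make the "Hogwild!-style" setting concrete, here is a toy lock-free SGD on a least-squares problem in which several threads update a shared parameter vector without synchronization, with an optional rounding step mimicking reduced-precision updates; this is illustrative only and is not the authors' implementation or analysis.

```python
# Toy lock-free ("Hogwild!-style") SGD on least squares: multiple threads
# perform unsynchronized updates on a shared parameter vector; an optional
# rounding step emulates the reduced-precision arithmetic studied for
# "Buckwild!". Purely illustrative.
import threading
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)                                  # shared state, updated without locks

def worker(n_steps, lr=0.01, precision=None):
    local = np.random.default_rng()              # per-thread RNG
    for _ in range(n_steps):
        i = local.integers(n)
        grad = (X[i] @ w - y[i]) * X[i]          # stochastic gradient at one sample
        step = -lr * grad
        if precision is not None:                # emulate low-precision updates
            step = np.round(step / precision) * precision
        w[:] = w + step                           # racy, unsynchronized write

threads = [threading.Thread(target=worker, args=(5000,), kwargs={"precision": 1e-4})
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parameter error:", float(np.linalg.norm(w - w_true)))
```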
NASA Astrophysics Data System (ADS)
Zhou, Ming-Da; Hao, Sijie; Williams, Anthony J.; Harouaka, Ramdane A.; Schrand, Brett; Rawal, Siddarth; Ao, Zheng; Brennaman, Randall; Gilboa, Eli; Lu, Bo; Wang, Shuwen; Zhu, Jiyue; Datar, Ram; Cote, Richard; Tai, Yu-Chong; Zheng, Si-Yang
2014-12-01
The analysis of circulating tumour cells (CTCs) in cancer patients could provide important information for therapeutic management. Enrichment of viable CTCs could permit performance of functional analyses on CTCs to broaden understanding of metastatic disease. However, this has not been widely accomplished. Addressing this challenge, we present a separable bilayer (SB) microfilter for viable size-based CTC capture. Unlike other single-layer CTC microfilters, the precise gap between the two layers and the architecture of pore alignment result in drastic reduction in mechanical stress on CTCs, capturing them viably. Using multiple cancer cell lines spiked in healthy donor blood, the SB microfilter demonstrated high capture efficiency (78-83%), high retention of cell viability (71-74%), high tumour cell enrichment against leukocytes (1.7-2 × 10³), and widespread ability to establish cultures post-capture (100% of cell lines tested). In a metastatic mouse model, SB microfilters successfully enriched viable mouse CTCs from 0.4-0.6 mL whole mouse blood samples and established in vitro cultures for further genetic and functional analysis. Our preliminary studies reflect the efficacy of the SB microfilter device to efficiently and reliably enrich viable CTCs in animal model studies, constituting an exciting technology for new insights in cancer research.
Social learning spreads knowledge about dangerous humans among American crows
Cornell, Heather N.; Marzluff, John M.; Pecoraro, Shannon
2012-01-01
Individuals face evolutionary trade-offs between the acquisition of costly but accurate information gained firsthand and the use of inexpensive but possibly less reliable social information. American crows (Corvus brachyrhynchos) use both sources of information to learn the facial features of a dangerous person. We exposed wild crows to a novel ‘dangerous face’ by wearing a unique mask as we trapped, banded and released 7–15 birds at five study sites near Seattle, WA, USA. An immediate scolding response to the dangerous mask after trapping by previously captured crows demonstrates individual learning, while an immediate response by crows that were not captured probably represents conditioning to the trapping scene by the mob of birds that assembled during the capture. Later recognition of dangerous masks by lone crows that were never captured is consistent with horizontal social learning. Independent scolding by young crows, whose parents had conditioned them to scold the dangerous mask, demonstrates vertical social learning. Crows that directly experienced trapping later discriminated among dangerous and neutral masks more precisely than did crows that learned through social means. Learning enabled scolding to double in frequency and spread at least 1.2 km from the place of origin over a 5 year period at one site. PMID:21715408
Improved wheal detection from skin prick test images
NASA Astrophysics Data System (ADS)
Bulan, Orhan
2014-03-01
Skin prick test is a commonly used method for diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are erythema and a wheal provoked on the skin where the test is applied. The sensitivity of the patient against a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments performed on images acquired from 36 different patients show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
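A rough sketch of the kind of pipeline described above, assuming OpenCV and NumPy: convert to YCbCr, run PCA on the two chrominance channels to build a contrast-enhanced image, then threshold and clean up morphologically. The Otsu threshold, kernel size, and the `detect_wheal` name are illustrative choices, not the paper's implementation.

```python
import cv2
import numpy as np

def detect_wheal(bgr_roi):
    """Rough wheal segmentation on a localized prick-test region (BGR image)."""
    ycrcb = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[..., 1].astype(np.float32)
    cb = ycrcb[..., 2].astype(np.float32)
    # PCA on the two chrominance channels: keep the first principal component
    chroma = np.stack([cr.ravel(), cb.ravel()], axis=1)
    chroma -= chroma.mean(axis=0)
    _, _, vt = np.linalg.svd(chroma, full_matrices=False)
    enhanced = (chroma @ vt[0]).reshape(cr.shape)
    enhanced = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # threshold and apply morphological opening/closing to obtain the wheal mask
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```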
Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J Andrew
2010-11-01
We develop a hierarchical capture-recapture model for demographically open populations when auxiliary spatial information about location of capture is obtained. Such spatial capture-recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture-recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor due likely to the sparse data set. Unlike conventional inference methods which usually rely on asymptotic arguments, Bayesian inferences are valid in arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.
Quantitative model of transport-aperture coordination during reach-to-grasp movements.
Rand, Miya K; Shimansky, Y P; Hossain, Abul B M I; Stelmach, George E
2008-06-01
It has been found in our previous studies that the initiation of aperture closure during reach-to-grasp movements occurs when the hand distance to target crosses a threshold that is a function of peak aperture amplitude, hand velocity, and hand acceleration. Thus, a stable relationship between those four movement parameters is observed at the moment of aperture closure initiation. Based on the concept of optimal control of movements (Naslin 1969) and its application for reach-to-grasp movement regulation (Hoff and Arbib 1993), it was hypothesized that the mathematical equation expressing that relationship can be generalized to describe coordination between hand transport and finger aperture during the entire reach-to-grasp movement by adding aperture velocity and acceleration to the above four movement parameters. The present study examines whether this hypothesis is supported by the data obtained in experiments in which young adults performed reach-to-grasp movements in eight combinations of two reach-amplitude conditions and four movement-speed conditions. It was found that linear approximation of the mathematical model described the relationship among the six movement parameters for the entire aperture-closure phase with very high precision for each condition, thus supporting the hypothesis for that phase. Testing whether one mathematical model could approximate the data across all the experimental conditions revealed that it was possible to achieve the same high level of data-fitting precision only by including in the model two additional, condition-encoding parameters and using a nonlinear, artificial neural network-based approximator with two hidden layers comprising three and two neurons, respectively. This result indicates that transport-aperture coordination, as a specific relationship between the parameters of hand transport and finger aperture, significantly depends on the condition-encoding variables. The data from the aperture-opening phase also fit a linear model, whose coefficients were substantially different from those identified for the aperture-closure phase. This result supports the above hypothesis for the aperture-opening phase, and consequently, for the entire reach-to-grasp movement. However, the fitting precision was considerably lower than that for the aperture-closure phase, indicating significant trial-to-trial variability of transport-aperture coordination during the aperture-opening phase. Implications for understanding the neural mechanisms employed by the CNS for controlling reach-to-grasp movements and utilization of the mathematical model of transport-aperture coordination for data analysis are discussed.
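As a minimal illustration of the linear-approximation step, the sketch below (a hypothetical `fit_linear_coordination` helper) fits one movement parameter from the remaining transport and aperture parameters by ordinary least squares and reports R²; the column layout and data are assumed, not the study's recordings.

```python
import numpy as np

def fit_linear_coordination(other_params, target_param):
    """Least-squares fit of one movement parameter (e.g., hand distance to target)
    from the remaining transport/aperture parameters, with R^2 goodness of fit."""
    X = np.column_stack([other_params, np.ones(len(other_params))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, target_param, rcond=None)
    pred = X @ coef
    ss_res = np.sum((target_param - pred) ** 2)
    ss_tot = np.sum((target_param - np.mean(target_param)) ** 2)
    return coef, 1.0 - ss_res / ss_tot
```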
Lantz, Van; Martínez-Espiñeira, Roberto
2008-04-01
The traditional environmental Kuznets curve (EKC) hypothesis postulates that environmental degradation follows an inverted U-shaped relationship with gross domestic product (GDP) per capita. We tested the EKC hypothesis with bird populations in 5 different habitats as environmental quality indicators. Because birds are considered environmental goods, for them the EKC hypothesis would instead be associated with a U-shaped relationship between bird populations and GDP per capita. In keeping with the literature, we included other variables in the analysis, namely human population density and time index variables (the latter variable captured the impact of persistent and exogenous climate and/or policy changes on bird populations over time). Using data from 9 Canadian provinces gathered over 37 years, we used a generalized least-squares regression for each bird habitat type, which accounted for the panel structure of the data, the cross-sectional dependence across provinces in the residuals, heteroskedasticity, and fixed- or random-effect specifications of the models. We found evidence that supports the EKC hypothesis for 3 of the 5 bird population habitat types. In addition, the relationship between human population density and the different bird populations varied, which emphasizes the complex nature of the impact that human populations have on the environment. The relationship between the time-index variable and the different bird populations also varied, which indicates there are other persistent and significant influences on bird populations over time. Overall our EKC results were consistent with those found for threatened bird species, indicating that economic prosperity does indeed act to benefit some bird populations.
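For intuition only, a minimal sketch of the core regression idea: bird abundance regressed on GDP per capita, its square, population density, and a time index, using plain OLS rather than the panel GLS specification of the study. The `ekc_fit` helper and its inputs are placeholders.

```python
import numpy as np

def ekc_fit(gdp, birds, density, time_idx):
    """OLS of bird abundance on GDP, GDP^2, density, and a time index.
    A U-shape (the form expected for an environmental good) implies a
    negative GDP coefficient and a positive GDP^2 coefficient."""
    gdp = np.asarray(gdp, dtype=float)
    X = np.column_stack([np.ones_like(gdp), gdp, gdp**2,
                         np.asarray(density, float), np.asarray(time_idx, float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(birds, float), rcond=None)
    turning_point = -beta[1] / (2 * beta[2])   # GDP level at the bottom of the U
    return beta, turning_point
```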
First Measurement of θ13 From Delayed Neutron Capture on Hydrogen in the Double Chooz Experiment
Abe, Y.; Aberle, C.; dos Anjos, J. C.; ...
2013-04-27
The Double Chooz experiment has determined the value of the neutrino oscillation parameter θ13 from an analysis of inverse beta decay interactions with neutron capture on hydrogen. The analysis presented here uses a three times larger fiducial volume than the standard Double Chooz assessment, which is restricted to a region doped with gadolinium (Gd), yielding an exposure of 113.1 GW-ton-years. The data sample used in this analysis is distinct from that of the Gd analysis, and the systematic uncertainties are also largely independent, with some exceptions, such as the reactor neutrino flux prediction. A combined rate- and energy-dependent fit finds sin²2θ13 = 0.097 ± 0.034 (stat.) ± 0.034 (syst.), excluding the no-oscillation hypothesis at 2.0σ. This result is consistent with previous measurements of sin²2θ13.
Automatic attention towards face or body as a function of mating motivation.
Lu, Hui Jing; Chang, Lei
2012-03-22
Because women's faces and bodies carry different cues of reproductive value, men may attend to different perceptual cues as functions of their long-term versus short-term mating motivations. We tested this hypothesis in three experiments on 135 male and 132 female participants. When influenced by short-term rather than long-term mating motivations, men's attention was captured by (Study 1), was shifted to (Study 2), and was distracted by (Study 3) the waist/hip area rather than the face on photographs of attractive women. Similar effects were not found among the female participants in response to photographs of attractive men. These results support the evolutionary view that, similar to the attentional selectivity found in other domains of life, male perceptual attention has evolved to selectively capture and hold reproductive information about the opposite sex as a function of short-term versus long-term mating goals.
Age mediation of frontoparietal activation during visual feature search.
Madden, David J; Parks, Emily L; Davis, Simon W; Diaz, Michele T; Potter, Guy G; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto
2014-11-15
Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19-29 years of age) and 21 older adults (60-87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Abidin, Anas Z.; Chockanathan, Udaysankar; DSouza, Adora M.; Inglese, Matilde; Wismüller, Axel
2017-03-01
Clinically Isolated Syndrome (CIS) is often considered to be the first neurological episode associated with multiple sclerosis (MS). At an early stage the inflammatory demyelination occurring in the CNS can manifest as a change in neuronal metabolism, with multiple asymptomatic white matter lesions detected in clinical MRI. Such damage may induce topological changes of brain networks, which can be captured by advanced functional MRI (fMRI) analysis techniques. We test this hypothesis by capturing the effective relationships of 90 brain regions, defined in the Automated Anatomic Labeling (AAL) atlas, using a large-scale Granger Causality (lsGC) framework. The resulting networks are then characterized using graph-theoretic measures that quantify various network topology properties at a global as well as at a local level. We test for differences in these properties in network graphs obtained for 18 subjects (10 male and 8 female, 9 with CIS and 9 healthy controls). Global network properties captured trending differences with modularity and clustering coefficient (p<0.1). Additionally, local network properties, such as local efficiency and the strength of connections, captured statistically significant (p<0.01) differences in some regions of the inferior frontal and parietal lobe. We conclude that multivariate analysis of fMRI time-series can reveal interesting information about changes occurring in the brain in early stages of MS.
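A sketch of computing the graph measures named above (clustering coefficient, modularity, local efficiency, connection strength/degree) from a thresholded connectivity matrix, assuming networkx; the lsGC estimation itself is not shown and the 0.2 threshold is an arbitrary placeholder.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def network_measures(conn, thresh=0.2):
    """Global and local graph measures from a (90 x 90) connectivity matrix."""
    adj = (np.abs(conn) > thresh).astype(int)   # binarize the connectivity matrix
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    comms = community.greedy_modularity_communities(G)
    return {
        "clustering": nx.average_clustering(G),
        "modularity": community.modularity(G, comms),
        "local_efficiency": nx.local_efficiency(G),
        # binary graph, so degree doubles as node strength; use edge weights otherwise
        "strength": dict(G.degree()),
    }
```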
The Universal Plausibility Metric (UPM) & Principle (UPP)
2009-01-01
Background Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of ξ is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. Conclusion No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (ξ < 1). PMID:19958539
Attention capture by abrupt onsets: re-visiting the priority tag model.
Sunny, Meera M; von Mühlenen, Adrian
2013-01-01
Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3) did participants start to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.
What's Next in Complex Networks? Capturing the Concept of Attacking Play in Invasive Team Sports.
Ramos, João; Lopes, Rui J; Araújo, Duarte
2018-01-01
The evolution of performance analysis within sports sciences is tied to technology development and practitioner demands. However, how individual and collective patterns self-organize and interact in invasive team sports remains elusive. Social network analysis has been recently proposed to resolve some aspects of this problem, and has proven successful in capturing collective features resulting from the interactions between team members as well as a powerful communication tool. Despite these advances, some fundamental team sports concepts such as an attacking play have not been properly captured by the more common applications of social network analysis to team sports performance. In this article, we propose a novel approach to team sports performance centered on sport concepts, namely that of an attacking play. Network theory and tools including temporal and bipartite or multilayered networks were used to capture this concept. We put forward eight questions directly related to team performance to discuss how common pitfalls in the use of network tools for capturing sports concepts can be avoided. Some answers are advanced in an attempt to be more precise in the description of team dynamics and to uncover other metrics directly applied to sport concepts, such as the structure and dynamics of attacking plays. Finally, we propose that, at this stage of knowledge, it may be advantageous to build up from fundamental sport concepts toward complex network theory and tools, and not the other way around.
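As a toy illustration of the bipartite representation mentioned above, the snippet below (assuming networkx) links players to the attacking plays they took part in and projects the graph onto players; all node names are invented.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Players on one side, attacking plays on the other; an edge means the player
# took part in that play (weights could encode touches, passing order, etc.).
B = nx.Graph()
B.add_nodes_from(["P1", "P2", "P3", "P4"], bipartite=0)      # players
B.add_nodes_from(["play_1", "play_2"], bipartite=1)          # attacking plays
B.add_edges_from([("P1", "play_1"), ("P2", "play_1"), ("P3", "play_1"),
                  ("P2", "play_2"), ("P4", "play_2")])

# Project onto players to see who is linked through shared attacking plays
players = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
P = bipartite.weighted_projected_graph(B, players)
print(P.edges(data=True))
```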
Abe, Y.; Appel, S.; Abrahão, T.; ...
2016-01-27
We report a measurement by the Double Chooz collaboration of the neutrino mixing angle θ13 using reactor ν̄e observed via the inverse beta decay reaction in which the neutron is captured on hydrogen. Our measurement is based on 462.72 live days of data, approximately twice as much data as in the previous such analysis, collected with a detector positioned at an average distance of 1050 m from two reactor cores. Several novel techniques have been developed to achieve significant reductions of the backgrounds and systematic uncertainties. Accidental coincidences, the dominant background in this analysis, are suppressed by more than an order of magnitude with respect to our previous publication by a multi-variate analysis. Furthermore, these improvements demonstrate the capability of precise measurement of reactor ν̄e without gadolinium loading. Spectral distortions from the ν̄e reactor flux predictions previously reported with the neutron capture on gadolinium events are confirmed in the independent data sample presented here. A value of sin²2θ13 = 0.095 +0.038/-0.039 (stat+syst) is obtained from a fit to the observed event rate as a function of the reactor power, a method insensitive to the energy spectrum shape. A simultaneous fit of the hydrogen capture events and of the gadolinium capture events yields a measurement of sin²2θ13 = 0.088 ± 0.033 (stat+syst).
Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.
Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P
2015-09-01
Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. © 2015 John Wiley & Sons Ltd.
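A simplified simulation, not the capwire estimator used in the study, illustrating how the coefficient of variation of a basic two-sample Chapman abundance estimate shrinks as captures per individual increase; the population size, capture probability, and session count are illustrative values.

```python
import numpy as np

def simulate_cv(N=30, p=0.15, sessions=10, reps=2000, seed=0):
    """Mean captures per captured individual and CV of a Chapman estimate."""
    rng = np.random.default_rng(seed)
    estimates, capture_rates = [], []
    for _ in range(reps):
        caps = rng.random((sessions, N)) < p               # capture histories
        first, second = caps[: sessions // 2], caps[sessions // 2 :]
        n1 = first.any(axis=0)                             # caught in first half
        n2 = second.any(axis=0)                            # caught in second half
        m = (n1 & n2).sum()                                # caught in both
        est = (n1.sum() + 1) * (n2.sum() + 1) / (m + 1) - 1   # Chapman estimator
        estimates.append(est)
        capture_rates.append(caps.sum() / max(caps.any(axis=0).sum(), 1))
    estimates = np.array(estimates)
    return np.mean(capture_rates), estimates.std() / estimates.mean()

print(simulate_cv())
```

Pooling sessions into two halves is only a crude way to form the two samples; the study's capwire models use the full distribution of capture counts per individual.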
NASA Astrophysics Data System (ADS)
Abidin, Anas Zainul; D'Souza, Adora M.; Nagarajan, Mahesh B.; Wismüller, Axel
2016-03-01
About 50% of subjects infected with HIV present deficits in cognitive domains, which are known collectively as HIV associated neurocognitive disorder (HAND). The underlying synaptodendritic damage can be captured using resting state functional MRI, as has been demonstrated by a few earlier studies. Such damage may induce topological changes of brain connectivity networks. We test this hypothesis by capturing the functional interdependence of 90 brain network nodes using a Mutual Connectivity Analysis (MCA) framework with non-linear time series modeling based on Generalized Radial Basis function (GRBF) neural networks. The network nodes are selected based on the regions defined in the Automated Anatomic Labeling (AAL) atlas. Each node is represented by the average time series of the voxels of that region. The resulting networks are then characterized using graph-theoretic measures that quantify various network topology properties at a global as well as at a local level. We tested for differences in these properties in network graphs obtained for 10 subjects (6 male and 4 female, 5 HIV+ and 5 HIV-). Global network properties captured some differences between these subject cohorts, though significant differences were seen only with the clustering coefficient measure. Local network properties, such as local efficiency and the degree of connections, captured significant differences in regions of the frontal lobe, precentral and cingulate cortex amongst a few others. These results suggest that our method can be used to effectively capture differences occurring in brain network connectivity properties revealed by resting-state functional MRI in neurological disease states, such as HAND.
Kawahara, Akito Y; Breinholt, Jesse W; Espeland, Marianne; Storer, Caroline; Plotkin, David; Dexter, Kelly M; Toussaint, Emmanuel F A; St Laurent, Ryan A; Brehm, Gunnar; Vargas, Sergio; Forero, Dimitri; Pierce, Naomi E; Lohman, David J
2018-06-11
The Neotropical moth-like butterflies (Hedylidae) are perhaps the most unusual butterfly family. In addition to being species-poor, this family is predominantly nocturnal and has anti-bat ultrasound hearing organs. Evolutionary relationships among the 36 described species are largely unexplored. A new, target capture, anchored hybrid enrichment probe set ('BUTTERFLY2.0') was developed to infer relationships of hedylids and some of their butterfly relatives. The probe set includes 13 genes that have historically been used in butterfly phylogenetics. Our dataset comprised up to 10,898 aligned base pairs from 22 hedylid species and 19 outgroups. Eleven of the thirteen loci were successfully captured from all samples, and the remaining loci were captured from ≥94% of samples. The inferred phylogeny was consistent with recent molecular studies by placing Hedylidae sister to Hesperiidae, and the tree had robust support for 80% of nodes. Our results are also consistent with morphological studies, with Macrosoma tipulata as the sister species to all remaining hedylids, followed by M. semiermis sister to the remaining species in the genus. We tested the hypothesis that nocturnality evolved once from diurnality in Hedylidae, and demonstrate that the ancestral condition was likely diurnal, with a shift to nocturnality early in the diversification of this family. The BUTTERFLY2.0 probe set includes standard butterfly phylogenetics markers, captures sequences from decades-old museum specimens, and is a cost-effective technique to infer phylogenetic relationships of the butterfly tree of life. Copyright © 2018 Elsevier Inc. All rights reserved.
Cortical Neural Computation by Discrete Results Hypothesis
Castejon, Carlos; Nuñez, Angel
2016-01-01
One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” is the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS) interneuron may be a key element in our hypothesis providing the basis for this computation. PMID:27807408
Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.
Li, Yan; Gu, Leon; Kanade, Takeo
2011-09-01
Precisely localizing in an image a set of feature points that form a shape of an object, such as a car or a face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.
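A generic sketch of randomized subset-based matching in the RANSAC spirit: repeatedly fit a 2D similarity transform to a small random subset of corresponding points and keep the hypothesis with the smallest prediction error. The Procrustes-style fit and median error below are generic stand-ins, not the paper's Bayesian shape-and-pose inference.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (scale, rotation, translation)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # -1 if a reflection is needed
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T                            # optimal rotation
    s = (S * np.diag(D)).sum() / (A ** 2).sum()   # optimal scale
    t = mu_d - s * mu_s @ R.T                     # optimal translation
    return lambda pts: s * pts @ R.T + t

def align(model_pts, detected_pts, n_hyp=200, subset=3, seed=0):
    """Sample subsets, fit, and keep the hypothesis with least prediction error."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_hyp):
        idx = rng.choice(len(model_pts), subset, replace=False)
        T = fit_similarity(model_pts[idx], detected_pts[idx])
        err = np.median(np.linalg.norm(T(model_pts) - detected_pts, axis=1))
        if err < best_err:
            best, best_err = T, err
    return best, best_err
```

The median prediction error is used so that a few grossly wrong detections do not dominate the score, which is the point of subset-based matching.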
Precision half-life measurement of 11C: The most precise mirror transition Ft value
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.
2018-03-01
Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of Vud in nuclear β decays. 11C is an interesting case, as its low mass and small QEC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new 11C world average half-life allows the calculation of an Ft^mirror value that is now the most precise value for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow for the determination of Vud from this decay.
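In general terms, a world-average half-life of this kind is an inverse-variance weighted mean of the individual measurements t_i with uncertainties σ_i; the expression below is the generic form and does not reproduce the paper's detailed error treatment (e.g., any scale factors).

```latex
\bar{t}_{1/2} \;=\; \frac{\sum_i t_i/\sigma_i^{2}}{\sum_i 1/\sigma_i^{2}},
\qquad
\sigma_{\bar{t}_{1/2}} \;=\; \Big(\sum_i \sigma_i^{-2}\Big)^{-1/2}
```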
Nonnormality and Divergence in Posttreatment Alcohol Use
Witkiewitz, Katie; van der Maas, Han L. J.; Hufford, Michael R.; Marlatt, G. Alan
2007-01-01
Alcohol lapses are the modal outcome following treatment for alcohol use disorders, yet many alcohol researchers have encountered limited success in the prediction and prevention of relapse. One hypothesis is that lapses are unpredictable, but another possibility is that the complexity of the relapse process is not captured by traditional statistical methods. Data from Project Matching Alcohol Treatments to Client Heterogeneity (Project MATCH), a multisite alcohol treatment study, were reanalyzed with 2 statistical methodologies: catastrophe and 2-part growth mixture modeling. Drawing on previous investigations of self-efficacy as a dynamic predictor of relapse, the current study revisits the self-efficacy matching hypothesis, which was not statistically supported in Project MATCH. Results from both the catastrophe and growth mixture analyses demonstrated a dynamic relationship between self-efficacy and drinking outcomes. The growth mixture analyses provided evidence in support of the original matching hypothesis: Individuals with lower self-efficacy who received cognitive behavior therapy drank far less frequently than did those with low self-efficacy who received motivational therapy. These results highlight the dynamical nature of the relapse process and the importance of the use of methodologies that accommodate this complexity when evaluating treatment outcomes. PMID:17516769
Microsatellite analysis of medfly bioinfestations in California.
Bonizzoni, M; Zheng, L; Guglielmino, C R; Haymer, D S; Gasperi, G; Gomulski, L M; Malacrida, A R
2001-10-01
The Mediterranean fruit fly, Ceratitis capitata, is a destructive agricultural pest with a long history of invasion success. This pest has been affecting different regions of the United States for the past 30 years, but a number of studies of medfly bioinfestations have focused on the situation in California. Although some progress has been made in terms of establishing the origin of infestations, the overall status of this pest in this area remains controversial. Specifically, do flies captured over the years represent independent infestations or the persistence of a resident population? We present an effort to answer this question based on the use of multilocus genotyping. Ten microsatellite loci were used to analyse 109 medflies captured in several infestations within California between 1992 and 1998. Using these same markers, 242 medflies from regions of the world having 'established' populations of this pest, including Hawaii, Guatemala, El Salvador, Ecuador, Brazil, Argentina and Peru, were also analysed. Although phylogenetic analysis, AMOVA, the IMMANC assignment test and GeneClass exclusion test analysis suggest that some of the medflies captured in California are derived from independent invasion events, analysis of specimens from the Los Angeles basin provides support for the hypothesis that an endemic population, probably derived from Guatemala, has been established.
Tommasino, Paolo; Campolo, Domenico
2017-01-01
A major challenge in robotics and computational neuroscience concerns the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed capturing postural strategies such as Donders' law. In this work, after presenting this general model and specifying it as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is actually able to also predict movements. More specifically, the passive motion paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit to experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit a posteriori task-space motions, we make the a priori hypothesis that damping is proportional to stiffness. This remarkably allows a postural-fitted model to also capture dynamic performance such as curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in literature. PMID:29249954
Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera
Chen, Hao; Woodward, Maria A.; Burke, David T.; Jeganathan, V. Swetha E.; Demirci, Hakan; Sick, Volker
2017-01-01
A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth-of-field of 2.4 mm, depth resolution of 10 µm can be achieved with accuracy (systematic errors) and precision (random errors) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds, with height differences of 10~80 µm, on the healthy irides can be effectively captured. The front surface on the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential to be utilized for iris disease diagnosis and continuing, simple monitoring. PMID:29082081
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
Gao, Xinliu; Lin, Hui; Krantz, Carsten; Garnier, Arlette; Flarakos, Jimmy; Tse, Francis L S; Li, Wenkui
2016-01-01
Factor P (Properdin), an endogenous glycoprotein, plays a key role in innate immune defense. Its quantification is important for understanding the pharmacodynamics (PD) of drug candidate(s). In the present work, an immunoaffinity capturing LC-MS/MS method has been developed and validated for the first time for the quantification of factor P in monkey serum with a dynamic range of 125 to 25,000 ng/ml using the calibration standards and QCs prepared in factor P depleted monkey serum. The intra- and inter-run precision was ≤7.2% (CV) and accuracy within ±16.8% (%Bias) across all QC levels evaluated. Results of other evaluations (e.g., stability) all met the acceptance criteria. The validated method was robust and implemented in support of a preclinical PK/PD study.
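As a small worked example of the accuracy and precision figures quoted above, the sketch below computes %CV and %bias for one QC level; the replicate values, nominal concentration, and `qc_stats` helper are invented, not the study's data.

```python
import numpy as np

def qc_stats(measured, nominal):
    """Precision as %CV and accuracy as %bias for one QC level."""
    measured = np.asarray(measured, dtype=float)
    cv = 100 * measured.std(ddof=1) / measured.mean()
    bias = 100 * (measured.mean() - nominal) / nominal
    return cv, bias

# e.g., hypothetical low-QC replicates at a nominal 375 ng/ml
print(qc_stats([402, 388, 361, 395, 380, 371], nominal=375.0))
```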
Launch and capture of a single particle in a pulse-laser-assisted dual-beam fiber-optic trap
NASA Astrophysics Data System (ADS)
Fu, Zhenhai; She, Xuan; Li, Nan; Hu, Huizhu
2018-06-01
The rapid loading and manipulation of microspheres in an optical trap is important for its applications in optomechanics and precision force sensing. We investigate the microsphere behavior under the coaction of a dual-beam fiber-optic trap and a pulse laser beam, which reveals that a launched microsphere can be effectively captured in a spatial region. A suitable order of pulse duration for launch is derived according to the calculated detachment energy threshold of the pulse laser. Furthermore, we illustrate the effect of structural parameters on the launching process, including the spot size of the pulse laser, the vertical displacement of the beam waist and the initial position of the microsphere. Our result will be instructive in the optimal design of pulse-laser-assisted optical tweezers for a controllable loading mechanism of the optical trap.
Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C
2017-09-07
Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
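A sketch of how the study's accuracy and precision definitions translate into computation, assuming paired arrays of technique readings and motion-capture reference values; the numbers and the `accuracy_precision` helper are placeholders, not the cadaver data.

```python
import numpy as np

def accuracy_precision(measured, reference, tol=5.0):
    """Accuracy = mean absolute deviation from the reference standard (degrees);
    precision = proportion of measurements within +/- tol degrees."""
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    return np.mean(np.abs(diff)), np.mean(np.abs(diff) <= tol)

# e.g., hypothetical knee-extension readings vs. infrared motion capture
measured  = [2, -1, 4, 7, 0, 3]
reference = [0,  0, 1, 2, 1, 2]
print(accuracy_precision(measured, reference, tol=5.0))
print(accuracy_precision(measured, reference, tol=10.0))
```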
Search for new physics in a precise 20F beta spectrum shape measurement
NASA Astrophysics Data System (ADS)
George, Elizabeth; Voytas, Paul; Chuna, Thomas; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Paulauskas, Stanley; Weisshaar, Dirk; Ban, Gilles; Flechard, Xavier; Lienard, Etienne
2015-10-01
We are carrying out a measurement of the shape of the energy spectrum of β particles from 20F decay. We aim to achieve a relative precision below 3%, representing an order of magnitude improvement compared to previous experiments. This level of precision will enable a test of the so-called strong form of the conserved vector current (CVC) hypothesis, and should also enable us to place competitive limits on the contributions of exotic tensor couplings in beta decay. In order to control systematic effects, we are using a technique that takes advantage of high energy radioactive beams at the NSCL to implant the decaying nuclei in a scintillation detector deep enough that the emitted beta particles cannot escape. The β-particle energy is measured with the implantation detector after switching off the beam implantation. Ancillary detectors are used to tag the 1.633-MeV γ-rays following the β decay for coincidence measurements in order to reduce backgrounds. We will give an overview and report on the status of the experiment.
A neural measure of precision in visual working memory.
Ester, Edward F; Anderson, David E; Serences, John T; Awh, Edward
2013-05-01
Recent studies suggest that the temporary storage of visual detail in working memory is mediated by sensory recruitment or sustained patterns of stimulus-specific activation within feature-selective regions of visual cortex. According to a strong version of this hypothesis, the relative "quality" of these patterns should determine the clarity of an individual's memory. Here, we provide a direct test of this claim. We used fMRI and a forward encoding model to characterize population-level orientation-selective responses in visual cortex while human participants held an oriented grating in memory. This analysis, which enables a precise quantitative description of multivoxel, population-level activity measured during working memory storage, revealed graded response profiles whose amplitudes were greatest for the remembered orientation and fell monotonically as the angular distance from this orientation increased. Moreover, interparticipant differences in the dispersion, but not the amplitude, of these response profiles were strongly correlated with performance on a concurrent memory recall task. These findings provide important new evidence linking the precision of sustained population-level responses in visual cortex and memory acuity.
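A compact sketch of the two-stage forward encoding approach referred to above: estimate channel-to-voxel weights on training trials, then invert them to recover channel response profiles for held-out trials. The raised-cosine basis, channel count, and exponent are common choices assumed here rather than necessarily the study's settings, and cross-validation and fMRI preprocessing are omitted.

```python
import numpy as np

def basis(orientations_deg, n_chan=9, power=6):
    """Raised-cosine channel tuning functions in 180-degree orientation space."""
    centers = np.arange(n_chan) * 180.0 / n_chan
    # wrap orientation differences into [-90, 90) so the space is 180-deg periodic
    d = (orientations_deg[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0
    return np.cos(np.pi * d / 180.0) ** power              # trials x channels

def forward_encoding(train_bold, train_ori, test_bold):
    """Stage 1: estimate channel-to-voxel weights; stage 2: invert for test data."""
    C_train = basis(np.asarray(train_ori, dtype=float))     # trials x channels
    W, *_ = np.linalg.lstsq(C_train, train_bold, rcond=None)    # channels x voxels
    # channel response profiles for the test trials (trials x channels)
    C_test = (np.linalg.pinv(W @ W.T) @ W @ test_bold.T).T
    return C_test
```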
High-Rate Data-Capture for an Airborne Lidar System
NASA Technical Reports Server (NTRS)
Valett, Susan; Hicks, Edward; Dabney, Philip; Harding, David
2012-01-01
A high-rate data system was required to capture the data for an airborne lidar system. A data system was developed that achieved up to 22 million (64-bit) events per second sustained data rate (1408 million bits per second), as well as short bursts (less than 4 s) at higher rates. All hardware used for the system was off the shelf, but carefully selected to achieve these rates. The system was used to capture laser fire, single-photon detection, and GPS data for the Slope Imaging Multi-polarization Photo-counting Lidar (SIMPL). However, the system has applications for other laser altimeter systems (waveform-recording), mass spectroscopy, x-ray radiometry imaging, high-background-rate ranging lidar, and other similar areas where very high-speed data capture is needed. The data capture software was used for the SIMPL instrument that employs a micropulse, single-photon ranging measurement approach and has 16 data channels. The detected single photons come from two sources: those reflected from the target and solar background photons. The instrument is non-gated, so background photons are acquired for a range window of 13 km and can comprise many times the number of target photons. The highest background rate occurs when the atmosphere is clear, the Sun is high, and the target is a highly reflective surface such as snow. Under these conditions, the total data rate for the 16 channels combined is expected to be approximately 22 million events per second. For each photon detection event, the data capture software reads the relative time of receipt, with respect to a one-per-second absolute time pulse from a GPS receiver, from an event timer card with 0.1-ns precision, and records that information to a RAID (Redundant Array of Independent Disks) storage device. The relative time of laser pulse firings must also be read and recorded with the same precision. Each of the four event timer cards handles the throughput from four of the channels. For each detection event, a flag is recorded that indicates the source channel. To accommodate the expected maximum count rate and also handle the other extreme of very low rates occurring during nighttime operations, the software requests a set amount of data from each of the event timer cards and buffers the data. The software notes if any of the cards did not return all the data requested and then accommodates that lower rate. The data is buffered to minimize the I/O overhead of writing the data to storage. Care was taken to optimize the reads from the cards, the speed of the I/O bus, and the RAID configuration.
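A schematic sketch of the buffering strategy described above: request a fixed chunk from each event-timer card, tolerate short reads, and flush to storage only in large writes. The `read_card` and `write_raid` functions and the chunk/flush sizes are placeholders standing in for the real hardware and RAID I/O, not the flight software.

```python
import collections

CHUNK_EVENTS = 1 << 16          # events requested from each card per read
FLUSH_EVENTS = 1 << 22          # buffered events that trigger a storage write

def capture_loop(read_card, write_raid, n_cards=4, running=lambda: True):
    """Poll each event-timer card, buffer events, and flush in large writes."""
    buffers = [collections.deque() for _ in range(n_cards)]
    while running():
        for card in range(n_cards):
            events = read_card(card, CHUNK_EVENTS)   # may return fewer events
            buffers[card].extend(events)             # tolerate short reads
            if len(buffers[card]) >= FLUSH_EVENTS:
                write_raid(card, list(buffers[card]))
                buffers[card].clear()
    for card in range(n_cards):                      # final flush on shutdown
        if buffers[card]:
            write_raid(card, list(buffers[card]))
```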
Real-Time 3D Ultrasound for Physiological Monitoring 22258.
1999-10-01
The system acquired positioning information using a high-precision mechanical arm (MicroScribe arm from Immersion Corp., San Jose, CA) for 3D data acquisition, an approach also adopted by EchoTech for 3D FreeScan; it provided medical-quality video capture on a Dell Dimension XPS computer and included the MUSTPAC-2 Virtual Ultrasound Probe based on the MicroScribe 3D articulated arm.
NASA Technical Reports Server (NTRS)
Allton, J. H.; Gonzalez, C. P.; Allums, K. K.
2017-01-01
Recent refinement of analysis of ACE/SWICS data (Advanced Composition Explorer/Solar Wind Ion Composition Spectrometer) and of onboard data for the Genesis Discovery Mission of 3 regimes of solar wind at the Earth-Sun L1 point make it an appropriate time to update the availability and condition of Genesis samples specifically collected in these three regimes and currently curated at Johnson Space Center. ACE/SWICS spacecraft data indicate that solar wind flow types emanating from the interstream regions, from coronal holes and from coronal mass ejections are elementally and isotopically fractionated in different ways from the solar photosphere, and that correction of solar wind values to photosphere values is non-trivial. Returned Genesis solar wind samples captured very different kinds of information about these three regimes than spacecraft data. Samples were collected from 11/30/2001 to 4/1/2004 on the declining phase of solar cycle 23. Meshik et al. is an example of the precision attainable. Earlier high-precision laboratory analyses of noble gases collected in the interstream, coronal hole and coronal mass ejection regimes speak to the degree of fractionation in solar wind formation and the models that laboratory data support. The current availability and condition of samples captured on collector plates during interstream slow solar wind, coronal hole high-speed solar wind and coronal mass ejections are described here for potential users of these samples.
Effects of motor congruence on visual working memory.
Quak, Michel; Pecher, Diane; Zeelenberg, Rene
2014-10-01
Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.
Effects of grasp compatibility on long-term memory for objects.
Canits, Ivonne; Pecher, Diane; Zeelenberg, René
2018-01-01
Previous studies have shown action potentiation during conceptual processing of manipulable objects. In four experiments, we investigated whether these motor actions also play a role in long-term memory. Participants categorized objects that afforded either a power grasp or a precision grasp as natural or artifact by grasping cylinders with either a power grasp or a precision grasp. In all experiments, responses were faster when the affordance of the object was compatible with the type of grasp response. However, subsequent free recall and recognition memory tasks revealed no better memory for object pictures and object names for which the grasp affordance was compatible with the grasp response. The present results therefore do not support the hypothesis that motor actions play a role in long-term memory. Copyright © 2017 Elsevier B.V. All rights reserved.
Evaluation of trap capture in a geographically closed population of brown treesnakes on Guam
Tyrrell, C.L.; Christy, M.T.; Rodda, G.H.; Yackel Adams, A.A.; Ellingson, A.R.; Savidge, J.A.; Dean-Bradley, K.; Bischof, R.
2009-01-01
1. Open population mark-recapture analysis of unbounded populations accommodates some types of closure violations (e.g. emigration, immigration). In contrast, closed population analysis of such populations readily allows estimation of capture heterogeneity and behavioural response, but requires crucial assumptions about closure (e.g. no permanent emigration) that are suspect and rarely tested empirically. 2. In 2003, we erected a double-sided barrier to prevent movement of snakes in or out of a 5-ha semi-forested study site in northern Guam. This geographically closed population of >100 snakes was monitored using a series of transects for visual searches and a 13 × 13 trapping array, with the aim of marking all snakes within the site. Forty-five marked snakes were also supplemented into the resident population to quantify the efficacy of our sampling methods. We used the program MARK to analyse trap captures (101 occasions), referenced to census data from visual surveys, and quantified heterogeneity, behavioural response, and size bias in trappability. Analytical inclusion of untrapped individuals greatly improved precision in the estimation of some covariate effects. 3. A novel discovery was that trap captures for individual snakes consisted of asynchronous bouts of high capture probability lasting about 7 days (ephemeral behavioural effect). There was modest behavioural response (trap happiness) and significant latent (unexplained) heterogeneity, with small influences on capture success of date, gender, residency status (translocated or not), and body condition. 4. Trapping was shown to be an effective tool for eradicating large brown treesnakes Boiga irregularis (>900 mm snout-vent length, SVL). 5. Synthesis and applications. Mark-recapture modelling is commonly used by ecological managers to estimate populations. However, existing models involve making assumptions about either closure violations or response to capture. Physical closure of our population on a landscape scale allowed us to determine the relative importance of covariates influencing capture probability (body size, trappability periods, and latent heterogeneity). This information was used to develop models in which different segments of the population could be assigned different probabilities of capture, and suggests that modelling of open populations should incorporate easily measured, but potentially overlooked, parameters such as body size or condition. © 2008 The Authors.
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
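The abstract names the heterogeneity model Mh[CHAO] without stating its form. For orientation, the widely used Chao lower-bound estimator for model Mh (a standard result, not a formula taken from this study) is

$$\hat{N} = M_{t+1} + \frac{f_1^2}{2 f_2},$$

where $M_{t+1}$ is the number of distinct individuals identified over all occasions and $f_1$ and $f_2$ are the numbers of individuals captured exactly once and exactly twice.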
Porous materials with pre-designed single-molecule traps for CO2 selective adsorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, JR; Yu, JM; Lu, WG
2013-02-26
Despite tremendous efforts, precise control in the synthesis of porous materials with pre-designed pore properties for desired applications remains challenging. Newly emerged porous metal-organic materials, such as metal-organic polyhedra and metal-organic frameworks, are amenable to design and property tuning, enabling precise control of functionality by accurate design of structures at the molecular level. Here we propose and validate, both experimentally and computationally, a precisely designed cavity, termed a 'single-molecule trap', with the desired size and properties suitable for trapping target CO2 molecules. Such a single-molecule trap can strengthen CO2-host interactions without evoking chemical bonding, thus showing potential for CO2 capture. Molecular single-molecule traps in the form of metal-organic polyhedra are designed, synthesised and tested for selective adsorption of CO2 over N2 and CH4, demonstrating the trapping effect. Building these pre-designed single-molecule traps into extended frameworks yields metal-organic frameworks with efficient mass transfer, whereas the CO2 selective adsorption nature of single-molecule traps is preserved.
NASA Astrophysics Data System (ADS)
Belbachir, A. N.; Hofstätter, M.; Litzenberger, M.; Schön, P.
2009-10-01
A synchronous communication interface for neuromorphic temporal contrast vision sensors is described and evaluated in this paper. This interface has been designed for ultra-high-speed synchronous arbitration of a temporal contrast image sensor's pixel data. By enabling high-precision timestamping, the system can handle peak data rates while preserving the main advantage of neuromorphic electronic systems, namely high and accurate temporal resolution. Based on a synchronous arbitration concept, the timestamping has a resolution of 100 ns. Both synchronous and (state-of-the-art) asynchronous arbiters have been implemented in a neuromorphic dual-line vision sensor chip in a standard 0.35 µm CMOS process. The performance analysis of both arbiters and the advantages of the synchronous arbitration over asynchronous arbitration in capturing high-speed objects are discussed in detail.
Aerospace Laser Ignition/Ablation Variable High Precision Thruster
NASA Technical Reports Server (NTRS)
Campbell, Jonathan W. (Inventor); Edwards, David L. (Inventor); Campbell, Jason J. (Inventor)
2015-01-01
A laser ignition/ablation propulsion system that captures the advantages of both liquid and solid propulsion. A reel system is used to move a propellant tape containing a plurality of propellant material targets through an ignition chamber. When a propellant target is in the ignition chamber, a laser beam from a laser positioned above the ignition chamber strikes the propellant target, igniting the propellant material and resulting in a thrust impulse. The propellant tape is advanced, carrying another propellant target into the ignition chamber. The propellant tape and ignition chamber are designed to ensure that each ignition event is isolated from the remaining propellant targets. Thrust and specific impulse may be precisely controlled by varying the synchronized propellant tape/laser speed. The laser ignition/ablation propulsion system may be scaled for use in small and large applications.
Time- and Cost-Efficient Identification of T-DNA Insertion Sites through Targeted Genomic Sequencing
Lepage, Étienne; Zampini, Éric; Boyle, Brian; Brisson, Normand
2013-01-01
Forward genetic screens enable the unbiased identification of genes involved in biological processes. In Arabidopsis, several mutant collections are publicly available, which greatly facilitates such practice. Most of these collections were generated by agrotransformation of a T-DNA at random sites in the plant genome. However, precise mapping of T-DNA insertion sites in mutants isolated from such screens is a laborious and time-consuming task. Here we report a simple, low-cost and time efficient approach to precisely map T-DNA insertions simultaneously in many different mutants. By combining sequence capture, next-generation sequencing and 2D-PCR pooling, we developed a new method that allowed the rapid localization of T-DNA insertion sites in 55 out of 64 mutant plants isolated in a screen for gyrase inhibition hypersensitivity. PMID:23951038
Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli
Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.
2010-01-01
Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
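As a rough illustration of the model class described (a Poisson GLM whose conditional intensity combines a stimulus filter with a spike-history filter), here is a minimal simulation sketch; the filters, time scales, and baseline rate are invented for illustration and are not the fitted LGN model.

```python
import numpy as np

# Minimal sketch of the GLM structure: the conditional intensity combines a
# stimulus filter with a spike-history filter through an exponential
# nonlinearity. Filter values here are illustrative, not fitted.
rng = np.random.default_rng(0)

dt = 0.001                          # 1 ms bins
T = 5000                            # number of time bins
stimulus = rng.standard_normal(T)   # stand-in for the visual stimulus drive

stim_filter = np.exp(-np.arange(30) / 10.0)        # 30 ms stimulus kernel
hist_filter = -2.0 * np.exp(-np.arange(20) / 5.0)  # suppressive history kernel

spikes = np.zeros(T, dtype=int)
drive = np.convolve(stimulus, stim_filter)[:T]

for t in range(T):
    h = 0.0
    for lag, w in enumerate(hist_filter, start=1):   # spike-history dependence
        if t - lag >= 0:
            h += w * spikes[t - lag]
    rate = np.exp(-4.0 + drive[t] + h) / dt          # conditional intensity (Hz)
    spikes[t] = rng.poisson(rate * dt) > 0           # binarized spike count

print("simulated firing rate:", spikes.sum() / (T * dt), "Hz")
```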
Fifty Years of Mountain Passes: A Perspective on Dan Janzen's Classic Article.
Sheldon, Kimberly S; Huey, Raymond B; Kaspari, Michael; Sanders, Nathan J
2018-05-01
In 1967, Dan Janzen published "Why Mountain Passes Are Higher in the Tropics" in The American Naturalist. Janzen's seminal article has captured the attention of generations of biologists and continues to inspire theoretical and empirical work. The underlying assumptions and derived predictions are broadly synthetic and widely applicable. Consequently, Janzen's "seasonality hypothesis" has proven relevant to physiology, climate change, ecology, and evolution. To celebrate the fiftieth anniversary of this highly influential article, we highlight the past, present, and future of this work and include a unique historical perspective from Janzen himself.
Measuring continuous baseline covariate imbalances in clinical trial data
Ciolino, Jody D.; Martin, Renee’ H.; Zhao, Wenle; Hill, Michael D.; Jauch, Edward C.; Palesch, Yuko Y.
2014-01-01
This paper presents and compares several methods of measuring continuous baseline covariate imbalance in clinical trial data. Simulations illustrate that though the t-test is an inappropriate method of assessing continuous baseline covariate imbalance, the test statistic itself is a robust measure in capturing imbalance in continuous covariate distributions. Guidelines to assess effects of imbalance on bias, type I error rate, and power for hypothesis test for treatment effect on continuous outcomes are presented, and the benefit of covariate-adjusted analysis (ANCOVA) is also illustrated. PMID:21865270
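A minimal sketch of the imbalance measure described, the two-sample (Welch) t statistic used descriptively rather than as a hypothesis test; the simulated arms and covariate are illustrative only.

```python
import numpy as np

# Using the two-sample t statistic purely as a descriptive measure of baseline
# imbalance between treatment arms (not as a hypothesis test). Data are
# simulated for illustration.
rng = np.random.default_rng(42)
arm_a = rng.normal(loc=120, scale=15, size=200)   # e.g., baseline blood pressure
arm_b = rng.normal(loc=123, scale=15, size=200)

def t_imbalance(x, y):
    """Welch t statistic as a continuous-covariate imbalance measure."""
    nx, ny = len(x), len(y)
    se = np.sqrt(x.var(ddof=1) / nx + y.var(ddof=1) / ny)
    return (x.mean() - y.mean()) / se

print(f"imbalance (t statistic): {t_imbalance(arm_a, arm_b):.2f}")
```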
Synchronization of eukaryotic flagella in vivo: from two to thousands
NASA Astrophysics Data System (ADS)
Goldstein, Raymond E.
2012-02-01
From unicellular organisms as small as a few microns to the largest vertebrates on Earth, we find groups of beating flagella or cilia that exhibit striking spatiotemporal organization. This may take the form of precise frequency and phase locking, as frequently found in the swimming of green algae, or beating with long-wavelength phase modulations known as metachronal waves, seen in ciliates such as Paramecium and in our own respiratory systems. The remarkable similarity in the underlying molecular structure of flagella across the whole eukaryotic world leads naturally to the hypothesis that a similarly universal mechanism might be responsible for synchronization. Although this mechanism is poorly understood, one appealing hypothesis is that it results from hydrodynamic interactions between flagella. This talk will summarize recent work using the unicellular alga Chlamydomonas reinhardtii and its multicellular cousin Volvox carteri to study in detail the nature of flagellar synchronization and its possible hydrodynamic origins.
NASA Technical Reports Server (NTRS)
Junaedi, Christian; Hawley, Kyle; Walsh, Dennis; Roychoudhury, Subir; Busby, Stacy A.; Abney, Morgan B.; Perry, Jay L.; Knox, James C.
2012-01-01
The utilization of CO2 to produce (or recycle) life support consumables, such as O2 and H2O, and to generate propellant fuels is an important aspect of NASA's concept for future, long duration planetary exploration. One potential approach is to capture and use CO2 from the Martian atmosphere to generate the consumables and propellant fuels. Precision Combustion, Inc. (PCI), with support from NASA, continues to develop its regenerable adsorber technology for capturing CO2 from gaseous atmospheres (for cabin atmosphere revitalization and in-situ resource utilization applications) and its Sabatier reactor for converting CO2 to methane and water. Both technologies are based on PCI's Microlith(R) substrates and have been demonstrated to reduce size, weight, and power consumption in the CO2 capture and methanation processes. For adsorber applications, the Microlith substrates offer a unique resistive heating capability that shows potential for short regeneration time and reduced power requirements compared to conventional systems. For the Sabatier applications, the combination of the Microlith substrates and durable catalyst coating permits efficient CO2 methanation that favors high reactant conversion, high selectivity, and durability. Results from performance testing at various operating conditions will be presented. An effort to optimize the Sabatier reactor and to develop a bench-top Sabatier Development Unit (SDU) will be discussed.
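The abstract does not write out the methanation chemistry; for reference, the Sabatier reaction that converts captured CO2 with hydrogen is the standard exothermic reaction

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}, \qquad \Delta H_{298} \approx -165\ \mathrm{kJ\ mol^{-1}}.$$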
Geil, Mark D
2007-01-01
Computer-aided design (CAD) and computer-aided manufacturing systems have been adapted for specific use in prosthetics, providing practitioners with a means to digitally capture the shape of a patient's limb, modify the socket model using software, and automatically manufacture either a positive model to be used in the fabrication of a socket or the socket itself. The digital shape captured is a three-dimensional (3-D) model from which standard anthropometric measures can be easily obtained. This study recorded six common anthropometric dimensions from CAD shape files of three foam positive models of the residual limbs of persons with transtibial amputations. Two systems were used to obtain 3-D models of the residual limb, a noncontact optical system and a contact-based electromagnetic field system, and both experienced practitioners and prosthetics students conducted measurements. Measurements were consistent; the mean range (difference of maximum and minimum) across all measurements was 0.96 cm. Both systems provided similar results, and both groups used the systems consistently. Students were slightly more consistent than practitioners but not to a clinically significant degree. Results also compared favorably with traditional measurement, with differences relative to hand measurements of about 5 mm. These results suggest the routine use of digital shape capture for collection of patient volume information.
Chang, E-E; Chen, Tse-Lun; Pan, Shu-Yuan; Chen, Yi-Hung; Chiang, Pen-Chi
2013-09-15
In this study, direct and indirect carbonation of basic oxygen furnace slag (BOFS) coupled with cold-rolling wastewater (CRW) was carried out via a rotating packed bed (RPB). The solid products were qualitatively characterized by scanning electron microscopy (SEM) and X-ray diffraction (XRD) and quantitatively analyzed with thermogravimetric analysis (TGA). The leachate was analyzed with inductively coupled plasma-optical emission spectroscopy (ICP-OES). The results indicate that the maximum achievable carbonation conversion (MACC) of BOFS was 90.7%, corresponding to a capture capacity of 0.277 g CO₂/g of BOFS, by direct carbonation with CRW under a rotation speed of 750 rpm at 30 °C for 20 min. In addition, CO₂ mass balance among the gas, liquid, and solid phases within an RPB was well-developed, with an error less than 10%, to confirm the actual CO₂ capture capacity of BOFS with precision and accuracy. Furthermore, a reaction kinetic model based on mass balance was established to determine the reaction rate constant for various liquid agents (CRW and pure water). It was concluded that co-utilization of alkaline wastes including BOFS and CRW via the RPB is a novel approach for both enhancing CO₂ capture capacity and reducing the environmental impacts of alkaline wastes. Copyright © 2013 Elsevier B.V. All rights reserved.
2008-12-01
[Figure residue: components for Layer 3 data capture include NetPoll, ncap, tget, a monitor session, radio system, switch, router, user app, interface box, and GPS.] This model applies to most fixed ... developed a lightweight, custom implementation, termed ncap. As described in Section 3.1, the Ground Truth System provides a linkage between host-computer CPU time and GPS time, and ncap leverages this to perform highly precise time tagging of offered and received packets. Such ...
Fast, accurate, small-scale 3D scene capture using a low-cost depth sensor
Carey, Nicole; Nagpal, Radhika; Werfel, Justin
2017-01-01
Commercially available depth sensing devices are primarily designed for domains that are either macroscopic, or static. We develop a solution for fast microscale 3D reconstruction, using off-the-shelf components. By the addition of lenses, precise calibration of camera internals and positioning, and development of bespoke software, we turn an infrared depth sensor designed for human-scale motion and object detection into a device with mm-level accuracy capable of recording at up to 30Hz. PMID:28758159
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
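The abstract's spacing recommendation can be read against the half-normal detection function commonly used in spatial capture-recapture (the exact form used in the study is not stated here); the sketch below, with illustrative parameter values, shows how detection probability midway between two traps falls off as spacing grows relative to the spatial scale parameter sigma.

```python
import numpy as np

# Half-normal detection function commonly used in spatial capture-recapture:
# probability of detecting an individual whose activity centre is distance d
# from a trap. Parameter values are illustrative only.
def p_detect(d, p0=0.2, sigma=1.0):
    return p0 * np.exp(-d**2 / (2 * sigma**2))

sigma = 1.0   # spatial scale parameter (same units as trap spacing)
for spacing in [0.5, 1.0, 2.0, 3.0]:
    # detection probability at a point midway between two traps
    print(f"spacing = {spacing:>3} sigma -> p at midpoint = "
          f"{p_detect(spacing * sigma / 2, sigma=sigma):.3f}")
```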
Biology-Culture Co-evolution in Finite Populations.
de Boer, Bart; Thompson, Bill
2018-01-19
Language is the result of two concurrent evolutionary processes: biological and cultural inheritance. An influential evolutionary hypothesis known as the moving target problem implies inherent limitations on the interactions between our two inheritance streams that result from a difference in pace: the speed of cultural evolution is thought to rule out cognitive adaptation to culturally evolving aspects of language. We examine this hypothesis formally by casting it as a problem of adaptation in time-varying environments. We present a mathematical model of biology-culture co-evolution in finite populations: a generalisation of the Moran process, treating co-evolution as coupled non-independent Markov processes, providing a general formulation of the moving target hypothesis in precise probabilistic terms. Rapidly varying culture decreases the probability of biological adaptation. However, we show that this effect declines with population size and with stronger links between biology and culture: in realistically sized finite populations, stochastic effects can carry cognitive specialisations to fixation in the face of variable culture, especially if the effects of those specialisations are amplified through cultural evolution. These results support the view that language arises from interactions between our two major inheritance streams, rather than from one primary evolutionary process that dominates another.
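For readers unfamiliar with the baseline process being generalised, here is a minimal sketch of the classical Moran birth-death process and its fixation probability; the coupling to a culturally varying environment described in the abstract is not implemented, and the parameter values are illustrative.

```python
import numpy as np

# Classical Moran process: in each step one individual is chosen to reproduce
# (proportional to fitness) and one is chosen uniformly to die. We track how
# often a single mutant with relative fitness r reaches fixation.
rng = np.random.default_rng(7)

def moran_fixation(N=50, r=1.1, trials=2000):
    fixations = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < N:
            p_mut_reproduces = (r * mutants) / (r * mutants + (N - mutants))
            birth_is_mutant = rng.random() < p_mut_reproduces
            death_is_mutant = rng.random() < mutants / N
            mutants += int(birth_is_mutant) - int(death_is_mutant)
        fixations += mutants == N
    return fixations / trials

# Known result: fixation probability = (1 - 1/r) / (1 - 1/r**N) for r != 1
print("simulated:", moran_fixation(), "theory:", (1 - 1/1.1) / (1 - 1.1**-50))
```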
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A.; Ward, Michael J.; Fiez, Julie A.; Ghuman, Avniel Singh
2016-01-01
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
The topographical model of multiple sclerosis
Cook, Karin; De Nino, Scott; Fletcher, Madhuri
2016-01-01
Relapses and progression contribute to multiple sclerosis (MS) disease course, but neither the relationship between them nor the spectrum of clinical heterogeneity has been fully characterized. A hypothesis-driven, biologically informed model could build on the clinical phenotypes to encompass the dynamic admixture of factors underlying MS disease course. In this medical hypothesis, we put forth a dynamic model of MS disease course that incorporates localization and other drivers of disability to propose a clinical manifestation framework that visualizes MS in a clinically individualized way. The topographical model encapsulates 5 factors (localization of relapses and causative lesions; relapse frequency, severity, and recovery; and progression rate), visualized utilizing dynamic 3-dimensional renderings. The central hypothesis is that, like symptom recrudescence in Uhthoff phenomenon and pseudoexacerbations, progression clinically recapitulates prior relapse symptoms and unmasks previously silent lesions, incrementally revealing underlying lesion topography. The model uses real-time simulation software to depict disease course archetypes and illuminate several well-described but poorly reconciled phenomena including the clinical/MRI paradox and prognostic significance of lesion location and burden on disease outcomes. Utilization of this model could allow for earlier and more clinically precise identification of progressive MS and predictive implications can be empirically tested. PMID:27648465
Decoding and disrupting left midfusiform gyrus activity during word reading.
Hirshorn, Elizabeth A; Li, Yuanning; Ward, Michael J; Richardson, R Mark; Fiez, Julie A; Ghuman, Avniel Singh
2016-07-19
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation.
Interspecific variation in prey capture behavior by co-occurring Nepenthes pitcher plants
Chin, Lijin; Chung, Arthur YC; Clarke, Charles
2014-01-01
Pitcher plants of the genus Nepenthes capture a wide range of arthropod prey for nutritional benefit, using complex combinations of visual and olfactory signals and gravity-driven pitfall trapping mechanisms. In many localities throughout Southeast Asia, several different Nepenthes species occur in mixed populations. Often, the species present at any given location have strongly divergent trap structures and preliminary surveys indicate that different species trap different combinations of arthropod prey, even when growing at the same locality. On this basis, it has been proposed that co-existing Nepenthes species may be engaged in niche segregation with regards to arthropod prey, avoiding direct competition with congeners by deploying traps that have modifications that enable them to target specific prey types. We examined prey capture among 3 multi-species Nepenthes populations in Borneo, finding that co-existing Nepenthes species do capture different combinations of prey, but that significant interspecific variations in arthropod prey combinations can often be detected only at sub-ordinal taxonomic ranks. In all lowland Nepenthes species examined, the dominant prey taxon is Formicidae, but montane Nepenthes trap few (or no) ants and 2 of the 3 species studied have evolved to target alternative sources of nutrition, such as tree shrew feces. Using similarity and null model analyses, we detected evidence for niche segregation with regards to formicid prey among 5 lowland, sympatric Nepenthes species in Sarawak. However, we were unable to determine whether these results provide support for the niche segregation hypothesis, or whether they simply reflect unquantified variation in heterogeneous habitats and/or ant communities in the study sites. These findings are used to propose improvements to the design of field experiments that seek to test hypotheses about targeted prey capture patterns in Nepenthes. PMID:24481246
Chin, Lijin; Chung, Arthur Y C; Clarke, Charles
2014-01-01
Pitcher plants of the genus Nepenthes capture a wide range of arthropod prey for nutritional benefit, using complex combinations of visual and olfactory signals and gravity-driven pitfall trapping mechanisms. In many localities throughout Southeast Asia, several different Nepenthes species occur in mixed populations. Often, the species present at any given location have strongly divergent trap structures and preliminary surveys indicate that different species trap different combinations of arthropod prey, even when growing at the same locality. On this basis, it has been proposed that co-existing Nepenthes species may be engaged in niche segregation with regards to arthropod prey, avoiding direct competition with congeners by deploying traps that have modifications that enable them to target specific prey types. We examined prey capture among 3 multi-species Nepenthes populations in Borneo, finding that co-existing Nepenthes species do capture different combinations of prey, but that significant interspecific variations in arthropod prey combinations can often be detected only at sub-ordinal taxonomic ranks. In all lowland Nepenthes species examined, the dominant prey taxon is Formicidae, but montane Nepenthes trap few (or no) ants and 2 of the 3 species studied have evolved to target alternative sources of nutrition, such as tree shrew feces. Using similarity and null model analyses, we detected evidence for niche segregation with regards to formicid prey among 5 lowland, sympatric Nepenthes species in Sarawak. However, we were unable to determine whether these results provide support for the niche segregation hypothesis, or whether they simply reflect unquantified variation in heterogeneous habitats and/or ant communities in the study sites. These findings are used to propose improvements to the design of field experiments that seek to test hypotheses about targeted prey capture patterns in Nepenthes.
fMRI capture of auditory hallucinations: Validation of the two-steps method.
Leroy, Arnaud; Foucher, Jack R; Pins, Delphine; Delmaire, Christine; Thomas, Pierre; Roser, Mathilde M; Lefebvre, Stéphanie; Amad, Ali; Fovet, Thomas; Jaafari, Nemat; Jardri, Renaud
2017-10-01
Our purpose was to validate a reliable method to capture brain activity concomitant with hallucinatory events, which constitute frequent and disabling experiences in schizophrenia. Capturing hallucinations using functional magnetic resonance imaging (fMRI) remains very challenging. We previously developed a method based on a two-steps strategy including (1) multivariate data-driven analysis of per-hallucinatory fMRI recording and (2) selection of the components of interest based on a post-fMRI interview. However, two tests still need to be conducted to rule out critical pitfalls of conventional fMRI capture methods before this two-steps strategy can be adopted in hallucination research: replication of these findings on an independent sample and assessment of the reliability of the hallucination-related patterns at the subject level. To do so, we recruited a sample of 45 schizophrenia patients suffering from frequent hallucinations, 20 schizophrenia patients without hallucinations and 20 matched healthy volunteers; all participants underwent four different experiments. The main findings are (1) high accuracy in reporting unexpected sensory stimuli in an MRI setting; (2) good detection concordance between hypothesis-driven and data-driven analysis methods (as used in the two-steps strategy) when controlled unexpected sensory stimuli are presented; (3) good agreement of the two-steps method with the online button-press approach to capture hallucinatory events; (4) high spatial consistency of hallucinatory-related networks detected using the two-steps method on two independent samples. By validating the two-steps method, we advance toward the possible transfer of such technology to new image-based therapies for hallucinations. Hum Brain Mapp 38:4966-4979, 2017. © 2017 Wiley Periodicals, Inc.
Anderson, Paul A; Huber, Daniel R; Berzins, Ilze K
2012-12-01
A number of captive sandtiger sharks (Carcharias taurus) in public aquaria have developed spinal deformities over the past decade, ranging in severity from mild curvature to spinal fracture and severe subluxation. To determine the frequency and etiologic basis of this disease, U.S. public aquaria participated in a two-stage epidemiologic study of resident sharks: 1) a history and husbandry survey and 2) hematology, clinical chemistry, and radiography conducted during health exams. Eighteen aquaria submitted data, samples, or both from 73 specimens, including 19 affected sharks (26%). Sharks caught off the Rhode Island coast or by pound net were smaller at capture and demonstrated a higher prevalence of deformity than did larger sharks caught from other areas via hook and line. Relative to healthy sharks, affected sharks were deficient in zinc, potassium, and vitamins C and E. Capture and transport results lead to two likely etiologic hypotheses: 1) that the pound-net capture process induces spinal trauma that becomes exacerbated over time in aquarium environments or 2) that small (and presumably young) sharks caught by pound net are exposed to disease-promoting conditions (including diet or habitat deficiencies) in aquaria during the critical growth phase of their life history. The latter hypothesis is further supported by nutrient deficiencies among affected sharks documented in this study; potassium, zinc, and vitamin C play critical roles in proper cartilage-collagen development and maintenance. These correlative findings indicate that public aquaria should give careful consideration to the choice of collection methods and size at capture, and should supplement diets to provide the nutrients required for proper development and maintenance of cartilaginous tissue.
Schoeberl, Tobias; Ansorge, Ulrich
2018-05-15
Prior research suggested that attentional capture by subliminal abrupt onset cues is stimulus driven. In these studies, responses were faster when a searched-for target appeared at the location of a preceding abrupt onset cue compared to when the same target appeared at a location away from the cue (cueing effect), although the earlier onset of the cue was subliminal, because it appeared as one out of three horizontally aligned placeholders with a lead time that was too short to be noticed by the participants. Because the cueing effects seemed to be independent of top-down search settings for target features, the effect was attributed to stimulus-driven attentional capture. However, prior studies did not investigate if participants experienced the cues as useful temporal warning signals and, therefore, attended to the cues in a top-down way. Here, we tested to what extent search settings based on temporal contingencies between cue and target onset could be responsible for spatial cueing effects. Cueing effects were replicated, and we showed that removing temporal contingencies between cue and target onset did not diminish the cueing effects (Experiments 1 and 2). Neither presenting the cues in the majority of trials after target onset (Experiment 1) nor presenting cue and target unrelated to one another (Experiment 2) led to a significant reduction of the spatial cueing effects. Results thus support the hypothesis that the subliminal cues captured attention in a stimulus-driven way.
Value-based attentional capture influences context-dependent decision-making
Cha, Kexin; Rangsipat, Napat; Serences, John T.
2015-01-01
Normative theories posit that value-based decision-making is context independent. However, decisions between two high-value options can be suboptimally biased by the introduction of a third low-value option. This context-dependent modulation is consistent with the divisive normalization of the value of each stimulus by the total value of all stimuli. In addition, an independent line of research demonstrates that pairing a stimulus with a high-value outcome can lead to attentional capture that can mediate the efficiency of visual information processing. Here we tested the hypothesis that value-based attentional capture interacts with value-based normalization to influence the optimality of decision-making. We used a binary-choice paradigm in which observers selected between two targets and the color of each target indicated the magnitude of their reward potential. Observers also had to simultaneously ignore a task-irrelevant distractor rendered in a color that was previously associated with a specific reward magnitude. When the color of the task-irrelevant distractor was previously associated with a high reward, observers responded more slowly and less optimally. Moreover, as the learned value of the distractor increased, electrophysiological data revealed an attenuation of the lateralized N1 and N2Pc responses evoked by the relevant choice stimuli and an attenuation of the late positive deflection (LPD). Collectively, these behavioral and electrophysiological data suggest that value-based attentional capture and value-based normalization jointly mediate the influence of context on free-choice decision-making. PMID:25995350
Value-based attentional capture influences context-dependent decision-making.
Itthipuripat, Sirawaj; Cha, Kexin; Rangsipat, Napat; Serences, John T
2015-07-01
Normative theories posit that value-based decision-making is context independent. However, decisions between two high-value options can be suboptimally biased by the introduction of a third low-value option. This context-dependent modulation is consistent with the divisive normalization of the value of each stimulus by the total value of all stimuli. In addition, an independent line of research demonstrates that pairing a stimulus with a high-value outcome can lead to attentional capture that can mediate the efficiency of visual information processing. Here we tested the hypothesis that value-based attentional capture interacts with value-based normalization to influence the optimality of decision-making. We used a binary-choice paradigm in which observers selected between two targets and the color of each target indicated the magnitude of their reward potential. Observers also had to simultaneously ignore a task-irrelevant distractor rendered in a color that was previously associated with a specific reward magnitude. When the color of the task-irrelevant distractor was previously associated with a high reward, observers responded more slowly and less optimally. Moreover, as the learned value of the distractor increased, electrophysiological data revealed an attenuation of the lateralized N1 and N2Pc responses evoked by the relevant choice stimuli and an attenuation of the late positive deflection (LPD). Collectively, these behavioral and electrophysiological data suggest that value-based attentional capture and value-based normalization jointly mediate the influence of context on free-choice decision-making. Copyright © 2015 the American Physiological Society.
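The divisive normalization invoked in both versions of this abstract is not written out; a common formulation in the value-normalization literature (not necessarily the exact variant used by the authors) is

$$v_i' = \frac{v_i}{\sigma + \sum_{j} v_j},$$

where $v_i$ is the value of option $i$, the sum runs over all options in the choice set (including the low-value distractor), and $\sigma$ is a semi-saturation constant. Adding a third, low-value option therefore shrinks the normalized difference between the two high-value targets.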
Design of video processing and testing system based on DSP and FPGA
NASA Astrophysics Data System (ADS)
Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na
2007-12-01
Based on a high-speed Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), a miniaturized, low-power video capture, processing and display system is presented. In this system, a triple-buffering scheme is used for capture and display, so that the application can always obtain a new buffer without waiting. The Digital Signal Processor provides image-processing capability and is used to detect the boundary of the workpiece's image. A video graduation technique is used to aim at the position to be tested, which also enhances the system's flexibility. A character-superposition technique implemented on the DSP displays the test result on the screen in character format. This system can process image information in real time, ensure test precision, and help to enhance product quality and quality management.
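The triple-buffering scheme mentioned above is a general pattern; the sketch below shows it in Python under the assumption of one capture thread and one display thread (the original system implements it on DSP/FPGA hardware, so this is purely illustrative).

```python
import threading

# Minimal triple-buffering sketch: the capture side always has a free buffer
# to write into, and the display side always reads the most recently completed
# frame, so neither waits on the other.
class TripleBuffer:
    def __init__(self):
        self._buffers = [None, None, None]     # back, ready, front slots
        self._back, self._ready, self._front = 0, 1, 2
        self._lock = threading.Lock()
        self._fresh = False

    def publish(self, frame):
        """Capture side: write a frame, then swap back <-> ready."""
        self._buffers[self._back] = frame
        with self._lock:
            self._back, self._ready = self._ready, self._back
            self._fresh = True

    def latest(self):
        """Display side: swap ready <-> front if a new frame is available."""
        with self._lock:
            if self._fresh:
                self._front, self._ready = self._ready, self._front
                self._fresh = False
        return self._buffers[self._front]

buf = TripleBuffer()
buf.publish("frame-1")
buf.publish("frame-2")
print(buf.latest())   # -> frame-2; capture never blocked on display
```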
Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.
You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen
2017-03-31
The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase difference, thereby influencing the real-time performance. This study introduces a smart active optical sensor, in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies. The method can remove zero frequency by using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in the precision of the novel method unlike the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a nature-appearance 3D digital face.
Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor
You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen
2017-01-01
The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase difference, thereby influencing the real-time performance. This study introduces a smart active optical sensor, in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies. The method can remove zero frequency by using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in the precision of the novel method unlike the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a nature-appearance 3D digital face. PMID:28362349
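Neither version of this abstract states which phase-shifting recovery the composite pattern replaces; for orientation only, the standard four-step phase-shifting formula, with fringe images $I_1,\dots,I_4$ acquired at phase shifts $0, \pi/2, \pi, 3\pi/2$, recovers the wrapped phase as

$$\phi(x,y) = \arctan\!\left(\frac{I_4(x,y) - I_2(x,y)}{I_1(x,y) - I_3(x,y)}\right).$$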
Automatic detection of spermatozoa for laser capture microdissection.
Vandewoestyne, Mado; Van Hoofstat, David; Van Nieuwerburgh, Filip; Deforce, Dieter
2009-03-01
In sexual assault crimes, differential extraction of spermatozoa from vaginal swab smears is often ineffective, especially when only a few spermatozoa are present in an overwhelming amount of epithelial cells. Laser capture microdissection (LCM) enables the precise separation of spermatozoa and epithelial cells. However, standard sperm-staining techniques are non-specific and rely on sperm morphology for identification. Moreover, manual screening of the microscope slides is time-consuming and labor-intensive. Here, we describe an automated screening method to detect spermatozoa stained with Sperm HY-LITER. Different ratios of spermatozoa and epithelial cells were used to assess the automatic detection method. In addition, real postcoital samples were also screened. Detected spermatozoa were isolated using LCM and DNA analysis was performed. Robust DNA profiles without allelic dropout could be obtained from as little as 30 spermatozoa recovered from postcoital samples, showing that the staining had no significant influence on DNA recovery.
Dynamic imaging with electron microscopy
Campbell, Geoffrey; McKeown, Joe; Santala, Melissa
2018-02-13
Livermore researchers have perfected an electron microscope to study fast-evolving material processes and chemical reactions. By applying engineering, microscopy, and laser expertise to the decades-old technology of electron microscopy, the dynamic transmission electron microscope (DTEM) team has developed a technique that can capture images of phenomena that are both very small and very fast. DTEM uses a precisely timed laser pulse to achieve a short but intense electron beam for imaging. When synchronized with a dynamic event in the microscope's field of view, DTEM allows scientists to record and measure material changes in action. A new movie-mode capability, which earned a 2013 R&D 100 Award from R&D Magazine, uses up to nine laser pulses to sequentially capture fast, irreversible, even one-of-a-kind material changes at the nanometer scale. DTEM projects are advancing basic and applied materials research, including such areas as nanostructure growth, phase transformations, and chemical reactions.
Szostak, Justyna; Martin, Florian; Talikka, Marja; Peitsch, Manuel C; Hoeng, Julia
2016-01-01
The cellular and molecular mechanisms behind the process of atherosclerotic plaque destabilization are complex, and molecular data from aortic plaques are difficult to interpret. Biological network models may overcome these difficulties and precisely quantify the molecular mechanisms impacted during disease progression. The atherosclerosis plaque destabilization biological network model was constructed with the semiautomated curation pipeline, BELIEF. Cellular and molecular mechanisms promoting plaque destabilization or rupture were captured in the network model. Public transcriptomic data sets were used to demonstrate the specificity of the network model and to capture the different mechanisms that were impacted in ApoE -/- mouse aorta at 6 and 32 weeks. We concluded that network models combined with the network perturbation amplitude algorithm provide a sensitive, quantitative method to follow disease progression at the molecular level. This approach can be used to investigate and quantify molecular mechanisms during plaque progression.
Atomic sites and stability of Cs+ captured within zeolitic nanocavities
Yoshida, Kaname; Toyoura, Kazuaki; Matsunaga, Katsuyuki; Nakahira, Atsushi; Kurata, Hiroki; Ikuhara, Yumi H.; Sasaki, Yukichi
2013-01-01
Zeolites have potential application as ion-exchangers, catalysts and molecular sieves. Zeolites are once again drawing attention in Japan as stable adsorbents and solidification materials of fission products, such as 137Cs+ from damaged nuclear-power plants. Although there is a long history of scientific studies on the crystal structures and ion-exchange properties of zeolites for practical application, there are still open questions, at the atomic-level, on the physical and chemical origins of selective ion-exchange abilities of different cations and detailed atomic structures of exchanged cations inside the nanoscale cavities of zeolites. Here, the precise locations of Cs+ ions captured within A-type zeolite were analyzed using high-resolution electron microscopy. Together with theoretical calculations, the stable positions of absorbed Cs+ ions in the nanocavities are identified, and the bonding environment within the zeolitic framework is revealed to be a key factor that influences the locations of absorbed cations. PMID:23949184
Swartman, B; Frere, D; Wei, W; Schnetzke, M; Beisemann, N; Keil, H; Franke, J; Grützner, P A; Vetter, S Y
2017-10-01
A new software application can be used without fixed reference markers or a registration process in wire placement. The aim was to compare placement of Kirschner wires (K-wires) into the proximal femur with the software application versus the conventional method without guiding. As the study hypothesis, we assumed fewer placement attempts, a shorter procedure time and a shorter fluoroscopy time when using the software. Comparable precision inside a proximal femur bone model was assumed for the software application. The software detects a K-wire within the 2D fluoroscopic image. By evaluating its direction and tip location, it superimposes a trajectory on the image, visualizing the intended direction of the K-wire. The K-wire was positioned in 20 artificial bones with the use of software by one surgeon; 20 bones served as conventional controls. A brass thumb tack was placed into the femoral head and its tip targeted with the wire. Number of placement attempts, duration of the procedure, duration of fluoroscopy time and distance to the target in a postoperative 3D scan were recorded. Compared with the conventional method, use of the application showed fewer attempts for optimal wire placement (p=0.026), shorter duration of surgery (p=0.004), shorter fluoroscopy time (p=0.024) and higher precision (p=0.018). Final wire position was achieved in the first attempt in 17 out of 20 cases with the software and in 9 out of 20 cases with the conventional method. The study hypothesis was confirmed. The new application optimised the process of K-wire placement in the proximal femur in an artificial bone model while also improving precision. Benefits lie especially in the reduction of placement attempts and reduction of fluoroscopy time under the aspect of radiation protection. The software runs on a conventional image intensifier and can therefore be easily integrated into the daily surgical routine. Copyright © 2017 Elsevier Ltd. All rights reserved.
Goldberg, Joshua F; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L Scott; Wangchuk, Tshewang R; Lukacs, Paul
2015-01-01
Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human wildlife-conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010-2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the "true" explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25-15.93), comparable to contemporary estimates in Asia. These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest.
Goldberg, Joshua F.; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L. Scott; Wangchuk, Tshewang R.; Lukacs, Paul
2015-01-01
Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human wildlife-conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010–2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the “true” explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25–15.93), comparable to contemporary estimates in Asia. These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest. PMID:26536231
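The Bayes factors used for model selection in both versions of this abstract follow the standard definition: for candidate models $M_1$ and $M_2$ and data $D$,

$$\mathrm{BF}_{12} = \frac{p(D \mid M_1)}{p(D \mid M_2)} = \frac{\int p(D \mid \theta_1, M_1)\, p(\theta_1 \mid M_1)\, d\theta_1}{\int p(D \mid \theta_2, M_2)\, p(\theta_2 \mid M_2)\, d\theta_2},$$

the ratio of marginal likelihoods, with values well above 1 favouring $M_1$.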
Precision and relative effectiveness of a purse seine for sampling age-0 river herring in lakes
Devine, Matthew T.; Roy, Allison; Whiteley, Andrew R.; Gahagan, Benjamin I.; Armstrong, Michael P.; Jordaan, Adrian
2018-01-01
Stock assessments for anadromous river herring, collectively Alewife Alosa pseudoharengus and Blueback Herring A. aestivalis, lack adequate demographic information, particularly with respect to early life stages. Although sampling adult river herring is increasingly common throughout their range, currently no standardized, field‐based, analytical methods exist for estimating juvenile abundance in freshwater lakes. The objective of this research was to evaluate the relative effectiveness and sampling precision of a purse seine for estimating densities of age‐0 river herring in freshwater lakes. We used a purse seine to sample age‐0 river herring in June–September 2015 and June–July 2016 in 16 coastal freshwater lakes in the northeastern USA. Sampling effort varied from two seine hauls to more than 50 seine hauls per lake. Catch rates were highest in June and July, and sampling precision was maximized in July. Sampling at night (versus day) in open water (versus littoral areas) was most effective for capturing newly hatched larvae and juveniles up to ca. 100 mm TL. Bootstrap simulation results indicated that sampling precision of CPUE estimates increased with sampling effort, and there was a clear threshold beyond which increased effort resulted in negligible increases in precision. The effort required to produce precise CPUE estimates, as determined by the CV, was dependent on lake size; river herring densities could be estimated with up to 10 purse‐seine hauls (one‐two nights) in a small lake (<50 ha) and 15–20 hauls (two‐three nights) in a large lake (>50 ha). Fish collection techniques using a purse seine as described in this paper are likely to be effective for estimating recruit abundance of river herring in freshwater lakes across their range.
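A minimal sketch of the bootstrap precision analysis described, resampling seine hauls with replacement and tracking the CV of mean catch per haul as effort increases; the simulated catch distribution is illustrative and not the study's data.

```python
import numpy as np

# Bootstrap precision analysis for CPUE: resample hauls with replacement and
# track how the CV of mean catch-per-haul shrinks as the number of hauls
# increases. Catches are simulated (negative binomial, to mimic patchy
# schools), not data from the study.
rng = np.random.default_rng(3)
observed_catches = rng.negative_binomial(n=2, p=0.05, size=50)  # 50 hauls

def bootstrap_cv(catches, n_hauls, n_boot=2000):
    means = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(catches, size=n_hauls, replace=True)
        means[b] = sample.mean()
    return means.std(ddof=1) / means.mean()

for effort in [5, 10, 15, 20, 30]:
    print(f"{effort:>2} hauls -> bootstrap CV of mean CPUE = "
          f"{bootstrap_cv(observed_catches, effort):.2f}")
```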
Critical Care and Personalized or Precision Medicine: Who needs whom?
Sugeir, Shihab; Naylor, Stephen
2018-02-01
The current paradigm of modern healthcare is a reactive response to patient symptoms, subsequent diagnosis and corresponding treatment of the specific disease(s). This approach is predicated on methodologies first espoused by the Cnidean School of Medicine approximately 2,500 years ago. More recently, escalating healthcare costs and relatively poor disease treatment outcomes have fomented a rethink in how we carry out medical practices. This has led to the emergence of "P-Medicine" in the form of Personalized and Precision Medicine. The terms are used interchangeably, but in fact there are significant differences in the way they are implemented. The former relies on an "N-of-1" model whereas the latter uses a "1-in-N" model. Personalized Medicine is still in a fledgling and evolutionary phase and there has been much debate over its current status and future prospects. A confounding factor has been the sudden development of Precision Medicine, which has currently captured the imagination of policymakers responsible for modern healthcare systems. There is some confusion over the terms Personalized versus Precision Medicine. Here we attempt to define the key differences and working definitions of each P-Medicine approach, as well as a taxonomic relationship tree. Finally, we discuss the impact of Personalized and Precision Medicine on the practice of Critical Care Medicine (CCM). Practitioners of CCM have been participating in Personalized Medicine unknowingly as it takes the protocols of sepsis, mechanical ventilation, and daily awakening trials and applies it to each individual patient. However, the immediate next step for CCM should be an active development of Precision Medicine. This developmental process should break down the silos of modern medicine and create a multidisciplinary approach between clinicians and basic/translational scientists. Copyright © 2017 Elsevier Inc. All rights reserved.
Zooming in on neutrino oscillations with DUNE
NASA Astrophysics Data System (ADS)
Srivastava, Rahul; Ternes, Christoph A.; Tórtola, Mariam; Valle, José W. F.
2018-05-01
We examine the capabilities of the DUNE experiment as a probe of the neutrino mixing paradigm. Taking the current status of neutrino oscillations and the design specifications of DUNE, we determine the experiment's potential to probe the structure of neutrino mixing and CP violation. We focus on the poorly determined parameters θ23 and δCP and consider both two and seven years of run. We take various benchmarks as our true values, such as the current preferred values of θ23 and δCP, as well as several theory-motivated choices. We determine quantitatively DUNE's potential to perform a precision measurement of θ23, as well as to test the CP violation hypothesis in a model-independent way. We find that, after running for seven years, DUNE will make a substantial step in the precise determination of these parameters, bringing to quantitative test the predictions of various theories of neutrino mixing.
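The abstract does not reproduce the oscillation formalism; purely for orientation, the two-flavour vacuum approximation for an appearance probability is

$$P(\nu_\mu \to \nu_e) \simeq \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right),$$

while the full three-flavour probabilities measured by DUNE additionally depend on θ23, δCP, and matter effects.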
Grabar, Natalia; Krivine, Sonia; Jaulent, Marie-Christine
2007-10-11
Making the distinction between expert and non-expert health documents can help users select the information that is most suitable for them, according to whether or not they are familiar with medical terminology. This issue is particularly important for information retrieval. In our work we address this problem through stylistic corpus analysis and the application of machine learning algorithms. Our hypothesis is that this distinction can be made on the basis of a small number of features and that such features can be language and domain independent. The features were acquired from a source corpus (Russian language, diabetes topic) and then tested on a target corpus (French language, pneumology topic) as well as on the source corpus. These cross-language features achieve 90% precision and 93% recall for non-expert documents in the source language, and 85% precision and 74% recall for expert documents in the target language.
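A minimal sketch of this kind of pipeline (toy documents and made-up surface features, not the authors' feature set): compute a handful of language-independent stylistic cues per document, train a simple classifier, and report per-class precision and recall.

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import classification_report

    def stylistic_features(text):
        """A few crude, language-independent stylistic cues (illustrative only)."""
        words = text.split()
        n_words = max(len(words), 1)
        avg_word_len = sum(len(w) for w in words) / n_words
        digit_ratio = sum(ch.isdigit() for ch in text) / max(len(text), 1)
        upper_ratio = sum(ch.isupper() for ch in text) / max(len(text), 1)
        return [avg_word_len, digit_ratio, upper_ratio]

    # Toy corpus: 1 = expert document, 0 = non-expert document.
    docs = [
        "Glycated hemoglobin HbA1c 7.8% indicates suboptimal glycemic control.",
        "Your blood sugar was a bit high, so keep an eye on sweets.",
        "Spirometry revealed an FEV1/FVC ratio of 0.62 consistent with obstruction.",
        "Breathing tests showed your lungs push air out more slowly than expected.",
    ]
    labels = [1, 0, 1, 0]

    X = [stylistic_features(d) for d in docs]
    clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
    print(classification_report(labels, clf.predict(X),
                                target_names=["non-expert", "expert"]))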
Spatial capture–recapture with partial identity: An application to camera traps
Augustine, Ben C.; Royle, J. Andrew; Kelly, Marcella J.; Satter, Christopher B.; Alonso, Robert S.; Boydston, Erin E.; Crooks, Kevin R.
2018-01-01
Camera trapping surveys frequently capture individuals whose identity is only known from a single flank. The most widely used methods for incorporating these partial identity individuals into density analyses discard some of the partial identity capture histories, reducing precision and, as had not previously been recognized, introducing bias. Here, we present the spatial partial identity model (SPIM), which uses the spatial locations where partial identity samples are captured to probabilistically resolve their complete identities, allowing all partial identity samples to be used in the analysis. We show that the SPIM outperforms other analytical alternatives. We then apply the SPIM to an ocelot data set collected on a trapping array with double-camera stations and a bobcat data set collected on a trapping array with single-camera stations. The SPIM improves inference in both cases and, in the ocelot example, individual sex determined from photographs is used to further resolve partial identities, one of which is resolved to near certainty. The SPIM opens the door to the investigation of trapping designs that deviate from the standard two-camera design and to the combination of other data types whose identities cannot be deterministically linked, and it can be extended to the problem of partial genotypes.
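A rough sketch of the spatial intuition behind the SPIM (not the authors' full model, which resolves latent identities within an MCMC sampler): under an assumed half-normal detection scale sigma, a left-flank and a right-flank capture history are more plausibly the same individual when their capture locations are close.

    import numpy as np

    sigma = 0.8  # km, assumed spatial scale of a half-normal detection function

    def pairing_score(left_xy, right_xy):
        """Unnormalized plausibility that two partial histories share one activity center."""
        d = np.linalg.norm(np.mean(left_xy, axis=0) - np.mean(right_xy, axis=0))
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    # Capture locations (km) of two left-flank and two right-flank histories (toy data).
    left = [np.array([[1.0, 2.0], [1.2, 2.1]]), np.array([[5.0, 5.0]])]
    right = [np.array([[1.1, 1.9]]), np.array([[4.8, 5.3], [5.2, 4.9]])]

    scores = np.array([[pairing_score(l, r) for r in right] for l in left])
    print(np.round(scores / scores.sum(axis=1, keepdims=True), 3))  # row-normalized pairings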
Simulation of Light Collection for Neutron Electrical Dipole Moment measurement
NASA Astrophysics Data System (ADS)
Ji, Pan; nEDM Collaboration
2017-09-01
The nEDM (neutron electric dipole moment) measurement addresses a critical topic in particle physics and the Standard Model: a nonzero neutron electric dipole moment would signal violation of time-reversal symmetry and, through the CPT theorem, CP violation, which is connected to the matter/antimatter asymmetry of the universe. The neutron electric dipole moment was first measured in 1950 by Smith, Purcell, and Ramsey at the Oak Ridge Reactor, the first intense neutron source. This measurement showed that the neutron is very nearly round (to better than one part in a million). The goal of the nEDM experiment is to further improve the precision of this measurement by another factor of 100. The experimental signal is detected by collecting the photons generated when neutrons are captured on helium-3 in the liquid helium of the measurement cell. The Geant4 simulation project in which I participate models the light-collection process in order to improve the design for higher collection efficiency. The simulated geometry includes the light source, reflector, wavelength-shifting fibers, the wavelength-shifting coating (TPB), and acrylic, as in the real experiment. The UV photons exiting the helium undergo two wavelength-shifting processes, in the TPB and in the fibers, before they are finally captured. This work is part of the Oak Ridge National Laboratory Neutron Electric Dipole Moment measurement project.
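A toy Monte Carlo sketch of the light-collection chain described above (made-up stage efficiencies, not the Geant4 geometry or its physics): each scintillation photon must reach the TPB coating, be wavelength-shifted, and be captured by a fiber before it counts as detected.

    import random

    random.seed(0)

    # Assumed per-photon probabilities for each stage of the light-collection chain.
    P_REACH_TPB = 0.6      # survives reflections and reaches the TPB-coated acrylic
    P_TPB_SHIFT = 0.5      # converted to visible light by the TPB coating
    P_FIBER_CAPTURE = 0.1  # re-emitted photon trapped and guided by a wavelength-shifting fiber

    def detected(n_photons):
        hits = 0
        for _ in range(n_photons):
            if (random.random() < P_REACH_TPB and
                    random.random() < P_TPB_SHIFT and
                    random.random() < P_FIBER_CAPTURE):
                hits += 1
        return hits

    n = 100_000
    print(f"toy collection efficiency ~ {detected(n) / n:.3%}")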
Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks.
Ghosh, Saurav; Chakraborty, Prithwish; Nsoesie, Elaine O; Cohn, Emily; Mekaru, Sumiko R; Brownstein, John S; Ramakrishnan, Naren
2017-01-19
In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peak and wane during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends in reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include applicability to a wide range of diseases and the ability to capture disease dynamics, including seasonality and abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China, and India. We demonstrate that temporal topic trends extracted from disease-related news reports successfully capture the dynamics of multiple outbreaks, such as whooping cough in the U.S. (2012) and dengue outbreaks in India (2013) and China (2014). Our observations also suggest that, when news coverage is uniform, efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations.
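The final step, relating an extracted topic trend to reported case counts with time-series regression, can be sketched as follows (synthetic weekly data and a plain linear fit; the supervised temporal topic model itself is not reproduced here).

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Synthetic weekly series: a news "topic intensity" trend and reported case counts.
    weeks = np.arange(52)
    topic_trend = np.exp(-0.5 * ((weeks - 30) / 6.0) ** 2)           # media attention peak
    cases = 400 * topic_trend + rng.normal(0, 15, size=weeks.size)   # noisy incidence

    # Regress case counts on the topic trend (a lag could be added to model lead time).
    X = topic_trend.reshape(-1, 1)
    model = LinearRegression().fit(X, cases)
    print(f"R^2 = {model.score(X, cases):.2f}, "
          f"slope = {model.coef_[0]:.1f} cases per unit topic intensity")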
Hou, Jianwen; Cui, Lele; Chen, Runhai; Xu, Xiaodong; Chen, Jiayue; Yin, Ligang; Liu, Jingchuan; Shi, Qiang; Yin, Jinghua
2018-03-01
A versatile platform allowing capture and detection of normal and dysfunctional cells on the same patterned surface is important for accessing cellular mechanisms, developing diagnostic assays, and implementing therapy. Here, an original and effective method for fabricating a binary polymer brush pattern is developed for controlled cell adhesion. The binary polymer brush pattern, composed of poly(N-isopropylacrylamide) (PNIPAAm) and poly[poly(ethylene glycol) methyl ether methacrylate] (POEGMA) chains, is obtained simply via a combination of surface-initiated photopolymerization and surface-activated free radical polymerization. This method is unique in that it does not use any protecting groups or backfilling with immobilized initiator. It is demonstrated that precise, well-defined binary polymer patterns with high resolution are fabricated using this facile method. PNIPAAm chains capture and release cells through their thermoresponsiveness, while POEGMA chains have a high capability to capture dysfunctional cells specifically, inducing a temperature-driven switch on the pattern from arrays of normal red blood cells (RBCs) to arrays of hemolytic RBCs. This novel platform, composed of a binary polymer brush pattern, is smart and versatile and opens up pathways to potential applications in microsensors, biochips, and bioassays. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Real-time animation software for customized training to use motor prosthetic systems.
Davoodi, Rahman; Loeb, Gerald E
2012-03-01
Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite their ability to create sophisticated animation and high-quality rendering, existing animation software packages are not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact, and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training, or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations, but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.
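A minimal sketch of the kind of precisely timed, open-loop playback loop described above (generic Python; none of the names below are MSMS APIs): frames are scheduled against a fixed clock so that playback timing does not drift.

    import math
    import time

    def synthesized_motion(t):
        """Hypothetical elbow-angle trajectory (degrees) standing in for a motion file."""
        return 45 + 30 * math.sin(2 * math.pi * 0.5 * t)

    def play(duration_s=2.0, frame_rate_hz=60.0, render=print):
        """Open-loop playback: advance frames on a fixed schedule to keep timing precise."""
        dt = 1.0 / frame_rate_hz
        start = time.perf_counter()
        frame = 0
        while frame * dt < duration_s:
            render(f"t={frame * dt:.3f}s elbow={synthesized_motion(frame * dt):.1f} deg")
            frame += 1
            # Sleep until the next scheduled frame time so timing errors do not accumulate.
            time.sleep(max(0.0, start + frame * dt - time.perf_counter()))

    play(duration_s=0.1)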
Xu, Rong; Wang, Quanqiu
2014-02-01
Targeted drugs dramatically improve the treatment outcomes in cancer patients; however, these innovative drugs are often associated with unexpectedly high cardiovascular toxicity. Currently, cardiovascular safety represents both a challenging issue for drug developers, regulators, researchers, and clinicians and a concern for patients. While FDA drug labels have captured many of these events, spontaneous reporting systems are a main source for post-marketing drug safety surveillance in 'real-world' (outside of clinical trials) cancer patients. In this study, we present approaches to extracting, prioritizing, filtering, and confirming cardiovascular events associated with targeted cancer drugs from the FDA Adverse Event Reporting System (FAERS). The dataset includes records of 4,285,097 patients from FAERS. We first extracted drug-cardiovascular event (drug-CV) pairs from FAERS through named entity recognition and mapping processes. We then compared six ranking algorithms in prioritizing true positive signals among extracted pairs using known drug-CV pairs derived from FDA drug labels. We also developed three filtering algorithms to further improve precision. Finally, we manually validated extracted drug-CV pairs using 21 million published MEDLINE records. We extracted a total of 11,173 drug-CV pairs from FAERS. We showed that ranking by frequency is significantly more effective than by the five standard signal detection methods (246% improvement in precision for top-ranked pairs). The filtering algorithm we developed further improved overall precision by 91.3%. By manual curation using literature evidence, we show that about 51.9% of the 617 drug-CV pairs that appeared in both FAERS and MEDLINE sentences are true positives. In addition, 80.6% of these positive pairs have not been captured by FDA drug labeling. The unique drug-CV association dataset that we created based on FAERS could facilitate our understanding and prediction of cardiotoxic events associated with targeted cancer drugs. Copyright © 2013 Elsevier Inc. All rights reserved.
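A minimal sketch of the prioritization step (toy report counts; frequency ranking as described, plus a proportional reporting ratio shown only as a representative disproportionality statistic, not necessarily one of the paper's five methods).

    from collections import Counter

    # Toy FAERS-like reports: one (drug, adverse event) pair per report.
    reports = [
        ("drugA", "hypertension"), ("drugA", "hypertension"), ("drugA", "QT prolongation"),
        ("drugB", "hypertension"), ("drugB", "nausea"), ("drugB", "nausea"),
        ("drugA", "hypertension"), ("drugB", "QT prolongation"),
    ]

    pair_counts = Counter(reports)
    drug_counts = Counter(d for d, _ in reports)
    event_counts = Counter(e for _, e in reports)
    N = len(reports)

    def prr(drug, event):
        """Proportional reporting ratio: event rate with this drug vs. all other drugs."""
        a = pair_counts[(drug, event)]
        with_drug = drug_counts[drug]
        without_drug = N - with_drug
        others = event_counts[event] - a
        return (a / with_drug) / max(others / max(without_drug, 1), 1e-9)

    ranked = sorted(pair_counts, key=pair_counts.get, reverse=True)  # frequency ranking
    for drug, event in ranked:
        print(f"{drug} - {event}: n={pair_counts[(drug, event)]}, PRR={prr(drug, event):.2f}")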
Leuthaeuser, Janelle B; Knutson, Stacy T; Kumar, Kiran; Babbitt, Patricia C; Fetrow, Jacquelyn S
2015-09-01
The development of accurate protein function annotation methods has emerged as a major unsolved biological problem. Protein similarity networks, one approach to function annotation via annotation transfer, group proteins into similarity-based clusters. An underlying assumption is that the edge metric used to identify such clusters correlates with functional information. In this contribution, this assumption is evaluated by observing topologies in similarity networks using three different edge metrics: sequence (BLAST), structure (TM-Align), and active site similarity (active site profiling, implemented in DASP). Network topologies for four well-studied protein superfamilies (enolase, peroxiredoxin (Prx), glutathione transferase (GST), and crotonase) were compared with curated functional hierarchies and structure. As expected, network topology differs depending on the edge metric, and comparison of topologies provides valuable information on structure/function relationships. Subnetworks based on active site similarity correlate with known functional hierarchies at a single edge threshold more often than sequence- or structure-based networks. Sequence- and structure-based networks are useful for identifying sequence and domain similarities and differences; therefore, it is important to consider the clustering goal before deciding on an appropriate edge metric. Further, conserved active site residues identified in enolase and GST active site subnetworks correspond with published functionally important residues. Extension of this analysis yields predictions of functionally determinant residues for GST subgroups. These results support the hypothesis that active site similarity-based networks reveal clusters that share functional details and lay the foundation for capturing functionally relevant hierarchies using an approach that is both automatable and able to deliver greater precision in function annotation than current similarity-based methods. © 2015 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
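The thresholded-network idea can be sketched as follows (toy similarity scores standing in for BLAST/TM-Align/DASP metrics, with networkx chosen here for convenience): keep edges at or above a threshold and read clusters off as connected components.

    import networkx as nx

    # Toy pairwise similarity scores (higher = more similar) for six proteins.
    similarities = {
        ("p1", "p2"): 0.92, ("p2", "p3"): 0.88, ("p1", "p3"): 0.75,
        ("p4", "p5"): 0.90, ("p5", "p6"): 0.40, ("p3", "p4"): 0.30,
    }

    def clusters_at_threshold(similarities, threshold):
        """Keep edges at or above the threshold; clusters are connected components."""
        g = nx.Graph()
        g.add_nodes_from({n for pair in similarities for n in pair})
        g.add_edges_from(pair for pair, s in similarities.items() if s >= threshold)
        return [sorted(c) for c in nx.connected_components(g)]

    for t in (0.5, 0.8, 0.95):
        print(t, clusters_at_threshold(similarities, t))

Sweeping the threshold in this way is how the comparison across edge metrics is typically made: different metrics fragment the network into subgroups at different thresholds.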
Modeling the North American vertical datum of 1988 errors in the conterminous United States
NASA Astrophysics Data System (ADS)
Li, X.
2018-02-01
A large systematic difference (ranging from -20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions, such as polynomials, B-splines, and Legendre functions, and latent variable analysis (LVA), in the form of factor analysis (FA), are used to analyze the systematic difference. Besides giving a mathematical model, the regression results do not reveal a great deal about the physical causes of the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, a significant amount of non-Gaussian signal remains in the residuals of the conventional regression models. The FA method, on the other hand, not only provides a better fit to the data but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolation method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.
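A minimal sketch of the latent-variable step (synthetic benchmark data and scikit-learn's FactorAnalysis; the paper's loading-matrix interpolation scheme is not reproduced): fit a single factor to the datum differences and check how much benchmark-level residual noise remains.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(2)

    # Synthetic data: 500 benchmarks x 4 variables (e.g., NAVD 88 minus several geoid models, cm).
    latent = rng.normal(0, 30, size=(500, 1))                  # one shared systematic error
    loadings = np.array([[1.0, 0.9, 1.1, 0.8]])
    X = latent @ loadings + rng.normal(0, 5, size=(500, 4))    # plus ~5 cm local noise

    fa = FactorAnalysis(n_components=1, random_state=0).fit(X)
    residual_sd = np.sqrt(fa.noise_variance_)
    print("estimated loadings:", np.round(fa.components_, 2))
    print("residual SD per variable (cm):", np.round(residual_sd, 1))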
Van Wassenbergh, Sam; Lieben, Tim; Herrel, Anthony; Huysentruyt, Frank; Geerinckx, Tom; Adriaens, Dominique; Aerts, Peter
2009-01-01
Food scraping has independently evolved twice from suction feeding in the evolution of catfishes: within the neotropical Loricarioidea and the paleotropical Mochokidae. To gain insight into the evolutionary transitions associated with the shift toward scraping, we analyzed prey capture kinematics in two species of benthic suction feeders belonging to taxa closely related to the scraper lineages (Corydoras splendens and Synodontis multipunctatus, respectively), and compared them to prey capture in a more distantly related, generalist suction feeder (Clarias gariepinus). Simultaneous ventral- and lateral-view high-speed videos were recorded to quantify the movements of the lower jaw, hyoid, pectoral girdle, and neurocranium. Additionally, ellipse modeling was applied to relate head shape differences to buccal expansion kinematics. As has been observed in scrapers, rotations of the neurocranium are minimal in the benthic suction feeders, and this may have facilitated the evolution of a scraping feeding mechanism. The hypothesis that fish with a more laterally compressed head rely more heavily on lateral expansion of the buccal cavity to generate suction was confirmed in our sample of catfish species. Since a substantial contribution of lateral expansion of the head to suction may avoid the need for strong ventral depression of the mouth floor during feeding, we hypothesized that this may have allowed a closer association with the substrate in the ancestors of scrapers. However, this hypothesis was not supported by an ancestral state reconstruction, which suggests that scraping probably evolved from sub-terminal-mouthed ancestors with dorsoventrally flattened heads.
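The ellipse modeling mentioned above can be illustrated with a simple buccal-volume approximation (hypothetical digitized heights and widths; the authors' exact formulation may differ): treat the buccal cavity as a stack of elliptical cross-sections and sum their volumes.

    import math

    def buccal_volume(heights_cm, widths_cm, dx_cm):
        """Approximate buccal volume as a stack of elliptical slices: V = sum(pi/4 * h * w * dx)."""
        return sum(math.pi / 4 * h * w * dx_cm for h, w in zip(heights_cm, widths_cm))

    # Hypothetical lateral heights and ventral widths (cm) along the head at one video frame.
    heights = [0.4, 0.6, 0.8, 0.9, 0.7]
    widths = [0.5, 0.9, 1.2, 1.3, 1.0]
    print(f"buccal volume ~ {buccal_volume(heights, widths, dx_cm=0.3):.2f} cm^3")

Tracking this quantity frame by frame shows how much of the expansion is driven by lateral (width) versus ventral (height) movement in each species.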
Fernández-Nestosa, María J; Guimerà, Nuria; Sanchez, Diego F; Cañete-Portillo, Sofía; Velazquez, Elsa F; Jenkins, David; Quint, Wim; Cubilla, Antonio L
2017-06-01
Laser capture microdissection-polymerase chain reaction (LCM-PCR) supported by p16 was used for the first time to demonstrate human papillomavirus (HPV) DNA in histologically specific penile lesions, which were as follows: squamous hyperplasia (12 lesions, 10 patients), flat lesions (12 lesions, 5 patients), condylomas (26 lesions, 7 patients), penile intraepithelial neoplasia (PeIN) (115 lesions, 43 patients), and invasive squamous cell carcinomas (26 lesions, 26 patients). HPV was detected by whole-tissue section PCR and LCM-PCR. LCM proved to be more precise than whole-tissue section PCR in assigning individual genotypes to specific lesions. HPV was negative or very infrequent in squamous hyperplasia, differentiated PeIN, and low-grade keratinizing variants of carcinomas. HPV was strongly associated with condylomas, warty/basaloid PeIN, adjacent flat lesions, and warty/basaloid carcinomas. A single HPV genotype was found in each lesion. Some condylomas and flat lesions, especially those with atypia, were preferentially associated with high-risk HPV. Unlike invasive carcinoma, in which few genotypes of HPV were involved, there were 18 HPV genotypes in PeIN, usually HPV 16 in basaloid PeIN but marked HPV heterogeneity in warty PeIN (11 different genotypes). Variable and multiple HPV genotypes were found in multicentric PeIN, whereas unicentric PeIN was usually related to a single genotype. There was a correspondence among HPV genotypes in invasive and associated PeIN. p16 was positive in the majority of HPV-positive lesions except condylomas containing low-risk HPV. p16 was usually negative in squamous hyperplasia, differentiated PeIN, and low-grade keratinizing variants of squamous cell carcinomas. In summary, we demonstrated that LCM-PCR was a superior research technique for investigating HPV genotypes in intraepithelial lesions. A significant finding was the heterogeneity of HPV genotypes in PeIN and the differential association of HPV genotypes with subtypes of PeIN. The presence of atypia and high-risk HPV in condylomas and adjacent flat lesions suggests a precursor role, and the correspondence of HPV genotypes in invasive carcinomas and associated PeIN indicates a causal relation. Data presented support the bimodal hypothesis of penile cancer carcinogenesis in HPV-driven and non-HPV-driven carcinomas and justify the current WHO pathologic classification of PeIN in special subtypes.
Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software
NASA Technical Reports Server (NTRS)
Ruiz, Ronald P.
2003-01-01
Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was first used to make the target and background appear uniform in the image. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.
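A minimal sketch of the image-processing step (synthetic thermal frames and numpy only, not the radiometry software actually used): subtract a pre-failure baseline frame from a frame captured at the failure point and report the pixel with the largest temperature rise.

    import numpy as np

    rng = np.random.default_rng(3)

    baseline = 25.0 + rng.normal(0, 0.05, size=(64, 64))   # degC, frame before the failure
    failure = baseline.copy()
    failure[40, 17] += 2.5                                  # localized excess power dissipation
    failure += rng.normal(0, 0.05, size=failure.shape)      # sensor noise on the second frame

    delta = failure - baseline
    row, col = np.unravel_index(np.argmax(delta), delta.shape)
    print(f"hottest pixel at (row={row}, col={col}), deltaT = {delta[row, col]:.2f} degC")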
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
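A minimal sketch of the kurtosis index (simulated orientation errors and scipy's Fisher excess kurtosis, which may differ from the authors' exact measure): compare error distributions for low- versus high-probability tilts.

    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(4)

    # Simulated orientation-report errors (degrees): high-probability tilts are reported
    # more precisely and with a more sharply peaked (non-Gaussian) error distribution.
    low_prob_errors = rng.normal(0, 8, size=2000)
    high_prob_errors = rng.laplace(0, 3, size=2000)   # peakier than a Gaussian

    for name, errs in [("low-probability", low_prob_errors),
                       ("high-probability", high_prob_errors)]:
        print(f"{name}: SD = {errs.std():.1f} deg, excess kurtosis = {kurtosis(errs):.2f}")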