Reliability Assessment of Reconfigurable Flight Control Systems Using Sure and Assist
NASA Technical Reports Server (NTRS)
Wu, N. Eva
1992-01-01
This paper presents a reliability assessment of Reconfigurable Flight Control Systems using Semi-Markov Unreliability Range Evaluator (SURE) and Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST).
The Proper Sequence for Correcting Correlation Coefficients for Range Restriction and Unreliability.
ERIC Educational Resources Information Center
Stauffer, Joseph M.; Mendoza, Jorge L.
2001-01-01
Uses classical test theory to show that it is the nature of the range restriction, rather than the nature of the available reliability coefficient, that determines the sequence for applying corrections for range restriction and unreliability. Shows how the common rule of thumb for choosing the sequence is tenable only when the correction does not…
System Study: Emergency Power System 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the emergency power system (EPS) at 104 U.S. commercial nuclear power plants. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. An extremely statistically significant increasing trend was observed for EPS system unreliability for an 8-hour mission. A statistically significant increasing trend was observed for EPS system start-only unreliability.
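The abstract does not spell out the trend-testing procedure, so the following is only a minimal sketch of one common way to test yearly unreliability estimates for a monotonic trend; the yearly values, the logit transform, and the regression-based test are illustrative assumptions, not the report's method.

```python
# Minimal sketch: testing for a monotonic trend in yearly system
# unreliability estimates. The yearly values below are hypothetical
# placeholders, not data from the report.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
years = np.arange(1998, 2015)
# Hypothetical yearly unreliability point estimates with a slight upward drift.
unreliability = 0.02 + 0.0005 * (years - 1998) + rng.normal(0, 0.002, len(years))

# Regress the log-odds (logit) of unreliability on calendar year; a
# significantly nonzero slope suggests an increasing or decreasing trend.
logit = np.log(unreliability / (1.0 - unreliability))
result = linregress(years, logit)
print(f"slope={result.slope:.4f}, p-value={result.pvalue:.3f}")
```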
System Study: Isolation Condenser 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the isolation condenser (ISO) system at four U.S. boiling water reactors. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant increasing trends were identified. A statistically significant decreasing trend was identified for ISO unreliability. The magnitude of the trend indicated a 1.5 percent decrease in system unreliability over the last 10 years.
System Study: Residual Heat Removal 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the residual heat removal (RHR) system in two modes of operation (low-pressure injection in response to a large loss-of-coolant accident and post-trip shutdown cooling) at 104 U.S. commercial nuclear power plants. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant increasing trends were identified in the RHR results. A highly statistically significant decreasing trend was observed for the RHR injection mode start-only unreliability. Statistically significant decreasing trends were observed for RHR shutdown cooling mode start-only unreliability and RHR shutdown cooling mode 24-hour unreliability.
System Study: Emergency Power System 1998–2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-02-01
This report presents an unreliability evaluation of the emergency power system (EPS) at 104 U.S. commercial nuclear power plants. Demand, run hours, and failure data from fiscal year 1998 through 2013 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant trends were identified in the EPS results.
System Study: Reactor Core Isolation Cooling 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the reactor core isolation cooling (RCIC) system at 31 U.S. commercial boiling water reactors. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant trends were identified in the RCIC results.
System Study: Auxiliary Feedwater 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the auxiliary feedwater (AFW) system at 69 U.S. commercial nuclear power plants. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant increasing or decreasing trends were identified in the AFW results.
System Study: High-Pressure Coolant Injection 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the high-pressure coolant injection system (HPCI) at 25 U.S. commercial boiling water reactors. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant increasing or decreasing trends were identified in the HPCI results.
System Study: High-Pressure Core Spray 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the high-pressure core spray (HPCS) system at eight U.S. commercial boiling water reactors. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant increasing or decreasing trends were identified in the HPCS results.
System Study: High-Pressure Safety Injection 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the high-pressure safety injection system (HPSI) at 69 U.S. commercial nuclear power plants. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant increasing or decreasing trends were identified in the HPSI results.
[How valid are student self-reports of bullying in schools?].
Morbitzer, Petra; Spröber, Nina; Hautzinger, Martin
2009-01-01
In this study we examine the reliability and validity of students' self-reports about bullying and victimization in schools. A total of 208 fifth-grade students from four "middle schools" in Southern Germany completed the Bully-Victim Questionnaire (Olweus, 1989, adapted by Lösel, Bliesener, & Averbeck, 1997) and the School Climate Survey (Brockenborough, 2001) to assess the prevalence of bullying/victimization and to evaluate attitudes toward aggression and support for victims. Using reliability and validity criteria, one third (31%) of the questionnaires were classified as "unreliable/invalid". Mean comparisons between the "unreliable/invalid" and "valid" groups on the subscales concerning bullying/victimization revealed significant differences: the "unreliable/invalid" group reported higher levels of bullying and victimization. Based on the "unreliable/invalid" questionnaires, more students could be identified as bullies, victims, or bully-victims, and the prevalence of bullying/victimization in the whole sample was reduced when "unreliable/invalid" questionnaires were excluded. The results are discussed in the framework of theories of self-presentation ("impression management", "social desirability") and systematic response patterns ("extreme response bias").
Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell
2014-05-01
The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
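As a rough illustration of why outcome unreliability matters for power (not a reproduction of the authors' structural equation models), the sketch below applies the classical attenuation result, where an observed standardized effect shrinks by the square root of the measure's reliability, and computes the resulting two-sample t-test power; the effect size, sample size, and reliability values are hypothetical.

```python
# Illustrative sketch (not the authors' SEM models): how outcome
# unreliability attenuates an observed effect and reduces power.
import numpy as np
from statsmodels.stats.power import TTestIndPower

true_d = 0.40          # hypothetical true standardized effect
n_per_group = 100      # hypothetical group size
analysis = TTestIndPower()

for reliability in (1.0, 0.8, 0.6):
    observed_d = true_d * np.sqrt(reliability)  # classical attenuation of d
    power = analysis.power(effect_size=observed_d, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0)
    print(f"reliability={reliability:.1f}  observed d={observed_d:.2f}  power={power:.2f}")
```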
MASQOT: a method for cDNA microarray spot quality control
Bylesjö, Max; Eriksson, Daniel; Sjödin, Andreas; Sjöström, Michael; Jansson, Stefan; Antti, Henrik; Trygg, Johan
2005-01-01
Background: cDNA microarray technology has emerged as a major player in the parallel detection of biomolecules, but it still suffers from fundamental technical problems. Identifying and removing unreliable data is crucial to prevent the risk of misleading analysis results. Visual assessment of spot quality is still a common procedure, despite the time-consuming work of manually inspecting spots in the range of hundreds of thousands or more. Results: A novel methodology for cDNA microarray spot quality control is outlined. Multivariate discriminant analysis was used to assess spot quality based on existing and novel descriptors. The presented methodology displays high reproducibility and was found superior in identifying unreliable data compared to other evaluated methodologies. Conclusion: The proposed methodology for cDNA microarray spot quality control generates non-discrete values of spot quality, which can be utilized as weights in subsequent analysis procedures as well as to discard spots of undesired quality using the suggested threshold values. The MASQOT approach provides a consistent assessment of spot quality and can be considered an alternative to the labor-intensive manual quality assessment process. PMID:16223442
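A minimal sketch in the spirit of discriminant-analysis-based spot scoring: a linear discriminant model trained on labeled spots yields a continuous quality value that can be used as a weight or thresholded. The descriptors, labels, and data here are synthetic assumptions, not the MASQOT descriptors or training set.

```python
# Sketch of discriminant-analysis-based spot quality scoring; features and
# labels below are synthetic, not the descriptors used in the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical spot descriptors: e.g. signal-to-noise, circularity, size.
good = rng.normal([10.0, 0.9, 100.0], [2.0, 0.05, 10.0], size=(200, 3))
bad = rng.normal([3.0, 0.6, 60.0], [2.0, 0.15, 25.0], size=(200, 3))
X = np.vstack([good, bad])
y = np.array([1] * 200 + [0] * 200)  # 1 = visually good spot, 0 = bad spot

lda = LinearDiscriminantAnalysis().fit(X, y)
# Continuous quality score in [0, 1]; usable as a weight downstream or
# thresholded to discard low-quality spots.
quality = lda.predict_proba(X)[:, 1]
print(quality[:5].round(3))
```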
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T
2007-03-01
Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.
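As an illustration of the kind of automated checks described above (not the actual renal QC module), a rule-based sketch that verifies user-entered values and flags potentially unreliable results might look like the following; the parameter names and plausible ranges are hypothetical.

```python
# Minimal sketch of automated QC checks of the kind described above;
# parameter names and plausible ranges are hypothetical, not taken from
# the renal QC module itself.
def qc_check(study: dict) -> list[str]:
    findings = []
    # Verify user-entered values against plausible ranges.
    if not (20.0 <= study.get("height_cm", 0) <= 250.0):
        findings.append("height out of plausible range")
    if study.get("injected_dose_mbq", 0) <= 0:
        findings.append("missing or non-positive injected dose")
    # Flag acquisition problems and calculated values that may be unreliable.
    if study.get("frames_missing", 0) > 0:
        findings.append("missing frames in dynamic image set")
    if study.get("relative_function_pct") is not None and not (
        0.0 <= study["relative_function_pct"] <= 100.0
    ):
        findings.append("relative function outside 0-100%")
    return findings

print(qc_check({"height_cm": 172, "injected_dose_mbq": 0, "frames_missing": 2,
                "relative_function_pct": 54.0}))
```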
Alternatives to Piloting Textbooks.
ERIC Educational Resources Information Center
Muther, Connie
1985-01-01
Using short-term pilot programs to evaluate textbooks can lead to unreliable results and interfere with effective education. Alternative methods for evaluating textbook-based programs include obtaining documented analyses of competitors' products from sales agents, visiting districts using programs being considered, and examining publishers' own…
Taking Teacher Quality Seriously: A Collaborative Approach to Teacher Evaluation
ERIC Educational Resources Information Center
Karp, Stan
2012-01-01
If narrow, test-based evaluation of teachers is unfair, unreliable, and has negative effects on kids, classrooms, and curricula, what's a better approach? By demonizing teachers and unions, and sharply polarizing the education debate, the corporate reform movement has actually undermined serious efforts to improve teacher quality and evaluation.…
Optimizing Air Transportation Service to Metroplex Airports. Part 1; Analysis of Historical Data
NASA Technical Reports Server (NTRS)
Donohue, George; Hoffman, Karla; Sherry, Lance; Ferguson, John; Kara, Abdul Qadar
2010-01-01
The air transportation system is a significant driver of the U.S. economy, providing safe, affordable, and rapid transportation. During the past three decades airspace and airport capacity has not grown in step with demand for air transportation (+4% annual growth), resulting in unreliable service and systemic delays. Estimates of the impact of delays and unreliable air transportation service on the economy range from $32B to $41B per year. This report describes the results of an analysis of airline strategic decision-making with regards to: (1) geographic access, (2) economic access, and (3) airline finances. This analysis evaluated markets-served, scheduled flights, aircraft size, airfares, and profit from 2005-2009. During this period, airlines experienced changes in costs of operation (due to fluctuations in hedged fuel prices), changes in travel demand (due to changes in the economy), and changes in infrastructure capacity (due to the capacity limits at EWR, JFK, and LGA). This analysis captures the impact of the implementation of capacity limits at airports, as well as the effect of increased costs of operation (i.e. hedged fuel prices). The increases in costs of operation serve as a proxy for increased costs per flight that might occur if auctions or congestion pricing are imposed.
Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.
Herrero, David; Martínez, Humberto
2011-01-01
This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated by radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. Besides, the proposed approach is compared with a probabilistic technique showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
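To make the uncertainty handling concrete, here is a minimal sketch of turning a noisy RSSI reading into a fuzzy distance estimate via a log-distance path-loss model and a triangular membership function; the path-loss parameters and membership width are assumptions for illustration and do not reproduce the paper's fuzzy localization algorithm.

```python
# Sketch: noisy RSSI reading -> crisp range estimate -> fuzzy distance set.
# The path-loss parameters and the membership width are hypothetical.
import numpy as np

P0 = -40.0   # hypothetical RSSI at 1 m reference distance (dBm)
N_EXP = 2.5  # hypothetical path-loss exponent

def rssi_to_distance(rssi_dbm: float) -> float:
    """Log-distance path-loss model: rssi = P0 - 10*n*log10(d)."""
    return 10 ** ((P0 - rssi_dbm) / (10 * N_EXP))

def triangular_membership(d_grid, center, half_width):
    """Triangular fuzzy set centered on the crisp distance estimate."""
    return np.clip(1 - np.abs(d_grid - center) / half_width, 0.0, 1.0)

d_grid = np.linspace(0.1, 30.0, 300)
d_hat = rssi_to_distance(-65.0)                          # crisp estimate from one reading
mu = triangular_membership(d_grid, d_hat, 0.5 * d_hat)   # wide set = unreliable reading
print(f"estimated distance ~{d_hat:.1f} m, "
      f"support {d_grid[mu > 0][0]:.1f}-{d_grid[mu > 0][-1]:.1f} m")
```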
Geologic and hydraulic characteristics of selected shaly geologic units in Oklahoma
Becker, C.J.; Overton, M.D.; Johnson, K.S.; Luza, K.V.
1997-01-01
Information was collected on the geologic and hydraulic characteristics of three shale-dominated units in Oklahoma: the Dog Creek Shale and Chickasha Formation in Canadian County, the Hennessey Group in Oklahoma County, and the Boggy Formation in Pittsburg County. The purpose of this project was to gain insight into the characteristics controlling fluid flow in shaly units that could be targeted for confinement of hazardous waste in the State and to evaluate methods of measuring hydraulic characteristics of shales. Permeameter results may not indicate in-place small-scale hydraulic characteristics, due to pretest disturbance and deterioration of core samples. Hydraulic conductivities of the Dog Creek Shale and Chickasha Formation measured by permeameter methods ranged from 2.8 × 10^-11 to 3.0 × 10^-7 meters per second in nine samples, and specific storage from 3.3 × 10^-4 to 1.6 × 10^-3 per meter in four samples. Hennessey Group hydraulic conductivities ranged from 4.0 × 10^-12 to 4.0 × 10^-10 meters per second in eight samples. Hydraulic conductivity in the Boggy Formation ranged from 1.7 × 10^-12 to 1.0 × 10^-8 meters per second in 17 samples. The hydraulic properties of isolated borehole intervals of average length 4.5 meters in the Hennessey Group and the Boggy Formation were evaluated by a pressurized slug-test method. Hydraulic conductivities obtained with this method tend to be low because intervals with features that transmitted large volumes of water were not tested. Hennessey Group hydraulic conductivities measured by this method ranged from 3.0 × 10^-13 to 1.1 × 10^-9 meters per second; the specific storage values are small and may be unreliable. Boggy Formation hydraulic conductivities ranged from 2.0 × 10^-13 to 2.7 × 10^-10 meters per second, and specific storage values in these tests also are small and may be unreliable. A substantially higher hydraulic conductivity of 3.0 × 10^-8 meters per second was measured in one 30-meter-deep borehole in the Boggy Formation using an open-hole slug-test method.
Field Study of Stress: Psychophysiological Measures During Project Supex.
1978-10-01
recordings proved to be unreliable utilizing the current procedures. The perceived scales evaluated the current state of the individual, but they were not good predictors of performance or heart rate activity. (Author)
NASA Astrophysics Data System (ADS)
Alhilman, Judi
2017-12-01
In the production-line process of a printing office, the reliability of the printing machine plays a very important role; if the machine fails, it can disrupt the production target and the company will suffer a large financial loss. One method to calculate the financial loss caused by machine failure is the Cost of Unreliability (COUR) method. The COUR method works from machine downtime data and the costs associated with unreliability. Based on the COUR calculation, the total cost due to printing machine unreliability during active repair time and downtime is 1003,747.00.
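A minimal sketch of a Cost of Unreliability style calculation, combining downtime durations with cost rates; all event durations and cost rates below are hypothetical and are not the figures from the study.

```python
# Minimal sketch of a Cost of Unreliability (COUR) style calculation:
# unreliability cost = downtime-related costs accumulated over repair and
# waiting time. All figures below are hypothetical, not from the study.
downtime_events = [
    # (active repair hours, waiting/logistic downtime hours)
    (4.0, 2.0),
    (6.5, 1.5),
    (3.0, 4.0),
]
lost_production_per_hour = 150.0   # hypothetical currency units per hour down
repair_cost_per_hour = 60.0        # hypothetical labour + parts rate

cost = sum(
    (repair + wait) * lost_production_per_hour + repair * repair_cost_per_hour
    for repair, wait in downtime_events
)
print(f"cost of unreliability over the period: {cost:,.2f}")
```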
Development and Evaluation of Smart Bus System
DOT National Transportation Integrated Search
2016-12-13
Due to stochastic traffic conditions and fluctuated demand, transit passengers often suffer from unreliable services. Especially for buses, keeping on-time schedules is challenging as they share the right of way with non-transit traffic. With the adv...
Recent Research on Children's Testimony about Experienced and Witnessed Events
ERIC Educational Resources Information Center
Pipe, M.E.; Lamb, M.E.; Orbach, Y.; Esplin, P.W.
2004-01-01
Research on memory development has increasingly moved out of the laboratory and into the real world. Whereas early researchers asked whether confusion and susceptibility to suggestion made children unreliable witnesses, contemporary researchers are addressing a much broader range of questions about children's memory, focusing not only…
A New Framework of Happiness Survey and Evaluation of National Wellbeing
ERIC Educational Resources Information Center
Zhou, Haiou
2012-01-01
Happiness surveys based on self-reporting may generate unreliable data due to respondents' imperfect retrospection, vulnerability to context and arbitrariness in measuring happiness. To overcome these problems, this paper proposes to combine a happiness evaluation method developed by Ng (Soc Indic Res, 38:1-29, 1996) with the day reconstruction…
Validation of the SURE Program, phase 1
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
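For comparison with the kind of analytic solutions used in this validation, the sketch below computes an exact death-state probability for a small pure-Markov (exponentially distributed) redundancy model via the matrix exponential; the states, rates, and mission time are hypothetical, and a general semi-Markov model with arbitrary holding-time distributions would need more than this.

```python
# Sketch of an exact death-state probability for a small *pure-Markov*
# fault-tolerant-system model, computed with the matrix exponential -- the
# kind of analytic solution SURE bounds can be checked against. Rates are
# hypothetical; a true semi-Markov model needs more machinery than this.
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # per-unit failure rate, per hour (hypothetical)
mu = 1e2     # recovery/reconfiguration rate, per hour (hypothetical)
T = 10.0     # mission time in hours

# States: 0 = two good units, 1 = one failed & recovery in progress,
#         2 = reconfigured to one good unit, 3 = system failed (death state).
Q = np.array([
    [-2 * lam,        2 * lam,   0.0,  0.0],
    [      0.0,  -(mu + lam),     mu,  lam],
    [      0.0,          0.0,  -lam,   lam],
    [      0.0,          0.0,   0.0,   0.0],
])

P = expm(Q * T)            # state-transition probabilities over the mission
unreliability = P[0, 3]    # probability of ending in the death state
print(f"P(system failure by T={T} h) = {unreliability:.3e}")
```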
Boysen, Angela K; Heal, Katherine R; Carlson, Laura T; Ingalls, Anitra E
2018-01-16
The goal of metabolomics is to measure the entire range of small organic molecules in biological samples. In liquid chromatography-mass spectrometry-based metabolomics, formidable analytical challenges remain in removing the nonbiological factors that affect chromatographic peak areas. These factors include sample matrix-induced ion suppression, chromatographic quality, and analytical drift. The combination of these factors is referred to as obscuring variation. Some metabolomics samples can exhibit intense obscuring variation due to matrix-induced ion suppression, rendering large amounts of data unreliable and difficult to interpret. Existing normalization techniques have limited applicability to these sample types. Here we present a data normalization method to minimize the effects of obscuring variation. We normalize peak areas using a batch-specific normalization process, which matches measured metabolites with isotope-labeled internal standards that behave similarly during the analysis. This method, called best-matched internal standard (B-MIS) normalization, can be applied to targeted or untargeted metabolomics data sets and yields relative concentrations. We evaluate and demonstrate the utility of B-MIS normalization using marine environmental samples and laboratory grown cultures of phytoplankton. In untargeted analyses, B-MIS normalization allowed for inclusion of mass features in downstream analyses that would have been considered unreliable without normalization due to obscuring variation. B-MIS normalization for targeted or untargeted metabolomics is freely available at https://github.com/IngallsLabUW/B-MIS-normalization .
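A simplified sketch of the idea behind B-MIS normalization: pick, for each measured metabolite, the internal standard whose sample-to-sample variation it tracks best, then divide by that standard's peak areas and rescale. The data are synthetic, and the matching criterion here (log-area correlation) is a stand-in, not the criterion used in the released implementation.

```python
# Simplified sketch of best-matched internal standard (B-MIS) style
# normalization; data and the matching criterion are placeholders, not the
# authors' implementation (see the linked repository for that).
import numpy as np

rng = np.random.default_rng(2)
n_samples = 12
# Hypothetical peak areas per sample for two isotope-labeled internal standards.
internal_standards = {"IS_A": rng.lognormal(10, 0.3, n_samples),
                      "IS_B": rng.lognormal(9, 0.5, n_samples)}
# Hypothetical metabolite whose obscuring variation tracks IS_A.
metabolite = rng.lognormal(8, 0.3, n_samples) * internal_standards["IS_A"] / np.exp(10)

def best_matched_is(metab, standards):
    """Pick the IS whose sample-to-sample variation best matches the metabolite
    (here: highest correlation of log areas; the paper uses a different criterion)."""
    return max(standards, key=lambda k: np.corrcoef(np.log(metab), np.log(standards[k]))[0, 1])

chosen = best_matched_is(metabolite, internal_standards)
is_areas = internal_standards[chosen]
normalized = metabolite / is_areas * is_areas.mean()   # relative concentrations
print(chosen, normalized.round(1)[:4])
```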
How Students Evaluate Information and Sources when Searching the World Wide Web for Information
ERIC Educational Resources Information Center
Walraven, Amber; Brand-Gruwel, Saskia; Boshuizen, Henny P. A.
2009-01-01
The World Wide Web (WWW) has become the biggest information source for students while solving information problems for school projects. Since anyone can post anything on the WWW, information is often unreliable or incomplete, and it is important to evaluate sources and information before using them. Earlier research has shown that students have…
Utility in a Fallible Tool: A Multi-Site Case Study of Automated Writing Evaluation
ERIC Educational Resources Information Center
Grimes, Douglas; Warschauer, Mark
2010-01-01
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access![R] was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…
Vlastarakos, Petros V; Vasileiou, Alexandra; Nikolopoulos, Thomas P
2017-12-01
We conducted an analysis to assess the relative contribution of auditory brainstem response (ABR) testing and auditory steady-state response (ASSR) testing in providing appropriate hearing aid fitting in hearing-impaired children with difficult or unreliable behavioral audiometry. Of 150 infants and children who had been referred to us for hearing assessment as part of a neonatal hearing screening and cochlear implantation program, we identified 5 who exhibited significant discrepancies between click-ABR and ASSR testing results and difficult or unreliable behavioral audiometry. Hearing aid fitting in pediatric cochlear implant candidates for a trial period of 3 to 6 months is a common practice in many implant programs, but monitoring the progress of the amplified infants and providing appropriate hearing aid fitting can be challenging. If we accept the premise that we can assess the linguistic progress of amplified infants with an acceptable degree of certainty, the auditory behavior that we are monitoring presupposes appropriate bilateral hearing aid fitting. This may become very challenging in young children, or even in older children with difficult or unreliable behavioral audiometry results. This challenge can be addressed by using data from both ABR and ASSR testing. Fitting attempts that employ data from only ABR testing provide amplification that involves the range of spoken language but is not frequency-specific. Hearing aid fitting should also incorporate and take into account ASSR data because reliance on ABR testing alone might compromise the validity of the monitoring process. In conclusion, we believe that ASSR threshold-based bilateral hearing aid fitting is necessary to provide frequency-specific amplification of hearing and appropriate propulsion in the prelinguistic vocalizations of monitored infants.
Extrinsic and intrinsic motivation at 30: Unresolved scientific issues.
Reiss, Steven
2005-01-01
The undermining effect of extrinsic reward on intrinsic motivation remains unproven. The key unresolved issues are construct invalidity (all four definitions are unproved and two are illogical); measurement unreliability (the free-choice measure requires unreliable, subjective judgments to infer intrinsic motivation); inadequate experimental controls (negative affect and novelty, not cognitive evaluation, may explain "undermining" effects); and biased metareviews (studies with possible floor effects excluded, but those with possible ceiling effects included). Perhaps the greatest error with the undermining theory, however, is that it does not adequately recognize the multifaceted nature of intrinsic motivation (Reiss, 2004a). Advice to limit the use of applied behavior analysis based on "hidden" undermining effects is ideologically inspired and is unsupported by credible scientific evidence.
Kishor Bhattarai; Shaun Bushman; Douglas A. Johnson; John G. Carman
2011-01-01
Few North American legumes are available for use in rangeland revegetation in the western USA, but Searls prairie clover [Dalea searlsiae (A. Gray) Barneby] is one that holds promise. Commercial-scale seed production of this species could address the issues of unreliable seed availability and high seed costs associated with its wildland seed collection. To evaluate its...
Preschoolers Mistrust Ignorant and Inaccurate Speakers
ERIC Educational Resources Information Center
Koenig, Melissa A.; Harris, Paul L.
2005-01-01
Being able to evaluate the accuracy of an informant is essential to communication. Three experiments explored preschoolers' (N=119) understanding that, in cases of conflict, information from reliable informants is preferable to information from unreliable informants. In Experiment 1, children were presented with previously accurate and inaccurate…
Humans treat unreliable filled-in percepts as more real than veridical ones
Ehinger, Benedikt V; Häusser, Katja; Ossandón, José P; König, Peter
2017-01-01
Humans often evaluate sensory signals according to their reliability for optimal decision-making. However, how do we evaluate percepts generated in the absence of direct input that are, therefore, completely unreliable? Here, we utilize the phenomenon of filling-in occurring at the physiological blind-spots to compare partially inferred and veridical percepts. Subjects chose between stimuli that elicit filling-in, and perceptually equivalent ones presented outside the blind-spots, looking for a Gabor stimulus without a small orthogonal inset. In ambiguous conditions, when the stimuli were physically identical and the inset was absent in both, subjects behaved opposite to optimal, preferring the blind-spot stimulus as the better example of a collinear stimulus, even though no relevant veridical information was available. Thus, a percept that is partially inferred is paradoxically considered more reliable than a percept based on external input. In other words: Humans treat filled-in inferred percepts as more real than veridical ones. DOI: http://dx.doi.org/10.7554/eLife.21761.001 PMID:28506359
Optical Coherence Tomography Evaluation in the Multicenter Uveitis Steroid Treatment (MUST) Trial
Domalpally, Amitha; Altaweel, Michael M.; Kempen, John H.; Myers, Dawn; Davis, Janet L; Foster, C Stephen; Latkany, Paul; Srivastava, Sunil K.; Stawell, Richard J.; Holbrook, Janet T.
2013-01-01
Purpose: To describe the evaluation of optical coherence tomography (OCT) scans in the Multicenter Uveitis Steroid Treatment (MUST) trial and report baseline OCT features of enrolled participants. Methods: Time-domain OCTs acquired by certified photographers using a standardized scan protocol were evaluated at a Reading Center. The accuracy of retinal thickness data was confirmed with quality evaluation, and caliper measurement of centerpoint thickness (CPT) was performed when the automated value was unreliable. Morphological evaluation included cysts, subretinal fluid, epiretinal membranes (ERMs), and vitreomacular traction. Results: Of the 453 OCTs evaluated, automated retinal thickness was accurate in 69.5% of scans, caliper measurement was performed in 26%, and 4% were ungradable. The intraclass correlation was 0.98 for reproducibility of caliper measurement. Macular edema (centerpoint thickness ≥ 240 µm) was present in 36%. Cysts were present in 36.6% of scans and ERMs in 27.8%, predominantly central. Intergrader agreement ranged from 78% to 82% for morphological features. Conclusion: Retinal thickness data can be retrieved in a majority of OCT scans in clinical trial submissions for uveitis studies. Small cysts and ERMs involving the center are common in intermediate and posterior/panuveitis requiring systemic corticosteroid therapy. PMID:23163490
Jacquot, Amélie; Eskenazi, Terry; Sales-Wuillemin, Edith; Montalan, Benoît; Proust, Joëlle; Grèzes, Julie; Conty, Laurence
2015-01-01
Through metacognitive evaluations, individuals assess their own cognitive operations with respect to their current goals. We have previously shown that non-verbal social cues spontaneously influence these evaluations, even when the cues are unreliable. Here, we explore whether a belief about the reliability of the source can modulate this form of social impact. Participants performed a two-alternative forced choice task that varied in difficulty. The task was followed by a video of a person who was presented as being either competent or incompetent at performing the task. That person provided random feedback to the participant through facial expressions indicating agreement, disagreement or uncertainty. Participants then provided a metacognitive evaluation by rating their confidence in their answer. Results revealed that participants' confidence was higher following agreements. Interestingly, this effect was merely reduced but not canceled for the incompetent individual, even though participants were able to perceive the individual's incompetence. Moreover, perceived agreement induced zygomaticus activity, but only when the feedback was provided for difficult trials by the competent individual. This last result strongly suggests that people implicitly appraise the relevance of social feedback with respect to their current goal. Together, our findings suggest that people always integrate social agreement into their metacognitive evaluations, even when epistemic vigilance mechanisms alert them to the risk of being misinformed.
A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gossman, W. E.
1986-01-01
A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure-sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
Hart-Smith, Gene; Yagoub, Daniel; Tay, Aidan P.; Pickford, Russell; Wilkins, Marc R.
2016-01-01
All large scale LC-MS/MS post-translational methylation site discovery experiments require methylpeptide spectrum matches (methyl-PSMs) to be identified at acceptably low false discovery rates (FDRs). To meet estimated methyl-PSM FDRs, methyl-PSM filtering criteria are often determined using the target-decoy approach. The efficacy of this methyl-PSM filtering approach has, however, yet to be thoroughly evaluated. Here, we conduct a systematic analysis of methyl-PSM FDRs across a range of sample preparation workflows (each differing in their exposure to the alcohols methanol and isopropyl alcohol) and mass spectrometric instrument platforms (each employing a different mode of MS/MS dissociation). Through 13CD3-methionine labeling (heavy-methyl SILAC) of Saccharomyces cerevisiae cells and in-depth manual data inspection, accurate lists of true positive methyl-PSMs were determined, allowing methyl-PSM FDRs to be compared with target-decoy approach-derived methyl-PSM FDR estimates. These results show that global FDR estimates produce extremely unreliable methyl-PSM filtering criteria; we demonstrate that this is an unavoidable consequence of the high number of amino acid combinations capable of producing peptide sequences that are isobaric to methylated peptides of a different sequence. Separate methyl-PSM FDR estimates were also found to be unreliable due to prevalent sources of false positive methyl-PSMs that produce high peptide identity score distributions. Incorrect methylation site localizations, peptides containing cysteinyl-S-β-propionamide, and methylated glutamic or aspartic acid residues can partially, but not wholly, account for these false positive methyl-PSMs. Together, these results indicate that the target-decoy approach is an unreliable means of estimating methyl-PSM FDRs and methyl-PSM filtering criteria. We suggest that orthogonal methylpeptide validation (e.g. heavy-methyl SILAC or its offshoots) should be considered a prerequisite for obtaining high confidence methyl-PSMs in large scale LC-MS/MS methylation site discovery experiments and make recommendations on how to reduce methyl-PSM FDRs in samples not amenable to heavy isotope labeling. Data are available via ProteomeXchange with the data identifier PXD002857. PMID:26699799
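A generic sketch of the standard target-decoy FDR estimate that the study evaluates for methyl-PSMs: at a given score threshold, the FDR among accepted target hits is approximated by the ratio of decoy hits to target hits above that threshold. The scores below are synthetic placeholders.

```python
# Sketch of the standard target-decoy FDR estimate discussed above;
# the PSM scores are synthetic placeholders, not real search-engine output.
import numpy as np

def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimated FDR among target PSMs scoring at or above the threshold."""
    n_target = np.sum(np.asarray(target_scores) >= threshold)
    n_decoy = np.sum(np.asarray(decoy_scores) >= threshold)
    return n_decoy / n_target if n_target else 0.0

rng = np.random.default_rng(3)
targets = np.concatenate([rng.normal(40, 8, 900), rng.normal(70, 10, 100)])  # mix of false + true hits
decoys = rng.normal(40, 8, 1000)                                             # all false by construction

for thr in (50, 60, 70):
    print(f"score >= {thr}: estimated FDR = {target_decoy_fdr(targets, decoys, thr):.3f}")
```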
Gu, Wen; Reddy, Hima B; Green, Debbie; Belfi, Brian; Einzig, Shanah
2017-01-01
Criminal forensic evaluations are complicated by the risk that examinees will respond in an unreliable manner. Unreliable responding could occur due to lack of personal investment in the evaluation, severe mental illness, and low cognitive abilities. In this study, 31% of Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) profiles were invalid due to random or fixed-responding (T score ≥ 80 on the VRIN-r or TRIN-r scales) in a sample of pretrial criminal defendants evaluated in the context of treatment for competency restoration. Hierarchical regression models showed that symptom exaggeration variables, as measured by inconsistently reported psychiatric symptoms, contributed over and above education and intellectual functioning in their prediction of both random responding and fixed responding. Psychopathology variables, as measured by mood disturbance, better predicted fixed responding after controlling for estimates of cognitive abilities, but did not improve the prediction for random responding. These findings suggest that random responding and fixed responding are not only affected by education and intellectual functioning, but also by intentional exaggeration and aspects of psychopathology. Measures of intellectual functioning and effort and response style should be considered for administration in conjunction with self-report personality measures to rule out rival hypotheses of invalid profiles.
When is Information Sufficient for Action Search with Unreliable Yet Informative Intelligence
2016-03-30
Atkinson, Michael. When Is Information Sufficient for Action? Search with Unreliable yet Informative Intelligence. Operations Research, Articles in Advance, published online 30 Mar 2016. ISSN 1526-5463 (online). http://dx.doi.org/10.1287/opre.2016.1488 © 2016 INFORMS. Additional information: http://pubsonline.informs.org
Population structure and genetic diversity in North American Hedysarum boreale Nutt.
Bradley S. Bushman; Steven R. Larson; Michael D. Peel; Michael E. Pfrender
2007-01-01
Hedysarum boreale Nutt. is a perennial legume native to western North America, with robust foliage in the late spring season. Due to its wide native range, forage value, and N2 fixation, H. boreale is of interest for rangeland revegetation and production. Seed cost is a major obstacle for utilization of H. boreale, primarily due to seed shattering and unreliable seed...
John, Andrew B; Kreisman, Brian M
2017-09-01
Extended high-frequency (EHF) audiometry is useful for evaluating ototoxic exposures and may relate to speech recognition, localisation and hearing aid benefit. There is a need to determine whether common clinical practice for EHF audiometry using tone and noise stimuli is reliable. We evaluated equivalence and compared test-retest (TRT) reproducibility for audiometric thresholds obtained using pure tones and narrowband noise (NBN) from 0.25 to 16 kHz. Thresholds and test-retest reproducibility for stimuli in the conventional (0.25-6 kHz) and EHF (8-16 kHz) frequency ranges were compared in a repeated-measures design. A total of 70 ears of adults with normal hearing. Thresholds obtained using NBN were significantly lower than thresholds obtained using pure tones from 0.5 to 16 kHz, but not 0.25 kHz. Good TRT reproducibility (within 2 dB) was observed for both stimuli at all frequencies. Responses at the lower limit of the presentation range for NBN centred at 14 and 16 kHz suggest unreliability for NBN as a threshold stimulus at these frequencies. Thresholds in the conventional and EHF ranges showed good test-retest reproducibility, but differed between stimulus types. Care should be taken when comparing pure-tone thresholds with NBN thresholds especially at these frequencies.
ERIC Educational Resources Information Center
Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed
2013-01-01
Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…
Haas, Chloé; Rossi, Sophie; Meier, Roman; Ryser-Degiorgis, Marie-Pierre
2015-07-01
Sarcoptic mange occurs in free-ranging wild boar (Sus scrofa) but has been poorly described in this species. We evaluated the performance of a commercial indirect enzyme-linked immunosorbent assay (ELISA) for serodiagnosis of sarcoptic mange in domestic swine when applied to wild boar sera. We tested 96 sera from wild boar in populations without mange history ("truly noninfected") collected in Switzerland between December 2012 and February 2014, and 141 sera from free-ranging wild boar presenting mange-like lesions, including 50 live animals captured and sampled multiple times in France between May and August 2006 and three cases submitted to necropsy in Switzerland between April 2010 and February 2014. Mite infestation was confirmed by skin scraping in 20 of them ("truly infected"). We defined sensitivity of the test as the proportion of truly infected that were found ELISA-positive, and specificity as the proportion of truly noninfected that were found negative. Sensitivity and specificity were 75% and 80%, respectively. Success of antibody detection increased with the chronicity of lesions, and seroconversion was documented in 19 of 27 wild boar sampled multiple times that were initially negative or doubtful. In conclusion, the evaluated ELISA has been successfully applied to wild boar sera. It appears to be unreliable for early detection in individual animals but may represent a useful tool for population surveys.
Use of multivariate statistics to identify unreliable data obtained using CASA.
Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón
2013-06-01
In order to identify unreliable data in a dataset of motility parameters obtained from a pilot study acquired by a veterinarian with experience in boar semen handling, but without experience in the operation of a computer-assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted, incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml, and analyzed in a CASA system. After standardization of the data, Chernoff faces were drawn for each measurement, and a principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of the principal components for each individual measurement of the semen samples were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects of treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained evaluations of the first two samples in each treatment, each from a different boar. With the exception of one individual measurement, all other measurements in cluster 1 were the same as those observed in abnormal Chernoff faces. The unreliable data in cluster 1 are probably related to the operator's inexperience with a CASA system. These findings could be used to objectively evaluate the skill level of an operator of a CASA system. This may be particularly useful in the quality control of semen analysis using CASA systems.
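A compact sketch of the analysis pipeline described above, standardization followed by PCA and hierarchical clustering, using synthetic stand-in data; Chernoff faces are omitted, and the number of parameters, measurements, and clusters are illustrative assumptions.

```python
# Sketch of the pipeline described above: standardize CASA motility
# parameters, reduce with PCA, then hierarchically cluster to isolate
# suspect measurements. The data here are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
normal = rng.normal(0.0, 1.0, size=(60, 6))    # 6 hypothetical motility parameters
suspect = rng.normal(4.0, 1.0, size=(12, 6))   # measurements that look "off"
X = StandardScaler().fit_transform(np.vstack([normal, suspect]))

scores = PCA(n_components=2).fit_transform(X)                       # first two principal components
clusters = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print(np.bincount(clusters))                                        # suspect measurements group together
```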
Dipstick measurements of urine specific gravity are unreliable.
de Buys Roessingh, A S; Drukker, A; Guignard, J P
2001-08-01
To evaluate the reliability of dipstick measurements of urine specific gravity (U-SG), fresh urine specimens were tested for urine pH and osmolality (U-pH, U-Osm) by a pH meter and an osmometer, and for U-SG by three different methods (refractometry, automatic readout of a dipstick (Clinitek-50), and visual change of colour of the dipstick). The correlations between the visual U-SG dipstick measurements and U-SG determined by a refractometer, and the comparison of Clinitek-50 dipstick U-SG measurements with U-Osm, were less than optimal, showing very wide scatter of values. Only the U-SG refractometer values and U-Osm had a good linear correlation. The tested dipstick was unreliable for the bedside determination of U-SG, even after correction for U-pH, as recommended by the manufacturer. Among the bedside determinations, only refractometry gives reliable U-SG results. Dipstick U-SG measurements should be abandoned.
Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of the software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems to be served must be extended beyond the originally intended class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper also remarks on issues related to the improvement of coverage and the optimization of the redundancy level.
Rosenbaum, Janet E
2009-06-01
Surveys are the primary information source about adolescents' health risk behaviors, but adolescents may not report their behaviors accurately. Survey data are used for formulating adolescent health policy, and inaccurate data can cause mistakes in policy creation and evaluation. The author used test-retest data from the Youth Risk Behavior Survey (United States, 2000) to compare adolescents' responses to 72 questions about their risk behaviors at a 2-week interval. Each question was evaluated for prevalence change and 3 measures of unreliability: inconsistency (retraction and apparent initiation), agreement measured as tetrachoric correlation, and estimated error due to inconsistency assessed with a Bayesian method. Results showed that adolescents report their sex, drug, alcohol, and tobacco histories more consistently than other risk behaviors in a 2-week period, opposite their tendency over longer intervals. Compared with other Youth Risk Behavior Survey topics, most sex, drug, alcohol, and tobacco items had stable prevalence estimates, higher average agreement, and lower estimated measurement error. Adolescents reported their weight control behaviors more unreliably than other behaviors, particularly problematic because of the increased investment in adolescent obesity research and reliance on annual surveys for surveillance and policy evaluation. Most weight control items had unstable prevalence estimates, lower average agreement, and greater estimated measurement error than other topics.
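For a yes/no risk-behavior item, the simplest of the consistency measures mentioned above can be computed as in the sketch below, using synthetic responses; note that the study itself used tetrachoric correlations and a Bayesian error estimate, which require more specialized estimation than the plain retraction, initiation, and kappa statistics shown here.

```python
# Sketch of simple test-retest consistency measures for a binary item:
# retraction ("yes" then "no"), apparent initiation ("no" then "yes"),
# and chance-corrected agreement. Responses are synthetic placeholders.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
time1 = rng.binomial(1, 0.30, 500)       # hypothetical wave-1 answers
flip = rng.random(500) < 0.10            # assume 10% answer inconsistently at retest
time2 = np.where(flip, 1 - time1, time1)

retraction = np.mean((time1 == 1) & (time2 == 0))
initiation = np.mean((time1 == 0) & (time2 == 1))
print(f"retraction={retraction:.3f}  initiation={initiation:.3f}  "
      f"kappa={cohen_kappa_score(time1, time2):.2f}")
```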
Hyun, Yil Sik; Han, Dong Soo; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo
2013-05-01
Accurate diagnosis of gastric intestinal metaplasia (IM) is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing it. The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases imaged with HD endoscopy were sent to five experienced and five inexperienced endoscopists for diagnosis of gastric IM through visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement was "poor" among the experienced endoscopists (κ = 0.38) and also "poor" among the inexperienced endoscopists (κ = 0.33). The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable in the diagnosis of IM, all areas suspicious for gastric IM should be considered for biopsy. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy for gastric IM. PMID:23678267
Social control of unreliable signals of strength in male but not female crayfish, Cherax destructor.
Walter, Gregory M; van Uitregt, Vincent O; Wilson, Robbie S
2011-10-01
The maintenance of unreliable signals within animal populations remains a highly controversial subject in studies of animal communication. Crustaceans are an ideal group for studying unreliable signals of strength because their chela muscles are cryptically concealed beneath an exoskeleton, making it difficult for competitors to visually assess an opponent's strength. In this study, we examined the importance of social avenues for mediating the possible advantages gained by unreliable signals of strength in crustaceans. To do this, we investigated the factors that determine social dominance and the relative importance of signalling and fighting during aggressive encounters in male and female freshwater crayfish, Cherax destructor. Like other species of crayfish, we expected substantial variation in weapon force for a given weapon size, making the assessment of actual fighting ability of an opponent difficult from signalling alone. In addition, we expected fighting would be used to ensure that individuals that are weak for their signal (i.e. chela) size would not achieve higher than expected dominance. For both male and female C. destructor, we found large variation in the actual force of their chela for any given weapon size, indicating that it is difficult for competitors to accurately assess an opponent's force on signal size alone. For males, these unreliable signals of strength were controlled socially through increased levels of fighting and a decreased reliance on signalling, thus directly limiting the benefits accrued to individuals employing high-quality signals (large chelae) with only low resource holding potential. However, in contrast to our predictions, we found that females primarily relied on signalling to settle disputes, resulting in unreliable signals of strength being routinely used to establish dominance. The reliance by females on unreliable signals to determine dominance highlights our poor current understanding of the prevalence and distribution of dishonesty in animal communication.
[Features of control of electromagnetic radiation emitted by personal computers].
Pal'tsev, Iu P; Buzov, A L; Kol'chugin, Iu I
1996-01-01
Measurements of the electromagnetic radiation emitted by personal computers show that the main sources are the PC units emitting at certain frequencies. Using wide-band detectors that measure field intensity to assess PC electromagnetic radiation gives unreliable results; more precise measurements with selective devices are required. It is therefore expedient to introduce the term "spectral density of field intensity" and a maximum allowable level for it. In this approach, the frequency spectrum of PC electromagnetic radiation is divided into four ranges; in one of them the field intensity is calculated for each harmonic frequency, while in the others the spectral density of field intensity is assessed.
ERIC Educational Resources Information Center
McGill, D. A.; van der Vleuten, C. P. M.; Clarke, M. J.
2011-01-01
Even though rater-based judgements of clinical competence are widely used, they are context sensitive and vary between individuals and institutions. To deal adequately with rater-judgement unreliability, evaluating the reliability of workplace rater-based assessments in the local context is essential. Using such an approach, the primary intention…
Friston, Karl J.; Bastos, André M.; Oswal, Ashwini; van Wijk, Bernadette; Richter, Craig; Litvak, Vladimir
2014-01-01
This technical paper offers a critical re-evaluation of (spectral) Granger causality measures in the analysis of biological timeseries. Using realistic (neural mass) models of coupled neuronal dynamics, we evaluate the robustness of parametric and nonparametric Granger causality. Starting from a broad class of generative (state-space) models of neuronal dynamics, we show how their Volterra kernels prescribe the second-order statistics of their response to random fluctuations; characterised in terms of cross-spectral density, cross-covariance, autoregressive coefficients and directed transfer functions. These quantities in turn specify Granger causality — providing a direct (analytic) link between the parameters of a generative model and the expected Granger causality. We use this link to show that Granger causality measures based upon autoregressive models can become unreliable when the underlying dynamics is dominated by slow (unstable) modes — as quantified by the principal Lyapunov exponent. However, nonparametric measures based on causal spectral factors are robust to dynamical instability. We then demonstrate how both parametric and nonparametric spectral causality measures can become unreliable in the presence of measurement noise. Finally, we show that this problem can be finessed by deriving spectral causality measures from Volterra kernels, estimated using dynamic causal modelling. PMID:25003817
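As a concrete illustration of the parametric (autoregressive-model-based) side of this comparison, the sketch below fits a bivariate autoregressive model and runs standard Granger causality tests on simulated data. It is a minimal, generic example using statsmodels, not the neural-mass/state-space machinery or the spectral measures analysed in the paper, and the lag order, coupling strength, and noise levels are arbitrary choices.

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 2000
    x = np.zeros(n)
    y = np.zeros(n)
    for t in range(1, n):
        # x drives y with a one-step delay; both receive independent noise
        x[t] = 0.6 * x[t - 1] + rng.normal()
        y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

    # Test whether the second column (x) Granger-causes the first column (y)
    data = np.column_stack([y, x])
    results = grangercausalitytests(data, maxlag=2)

When the autoregressive coefficients approach instability or when measurement noise is added to both series, this kind of test becomes much less trustworthy, which is the failure mode the paper quantifies via the principal Lyapunov exponent and measurement-noise analyses.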
Direct costs of unintended pregnancy in the Russian federation.
Lowin, Julia; Jarrett, James; Dimova, Maria; Ignateva, Victoria; Omelyanovsky, Vitaly; Filonenko, Anna
2015-02-01
In 2010, almost every third pregnancy in Russia was terminated, indicating that unintended pregnancy (UP) is a public health problem. The aim of this study was to estimate the direct cost of UP to the healthcare system in Russia and the proportion attributable to using unreliable contraception. A cost model was built, adopting a generic payer perspective with a 1-year time horizon. The analysis cohort was defined as women of childbearing age between 18 and 44 years actively seeking to avoid pregnancy. Model inputs were derived from published sources or government statistics with a 2012 cost base. To estimate the number of UPs attributable to unreliable methods, the model combined annual typical use failure rates and age-adjusted utilization for each contraceptive method. Published survey data were used to adjust the total cost of UP by the number of UPs that were mistimed rather than unwanted. Scenario analysis considered alternative allocations of methods to the reliable and unreliable categories and estimated the burden of UP in the target sub-group of women aged 18-29 years. The model estimated 1,646,799 UPs in the analysis cohort (women aged 18-44 years) with an associated annual cost of US$783 million. The model estimated 1,019,371 UPs in the target group of 18-29 years, of which 88% were attributable to unreliable contraception. The total cost of UPs in the target group was estimated at approximately US$498 million, of which US$441 million could be considered attributable to the use of unreliable methods. The cost of UP attributable to use of unreliable contraception in Russia is substantial. Policies encouraging use of reliable contraceptive methods could reduce the burden of UP.
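The structure of such a cost model can be sketched in a few lines: for each contraceptive method, multiply the number of users by the method's annual typical-use failure rate to estimate unintended pregnancies, then apply an average cost per pregnancy. The figures below are placeholders, not the study's inputs, and the single blended cost per UP is a simplification of the outcome-specific costs a real payer model would use.

    # Minimal sketch of a contraceptive-failure cost model (illustrative numbers only).
    methods = {
        # method: (number of users, annual typical-use failure rate)
        "none_or_unreliable": (5_000_000, 0.22),
        "pill": (3_000_000, 0.07),
        "IUD": (2_000_000, 0.01),
    }
    cost_per_up = 475.0  # blended direct cost per unintended pregnancy, US$ (placeholder)

    total_ups = sum(users * rate for users, rate in methods.values())
    total_cost = total_ups * cost_per_up
    print(f"estimated UPs: {total_ups:,.0f}, direct cost: ${total_cost / 1e6:,.0f} million")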
Extrinsic and Intrinsic Motivation at 30: Unresolved Scientific Issues
ERIC Educational Resources Information Center
Reiss, Steven
2005-01-01
The undermining effect of extrinsic reward on intrinsic motivation remains unproven. The key unresolved issues are construct invalidity (all four definitions are unproved and two are illogical); measurement unreliability (the free-choice measure requires unreliable, subjective judgments to infer intrinsic motivation); inadequate experimental…
Application of Methods of Numerical Analysis to Physical and Engineering Data.
1980-10-15
directed algorithm would seem to be called for. However, 1(0) is itself a random process, making its gradient too unreliable for such a sensitive algorithm...radiation energy on the detector. Active laser systems, on the other hand, have created now the possibility for extremely narrow pass band systems...emitted by the earth and its atmosphere. The broad spectral range was selected so that the field of view of the detector could be narrowed to obtain
Evaluation of the Biolog automated microbial identification system
NASA Technical Reports Server (NTRS)
Klingler, J. M.; Stowe, R. P.; Obenhuber, D. C.; Groves, T. O.; Mishra, S. K.; Pierson, D. L.
1992-01-01
Biolog's identification system was used to identify 39 American Type Culture Collection reference taxa and 45 gram-negative isolates from water samples. Of the reference strains, 98% were identified to genus level and 76% to species level within 4 to 24 h. Identification of some authentic strains of Enterobacter, Klebsiella, and Serratia was unreliable. A total of 93% of the water isolates were identified.
ERIC Educational Resources Information Center
Usta, Mehmet Emin
2018-01-01
From the earliest periods in which management was accepted as a science, the main goal of inspection has been seen as control. The classical management approach treated staff as people who need strict inspection by experts in order to perform better, since they were perceived as unreliable and irresponsible in their duties. Later on, expertise areas related to…
Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S
2011-01-01
The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability were significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.
Dipstick measurements of urine specific gravity are unreliable
Roessingh, A; Drukker, A; Guignard, J
2001-01-01
AIM—To evaluate the reliability of dipstick measurements of urine specific gravity (U-SG). METHODS—Fresh urine specimens were tested for urine pH and osmolality (U-pH, U-Osm) by a pH meter and an osmometer, and for U-SG by three different methods (refractometry, automatic readout of a dipstick (Clinitek-50), and (visual) change of colour of the dipstick). RESULTS—The correlations between the visual U-SG dipstick measurements and U-SG determined by a refractometer and the comparison of Clinitek®-50 dipstick U-SG measurements with U-Osm were less than optimal, showing very wide scatter of values. Only the U-SG refractometer values and U-Osm had a good linear correlation. The tested dipstick was unreliable for the bedside determination of U-SG, even after correction for U-pH, as recommended by the manufacturer. CONCLUSIONS—Among the bedside determinations, only refractometry gives reliable U-SG results. Dipstick U-SG measurements should be abandoned. PMID:11466191
Misleading prioritizations from modelling range shifts under climate change
Sofaer, Helen R.; Jarnevich, Catherine S.; Flather, Curtis H.
2018-01-01
Aim: Conservation planning requires the prioritization of a subset of taxa and geographical locations to focus monitoring and management efforts. Integration of the threats and opportunities posed by climate change often relies on predictions from species distribution models, particularly for assessments of vulnerability or invasion risk for multiple taxa. We evaluated whether species distribution models could reliably rank changes in species range size under climate and land use change. Location: Conterminous U.S.A. Time period: 1977–2014. Major taxa studied: Passerine birds. Methods: We estimated ensembles of species distribution models based on historical North American Breeding Bird Survey occurrences for 190 songbirds, and generated predictions to recent years given c. 35 years of observed land use and climate change. We evaluated model predictions using standard metrics of discrimination performance and a more detailed assessment of the ability of models to rank species vulnerability to climate change based on predicted range loss, range gain, and overall change in range size. Results: Species distribution models yielded unreliable and misleading assessments of relative vulnerability to climate and land use change. Models could not accurately predict range expansion or contraction, and therefore failed to anticipate patterns of range change among species. These failures occurred despite excellent overall discrimination ability and transferability to the validation time period, which reflected strong performance at the majority of locations that were either always or never occupied by each species. Main conclusions: Models failed for the questions and at the locations of greatest interest to conservation and management. This highlights potential pitfalls of multi-taxa impact assessments under global change; in our case, models provided misleading rankings of the most impacted species, and spatial information about range changes was not credible. As modelling methods and frameworks continue to be refined, performance assessments and validation efforts should focus on the measures of risk and vulnerability useful for decision-making.
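The gap described above, between good discrimination and poor ranking of range change, can be made concrete with two different scores computed from the same predictions. The sketch below contrasts an occurrence-level AUC with a rank correlation between predicted and observed changes in range size across species; it is a generic illustration with simulated numbers, not the ensemble workflow or data used in the study.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)

    # Occurrence-level discrimination: observed presence/absence vs. predicted probability.
    obs_occurrence = rng.integers(0, 2, size=1000)
    pred_probability = np.clip(obs_occurrence * 0.6 + rng.normal(0.2, 0.2, size=1000), 0, 1)
    print("AUC:", roc_auc_score(obs_occurrence, pred_probability))

    # Species-level vulnerability ranking: observed vs. predicted change in range size (190 species).
    obs_range_change = rng.normal(0, 1, size=190)
    pred_range_change = 0.1 * obs_range_change + rng.normal(0, 1, size=190)  # weakly related
    rho, p = spearmanr(obs_range_change, pred_range_change)
    print("Spearman rho for range-change ranking:", rho)

A model can score well on the first metric while the second, which is closer to the prioritization question, stays near zero.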
How Do Households Respond to Unreliable Water Supplies? A Systematic Review.
Majuru, Batsirai; Suhrcke, Marc; Hunter, Paul R
2016-12-09
Although the Millennium Development Goal (MDG) target for drinking water was met, in many developing countries water supplies are unreliable. This paper reviews how households in developing countries cope with unreliable water supplies, including coping costs, the distribution of coping costs across socio-economic groups, and effectiveness of coping strategies in meeting household water needs. Structured searches were conducted in peer-reviewed and grey literature in electronic databases and search engines, and 28 studies were selected for review, out of 1643 potentially relevant references. Studies were included if they reported on strategies to cope with unreliable household water supplies and were based on empirical research in developing countries. Common coping strategies include drilling wells, storing water, and collecting water from alternative sources. The choice of coping strategies is influenced by income, level of education, land tenure and extent of unreliability. The findings of this review highlight that low-income households bear a disproportionate coping burden, as they often engage in coping strategies such as collecting water from alternative sources, which is labour- and time-intensive, and yields smaller quantities of water. Such alternative sources may be of lower water quality, and pose health risks. In the absence of dramatic improvements in the reliability of water supplies, a critical avenue of enquiry is which coping strategies are effective and can be readily adopted by low-income households.
False Beliefs in Unreliable Knowledge Networks
NASA Astrophysics Data System (ADS)
Ioannidis, Evangelos; Varsakelis, Nikos; Antoniou, Ioannis
2017-03-01
The aims of this work are: (1) to extend knowledge dynamics analysis in order to assess the influence of false beliefs and unreliable communication channels, (2) to investigate the impact of selection rule-policy for knowledge acquisition, (3) to investigate the impact of targeted link attacks ("breaks" or "infections") of certain "healthy" communication channels. We examine the knowledge dynamics analytically, as well as by simulations on both artificial and real organizational knowledge networks. The main findings are: (1) False beliefs have no significant influence on knowledge dynamics, while unreliable communication channels result in non-monotonic knowledge updates ("wild" knowledge fluctuations may appear) and in significant elongation of knowledge attainment. Moreover, false beliefs may emerge during knowledge evolution, due to the presence of unreliable communication channels, even if they were not present initially, (2) Changing the selection rule-policy, by raising the awareness of agents to avoid the selection of unreliable communication channels, results in monotonic knowledge upgrade and in faster knowledge attainment, (3) "Infecting" links is more harmful than "breaking" links, due to "wild" knowledge fluctuations and due to the elongation of knowledge attainment. Moreover, attacking even a "small" percentage of links (≤5%) with high knowledge transfer may result in dramatic elongation of knowledge attainment (over 100%), as well as in delays of the onset of knowledge attainment. Hence, links of high knowledge transfer should be protected, because in Information Warfare and Disinformation, these links are the "best targets".
Application of Gaussian Process Modeling to Analysis of Functional Unreliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Youngblood
2014-06-01
This paper applies Gaussian Process (GP) modeling to analysis of the functional unreliability of a “passive system.” GPs have been used widely in many ways [1]. The present application uses a GP for emulation of a system simulation code. Such an emulator can be applied in several distinct ways, discussed below. All applications illustrated in this paper have precedents in the literature; the present paper is an application of GP technology to a problem that was originally analyzed [2] using neural networks (NN), and later [3, 4] by a method called “Alternating Conditional Expectations” (ACE). This exercise enables a multifaceted comparison of both the processes and the results. Given knowledge of the range of possible values of key system variables, one could, in principle, quantify functional unreliability by sampling from their joint probability distribution, and performing a system simulation for each sample to determine whether the function succeeded for that particular setting of the variables. Using previously available system simulation codes, such an approach is generally impractical for a plant-scale problem. It has long been recognized, however, that a well-trained code emulator or surrogate could be used in a sampling process to quantify certain performance metrics, even for plant-scale problems. “Response surfaces” were used for this many years ago. But response surfaces are at their best for smoothly varying functions; in regions of parameter space where key system performance metrics may behave in complex ways, or even exhibit discontinuities, response surfaces are not the best available tool. This consideration was one of several that drove the work in [2]. In the present paper, (1) the original quantification of functional unreliability using NN [2], and later ACE [3], is reprised using GP; (2) additional information provided by the GP about uncertainty in the limit surface, generally unavailable in other representations, is discussed; (3) a simple forensic exercise is performed, analogous to the inverse problem of code calibration, but with an accident management spin: given an observation about containment pressure, what can we say about the system variables? References 1. For an introduction to GPs, see (for example) Gaussian Processes for Machine Learning, C. E. Rasmussen and C. K. I. Williams (MIT, 2006). 2. Reliability Quantification of Advanced Reactor Passive Safety Systems, J. J. Vandenkieboom, PhD Thesis (University of Michigan, 1996). 3. Z. Cui, J. C. Lee, J. J. Vandenkieboom, and R. W. Youngblood, “Unreliability Quantification of a Containment Cooling System through ACE and ANN Algorithms,” Trans. Am. Nucl. Soc. 85, 178 (2001). 4. Risk and Safety Analysis of Nuclear Systems, J. C. Lee and N. J. McCormick (Wiley, 2011). See especially §11.2.4.
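A minimal version of the emulator-based workflow described above can be sketched with scikit-learn: train a Gaussian Process on a modest number of high-fidelity simulations, then sample the input distribution and count how often the emulated performance metric crosses a failure threshold. The toy "simulator", the threshold, and the kernel settings below are all assumptions for illustration, not the containment-cooling model or the GP configuration used in the paper.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(42)

    def simulator(x):
        """Stand-in for an expensive system code: a performance metric of two inputs."""
        return 3.0 + 1.5 * x[:, 0] ** 2 + 0.8 * np.sin(3 * x[:, 1])

    # A small design of computer experiments (the expensive part).
    X_train = rng.uniform(-1, 1, size=(40, 2))
    y_train = simulator(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                  normalize_y=True).fit(X_train, y_train)

    # Cheap Monte Carlo on the emulator: probability that the metric exceeds a limit.
    X_mc = rng.uniform(-1, 1, size=(100_000, 2))
    mean, std = gp.predict(X_mc, return_std=True)
    limit = 4.5
    p_fail = np.mean(mean > limit)
    print(f"estimated functional unreliability: {p_fail:.4f}")
    # The predictive std flags regions near the limit surface where the emulator is uncertain,
    # which is the extra information a GP provides over a plain response surface.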
Project CHECO Southeast Asia Report. Pave Mace/Combat Rendezvous
1972-12-26
deficiencies in the APQ-133 radar, the LWL was generally satisfied with the results and certified the system for use in combat provided all airborne...offset 5 35 firing confirmed a deficiency noted in earlier test results: the AWG-13 FCC was unreliable at long ranges. On 2 May the 14th SOW informed...Lockbourne, SEA, and Puerto Rico tests of 1969-1970. These tests had shown a number of deficiencies in the SST-201X miniponder, the AWG-13 Fire Control
NASA Astrophysics Data System (ADS)
Baksi, Ajoy K.
2018-04-01
40Ar/39Ar step heating analyses were carried out on seven rocks (five basalts, an andesite and a rhyolite) from the southern Paraná Province (28°S-30°S); they yield plateau/isochron ages of 135-134 Ma, in good agreement with published step heating data on rocks from the same area. Critical review of laser spot isochron ages for rocks from the Province, ranging from 140 to 130 Ma, shows them to be unreliable estimates of crystallization ages, as the rocks were substantially altered; step heating results on three of these rocks thought to yield good plateau ages are shown to be incorrect, as a result of a technicality in the dating procedures followed. U-Pb ages on zircon and baddeleyite separated from a variety of rock types (30°S-23°S) fall in the range 135 to 134 Ma. All reliable 40Ar/39Ar and U-Pb ages indicate volcanism was sharply focused, initiated at 135 Ma, and 1 Myr in duration; no variation of age with either latitude or longitude is noted. Scrutiny of published 40Ar/39Ar ages on the Florianopolis dykes shows they cannot be used as reliable crystallization ages. U-Pb work shows that this dyke swarm was formed coevally with the main part of the Paraná Province. Most of the published 40Ar/39Ar ages on the Ponta Grossa dyke swarm are unreliable; a few ages appear reliable and suggest the magmatic event in this area may have postdated the main Paraná pulse by 1-2 Myr. A single 40Ar/39Ar age of 135 Ma from a high-Nb basalt in the southernmost part (34°S) of the Paraná highlights the need for further radiometric work on other areas of this flood basalt province. The Paraná Province postdates the Jurassic-Cretaceous boundary by 10 Myr.
Evaluation readiness: improved evaluation planning using a data inventory framework.
Cohen, A B; Hall, K C; Cohodes, D R
1985-01-01
Factors intrinsic to many programs, such as ambiguously stated objectives, inadequately defined performance measures, and incomplete or unreliable databases, often conspire to limit the evaluability of these programs. Current evaluation planning approaches are somewhat constrained in their ability to overcome these obstacles and to achieve full preparedness for evaluation. In this paper, the concept of evaluation readiness is introduced as a complement to other evaluation planning approaches, most notably that of evaluability assessment. The basic products of evaluation readiness--the formal program definition and the data inventory framework--are described, along with a guide for assuring more timely and appropriate evaluation response capability to support the decision making needs of program managers. The utility of evaluation readiness for program planning, as well as for effective management, is also discussed.
The Effects of Source Unreliability on Prior and Future Word Learning
ERIC Educational Resources Information Center
Faught, Gayle G.; Leslie, Alicia D.; Scofield, Jason
2015-01-01
Young children regularly learn words from interactions with other speakers, though not all speakers are reliable informants. Interestingly, children will reverse to trusting a reliable speaker when a previously endorsed speaker proves unreliable. When later asked to identify the referent of a novel word, children who reverse trust are less willing…
Evaluation of modified boehm titration methods for use with biochars.
Fidel, Rivka B; Laird, David A; Thompson, Michael L
2013-11-01
The Boehm titration, originally developed to quantify organic functional groups of carbon blacks and activated carbons in discrete pK ranges, has received growing attention for analyzing biochar. However, properties that distinguish biochar from carbon black and activated carbon, including greater carbon solubility and higher ash content, may render the original Boehm titration method unreliable for use with biochars. Here we use seven biochars and one reference carbon black to evaluate three Boehm titration methods that use (i) acidification followed by sparging (sparge method), (ii) centrifugation after treatment with BaCl2 (barium method), and (iii) a solid-phase extraction cartridge followed by acidification and sparging (cartridge method) to remove carbonates and dissolved organic compounds (DOC) from the Boehm extracts before titration. Our results for the various combinations of Boehm reactants and methods indicate that no one method was free of bias for all three Boehm reactants and that the cartridge method showed evidence of bias for all pK ranges. By process of elimination, we found a combination of the sparge method for quantifying functional groups in the lowest pK range (∼5 to 6.4) and the barium method for quantifying functional groups in the higher pK ranges (∼6.4 to 10.3 and ∼10.3 to 13) to be free of evidence for bias. We caution, however, that further testing is needed and that all Boehm titration results for biochars should be considered suspect unless efforts were undertaken to remove ash and prevent interference from DOC. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Method for removal of random noise in eddy-current testing system
Levy, Arthur J.
1995-01-01
Eddy-current response voltages, generated during inspection of metallic structures for anomalies, are often replete with noise. Therefore, analysis of the inspection data and results is difficult or near impossible, resulting in inconsistent or unreliable evaluation of the structure. This invention processes the eddy-current response voltage, removing the effect of random noise, to allow proper identification of anomalies within and associated with the structure.
NASA Astrophysics Data System (ADS)
Bell, Peter M.
In a long-awaited report (‘Assessment of Technologies for Determining Cancer Risks From the Environment’), the U.S. Office of Technology Assessment (OTA) has evaluated the role of environmental factors in cancer. Environment is interpreted broadly as encompassing anything that interacts with humans, including the natural environment, food, radiation, the workplace, etc. Geologic factors range from geographic location to radiation and specific minerals. The report, however, is based on an inadequate data base in most instances, and its major recommendations are related to the establishment of a national cancer registry to record cancer statistics, as is done for many other diseases. At present, hard statistics are lacking to establish cause-effect relationships between most environmental factors and cancer. Of particular interest, but unfortunately based on unreliable data, are the effects of mineral substances such as ‘asbestos.’ USGS mineralogist Malcolm Ross will review asbestos and its effects on human health in the forthcoming Mineralogical Society of America's Short Course on the Amphiboles (Reviews in Mineralogy, 9, in press, 1981).
Coding and transmission of subband coded images on the Internet
NASA Astrophysics Data System (ADS)
Wah, Benjamin W.; Su, Xiao
2001-09-01
Subband-coded images can be transmitted over the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are currently delivered over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of single-description coded (SDC) images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
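To make the delivery trade-off concrete, the sketch below sends two independently decodable "descriptions" of an image as separate UDP datagrams with sequence headers, so a receiver can reconstruct a degraded image from whichever descriptions arrive. It is a generic illustration of multi-description delivery over UDP, not the ORB-ST design or packetization used in the paper; the host, port, and payloads are placeholders.

    import socket
    import struct

    # Placeholder descriptions: in MDC, each would be independently decodable image data.
    descriptions = [b"description-0-bytes", b"description-1-bytes"]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # unreliable datagram service
    dest = ("127.0.0.1", 9999)  # placeholder receiver address

    for seq, payload in enumerate(descriptions):
        # A 4-byte sequence number lets the receiver detect which descriptions were lost.
        sock.sendto(struct.pack("!I", seq) + payload, dest)
    sock.close()
    # A receiver decodes whatever arrives; missing descriptions lower quality but add no delay,
    # whereas TCP retransmits until everything arrives, trading delay for full quality.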
Xu, Guoying; Chen, Wei; Deng, Shiming; Zhang, Xiaosong; Zhao, Sainan
2015-01-01
Application of solar collectors for hot water supply, space heating, and cooling plays a significant role in reducing building energy consumption. For conventional solar collectors, solar radiation is absorbed by spectral selective coating on the collectors’ tube/plate wall. The poor durability of the coating can lead to an increased manufacturing cost and unreliability for a solar collector operated at a higher temperature. Therefore, a novel nanofluid-based direct absorption solar collector (NDASC) employing uncoated collector tubes has been proposed, and its operating characteristics for medium-temperature solar collection were theoretically and experimentally studied in this paper. CuO/oil nanofluid was prepared and used as working fluid of the NDASC. The heat-transfer mechanism of the NDASC with parabolic trough concentrator was theoretically evaluated and compared with a conventional indirect absorption solar collector (IASC). The theoretical analysis results suggested that the fluid’s temperature distribution in the NDASC was much more uniform than that in the IASC, and an enhanced collection efficiency could be achieved for the NDASC operated within a preferred working temperature range. To demonstrate the feasibility of the proposed NDASC, experimental performances of an NDASC and an IASC with the same parabolic trough concentrator were furthermore evaluated and comparatively discussed. PMID:28347112
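For reference when comparing the two collector types discussed above, instantaneous thermal efficiency is commonly defined as the useful heat gain divided by the solar power incident on the collector aperture; this is the standard textbook definition, stated here for context rather than quoted from the study:

    \eta = \frac{\dot{m}\, c_p \,(T_{out} - T_{in})}{G\, A_a}

where \dot{m} is the working-fluid mass flow rate, c_p its specific heat, T_in and T_out the inlet and outlet temperatures, G the solar irradiance on the aperture plane, and A_a the aperture area.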
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
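The notion of a "reliable" surrogate sample can be illustrated in a few lines: if the surrogate prediction plus or minus its error estimate stays on one side of the event threshold, the error cannot flip the event indicator for that sample, so only the remaining samples need the high-fidelity model. The functions and the constant error bound below are stand-ins for illustration, not the adjoint-based error estimates or models from the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    def high_fidelity(x):
        return np.sin(3 * x) + 0.1 * x        # stand-in for an expensive model

    def surrogate(x):
        return np.sin(3 * x)                  # cheap approximation

    def error_bound(x):
        return np.full_like(x, 0.12)          # stand-in for a per-sample error estimate

    threshold = 0.5                           # event: output > threshold
    x = rng.uniform(0, np.pi, size=10_000)

    s = surrogate(x)
    e = error_bound(x)
    reliable = np.abs(s - threshold) > e      # error cannot change the event classification

    indicator = np.empty_like(s, dtype=bool)
    indicator[reliable] = s[reliable] > threshold
    # Only the unreliable samples (near the limit state) need high-fidelity evaluations.
    indicator[~reliable] = high_fidelity(x[~reliable]) > threshold

    print("P(event) =", indicator.mean(), "| high-fidelity calls:", int((~reliable).sum()))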
A high performance totally ordered multicast protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Whetten, Brian; Kaplan, Simon
1995-01-01
This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10's on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
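For readers unfamiliar with what "totally ordered delivery" requires, the sketch below shows the simplest possible scheme: a sequencer stamps each message with a global sequence number and receivers deliver messages strictly in stamp order, buffering gaps. This is a generic illustration of the ordering property only; RMP itself is fully and symmetrically distributed, as the abstract notes, rather than relying on a fixed sequencer, and the classes here are hypothetical.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class Sequencer:
        """Hypothetical central sequencer: assigns a global order to multicast messages."""
        next_seq: int = 0
        def stamp(self, msg: str) -> Tuple[int, str]:
            seq, self.next_seq = self.next_seq, self.next_seq + 1
            return (seq, msg)

    @dataclass
    class Receiver:
        """Delivers messages in sequence order, buffering out-of-order arrivals."""
        expected: int = 0
        buffer: Dict[int, str] = field(default_factory=dict)
        delivered: List[str] = field(default_factory=list)
        def on_packet(self, seq: int, msg: str) -> None:
            self.buffer[seq] = msg
            while self.expected in self.buffer:          # deliver contiguously, in total order
                self.delivered.append(self.buffer.pop(self.expected))
                self.expected += 1

    seqr = Sequencer()
    packets = [seqr.stamp(m) for m in ["a", "b", "c"]]
    r = Receiver()
    for p in reversed(packets):                          # arrivals reordered by the network
        r.on_packet(*p)
    print(r.delivered)                                   # ['a', 'b', 'c'] at every receiver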
ERIC Educational Resources Information Center
Steiner, Peter M.; Cook, Thomas D.; Shadish, William R.
2011-01-01
The effect of unreliability of measurement on propensity score (PS) adjusted treatment effects has not been previously studied. The authors report on a study simulating different degrees of unreliability in the multiple covariates that were used to estimate the PS. The simulation uses the same data as two prior studies. Shadish, Clark, and Steiner…
Navarro, Jordan; Yousfi, Elsa; Deniel, Jonathan; Jallais, Christophe; Bueno, Mercedes; Fort, Alexandra
2016-12-01
In the past, lane departure warnings (LDWs) were demonstrated to improve driving behaviours during lane departures but little is known about the effects of unreliable warnings. This experiment focused on the influence of false warnings alone or in combination with missed warnings and warning onset on assistance effectiveness and acceptance. Two assistance unreliability levels (33 and 17%) and two warning onsets (partial and full lane departure) were manipulated in order to investigate interaction. Results showed that assistance, regardless of unreliability levels and warning onsets, improved driving behaviours during lane departure episodes and outside of these episodes by favouring better lane-keeping performances. Full lane departure and highly unreliable warnings, however, reduced assistance efficiency. Drivers' assistance acceptance was better for the most reliable warnings and for the subsequent warnings. The data indicate that imperfect LDWs (false warnings or false and missed warnings) further improve driving behaviours compared to no assistance. Practitioner Summary: This study revealed that imperfect lane departure warnings are able to significantly improve driving performances and that warning onset is a key element for assistance effectiveness and acceptance. The conclusion may be of particular interest for lane departure warning designers.
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-07-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sperling, Joshua; Fisher, Stephen; Reiner, Mark B.
The term 'leapfrogging' has been applied to cities and nations that have adopted a new form of infrastructure by bypassing the traditional progression of development, e.g., from no phones to cell phones - bypassing landlines altogether. However, leapfrogging from unreliable infrastructure systems to 'smart' cities is too large a jump, resulting in unsustainable and unhealthy infrastructure systems. In the Global South, a baseline of unreliable infrastructure is a prevalent problem. The push for sustainable and 'smart' [re]development tends to ignore many of those already living with failing, unreliable infrastructure. Without awareness of baseline conditions, uninformed projects run the risk of returning conditions to the status quo, keeping many urban populations below targets of the United Nations' Sustainable Development Goals. A key part of understanding the baseline is to identify how citizens have long learned to adjust their expectations of basic services. To compensate for poor infrastructure, most residents in the Global South invest in remedial secondary infrastructure (RSI) at the household and business levels. The authors explore three key 'smart' city transformations that address RSI within a hierarchical planning pyramid known as the comprehensive resilient and reliable infrastructure systems (CRISP) planning framework.
Synaptic unreliability facilitates information transmission in balanced cortical populations
NASA Astrophysics Data System (ADS)
Gatys, Leon A.; Ecker, Alexander S.; Tchumatchenko, Tatjana; Bethge, Matthias
2015-06-01
Synaptic unreliability is one of the major sources of biophysical noise in the brain. In the context of neural information processing, it is a central question how neural systems can afford this unreliability. Here we examine how synaptic noise affects signal transmission in cortical circuits, where excitation and inhibition are thought to be tightly balanced. Surprisingly, we find that in this balanced state synaptic response variability actually facilitates information transmission, rather than impairing it. In particular, the transmission of fast-varying signals benefits from synaptic noise, as it instantaneously increases the amount of information shared between presynaptic signal and postsynaptic current. Furthermore we show that the beneficial effect of noise is based on a very general mechanism which contrary to stochastic resonance does not reach an optimum at a finite noise level.
High-Rydberg Xenon Submillimeter-Wave Detector
NASA Technical Reports Server (NTRS)
Chutjian, Ara
1987-01-01
Proposed detector for infrared and submillimeter-wavelength radiation uses excited xenon atoms as Rydberg sensors instead of customary beams of sodium, potassium, or cesium. Chemically inert xenon easily stored in pressurized containers, whereas beams of dangerously reactive alkali metals must be generated in cumbersome, unreliable ovens. Xenon-based detector potential for infrared astronomy and for Earth-orbiter detection of terrestrial radiation sources. Xenon atoms excited to high energy states in two stages. Doubly excited atoms sensitive to photons in submillimeter wavelength range, further excited by these photons, then ionized and counted.
Paixão, Paulo; Gouveia, Luís F; Silva, Nuno; Morais, José A G
2017-03-01
A simulation study is presented, evaluating the performance of the f2, the model-independent multivariate statistical distance, and the f2 bootstrap methods in the ability to conclude similarity between two dissolution profiles. Different dissolution profiles, based on the Noyes-Whitney equation and ranging over theoretical f2 values between 100 and 40, were simulated. Variability was introduced in the dissolution model parameters in increasing order, ranging from a situation complying with the European guideline requirements for the use of the f2 metric to several situations where the f2 metric could not be used anymore. Results have shown that the f2 is an acceptable metric when used according to the regulatory requirements, but loses its applicability when variability increases. The multivariate statistical distance presented contradictory results in several of the simulation scenarios, which makes it an unreliable metric for dissolution profile comparisons. The bootstrap f2, although conservative in its conclusions, is a suitable alternative method. Overall, as variability increases, all of the discussed methods reveal problems that can only be solved by increasing the number of dosage form units used in the comparison, which is usually not practical or feasible. Additionally, experimental corrective measures may be undertaken in order to reduce the overall variability, particularly when it is shown that it is mainly due to the dissolution assessment instead of being intrinsic to the dosage form. Copyright © 2016. Published by Elsevier B.V.
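For context, the f2 similarity factor evaluated above is the standard regulatory metric comparing a reference and a test dissolution profile at n common time points; the definition below is the conventional one from dissolution guidance documents, not a formula reproduced from this paper:

    f_2 = 50 \cdot \log_{10}\!\left\{\left[1 + \frac{1}{n}\sum_{t=1}^{n}\left(R_t - T_t\right)^2\right]^{-1/2} \times 100\right\}

where R_t and T_t are the cumulative percentages dissolved for the reference and test products at time point t; f2 equals 100 for identical profiles, and values of 50 or above are conventionally taken to indicate similarity.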
Developing Reliable Telemedicine Platforms with Unreliable and Limited Communication Bandwidth
2017-10-01
hospital health care, the benefit of high-resolution medical data is greatly limited in battlefield or natural disaster areas, where communication to...sampling rate. For high-frequency data like waveforms, the downsampling approach could directly reduce the amount of data. Therefore, it could be used...AFRL-SA-WP-TR-2017-0019 Developing Reliable Telemedicine Platforms with Unreliable and Limited Communication Bandwidth Peter F
Estimating Economic and Logistic Utility of Connecting to Unreliable Power Grids
2016-06-17
the most unreliable host nation grids almost always have a higher availability than solar photovoltaics (PV), which for most parts of the world will...like solar, and still design a facility energy architecture that benefits from that source when available. Index Terms—facilities management, energy...Maintenance PV Photovoltaic SAIDI System Average Interruption Duration Index SAIFI System Average Interruption Frequency Index SHP Simplified Host
A methodology for spectral wave model evaluation
NASA Astrophysics Data System (ADS)
Siqueira, S. A.; Edwards, K. L.; Rogers, W. E.
2017-12-01
Model evaluation is accomplished by comparing bulk parameters (e.g., significant wave height, energy period, and mean square slope (MSS)) calculated from the model energy spectra with those calculated from buoy energy spectra. Quality control of the observed data and choice of the frequency range from which the bulk parameters are calculated are critical steps in ensuring the validity of the model-data comparison. The compared frequency range of each observation and the analogous model output must be identical, and the optimal frequency range depends in part on the reliability of the observed spectra. National Data Buoy Center 3-m discus buoy spectra are unreliable above 0.3 Hz due to a non-optimal buoy response function correction. As such, the upper end of the spectrum should not be included when comparing a model to these data. Biofouling of Waverider buoys must be detected, as it can harm the hydrodynamic response of the buoy at high frequencies, thereby rendering the upper part of the spectrum unsuitable for comparison. An important consideration is that the intentional exclusion of high frequency energy from a validation due to data quality concerns (above) can have major implications for validation exercises, especially for parameters such as the third and fourth moments of the spectrum (related to Stokes drift and MSS, respectively); final conclusions can be strongly altered. We demonstrate this by comparing outcomes with and without the exclusion, in a case where a Waverider buoy is believed to be free of biofouling. Determination of the appropriate frequency range is not limited to the observed spectra. Model evaluation involves considering whether all relevant frequencies are included. Guidance to make this decision is based on analysis of observed spectra. Two model frequency lower limits were considered. Energy in the observed spectrum below the model lower limit was calculated for each. For locations where long swell is a component of the wave climate, omitting the energy in the frequency band between the two lower limits tested can lead to an incomplete characterization of model performance. This methodology was developed to aid in selecting a comparison frequency range that does not needlessly increase computational expense and does not exclude energy to the detriment of model performance analysis.
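The bulk parameters referred to above are spectral moments of the one-dimensional frequency spectrum E(f), integrated over the chosen comparison frequency range [f1, f2], which is why the cutoff choice matters so much for higher-moment quantities. These are the standard definitions, given here as background rather than notation taken from the paper:

    m_n = \int_{f_1}^{f_2} f^{\,n} E(f)\, df, \qquad H_{m0} = 4\sqrt{m_0}, \qquad T_e = \frac{m_{-1}}{m_0}

Because higher moments weight the spectrum by f^n, quantities tied to the third and fourth moments (Stokes drift, mean square slope) are far more sensitive than H_m0 to whether the unreliable high-frequency tail is included.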
ERIC Educational Resources Information Center
Cook, Thomas D.; Steiner, Peter M.; Pohl, Steffi
2009-01-01
This study uses within-study comparisons to assess the relative importance of covariate choice, unreliability in the measurement of these covariates, and whether regression or various forms of propensity score analysis are used to analyze the outcome data. Two of the within-study comparisons are of the four-arm type, and many more are of the…
Chen, Hui-Ya; Chang, Hsiao-Yun; Ju, Yan-Ying; Tsao, Hung-Ting
2017-06-01
Rhythmic gymnasts specialise in dynamic balance under sensory conditions of numerous somatosensory, visual, and vestibular stimulations. This study investigated whether adolescent rhythmic gymnasts are superior to peers in Sensory Organisation test (SOT) performance, which quantifies the ability to maintain standing balance in six sensory conditions, and explored whether they plateaued faster during familiarisation with the SOT. Three and six sessions of SOTs were administered to 15 female rhythmic gymnasts (15.0 ± 1.8 years) and matched peers (15.1 ± 2.1 years), respectively. The gymnasts were superior to their peers in terms of fitness measures, and their performance was better in the SOT equilibrium score when visual information was unreliable. The SOT learning effects were shown in more challenging sensory conditions between Sessions 1 and 2 and were equivalent in both groups; however, over time, the gymnasts gained marginally significant better visual ability and relied less on visual sense when unreliable. In conclusion, adolescent rhythmic gymnasts have generally the same sensory organisation ability and learning rates as their peers. However, when visual information is unreliable, they have superior sensory organisation ability and learn faster to rely less on visual sense.
Maternal Impression Management in the Assessment of Childhood Depressive Symptomatology.
Lilly, Megan; Davis, Thompson E; Castagna, Peter J; Marker, Arwen; Davis, Allison B
2018-02-27
Self-report instruments are commonly used to assess childhood depressive symptoms. Historically, clinicians have relied heavily on parent-reports due to concerns about children's cognitive abilities to understand diagnostic questions. However, parents may also be unreliable reporters due to a lack of understanding of their child's symptomatology, overshadowing by their own problems, and tendencies to promote themselves more favourably in order to achieve desired assessment goals. One such variable that can lead to unreliable reporting is impression management, which is a goal-directed response in which an individual (e.g. mother or father) attempts to represent themselves, or their child, in a socially desirable way to the observer. This study examined the relationship between mothers who engage in impression management, as measured by the Parenting Stress Index-Short Form defensive responding subscale, and parent-/child-self-reports of depressive symptomatology in 106 mother-child dyads. The 106 clinic-referred children (mean child age = 10.06 years, range 7-16 years) were administered the Child Depression Inventory, and mothers (mean mother age = 40.80 years, range 27-57 years) were administered the Child-Behavior Checklist, Parenting Stress Index-Short Form, and Symptom Checklist-90-Revised. As predicted, mothers who engaged in impression management under-reported their child's symptomatology on the anxious/depressed and withdrawn subscales of the Child Behavior Checklist. Moreover, the relationship between maternal-reported child depressive symptoms and child-reported depressive symptoms was moderated by impression management. These results suggest that children may be more reliable reporters of their own depressive symptomatology when mothers are highly defensive or stressed.
Conkle, Joel; Ramakrishnan, Usha; Flores-Ayala, Rafael; Suchdev, Parminder S; Martorell, Reynaldo
2017-01-01
Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016-17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC in centimeters. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed high quality to vigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard, manual measurements.
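For readers unfamiliar with the reliability statistic reported above, the intra-observer technical error of measurement for N subjects each measured twice by the same observer is conventionally computed as below; this is the standard anthropometric definition, not a formula specific to BINA:

    TEM = \sqrt{\frac{\sum_{i=1}^{N} d_i^{2}}{2N}}

where d_i is the difference between the two measurements of subject i. It is expressed in the measurement units (here centimetres), so smaller values indicate better repeatability.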
Stress evaluation in displacement-based 2D nonlocal finite element method
NASA Astrophysics Data System (ADS)
Pisano, Aurora Angela; Fuschi, Paolo
2018-06-01
The evaluation of the stress field within a nonlocal version of the displacement-based finite element method is addressed. With the aid of two numerical examples it is shown that spurious oscillations of the computed nonlocal stresses arise at sections (or zones) of macroscopic inhomogeneity of the examined structures. It is also shown how the above drawback, which renders the numerical stress solution unreliable, can be viewed as the so-called locking in FEM, a subject debated in the early seventies. It is proved that a well-known remedy for locking, i.e. the reduced integration technique, can also be successfully applied in the nonlocal elasticity context.
Lebel, Etienne P; Paunonen, Sampo V
2011-04-01
Implicit measures have contributed to important insights in almost every area of psychology. However, various issues and challenges remain concerning their use, one of which is their considerable variation in reliability, with many implicit measures having questionable reliability. The goal of the present investigation was to examine an overlooked consequence of this liability with respect to replication, when such implicit measures are used as dependent variables in experimental studies. Using a Monte Carlo simulation, the authors demonstrate that a higher level of unreliability in such dependent variables is associated with substantially lower levels of replicability. The results imply that this overlooked consequence can have far-reaching repercussions for the development of a cumulative science. The authors recommend the routine assessment and reporting of the reliability of implicit measures and also urge the improvement of implicit measures with low reliability.
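The mechanism the simulation exploits is classical attenuation: measurement error shrinks the observed standardized effect by roughly the square root of the reliability, which lowers power and hence the chance that an initial significant result is reproduced. The sketch below is a much-simplified two-group illustration of that idea, not the authors' simulation design; the true effect size, sample size, and reliability values are arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    d_true, n, reps = 0.5, 40, 5_000   # true effect, per-group n, simulated study pairs

    def significant(reliability: float) -> np.ndarray:
        """Does a two-sample t-test reach p < .05 in each simulated study?"""
        noise_sd = np.sqrt(1.0 / reliability)          # total SD when true-score SD is 1
        a = rng.normal(0.0, noise_sd, size=(reps, n))
        b = rng.normal(d_true, noise_sd, size=(reps, n))
        return stats.ttest_ind(a, b, axis=1).pvalue < 0.05

    for rel in (0.9, 0.7, 0.5, 0.3):
        first, second = significant(rel), significant(rel)
        replicated = second[first].mean()              # P(study 2 significant | study 1 significant)
        print(f"reliability {rel:.1f}: replication rate {replicated:.2f}")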
Cost benefits of advanced software: A review of methodology used at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Joglekar, Prafulla N.
1993-01-01
To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process, which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.
Implications of clinical trial design on sample size requirements.
Leon, Andrew C
2008-07-01
The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
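The two design problems named above can be expressed numerically: unreliability attenuates the detectable standardized effect by a factor of sqrt(reliability), and testing k primary outcomes with a Bonferroni-style correction shrinks the per-test alpha to 0.05/k; both push the required sample size up. The numbers below are illustrative assumptions, not recommendations from the article.

    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()
    d_true, power = 0.5, 0.80

    for reliability in (1.0, 0.8, 0.6):
        for k_outcomes in (1, 4):
            d_observed = d_true * reliability ** 0.5        # attenuation from unreliable assessment
            alpha = 0.05 / k_outcomes                       # Bonferroni-corrected per-outcome alpha
            n = power_calc.solve_power(effect_size=d_observed, alpha=alpha, power=power)
            print(f"reliability={reliability:.1f}, outcomes={k_outcomes}: n per arm ≈ {n:.0f}")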
Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin
2017-07-10
The multiplier method was introduced by Paley to calculate the timing for temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability against data from published studies and (2) screen out risk factors for deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data fell outside the range of values predicted by the multiplier method, it was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed with the use of univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of included studies, the rate of DOP was 55.44%. The multiplier method is not reliable in predicting the timing for temporary hemiepiphysiodesis, although it appears to be more reliable for younger patients with idiopathic genu coronal deformity.
p-Curve and p-Hacking in Observational Research.
Bruns, Stephan B; Ioannidis, John P A
2016-01-01
The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as a practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call into question recent studies that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable.
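The simulation idea is easy to reproduce in miniature. The sketch below is my own toy setup, not the authors' code: it generates regressions with no true effect but a small omitted confounder and tabulates the distribution of the "significant" p-values, i.e., a p-curve.

```python
# Illustrative sketch (assumptions mine): a null effect plus a small omitted
# confounder still produces an accumulation of significant p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=200, beta=0.0, confounding=0.3):
    z = rng.normal(size=n)                   # omitted confounder
    x = confounding * z + rng.normal(size=n)
    y = beta * x + confounding * z + rng.normal(size=n)
    slope, _, _, p, _ = stats.linregress(x, y)
    return p

pvals = np.array([one_study() for _ in range(5000)])
sig = pvals[pvals < 0.05]
counts, _ = np.histogram(sig, bins=np.arange(0, 0.051, 0.01))
print("p-curve bin counts (0-.01, .01-.02, ..., .04-.05):", counts)
```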
Blood gas analysis as a determinant of occupationally related disability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, W.K.; Zaldivar, G.L.
1990-05-01
Arterial blood gas analysis is one of the criteria used by the Department of Labor to award total and permanent disability for coal workers' pneumoconiosis (Black Lung). We have observed that Black Lung claimants often undergo several blood gas analyses with widely differing results that sometimes range from complete normality to life-threatening hypoxemia in the same subject. We concluded that blood gas analysis in occupationally related disability determination is unreliable, in that quality control and instrumentation are variable; that severe hypoxemia is rare in coal workers' pneumoconiosis; and that such hypoxemia is nonspecific and correlates poorly with breathlessness.
NASA Astrophysics Data System (ADS)
Parks, Beth
2013-03-01
Currently, the only way for homeowners to learn about the effectiveness of their home insulation is to hire an energy auditor. This difficulty deters homeowners from taking action to improve energy efficiency. In principle, measuring the temperature difference between a wall surface and the interior of a home is sufficient to determine the wall insulation, but in practice, temperature cycles from the heating system make a single measurement unreliable. I will describe a simple and inexpensive thermocouple-based device to measure this temperature difference and report results obtained by monitoring this temperature difference over multiple heating cycles in a range of buildings. Patent application 12/555371
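To illustrate the averaging idea (this is my own sketch, not the device firmware or the author's analysis): a single air-to-wall-surface temperature difference depends on where the heating cycle happens to be, so readings are logged over several cycles and averaged.

```python
# Sketch of averaging the air/wall-surface temperature difference over many
# samples spanning heating cycles; the readings below are invented.
import statistics

# (interior air temperature, interior wall-surface temperature) in Celsius
samples = [(21.3, 19.1), (22.0, 19.4), (20.6, 18.9), (21.1, 19.0), (21.8, 19.3)]
delta_t = [air - wall for air, wall in samples]
print(f"mean dT = {statistics.mean(delta_t):.2f} C "
      f"(single readings span {min(delta_t):.2f} to {max(delta_t):.2f} C)")
```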
de Beer, D A H; Nesbitt, F D; Bell, G T; Rapuleng, A
2017-04-01
The Universal Anaesthesia Machine has been developed as a complete anaesthesia workstation for use in low- and middle-income countries, where the provision of safe general anaesthesia is often compromised by unreliable supply of electricity and anaesthetic gases. We performed a functional and clinical assessment of this anaesthetic machine, with particular reference to novel features and functioning in the intended environment. The Universal Anaesthesia Machine was found to be reliable, safe and consistent across a range of tests during targeted functional testing. © 2016 The Association of Anaesthetists of Great Britain and Ireland.
Map of Life - A Dashboard for Monitoring Planetary Species Distributions
NASA Astrophysics Data System (ADS)
Jetz, W.
2016-12-01
Geographic information about biodiversity is vital for understanding the many services nature provides and their potential changes, yet it remains unreliable and often insufficient. By integrating a wide range of knowledge about species distributions and their dynamics over time, Map of Life supports global biodiversity education, monitoring, research and decision-making. Built on a scalable web platform geared for large biodiversity and environmental data, Map of Life endeavors to provide species range information globally and species lists for any area. With data and technology provided by NASA and Google Earth Engine, tools under development use remote sensing-based environmental layers to enable on-the-fly predictions of species distributions, range changes, and early warning signals for threatened species. The ultimate vision is a globally connected, collaborative knowledge- and tool-base for regional and local biodiversity decision-making, education, monitoring, and projection. For currently available tools, more information and to follow progress, go to MOL.org.
NASA Astrophysics Data System (ADS)
Varlamov, V. V.; Ishkhanov, B. S.; Orlin, V. N.
2017-11-01
With the aid of the results obtained by evaluating cross sections of partial photoneutron reactions on the isotope 116Sn and the energy spectra of neutrons originating from these reactions, the possible reasons for the well-known discrepancies between the results of different photonuclear experiments were studied on the basis of a combined model of photonuclear reactions. On the basis of physical criteria of data reliability and an experimental-theoretical method for evaluating cross sections of partial reactions, it was found that these discrepancies were due to the unreliable redistribution of neutrons between the (γ, 1n), (γ, 2n), and (γ, 3n) reactions because of nontrivial correlations between the experimentally measured energy of neutrons and their multiplicity.
Nyitray, Alan G; Harris, Robin B; Abalos, Andrew T; Nielson, Carrie M; Papenfuss, Mary; Giuliano, Anna R
2010-12-01
Accurate knowledge about human sexual behaviors is important for increasing our understanding of human sexuality; however, there have been few studies assessing the reliability of sexual behavior questionnaires designed for community samples of adult men. A test-retest reliability study was conducted on a questionnaire completed by 334 men who had been recruited in Tucson, Arizona. Reliability coefficients and refusal rates were calculated for 39 non-sexual and sexual behavior questionnaire items. Predictors of unreliable reporting for lifetime number of female sexual partners were also assessed. Refusal rates were generally low, with slightly higher refusal rates for questions related to immigration, income, the frequency of sexual intercourse with women, lifetime number of female sexual partners, and the lifetime number of male anal sex partners. Kappa and intraclass correlation coefficients were substantial or almost perfect for all non-sexual and sexual behavior items. Reliability dropped somewhat, but was still substantial, for items that asked about household income and the men's knowledge of their sexual partners' health, including abnormal Pap tests and prior sexually transmitted diseases (STD). Age and lifetime number of female sexual partners were independent predictors of unreliable reporting while years of education was inversely associated with unreliable reporting. These findings among a community sample of adult men are consistent with other test-retest reliability studies with populations of women and adolescents.
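For readers unfamiliar with the statistics reported here, the sketch below computes Cohen's kappa for a binary test-retest item and an ICC(3,1) for a count item. The data are invented and the ICC variant shown is one common choice, not necessarily the one used in the study.

```python
# Illustrative test-retest reliability statistics (invented data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

test   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # e.g. a yes/no item at first interview
retest = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]   # same item at the retest
print("kappa:", round(cohen_kappa_score(test, retest), 2))

# ICC(3,1) from a two-way ANOVA decomposition (consistency agreement).
x = np.array([[3, 3], [10, 12], [1, 1], [5, 4], [20, 18]], float)  # counts, 2 occasions
n, k = x.shape
ms_rows = k * x.mean(axis=1).var(ddof=1)
ms_err = ((x - x.mean(axis=1, keepdims=True)
             - x.mean(axis=0) + x.mean()) ** 2).sum() / ((n - 1) * (k - 1))
print("ICC(3,1):", round((ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err), 2))
```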
A checklist of macroparasites of Liza haematocheila (Temminck & Schlegel) (Teleostei: Mugilidae)
Kostadinova, Aneta
2008-01-01
Background The mugilid fish Liza haematocheila (syn. Mugil soiuy), native to the Western North Pacific, provides opportunities to examine the changes of its parasite fauna after its translocation to the Sea of Azov and subsequent establishment in the Black Sea. However, the information on macroparasites of this host in both ranges of its current distribution comes from isolated studies published in difficult-to-access literature sources. Materials and methods Data from 53 publications, predominantly in Chinese, Russian and Ukrainian, were compiled from an extensive search of the literature and the Host-Parasite Database maintained up to 2005 at the Natural History Museum, London. Results The complete checklist of the metazoan parasites of L. haematocheila throughout its distributional range comprises summarised information for 69 nominal species of helminth and ectoparasitic crustacean parasites, from 45 genera and 27 families (370 host-parasite records in total) and includes the name of the parasite species, the area/locality of the host capture, and the author and date of the published record. The taxonomy is updated and the validity of the records and synonymies are critically evaluated. A comparison of the parasite faunas based on the records in the native and introduced/invasive range of L. haematocheila suggests that a large number of parasite species was 'lost' in the new distributional range whereas an even greater number was 'gained'. Conclusion Although the present checklist provides information that will facilitate future studies, the interesting question of macroparasite faunal diversity in L. haematocheila in its natural and introduced/invasive ranges cannot be addressed with the current data because of the unreliability associated with the large number of non-documented and questionable records. This stresses the importance of data quality analysis when using host-parasite database and checklist data. PMID:19117506
Dai, Huanping; Micheyl, Christophe
2010-01-01
A major concern when designing a psychophysical experiment is that participants may use another stimulus feature (“cue”) than that intended by the experimenter. One way to avoid this involves applying random variations to the corresponding feature across stimulus presentations, to make the “unwanted” cue unreliable. An important question facing experimenters who use this randomization (“roving”) technique is: How large should the randomization range be to ensure that participants cannot achieve a certain proportion correct (PC) by using the unwanted cue, while at the same time avoiding unnecessary interference of the randomization with task performance? Previous publications have provided formulas for the selection of adequate randomization ranges in yes-no and multiple-alternative, forced-choice tasks. In this article, we provide figures and tables, which can be used to select randomization ranges that are better suited to experiments involving a same-different, dual-pair, or oddity task. PMID:20139466
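The trade-off described above can be made concrete with a toy Monte Carlo model (my own illustration, not the formulas or tables from the article): an observer who relies only on the roved, unwanted cue achieves a proportion correct that shrinks toward chance as the roving range grows relative to the cue difference.

```python
# Toy 2AFC model: proportion correct achievable from a roved "unwanted" cue.
import numpy as np
rng = np.random.default_rng(1)

def pc_from_roved_cue(delta=1.0, rove_range=10.0, trials=200_000):
    u1 = rng.uniform(0, rove_range, trials)   # cue value, signal interval
    u2 = rng.uniform(0, rove_range, trials)   # cue value, standard interval
    return np.mean(u1 + delta > u2)           # observer picks the larger cue

for r in (2, 5, 10, 20, 40):
    print(f"rove range = {r:>2} x delta -> PC = {pc_from_roved_cue(1.0, r):.3f}")
```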
An Examination of the Neural Unreliability Thesis of Autism
Butler, John S.; Molholm, Sophie; Andrade, Gizely N.; Foxe, John J.
2017-01-01
An emerging neuropathological theory of Autism, referred to here as “the neural unreliability thesis,” proposes greater variability in moment-to-moment cortical representation of environmental events, such that the system shows general instability in its impulse response function. Leading evidence for this thesis derives from functional neuroimaging, a methodology ill-suited for detailed assessment of sensory transmission dynamics occurring at the millisecond scale. Electrophysiological assessments of this thesis, however, are sparse and unconvincing. We conducted detailed examination of visual and somatosensory evoked activity using high-density electrical mapping in individuals with autism (N = 20) and precisely matched neurotypical controls (N = 20), recording large numbers of trials that allowed for exhaustive time-frequency analyses at the single-trial level. Measures of intertrial coherence and event-related spectral perturbation revealed no convincing evidence for an unreliability account of sensory responsivity in autism. Indeed, results point to robust, highly reproducible response functions marked for their exceedingly close correspondence to those in neurotypical controls. PMID:27923839
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu
2015-04-01
From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
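The paper's algorithm works directly on capacity vectors over the disjoint minimal paths; as a rough, hedged illustration of the quantity being computed, the Monte Carlo sketch below estimates the probability that d units of data cross two disjoint minimal paths within a time threshold. The branch-state capacities and probabilities are invented and the model is deliberately simplified.

```python
# Rough Monte Carlo sketch (toy model, not the paper's exact algorithm).
import numpy as np
rng = np.random.default_rng(2)

states = np.array([0, 1, 2, 3])               # branch capacity levels
probs  = np.array([0.05, 0.15, 0.30, 0.50])   # state probabilities (invented)

def estimate_reliability(d=6.0, T=2.5, branches_per_path=(3, 4), n=20_000):
    ok = 0
    for _ in range(n):
        # A path's capacity is the minimum capacity among its branches.
        caps = [rng.choice(states, size=b, p=probs).min() for b in branches_per_path]
        total = sum(caps)
        if total > 0 and d / total <= T:
            ok += 1
    return ok / n

print("estimated two-path reliability:", estimate_reliability())
```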
[Influence of art of changes on the thinking of traditional Chinese medicine].
Gu, Z; Lu, X
2001-07-01
The most important influence of the art of changes on traditional Chinese medicine (TCM) was reflected in the formation of the basic theory of TCM. Some innovations were achieved by using the theory of the art of changes to research medicine in later ages. However, the specific therapies and the prognostication of diseases inferred by using the art of mathematics were mostly unreliable. Although research on the art of changes was helpful to the exploration of cause and effect in TCM, its practical significance should be evaluated properly.
Renewable Energy for Rural Schools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, A.C.; Lawand, T.
2000-11-28
This publication addresses the need for energy in schools, primarily those schools that are not connected to the electric grid. This guide will apply mostly to primary and secondary schools located in non-electrified areas. In areas where grid power is expensive and unreliable, this guide can be used to examine energy options other than conventional power. The authors' goal is to help the reader to accurately assess a school's energy needs, evaluate appropriate and cost-effective technologies to meet those needs, and implement an effective infrastructure to install and maintain the hardware.
Selective social learning in infancy: looking for mechanisms.
Crivello, Cristina; Phillips, Sara; Poulin-Dubois, Diane
2018-05-01
Although there is mounting evidence that selective social learning begins in infancy, the psychological mechanisms underlying this ability are currently a controversial issue. The purpose of this study is to investigate whether theory of mind abilities and statistical learning skills are related to infants' selective social learning. Seventy-seven 18-month-olds were first exposed to a reliable or an unreliable speaker and then completed a word learning task, two theory of mind tasks, and a statistical learning task. If domain-general abilities are linked to selective social learning, then infants who demonstrate superior performance on the statistical learning task should perform better on the selective learning task, that is, should be less likely to learn words from an unreliable speaker. Alternatively, if domain-specific abilities are involved, then superior performance on theory of mind tasks should be related to selective learning performance. Findings revealed that, as expected, infants were more likely to learn a novel word from a reliable speaker. Importantly, infants who passed a theory of mind task assessing knowledge attribution were significantly less likely to learn a novel word from an unreliable speaker compared to infants who failed this task. No such effect was observed for the other tasks. These results suggest that infants who possess superior social-cognitive abilities are more apt to reject an unreliable speaker as informant. A video abstract of this article can be viewed at: https://youtu.be/zuuCniHYzqo. © 2017 John Wiley & Sons Ltd.
AutoSyP: A Low-Cost, Low-Power Syringe Pump for Use in Low-Resource Settings.
Juarez, Alexa; Maynard, Kelley; Skerrett, Erica; Molyneux, Elizabeth; Richards-Kortum, Rebecca; Dube, Queen; Oden, Z Maria
2016-10-05
This article describes the design and evaluation of AutoSyP, a low-cost, low-power syringe pump intended to deliver intravenous (IV) infusions in low-resource hospitals. A constant-force spring within the device provides mechanical energy to depress the syringe plunger. As a result, the device can run on rechargeable battery power for 66 hours, a critical feature for low-resource settings where the power grid may be unreliable. The device is designed to be used with 5- to 60-mL syringes and can deliver fluids at flow rates ranging from 3 to 60 mL/hour. The cost of goods to build one AutoSyP device is approximately $500. AutoSyP was tested in a laboratory setting and in a pilot clinical study. Laboratory accuracy was within 4% of the programmed flow rate. The device was used to deliver fluid to 10 healthy adult volunteers and 30 infants requiring IV fluid therapy at Queen Elizabeth Central Hospital in Blantyre, Malawi. The device delivered fluid with an average mean flow rate error of -2.3% ± 1.9% for flow rates ranging from 3 to 60 mL/hour. AutoSyP has the potential to improve the accuracy and safety of IV fluid delivery in low-resource settings. © The American Society of Tropical Medicine and Hygiene.
How much control is enough? Influence of unreliable input on user experience.
van de Laar, Bram; Plass-Oude Bos, Danny; Reuderink, Boris; Poel, Mannes; Nijholt, Anton
2013-12-01
Brain–computer interfaces (BCI) provide a valuable new input modality within human–computer interaction systems. However, like other body-based inputs such as gesture- or gaze-based systems, the system recognition of input commands is still far from perfect. This raises important questions, such as what level of control should such an interface be able to provide. What is the relationship between actual and perceived control? And in the case of applications for entertainment in which fun is an important part of user experience, should we even aim for the highest level of control, or is the optimum elsewhere? In this paper, we evaluate whether we can modulate the amount of control and if a game can be fun with less than perfect control. In the experiment, users (n = 158) played a simple game in which a hamster has to be guided to the exit of a maze. The amount of control the user has over the hamster is varied. The variation of control through confusion matrices makes it possible to simulate the experience of using a BCI, while using the traditional keyboard for input. After each session the user completed a short questionnaire on user experience and perceived control. Analysis of the data showed that the perceived control of the user could largely be explained by the amount of control in the respective session. As expected, user frustration decreases with increasing control. Moreover, the results indicate that the relation between fun and control is not linear. Although at lower levels of control fun does increase with improved control, the level of fun drops just before perfect control is reached (with an optimum around 96%). This offers new insights to developers of games who want to incorporate some form of BCI or other modality with unreliable input in their game: for creating a fun game, unreliable input can be used to create a challenge for the user.
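The control manipulation lends itself to a very small sketch. The code below is illustrative rather than the study's implementation: each keyboard command is passed through a degraded channel so that only a chosen fraction of inputs is obeyed, with the remainder mapped uniformly onto the other commands (a full confusion matrix could instead bias errors toward particular commands).

```python
# Sketch of simulating unreliable input at a fixed control level.
import random

COMMANDS = ["up", "down", "left", "right"]

def degrade(command, control=0.96):
    """Return the intended command with probability `control`,
    otherwise a uniformly chosen wrong command."""
    if random.random() < control:
        return command
    return random.choice([c for c in COMMANDS if c != command])

# Example: simulate 1000 key presses at 80% control.
hits = sum(degrade("up", control=0.80) == "up" for _ in range(1000))
print("observed accuracy ~", hits / 1000)
```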
Clinical height measurements are unreliable: a call for improvement.
Mikula, A L; Hetzel, S J; Binkley, N; Anderson, P A
2016-10-01
Height measurements are currently used to guide imaging decisions that assist in osteoporosis care, but their clinical reliability is largely unknown. We found both clinical height measurements and electronic health record height data to be unreliable. Improvement in height measurement is needed to improve osteoporosis care. The aim of this study is to assess the accuracy and reliability of clinical height measurement in a university healthcare clinical setting. Electronic health record (EHR) review, direct measurement of clinical stadiometer accuracy, and observation of staff height measurement technique at outpatient facilities of the University of Wisconsin Hospital and Clinics. We examined 32 clinical stadiometers for reliability and observed 34 clinic staff perform height measurements at 12 outpatient primary care and specialty clinics. An EHR search identified 4711 men and women age 43 to 89 with no known metabolic bone disease who had more than one height measurement over 3 months. The short study period and exclusion were selected to evaluate change in recorded height not due to pathologic processes. Mean EHR recorded height change (first to last measurement) was -0.02 cm (SD 1.88 cm). Eighteen percent of patients had height measurement differences noted in the EHR of ≥2 cm over 3 months. The technical error of measurement (TEM) was 1.77 cm with a relative TEM of 1.04 %. None of the staff observed performing height measurements followed all recommended height measurement guidelines. Fifty percent of clinic staff reported they on occasion enter patient reported height into the EHR rather than performing a measurement. When performing direct measurements on stadiometers, the mean difference from a gold standard length was 0.24 cm (SD 0.80). Nine percent of stadiometers examined had an error of >1.5 cm. Clinical height measurements and EHR recorded height results are unreliable. Improvement in this measure is needed as an adjunct to improve osteoporosis care.
Three essays on environmental and natural resource economics
NASA Astrophysics Data System (ADS)
Wang, Qiong (Juliana)
The doctoral dissertation is composed of three chapters on the governance of water and electricity infrastructure in China. All three chapters focus on the nexus of economy, environment, and energy. The first chapter studies the relationship between decentralization policies and the provision of public goods in the context of urban water services in China. Different degrees of externalities of the public goods may affect the efficacy of decentralization policies. Using a comprehensive 2004 dataset for all the 661 cities, I measure how the clean water supply coverage rate and the wastewater treatment rate respond to these policies, respectively. Results show that cities respond positively in their piped water supply coverage but not as well in their wastewater treatment, whereas they both respond positively to the mandatory information disclosure policy. The efficacy of decentralization policy is indeed compromised when externalities exist beyond the jurisdiction, as suggested by the case of wastewater. Information disclosure policy, a motivational tool tied to the promotion of local officials, is shown to provide strong incentives for water services irrespective of their externalities. Private sector participation lowers the amount of government grant in the water sector but increases the tariff charged to customers. The second chapter of the dissertation examines whether competition reduces cost in the restructuring of the Chinese power sector. Although competition may reduce cost through technological innovation and advancement and diversification of ownership, higher transaction cost and price control may hinder its effectiveness. In this chapter, I describe the various restructuring programs over the years that affect the power plants. Then, I evaluate their impacts on cost efficiency, measured by the factor demand of the power plants: labor, energy, and materials. Using an industrial dataset from 1997 to 2004 of energy-consuming coal power plants from the National Statistics Bureau, I first estimate the factor demand equations following the model developed in Fabrizio et al. (2007) to compare with the results from similar studies in the United States. Further, I model the cost structure of Chinese power plants using a more flexible translog specification. The results from these two models confirm the validity of the assumptions made based on the industry characteristics. The power plants located in the South reduced their labor demand after the Southern Grid separated from the National Grid in 2002. The third chapter examines how the unreliability of inputs affects productivity. Specifically, it studies how Chinese industrial enterprises respond to the unreliability of electric power. Since 2002, electricity blackouts have been hampering industrial customers in China. Using a survey dataset of the National Statistics Bureau on eleven industries across the nation from 1999 to 2004 and an electricity dataset compiled from Electricity Yearbooks, my co-authors and I estimate the cost of power unreliability by quantifying the factor-neutral and the factor-biased productivity effects. Incorporating unreliability proxies into a flexible translog cost function and the value share equations, we estimate the whole system using seemingly unrelated regressions (SUREG) with cross-equation constraints. We also calculate the marginal effect of factor unreliability on cost and on carbon emissions based on these estimates.
NASA Technical Reports Server (NTRS)
Sallee, G. P.
1973-01-01
The advanced technology requirements for an advanced high speed commercial transport engine are presented. The results of the phase 2 study effort cover the following areas: (1) general review of preliminary engine designs suggested for a future aircraft, (2) presentation of a long range view of airline propulsion system objectives and the research programs in noise, pollution, and design which must be undertaken to achieve the goals presented, (3) review of the impact of propulsion system unreliability and unscheduled maintenance on cost of operation, (4) discussion of the reliability and maintainability requirements and guarantees for future engines.
Micro-Dose Calibrator for Pre-clinical Radiotracer Assays | NCI Technology Transfer Center | TTC
Pre-clinical radiotracer biomedical research involves the use of compounds labeled with radioisotopes, including cell binding studies, immune cell labeling techniques, and radio-ligand bio-distribution studies. Before this Micro-Dose Calibrator, measurement of pre-clinical level dosage for small animal studies was inaccurate and unreliable. This dose calibrator is a prototype ready for manufacturing. It is designed to accurately measure radioactive doses in the range of 50 nCi (1.8 kBq) to 100 µCi (3.7 MBq) with 1% precision. The NCI seeks co-development or licensing to commercialize it. Alternative uses will be considered.
NASA Astrophysics Data System (ADS)
Ditrói, F.; Tárkányi, F.; Csikai, J.; Uddin, M. S.; Hagiwara, M.; Baba, M.
2005-05-01
Iron is one of the most important structural materials in every field of science, technology, industry, etc. Its application in a radiating environment requires the knowledge of accurate excitation functions for the possible reactions in question. When using the Thin Layer Activation technique (TLA), knowledge of such data is also extremely important, even in the case of relative measurements, to design the irradiation (irradiation energy, beam intensity, duration) and for radioactive safety estimations. The cross sections are frequently measured at low energies, but the data in the energy range above 40 MeV are unsatisfactory and unreliable.
Novel application of species richness estimators to predict the host range of parasites.
Watson, David M; Milner, Kirsty V; Leigh, Andrea
2017-01-01
Host range is a critical life history trait of parasites, influencing prevalence, virulence and ultimately determining their distributional extent. Current approaches to measure host range are sensitive to sampling effort, the number of known hosts increasing with more records. Here, we develop a novel application of results-based stopping rules to determine how many hosts should be sampled to yield stable estimates of the number of primary hosts within regions, then use species richness estimation to predict host ranges of parasites across their distributional ranges. We selected three mistletoe species (hemiparasitic plants in the Loranthaceae) to evaluate our approach: a strict host specialist (Amyema lucasii, dependent on a single host species), an intermediate species (Amyema quandang, dependent on hosts in one genus) and a generalist (Lysiana exocarpi, dependent on many genera across multiple families), comparing results from geographically-stratified surveys against known host lists derived from herbarium specimens. The results-based stopping rule (stop sampling bioregion once observed host richness exceeds 80% of the host richness predicted using the Abundance-based Coverage Estimator) worked well for most bioregions studied, being satisfied after three to six sampling plots (each representing 25 host trees) but was unreliable in those bioregions with high host richness or high proportions of rare hosts. Although generating stable predictions of host range with minimal variation among six estimators trialled, distribution-wide estimates fell well short of the number of hosts known from herbarium records. This mismatch, coupled with the discovery of nine previously unrecorded mistletoe-host combinations, further demonstrates the limited ecological relevance of simple host-parasite lists. By collecting estimates of host range of constrained completeness, our approach maximises sampling efficiency while generating comparable estimates of the number of primary hosts, with broad applicability to many host-parasite systems. Copyright © 2016 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.
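For concreteness, here is a hedged sketch of the classical Abundance-based Coverage Estimator (ACE) together with the 80% results-based stopping rule described above. The implementation follows the standard ACE formula; the host counts, cutoff, and threshold values are illustrative rather than taken from the survey data.

```python
# Standard ACE species-richness estimator plus an 80% stopping rule (sketch).
from collections import Counter

def ace(abundances, rare_cutoff=10):
    f = Counter(abundances)                        # abundance frequency counts
    s_abund = sum(1 for a in abundances if a > rare_cutoff)
    rare = [a for a in abundances if a <= rare_cutoff]
    s_rare, n_rare, f1 = len(rare), sum(rare), f[1]
    if n_rare == 0 or f1 == n_rare:                # estimator undefined; fall back
        return float(len(abundances))
    c_ace = 1 - f1 / n_rare                        # sample coverage of rare species
    gamma2 = max((s_rare / c_ace)
                 * sum(i * (i - 1) * f[i] for i in range(1, rare_cutoff + 1))
                 / (n_rare * (n_rare - 1)) - 1, 0)
    return s_abund + s_rare / c_ace + (f1 / c_ace) * gamma2

# Stop sampling a bioregion once observed host richness exceeds 80% of ACE.
host_counts = [12, 7, 5, 3, 2, 2, 1, 1, 1]         # individuals recorded per host species
observed, estimate = len(host_counts), ace(host_counts)
print(observed, round(estimate, 1), "stop" if observed >= 0.8 * estimate else "keep sampling")
```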
Effects of imperfect automation on decision making in a simulated command and control task.
Rovira, Ericka; McGarry, Kathleen; Parasuraman, Raja
2007-02-01
Effects of four types of automation support and two levels of automation reliability were examined. The objective was to examine the differential impact of information and decision automation and to investigate the costs of automation unreliability. Research has shown that imperfect automation can lead to differential effects of stages and levels of automation on human performance. Eighteen participants performed a "sensor to shooter" targeting simulation of command and control. Dependent variables included accuracy and response time of target engagement decisions, secondary task performance, and subjective ratings of mental work-load, trust, and self-confidence. Compared with manual performance, reliable automation significantly reduced decision times. Unreliable automation led to greater cost in decision-making accuracy under the higher automation reliability condition for three different forms of decision automation relative to information automation. At low automation reliability, however, there was a cost in performance for both information and decision automation. The results are consistent with a model of human-automation interaction that requires evaluation of the different stages of information processing to which automation support can be applied. If fully reliable decision automation cannot be guaranteed, designers should provide users with information automation support or other tools that allow for inspection and analysis of raw data.
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; LaBel, Kenneth A.
2018-01-01
The following are updated or new subjects added to the FPGA SEE Test Guidelines manual: academic versus mission specific device evaluation, single event latch-up (SEL) test and analysis, SEE response visibility enhancement during radiation testing, mitigation evaluation (embedded and user-implemented), unreliable design and its effects on SEE data, testing flushable architectures versus non-flushable architectures, intellectual property core (IP Core) test and evaluation (addresses embedded and user-inserted), heavy-ion energy and linear energy transfer (LET) selection, proton versus heavy-ion testing, fault injection, mean fluence to failure analysis, and mission specific system-level single event upset (SEU) response prediction. Most sections within the guidelines manual provide information regarding best practices for test structure and test system development. The scope of this manual addresses academic versus mission specific device evaluation and visibility enhancement in IP Core testing.
Memory, metamemory, and social cues: Between conformity and resistance.
Zawadzka, Katarzyna; Krogulska, Aleksandra; Button, Roberta; Higham, Philip A; Hanczakowski, Maciej
2016-02-01
When presented with responses of another person, people incorporate these responses into memory reports: a finding termed memory conformity. Research on memory conformity in recognition reveals that people rely on external social cues to guide their memory responses when their own ability to respond is at chance. In this way, conforming to a reliable source boosts recognition performance but conforming to a random source does not impair it. In the present study we assessed whether people would conform indiscriminately to reliable and unreliable (random) sources when they are given the opportunity to exercise metamemory control over their responding by withholding answers in a recognition test. In Experiments 1 and 2, we found the pattern of memory conformity to reliable and unreliable sources in 2 variants of a free-report recognition test, yet at the same time the provision of external cues did not affect the rate of response withholding. In Experiment 3, we provided participants with initial feedback on their recognition decisions, facilitating the discrimination between the reliable and unreliable source. This led to the reduction of memory conformity to the unreliable source, and at the same time modulated metamemory decisions concerning response withholding: participants displayed metamemory conformity to the reliable source, volunteering more responses in their memory report, and metamemory resistance to the random source, withholding more responses from the memory report. Together, the results show how metamemory decisions dissociate various types of memory conformity and that memory and metamemory decisions can be independent of each other. PsycINFO Database Record (c) 2016 APA, all rights reserved.
Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke
NASA Technical Reports Server (NTRS)
Yen, C. L.; Smith, D. B.
1973-01-01
A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining a high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.
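As a much-simplified, hedged illustration of what a failure-process Monte Carlo of this kind estimates, the sketch below draws thruster survivals from an exponential failure law and counts missions in which enough thrusters survive the burn. The redundancy, burn time, and failure rate are invented, not mission values.

```python
# Toy Monte Carlo of thrust-subsystem survival (illustrative numbers only).
import math
import random

def mission_success(n_thrusters=8, required=6, burn_hours=8000.0,
                    failure_rate=2e-5):            # failures per thruster-hour
    """One Monte Carlo trial: do enough thrusters survive the burn?"""
    p_survive = math.exp(-failure_rate * burn_hours)   # exponential failure law
    survivors = sum(random.random() < p_survive for _ in range(n_thrusters))
    return survivors >= required

trials = 50_000
reliability = sum(mission_success() for _ in range(trials)) / trials
print("estimated mission reliability:", reliability)
```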
Advanced techniques in reliability model representation and solution
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Nicol, David M.
1992-01-01
The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
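SURE and ASSIST operate on semi-Markov models of fault-tolerant architectures. As a far simpler, hedged illustration of what "unreliability over a mission time" means in this setting, the sketch below integrates the Kolmogorov equations for a toy triplex Markov model; the rates and structure are invented and this is not the SURE solution technique.

```python
# Toy Markov unreliability calculation (not SURE; illustrative rates).
import numpy as np

lam = 1e-4                       # component failure rate per hour (invented)
T, dt = 10.0, 1e-3               # mission time and integration step, hours
p = np.array([1.0, 0.0, 0.0])    # state probabilities: [3 good, 2 good, failed]
Q = np.array([[-3 * lam,  3 * lam, 0.0],
              [ 0.0,     -2 * lam, 2 * lam],
              [ 0.0,      0.0,     0.0]])   # Markov generator matrix

for _ in range(int(T / dt)):     # forward-Euler integration of dp/dt = p Q
    p = p + dt * (p @ Q)

print(f"unreliability after {T} h is approximately {p[2]:.3e}")
```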
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.
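The reliability-classification idea can be illustrated with a very small sketch (this is my simplification, not the authors' algorithm): a motion vector is flagged as unreliable when it deviates strongly from the median of its neighbours, and is then replaced by that median.

```python
# Neighbourhood-consistency check and correction of a motion-vector field.
import numpy as np

def correct_mv_field(mv, threshold=4.0):
    """mv: H x W x 2 array of motion vectors (in pixels)."""
    out = mv.copy()
    H, W, _ = mv.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            neigh = mv[y - 1:y + 2, x - 1:x + 2].reshape(-1, 2)
            med = np.median(neigh, axis=0)
            if np.linalg.norm(mv[y, x] - med) > threshold:   # outlier vector
                out[y, x] = med
    return out

field = np.zeros((6, 6, 2))
field[3, 3] = [15, -12]                       # one spurious vector
print(correct_mv_field(field)[3, 3])          # replaced by the local median [0, 0]
```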
Replication Unreliability in Psychology: Elusive Phenomena or “Elusive” Statistical Power?
Tressoldi, Patrizio E.
2012-01-01
The focus of this paper is to analyze whether the unreliability of results related to certain controversial psychological phenomena may be a consequence of their low statistical power. Under Null Hypothesis Statistical Testing (NHST), still the most widely used statistical approach, unreliability derives from the failure to refute the null hypothesis, in particular when exact or quasi-exact replications of experiments are carried out. Taking as examples the results of meta-analyses related to four different controversial phenomena, subliminal semantic priming, incubation effect for problem solving, unconscious thought theory, and non-local perception, it was found that, except for semantic priming on categorization, the statistical power to detect the expected effect size (ES) of the typical study is low or very low. The low power in most studies undermines the use of NHST to study phenomena with moderate or low ESs. We conclude by providing some suggestions on how to increase the statistical power or use different statistical approaches to help discriminate whether the results obtained may or may not be used to support or to refute the reality of a phenomenon with a small ES. PMID:22783215
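The power argument is easy to reproduce with a standard normal-approximation calculation (my own sketch, not the paper's meta-analytic data): for small effect sizes and typical sample sizes, the power to detect the effect is far below the conventional 0.80 target.

```python
# Two-sample power under a normal approximation, for a range of effect sizes.
from math import sqrt
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    z_a = norm.ppf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)           # noncentrality of the z statistic
    return 1 - norm.cdf(z_a - ncp) + norm.cdf(-z_a - ncp)

for d in (0.8, 0.5, 0.2, 0.1):
    print(f"d = {d:<4} n = 30/group -> power = {power_two_sample(d, 30):.2f}")
```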
Low sulfur content in submarine lavas: an unreliable indicator of subaerial eruption
Davis, A.S.; Clague, D.A.; Schulz, M.S.; Hein, J.R.
1991-01-01
Low S content (<250 ppm) has been used to identify subaerially erupted Hawaiian and Icelandic lavas. Large differences in S content of submarine-erupted lavas from different tectonic settings indicate that the behavior of S is complex. Variations in S abundance in undegassed, submarine-erupted lavas can result from different source compositions, different percentages of partial melting, and crystal fractionation. Low S concentrations in highly vesicular submarine lavas suggest that partial degassing can occur despite great hydrostatic pressure. These processes need to be evaluated before using S content as an indicator of eruption depth. -Authors
VIPER: a visualisation tool for exploring inheritance inconsistencies in genotyped pedigrees
2012-01-01
Background Pedigree genotype datasets are used for analysing genetic inheritance and to map genetic markers and traits. Such datasets consist of hundreds of related animals genotyped for thousands of genetic markers and invariably contain multiple errors in both the pedigree structure and in the associated individual genotype data. These errors manifest as apparent inheritance inconsistencies in the pedigree, and invalidate analyses of marker inheritance patterns across the dataset. Cleaning raw datasets of bad data points (incorrect pedigree relationships, unreliable marker assays, suspect samples, bad genotype results etc.) requires expert exploration of the patterns of exposed inconsistencies in the context of the inheritance pedigree. In order to assist this process we are developing VIPER (Visual Pedigree Explorer), a software tool that integrates an inheritance-checking algorithm with a novel space-efficient pedigree visualisation, so that reported inheritance inconsistencies are overlaid on an interactive, navigable representation of the pedigree structure. Methods and results This paper describes an evaluation of how VIPER displays the different scales and types of dataset that occur experimentally, with a description of how VIPER's display interface and functionality meet the challenges presented by such data. We examine a range of possible error types found in real and simulated pedigree genotype datasets, demonstrating how these errors are exposed and explored using the VIPER interface and we evaluate the utility and usability of the interface to the domain expert. Evaluation was performed as a two stage process with the assistance of domain experts (geneticists). The initial evaluation drove the iterative implementation of further features in the software prototype, as required by the users, prior to a final functional evaluation of the pedigree display for exploring the various error types, data scales and structures. Conclusions The VIPER display was shown to effectively expose the range of errors found in experimental genotyped pedigrees, allowing users to explore the underlying causes of reported inheritance inconsistencies. This interface will provide the basis for a full data cleaning tool that will allow the user to remove isolated bad data points, and reversibly test the effect of removing suspect genotypes and pedigree relationships. PMID:22607476
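A minimal sketch of the kind of inheritance check that exposes such inconsistencies (this is not VIPER's algorithm, just the underlying Mendelian rule): each allele a child carries at a marker must be attributable to one parent, with the other allele coming from the other parent.

```python
# Mendelian consistency check for a single bi-allelic marker in a trio.
def mendelian_consistent(child, sire, dam):
    """Genotypes are unordered allele pairs, e.g. ('A', 'B')."""
    a, b = child
    return (a in sire and b in dam) or (b in sire and a in dam)

print(mendelian_consistent(('A', 'B'), ('A', 'A'), ('B', 'B')))   # True
print(mendelian_consistent(('B', 'B'), ('A', 'A'), ('A', 'B')))   # False: sire cannot supply 'B'
```

Flagged markers are then explored in the context of the pedigree, since a single bad sample or mis-assigned parent typically produces clusters of such failures.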
Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.
Dzyubak, Oleksandr P; Ritman, Erik L
2011-01-01
The blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure that spans a range of structural scales, from 5 μm diameter capillaries to the 3 cm aorta. This large range of scales presents two major problems: one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and the resolution of, modern-day 3D imagers, it is almost impossible to manually track the complex multiscale parameters from those large image data sets. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automation of an adaptive, nonsupervised system for tracking tubular objects based on a multiscale framework and the use of a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
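The abstract mentions a Hessian-based tubularity (vesselness) detector built with ITK. Below is a hedged, single-scale NumPy/SciPy sketch in the spirit of Frangi's measure; the parameter values are common illustrative defaults, the sign tests for bright-on-dark tubes are omitted for brevity, and none of this is the authors' ITK pipeline.

```python
# Single-scale Hessian-based tubularity response (Frangi-style sketch).
import numpy as np
from scipy import ndimage

def vesselness_3d(volume, sigma=2.0, alpha=0.5, beta=0.5, c=15.0):
    # Scale-normalised second derivatives (Hessian) at scale sigma.
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = sigma ** 2 * ndimage.gaussian_filter(volume, sigma, order=order)
    # Eigenvalues sorted by absolute value: |l1| <= |l2| <= |l3|.
    ev = np.sort(np.abs(np.linalg.eigvalsh(H)), axis=-1)
    l1, l2, l3 = ev[..., 0], ev[..., 1], ev[..., 2]
    Ra = l2 / (l3 + 1e-12)                     # distinguishes plates from lines
    Rb = l1 / (np.sqrt(l2 * l3) + 1e-12)       # distinguishes blobs from lines
    S = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)   # second-order structureness
    return ((1 - np.exp(-(Ra ** 2) / (2 * alpha ** 2)))
            * np.exp(-(Rb ** 2) / (2 * beta ** 2))
            * (1 - np.exp(-(S ** 2) / (2 * c ** 2))))

response = vesselness_3d(np.random.rand(32, 32, 32))   # placeholder volume
print(response.shape, float(response.max()))
```

A multiscale response is then typically taken as the maximum of this measure over a set of sigmas spanning the expected vessel radii.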
Photovoltaics as an operating energy system
NASA Astrophysics Data System (ADS)
Jones, G. J.; Post, H. N.; Thomas, M. G.
In the short time since the discovery of the modern solar cell in 1954, terrestrial photovoltaic power system technology has matured in all areas, from collector reliability to system and subsystem design and operations. Today's PV systems are finding widespread use in powering loads where conventional sources are either unavailable, unreliable, or too costly. A broad range of applications is possible because of the modularity of the technology: it can be used to power loads ranging from less than a watt to several megawatts. This inherent modularity makes PV an excellent choice to play a major role in rural electrification in the developing world. The future for grid-connected photovoltaic systems is also very promising. Indications are that several of today's technologies, at higher production rates and in megawatt-sized installations, will generate electricity in the vicinity of $0.12/kWh in the near future.
The use of bioimpedance analysis to evaluate lymphedema.
Warren, Anne G; Janz, Brian A; Slavin, Sumner A; Borud, Loren J
2007-05-01
Lymphedema, a chronic disfiguring condition resulting from lymphatic dysfunction or disruption, can be difficult to accurately diagnose and manage. Of particular challenge is identifying the presence of clinically significant limb swelling through simple and noninvasive methods. Many historical and currently used techniques for documenting differences in limb volume, including volume displacement and circumferential measurements, have proven difficult and unreliable. Bioimpedance spectroscopy analysis, a technology that uses resistance to electrical current in comparing the composition of fluid compartments within the body, has been considered as a cost-effective and reproducible alternative for evaluating patients with suspected lymphedema. All patients were recruited through the Beth Israel Deaconess Medical Center Lymphedema Clinic. A total of 15 patients (mean age: 55.2 years) with upper-extremity or lower-extremity lymphedema as documented by lymphoscintigraphy underwent bioimpedance spectroscopy analysis using an Impedimed SFB7 device. Seven healthy medical students and surgical residents (mean age: 26.9 years) were selected to serve as normal controls. All study participants underwent analysis of both limbs, which allowed participants to act as their own controls. The multifrequency bioimpedance device documented impedance values for each limb, with lower values correlating with higher levels of accumulated protein-rich edematous fluid. The average ratio of impedance to current flow of the affected limb to the unaffected limb in lymphedema patients was 0.9 (range: 0.67 to 1.01). In the control group, the average impedance ratio of the participant's dominant limb to their nondominant limb was 0.99 (range: 0.95 to 1.02) (P = 0.01). Bioimpedance spectroscopy can be used as a reliable and accurate tool for documenting the presence of lymphedema in patients with either upper- or lower-extremity swelling. Measurement with the device is quick and simple and results are reproducible among patients. Given significant limitations with other methods of evaluating lymphedema, the use of bioimpedance analysis may aid in the diagnosis of lymphedema and allow for tracking patients over time as they proceed with treatment of their disease.
Measurement of Shoulder Range of Motion in Patients with Adhesive Capsulitis Using a Kinect
Chung, Sun Gun; Kim, Hee Chan; Kwak, Youngbin; Park, Hee-won; Kim, Keewon
2015-01-01
Range of motion (ROM) measurements are essential for the evaluation for and diagnosis of adhesive capsulitis of the shoulder (AC). However, taking these measurements using a goniometer is inconvenient and sometimes unreliable. The Kinect (Microsoft, Seattle, WA, USA) is gaining attention as a new motion detecting device that is nonintrusive and easy to implement. This study aimed to apply Kinect to measure shoulder ROM in AC; we evaluated its validity by calculating the agreement of the measurements obtained using Kinect with those obtained using goniometer and assessed its utility for the diagnosis of AC. Both shoulders of 15 healthy volunteers and affected shoulders of 12 patients with AC were included in the study. The passive and active ROM of each were measured with a goniometer for flexion, abduction, and external rotation. Their active shoulder motions for each direction were again captured using Kinect and the ROM values were calculated. The agreement between the two measurements was tested with the intraclass correlation coefficient (ICC). Diagnostic performance using the Kinect ROM was evaluated with Cohen’s kappa value. The cutoff values of the limited ROM were determined in the following ways: the same as passive ROM values, reflecting the mean difference, and based on receiver operating characteristic curves. The ICC for flexion/abduction/external rotation between goniometric passive ROM and the Kinect ROM were 0.906/0.942/0.911, while those between active ROMs and the Kinect ROMs were 0.864/0.932/0.925. Cohen’s kappa values were 0.88, 0.88, and 1.0 with the cutoff values in the order above. Measurements of the shoulder ROM using Kinect show excellent agreement with those taken using a goniometer. These results indicate that the Kinect can be used to measure shoulder ROM and to diagnose AC as an alternative to goniometer. PMID:26107943
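The geometric step behind a Kinect-based ROM estimate is simple to sketch (illustrative only, not the study's pipeline; the joint names follow the usual Kinect skeleton and the coordinates are invented): the abduction angle is the angle between the upper-arm vector and the trunk's downward axis.

```python
# Shoulder abduction angle from 3D joint positions (sketch).
import numpy as np

def shoulder_abduction(shoulder, elbow, spine_base, spine_shoulder):
    arm = np.asarray(elbow) - np.asarray(shoulder)
    trunk_down = np.asarray(spine_base) - np.asarray(spine_shoulder)
    cosang = np.dot(arm, trunk_down) / (np.linalg.norm(arm) * np.linalg.norm(trunk_down))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Arm raised sideways to horizontal: roughly 90 degrees of abduction.
print(shoulder_abduction(shoulder=[0.2, 1.4, 0.0], elbow=[0.5, 1.4, 0.0],
                         spine_base=[0.0, 0.9, 0.0], spine_shoulder=[0.0, 1.4, 0.0]))
```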
2017-01-01
Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016–17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC in centimeters. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed high quality to vigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard, manual measurements. PMID:29240796
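For reference, the technical error of measurement reported above is computed from repeated measurement pairs as TEM = sqrt(sum of squared differences / 2n), with relative TEM expressed as a percentage of the mean. The sketch below uses invented stature pairs purely to show the arithmetic.

```python
# Technical error of measurement (TEM) for repeated measurement pairs.
from math import sqrt

def tem(pairs):
    """TEM = sqrt( sum(d_i^2) / (2 n) ) for n repeated measurement pairs."""
    return sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs)))

stature_cm = [(87.2, 87.5), (92.1, 92.0), (101.4, 101.9), (95.0, 94.6)]
t = tem(stature_cm)
mean = sum(a + b for a, b in stature_cm) / (2 * len(stature_cm))
print(f"TEM = {t:.2f} cm, relative TEM = {100 * t / mean:.2f}%")
```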
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structures used for modeling motor vehicle crashes remain the traditional Poisson and Poisson-gamma (or Negative Binomial) distributions; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, weighted regression, and maximum likelihood. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ontario, that are characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations for minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
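As a rough illustration of the estimators compared above, the sketch below computes method-of-moments and maximum-likelihood estimates of the NB2 dispersion parameter alpha (where Var(Y) = mu + alpha*mu^2) for a simulated low-mean, small-sample count dataset with an intercept-only mean; it is not the paper's simulation code, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(1)
mu_true, alpha_true, n = 0.8, 1.0, 50
y = rng.negative_binomial(n=1.0 / alpha_true,
                          p=1.0 / (1.0 + alpha_true * mu_true), size=n)

# Method of moments: solve s^2 = m + alpha * m^2 for alpha
m, s2 = y.mean(), y.var(ddof=1)
alpha_mom = max((s2 - m) / m**2, 1e-8)

# Maximum likelihood for an intercept-only NB2 model (mu fixed at the sample mean)
def nb2_negloglik(alpha, y, mu):
    r = 1.0 / alpha
    return -np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                   + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

alpha_mle = minimize_scalar(nb2_negloglik, bounds=(1e-6, 50.0),
                            args=(y, m), method="bounded").x
print(f"alpha: MoM = {alpha_mom:.3f}, MLE = {alpha_mle:.3f} (true = {alpha_true})")
```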
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain a high fidelity between the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives in the product development.
Data filtering with support vector machines in geometric camera calibration.
Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C
2010-02-01
The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, a support vector machine (SVM) with a radial basis function kernel is employed to model the distortions measured for the Olympus E10 camera system with its aspherical zoom lens; the modelled distortions are later used in the geometric calibration process. The intent is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach in correcting image coordinates by modelling the total distortions of the on-the-job calibration process using a limited number of images.
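A hedged sketch of the general idea, fitting an RBF-kernel support vector regression to a synthetic radial-distortion field over image coordinates; the data, kernel parameters, and correction step are illustrative assumptions, not the calibration set or settings used in the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))             # normalized image coordinates
r2 = np.sum(xy**2, axis=1)
dx = 0.05 * xy[:, 0] * r2 + rng.normal(0, 1e-3, 500)   # synthetic radial distortion in x

# Learn the x-correction field as a smooth function of image position
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1e-3))
model.fit(xy, dx)

corrected_x = xy[:, 0] - model.predict(xy)              # apply the modelled correction
residual_rms = np.sqrt(np.mean((model.predict(xy) - dx) ** 2))
print(f"distortion-model residual RMS: {residual_rms:.5f}")
```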
Watershed Models for Decision Support for Inflows to Potholes Reservoir, Washington
Mastin, Mark C.
2009-01-01
A set of watershed models for four basins (Crab Creek, Rocky Ford Creek, Rocky Coulee, and Lind Coulee), draining into Potholes Reservoir in east-central Washington, was developed as part of a decision support system to aid the U.S. Department of the Interior, Bureau of Reclamation, in managing water resources in east-central Washington State. The project is part of the U.S. Geological Survey and Bureau of Reclamation collaborative Watershed and River Systems Management Program. A conceptual model of hydrology is outlined for the study area that highlights the significant processes that are important to accurately simulate discharge under a wide range of conditions. The conceptual model identified the following factors as significant for accurate discharge simulations: (1) influence of frozen ground on peak discharge, (2) evaporation and ground-water flow as major pathways in the system, (3) channel losses, and (4) influence of irrigation practices on reducing or increasing discharge. The Modular Modeling System was used to create a watershed model for the four study basins by combining standard Precipitation Runoff Modeling System modules with modified modules from a previous study and newly modified modules. The model proved unreliable in simulating peak-flow discharge because the index used to track frozen ground conditions was not reliable. Simulated mean monthly and mean annual discharges were more reliable. Data from seven USGS streamflow-gaging stations were used to compare with simulated discharge for model calibration and evaluation. Mean annual differences between simulated and observed discharge varied from 1.2 to 13.8 percent for all stations used in the comparisons except one station on a regional ground-water discharge stream. Two-thirds of the mean monthly percent differences between the simulated mean and the observed mean discharge for these six stations were between -20 and 240 percent, or in absolute terms, between -0.8 and 11 cubic feet per second. A graphical user interface was developed for the user to easily run the model, make runoff forecasts, and evaluate the results. The models, however, are not reliable for managing short-term operations because of their demonstrated inability to match individual storm peaks and individual monthly discharge values. Short-term forecasting may be improved with real-time monitoring of the extent of frozen ground and the snow-water equivalent in the basin. Despite the models' unreliability for short-term runoff forecasts, they are useful in providing long-term, time-series discharge data where no observed data exist.
Cannata, Antonio; Carrara, Fabiola; Cella, Claudia; Ferrari, Silvia; Stucchi, Nadia; Prandini, Silvia; Ene-Iordache, Bogdan; Diadei, Olimpia; Perico, Norberto; Ondei, Patrizia; Pisani, Antonio; Buongiorno, Erasmo; Messa, Piergiorgio; Dugo, Mauro; Remuzzi, Giuseppe
2012-01-01
Trials failed to demonstrate protective effects of investigational treatments on glomerular filtration rate (GFR) reduction in Autosomal Dominant Polycystic Kidney Disease (ADPKD). To assess whether the above findings were explained by unreliable GFR estimates, in this academic study we compared GFR values centrally measured by iohexol plasma clearance with corresponding values estimated by the Chronic Kidney Disease Epidemiology Collaboration (CKD-Epi) and abbreviated Modification of Diet in Renal Disease (aMDRD) formulas in ADPKD patients retrieved from four clinical trials run by a Clinical Research Center and five Nephrology Units in Italy. Measured baseline GFRs and one-year GFR changes averaged 78.6±26.7 and 8.4±10.3 mL/min/1.73 m2 in 111 and 71 ADPKD patients, respectively. CKD-Epi significantly overestimated and aMDRD underestimated baseline GFRs. Less than half of the estimates deviated by <10% from measured values. One-year estimated GFR changes did not detect the measured changes. Both formulas underestimated GFR changes by 50%. Less than 9% of the estimates deviated by <10% from measured changes. The extent of the deviations even exceeded that of the measured one-year GFR changes. In ADPKD, prediction formulas unreliably estimate actual GFR values and fail to detect their changes over time. Direct kidney function measurements by appropriate techniques are needed to adequately evaluate treatment effects in clinical practice and research. PMID:22393413
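For reference, the 2009 CKD-EPI creatinine equation used in comparisons of this kind can be written as a short function (serum creatinine in mg/dL, age in years); this is an illustrative transcription of the published coefficients, not necessarily the exact variant or code used in the study.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: 45-year-old woman with serum creatinine 1.1 mg/dL
print(round(ckd_epi_2009(1.1, 45, female=True), 1))
```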
Morphometry Based on Effective and Accurate Correspondences of Localized Patterns (MEACOLP)
Wang, Hu; Ren, Yanshuang; Bai, Lijun; Zhang, Wensheng; Tian, Jie
2012-01-01
Local features in volumetric images have been used to identify correspondences of localized anatomical structures for brain morphometry. However, the correspondences are often sparse thus ineffective in reflecting the underlying structures, making it unreliable to evaluate specific morphological differences. This paper presents a morphometry method (MEACOLP) based on correspondences with improved effectiveness and accuracy. A novel two-level scale-invariant feature transform is used to enhance the detection repeatability of local features and to recall the correspondences that might be missed in previous studies. Template patterns whose correspondences could be commonly identified in each group are constructed to serve as the basis for morphometric analysis. A matching algorithm is developed to reduce the identification errors by comparing neighboring local features and rejecting unreliable matches. The two-sample t-test is finally adopted to analyze specific properties of the template patterns. Experiments are performed on the public OASIS database to clinically analyze brain images of Alzheimer's disease (AD) and normal controls (NC). MEACOLP automatically identifies known morphological differences between AD and NC brains, and characterizes the differences well as the scaling and translation of underlying structures. Most of the significant differences are identified in only a single hemisphere, indicating that AD-related structures are characterized by strong anatomical asymmetry. In addition, classification trials to differentiate AD subjects from NC confirm that the morphological differences are reliably related to the groups of interest. PMID:22540000
Govindan, Rathinaswamy B; Al-Shargabi, Tareq; Massaro, An N; Metzler, Marina; Andescavage, Nickie N; Joshi, Radhika; Dave, Rhiya; du Plessis, Adre
2016-06-01
Cerebral pressure passivity (CPP) in sick newborns can be detected by evaluating coupling between mean arterial pressure (MAP) and cerebral blood flow measured by near infra-red spectroscopy hemoglobin difference (HbD). However, continuous MAP monitoring requires invasive catheterization with its inherent risks. We tested whether heart rate (HR) could serve as a reliable surrogate for MAP in the detection of CPP in sick newborns. Continuous measurements of MAP, HR, and HbD were made and partitioned into 10-min epochs. Spectral coherence (COH) was computed between MAP and HbD (COHMAP-HbD) to detect CPP, between HR and HbD (COHHR-HbD) for comparison, and between MAP and HR (COHMAP-HR) to quantify baroreflex function (BRF). The agreement between COHMAP-HbD and COHHR-HbD was assessed using ROC analysis. We found poor agreement between COHMAP-HbD and COHHR-HbD in left hemisphere (area under the ROC curve (AUC) 0.68) and right hemisphere (AUC 0.71). Baroreflex failure (COHMAP-HR not significant) was present in 79% of epochs. Confining comparison to epochs with intact BRF showed an AUC of 0.85 for both hemispheres. In these sick newborns, HR was an unreliable surrogate for MAP required for the detection of CPP. This is likely due to the prevalence of BRF failure in these infants.
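A minimal sketch of the kind of spectral coherence computation described above, using scipy on synthetic MAP and HbD epochs; the sampling rate, epoch length, frequency band, and signals are assumptions for illustration, not the study's acquisition settings.

```python
import numpy as np
from scipy.signal import coherence

fs = 1.0                                   # assumed 1 Hz sampling of MAP and HbD
t = np.arange(0, 600)                      # one 10-minute epoch
rng = np.random.default_rng(0)
map_sig = np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.5, t.size)
hbd_sig = 0.8 * np.sin(2 * np.pi * 0.05 * t + 0.3) + rng.normal(0, 0.5, t.size)

# Magnitude-squared coherence between the two signals (Welch segments of 120 s)
f, coh = coherence(map_sig, hbd_sig, fs=fs, nperseg=120)
band = (f >= 0.02) & (f <= 0.1)            # illustrative low-frequency band
print("max coherence in band:", round(coh[band].max(), 2))
```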
Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie
2008-01-01
Reliability of classification performance is important for many biomedical applications. A classification model that considers reliability during model development, such that unreliable segments are rejected, would be useful, particularly in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time-consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG)-determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. Learning vector quantization (LVQ) neural networks, multilayer perceptron (MLP) neural networks, and support vector machines (SVMs) were tested as classifiers. After systematic rejection of difficult-to-classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG-scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly into the model, such that unreliable segments are rejected.
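The reject-option idea can be illustrated with a short sketch in which epochs whose maximum class probability falls below a confidence threshold are withheld from scoring; the synthetic features, MLP settings, and 0.70 threshold are placeholders, not the CHIME models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(0, 1, 2000) > 0).astype(int)    # synthetic sleep/wake labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)
accept = proba.max(axis=1) >= 0.70                        # reject low-confidence epochs
acc_accepted = (clf.predict(X_te)[accept] == y_te[accept]).mean()
print(f"rejected {100 * (1 - accept.mean()):.0f}% of epochs; "
      f"accuracy on accepted epochs: {100 * acc_accepted:.1f}%")
```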
Evaluation of identifier field agreement in linked neonatal records.
Hall, E S; Marsolo, K; Greenberg, J M
2017-08-01
To better address barriers arising from missing and unreliable identifiers in neonatal medical records, we evaluated agreement and discordance among traditional and non-traditional linkage fields within a linked neonatal data set. The retrospective, descriptive analysis represents infants born from 2013 to 2015. We linked children's hospital neonatal physician billing records to newborn medical records originating from an academic delivery hospital and evaluated rates of agreement, discordance and missingness for a set of 12 identifier field pairs used in the linkage algorithm. We linked 7293 of 7404 physician billing records (98.5%), all of which were deemed valid upon manual review. Linked records contained a mean of 9.1 matching and 1.6 non-matching identifier pairs. Only 4.8% had complete agreement among all 12 identifier pairs. Our approach to selection of linkage variables and data formatting preparatory to linkage have generalizability, which may inform future neonatal and perinatal record linkage efforts.
NASA Astrophysics Data System (ADS)
Liou, Cheng-Dar
2015-09-01
This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The unreliable service station may undergo working breakdowns even if no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix, and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived for the system to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters in the pursuit of minimum cost. Two different approaches to identifying the Pareto optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The comparison supports using the traditional epsilon-constraint method, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the non-dominated solution set are obtained and illustrated. Decision makers can use these to improve their decision-making quality.
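The matrix-analytic step can be illustrated generically: for a level-independent quasi-birth-death process with up/local/down generator blocks A0, A1, A2, the rate matrix R is the minimal non-negative solution of A0 + R*A1 + R^2*A2 = 0, obtainable with the classical fixed-point iteration below. The example blocks are arbitrary placeholders satisfying (A0 + A1 + A2)1 = 0, not the balking/reneging/working-breakdown model of this paper.

```python
import numpy as np

def qbd_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration R_{k+1} = -(A0 + R_k^2 A2) A1^{-1} for the QBD rate matrix."""
    R = np.zeros_like(A0)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("rate-matrix iteration did not converge")

A0 = np.array([[0.5, 0.0], [0.0, 0.5]])     # transitions one level up (arrivals)
A2 = np.array([[1.0, 0.0], [0.0, 0.2]])     # transitions one level down (services)
A1 = np.array([[-1.6, 0.1], [0.3, -1.0]])   # within-level transitions
R = qbd_rate_matrix(A0, A1, A2)
print("spectral radius of R:", round(np.max(np.abs(np.linalg.eigvals(R))), 3))
```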
Robust detection of heartbeats using association models from blood pressure and EEG signals.
Jeon, Taegyun; Yu, Jongmin; Pedrycz, Witold; Jeon, Moongu; Lee, Boreom; Lee, Byeongcheol
2016-01-15
The heartbeat is a fundamental cardiac activity that is straightforwardly detected with a variety of measurement techniques for analyzing physiological signals. Unfortunately, unexpected noise or contaminated signals can distort or cut out electrocardiogram (ECG) signals in practice, causing heartbeat detectors to report a false heart rate or, in the worst case, to suspend themselves for a considerable length of time. To deal with the problem of unreliable heartbeat detection, PhysioNet/CinC posed a challenge in 2014 for developing robust heartbeat detectors using multimodal signals. This article proposes a multimodal data association method that supplements ECG as a primary input signal with blood pressure (BP) and electroencephalogram (EEG) as complementary input signals when input signals are unreliable. If the current signal quality index (SQI) qualifies ECG as a reliable input signal, our method applies QRS detection to ECG and reports heartbeats. Otherwise, the current SQI selects the best supplementary input signal between BP and EEG after evaluating the current SQI of BP. When BP is chosen as a supplementary input signal, our association model between ECG and BP enables us to compute their regular intervals, detect characteristic BP signals, and estimate the locations of the heartbeats. When both ECG and BP are not qualified, our fusion method resorts to the association model between ECG and EEG, which allows us to apply an adaptive filter to ECG and EEG, extract the QRS candidates, and report heartbeats. The proposed method achieved an overall score of 86.26% for the test data when the input signals are unreliable. Our method outperformed the traditional method, which achieved 79.28% using the QRS and BP detectors from PhysioNet. Our multimodal signal processing method outperforms the conventional unimodal method of taking ECG signals alone for both training and test data sets. To detect the heartbeat robustly, we have proposed a novel multimodal data association method that supplements ECG with a variety of physiological signals and accounts for the patient-specific lag between different pulsatile signals and ECG. Multimodal signal detectors and data-fusion approaches such as those proposed in this article can reduce false alarms and improve patient monitoring.
The Clinical Evaluation of Alcohol Intoxication Is Inaccurate in Trauma Patients.
Kumar, Ashwini; Holloway, Travis; Cohn, Stephen M; Goodwiler, Gregory; Admire, John R
2018-02-14
Discharging patients from emergency centers based on the clinical features of intoxication alone may be dangerous, as these features may correlate poorly with ethanol measurements. We determined the feasibility of utilizing a hand-held breath alcohol analyzer to aid in the disposition of intoxicated trauma patients by comparing serial breathalyzer (Intoximeter, Alco-Sensor FST, St. Louis, Missouri, USA) data with clinical assessments in determining the readiness of trauma patients for discharge. A total of 20 legally intoxicated (LI) patients (blood alcohol concentration (BAC) >80 mg/dL) brought to our trauma center were prospectively investigated. Serial breath samples were obtained using a breathalyzer as a surrogate measure of repeated BAC. A clinical exam (nystagmus, one-leg balance, heel-toe walk) was performed prior to each breath sampling. The enrollees were 85% male, age 30±10 (range 19-51), with a body mass index (BMI) of 29±7. The average initial blood alcohol level (BAL) was 245±61 (range 162-370) mg/dL. Based on breath samples, the alcohol elimination rates varied from 21.5 mg/dL/hr to 45.7 mg/dL/hr (mean 28.5 mg/dL/hr). There were no significant differences in alcohol elimination rates by gender, age, or BMI. The clinical exam also varied widely among patients; only seven of 16 (44%) LI patients demonstrated horizontal nystagmus, so the exam suggested sobriety in the majority despite legal intoxication, and most of the LI patients (66%) were able to complete the balance tasks (again suggesting sobriety). Intoxicated trauma patients have an unreliable clinical sobriety exam and a wide range of alcohol elimination rates. The portable alcohol breath analyzer represents a potential option to easily and inexpensively establish legal sobriety in this population.
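A simple sketch of how an individual elimination rate can be estimated from serial breath readings by linear regression of BAC on time, and how long until the 80 mg/dL legal threshold is reached; the readings are hypothetical, not patient data.

```python
import numpy as np

t_hours = np.array([0.0, 1.0, 2.0, 3.0])       # time since first breath sample (h)
bac = np.array([245.0, 217.0, 188.0, 160.0])   # breath-estimated BAC (mg/dL)

slope, intercept = np.polyfit(t_hours, bac, 1)  # linear fit of BAC versus time
elimination_rate = -slope                       # mg/dL per hour
hours_to_legal = (bac[-1] - 80.0) / elimination_rate
print(f"elimination rate: {elimination_rate:.1f} mg/dL/hr; "
      f"about {hours_to_legal:.1f} h until BAC < 80 mg/dL")
```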
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandes, Annemarie T.; Apisarnthanarax, Smith; Yin, Lingshu
Purpose: To compare the extent of tumor motion between 4-dimensional CT (4DCT) and cine-MRI in patients with hepatic tumors treated with radiation therapy. Methods and Materials: Patients with liver tumors who underwent 4DCT and 2-dimensional biplanar cine-MRI scans during simulation were retrospectively reviewed to determine the extent of target motion in the superior–inferior, anterior–posterior, and lateral directions. Cine-MRI was performed over 5 minutes. Tumor motion from MRI was determined by tracking the centroid of the gross tumor volume using deformable image registration. Motion estimates from 4DCT were performed by evaluation of the fiducial, residual contrast (or liver contour) positions in each CT phase. Results: Sixteen patients with hepatocellular carcinoma (n=11), cholangiocarcinoma (n=3), and liver metastasis (n=2) were reviewed. Cine-MRI motion was larger than 4DCT for the superior–inferior direction in 50% of patients by a median of 3.0 mm (range, 1.5-7 mm), the anterior–posterior direction in 44% of patients by a median of 2.5 mm (range, 1-5.5 mm), and laterally in 63% of patients by a median of 1.1 mm (range, 0.2-4.5 mm). Conclusions: Cine-MRI frequently detects larger differences in hepatic intrafraction tumor motion when compared with 4DCT most notably in the superior–inferior direction, and may be useful when assessing the need for or treating without respiratory management, particularly in patients with unreliable 4DCT imaging. Margins wider than the internal target volume as defined by 4DCT were required to encompass nearly all the motion detected by cine-MRI for some of the patients in this study.
Takeshita, Kazutaka; Ikeda, Takashi; Takahashi, Hiroshi; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko; Kaji, Koichi
2016-01-01
Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk for observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at a high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to a double count derived from increased deer movement and recovery of body condition secondary to the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer the deer abundance needs to be reconsidered.
Utility of Squeeze Flow in the Food Industry
NASA Astrophysics Data System (ADS)
Huang, T. A.
2008-07-01
Squeeze flow for obtaining the shear viscosity of Newtonian and non-Newtonian fluids has long been established in the literature. Rotational shear flow geometries (cone and plate, parallel plates, or concentric cylinders) all develop wall slip, shear fracture, or instability with food-related materials such as peanut butter or mayonnaise. Viscosity data obtained using any one of the above-mentioned set-ups are suspect or potentially contain significant error, and they are unreliable for supporting or predicting the textural differences perceived in consumer evaluation. An RMS-800, from Rheometrics Inc., was employed to conduct squeezing flow at constant speeds on a set of parallel plates. Viscosity data, over a broad range of shear rates, are compared between Hellmann's real (HRM) and light mayonnaise (HLM). The consistency and shear-thinning indices, as defined in the power-law model, were determined. HRM exhibits more pronounced shear thinning than HLM, yet the consistency of HRM is significantly higher. Sensory evaluation by a trained expert panel found the adhesiveness and cohesiveness of HLM to be significantly higher. It appears that the degree of shear thinning is one of the key rheological parameters for predicting the above-mentioned difference in textural attributes. Error in determining viscosity arising from non-parallelism between the two plates can be significant enough to affect the accuracy of the viscosity, in particular the shear-thinning index. Details are a subject for the next presentation. Nevertheless, the method has proven to be fast, rugged, simple, and reliable. It can be developed as a QC tool.
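A short sketch of fitting the power-law (Ostwald-de Waele) model eta = K * gamma_dot^(n-1) by linear regression in log-log space, which yields the consistency index K and shear-thinning index n discussed above; the data points are hypothetical, not the RMS-800 measurements.

```python
import numpy as np

shear_rate = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])       # 1/s
viscosity = np.array([310.0, 150.0, 62.0, 28.0, 12.5, 5.8])   # Pa.s

# log(eta) = log(K) + (n - 1) * log(gamma_dot), so a straight-line fit recovers K and n
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0          # shear-thinning index (n < 1 indicates shear thinning)
K = np.exp(intercept)    # consistency index, Pa.s^n
print(f"n = {n:.2f}, K = {K:.1f}")
```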
Impact of hyperhidrosis on quality of life and its assessment.
Hamm, Henning
2014-10-01
Hyperhidrosis is an embarrassing condition that may interfere with routine activities, cause emotional distress, and disturb both professional and social lives of patients. Objective examination is variable and unreliable, so efforts have been made in the last 15 years to substantiate the limitations of these patients, especially in primary focal hyperhidrosis. Almost all therapeutic studies use standardized or self-designed instruments to evaluate the impact of the disease on quality of life and the improvement achieved by treatment. This article gives an overview of the difficulties with which patients with hyperhidrosis are confronted and of research investigating the restrictions. Copyright © 2014 Elsevier Inc. All rights reserved.
Evaluating the Cost-Benefits of Utilizing Host Nation Power for US Military Bases
2016-01-29
treated. In fact, even the most unreliable host nation grids almost always have a higher availability than solar PV, which has at best a 30% capacity...fuel supply. In this way, HN power is like other intermittent sources such as solar or wind power. HN power should not be thought of as a
Electron microprobe evaluation of terrestrial basalts for whole-rock K-Ar dating
Mankinen, E.A.; Brent, Dalrymple G.
1972-01-01
Four basalt samples for whole-rock K-Ar dating were analyzed with an electron microprobe to locate potassium concentrations. Highest concentrations of potassium were found in those mineral phases which were the last to crystallize. The two reliable samples had potassium concentrated in fine-grained interstitial feldspar and along grain boundaries of earlier formed plagioclase crystals. The two unreliable samples had potassium concentrated in the glassy matrix, demonstrating the ineffectiveness of basaltic glass as a retainer of radiogenic argon. In selecting basalt samples for whole-rock K-Ar dating, particular emphasis should be placed on determining the nature and condition of the fine-grained interstitial phases. © 1972.
CARE 3 user-friendly interface user's guide
NASA Technical Reports Server (NTRS)
Martensen, A. L.
1987-01-01
CARE 3 predicts the unreliability of highly reliable reconfigurable fault-tolerant systems that include redundant computers or computer systems. CARE3MENU is a user-friendly interface used to create an input for the CARE 3 program. The CARE3MENU interface has been designed to minimize user input errors. Although a CARE3MENU session may be successfully completed and all parameters may be within specified limits or ranges, the CARE 3 program is not guaranteed to produce meaningful results if the user incorrectly interprets the CARE 3 stochastic model. The CARE3MENU User Guide provides complete information on how to create a CARE 3 model with the interface. The CARE3MENU interface runs under the VAX/VMS operating system.
Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.
Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo
2016-01-01
In this paper, we present an asynchronous algorithm to estimate the unknown parameter under an unreliable network which allows new sensors to join and old sensors to leave, and can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm can avoid the interference among messages and effectively reduce the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that our proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effects of antineoplastic drugs on Lactobacillus casei and radioisotopic assays for serum folate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmel, R.
1978-02-01
Microbiologic assay, usually employing Lactobacillus casei, remains the standard assay for serum folate to date. Among its disadvantages have been falsely low results in patients receiving bacteriostatic agents such as antibiotics. This study examined whether commonly used antineoplastic drugs had a similar effect. Methotrexate and 5-fluorouracil depressed microbiologic serum folate levels. No effect was found for adriamycin, bleomycin, BCNU, cyclophosphamide, cytosine arabinoside, vincristine, vinblastine, mechlorethamine, mithramycin, hydroxyurea, and hydrocortisone. None of the drugs affected the radioassay except methotrexate, which produced falsely high folate results. Thus, it appears that the L. casei assay for folate becomes unreliable in patients receiving 5-fluorouracil and the radioisotopic assay becomes unreliable in those receiving methotrexate.
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff modeling, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inference. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces its best results on short forecast horizons that are typical for online applications.
Aseptic and Bacterial Meningitis: Evaluation, Treatment, and Prevention.
Mount, Hillary R; Boyle, Sean D
2017-09-01
The etiologies of meningitis range in severity from benign and self-limited to life-threatening with potentially severe morbidity. Bacterial meningitis is a medical emergency that requires prompt recognition and treatment. Mortality remains high despite the introduction of vaccinations for common pathogens that have reduced the incidence of meningitis worldwide. Aseptic meningitis is the most common form of meningitis with an annual incidence of 7.6 per 100,000 adults. Most cases of aseptic meningitis are viral and require supportive care. Viral meningitis is generally self-limited with a good prognosis. Examination maneuvers such as Kernig sign or Brudzinski sign may not be useful to differentiate bacterial from aseptic meningitis because of variable sensitivity and specificity. Because clinical findings are also unreliable, the diagnosis relies on the examination of cerebrospinal fluid obtained from lumbar puncture. Delayed initiation of antibiotics can worsen mortality. Treatment should be started promptly in cases where transfer, imaging, or lumbar puncture may slow a definitive diagnosis. Empiric antibiotics should be directed toward the most likely pathogens and should be adjusted by patient age and risk factors. Dexamethasone should be administered to children and adults with suspected bacterial meningitis before or at the time of initiation of antibiotics. Vaccination against the most common pathogens that cause bacterial meningitis is recommended. Chemoprophylaxis of close contacts is helpful in preventing additional infections.
Review of Copper Provision in the Parenteral Nutrition of Adults.
Livingstone, Callum
2017-04-01
The essential trace element copper (Cu) is required for a range of physiologic processes, including wound healing and functioning of the immune system. The correct amount of Cu must be provided in parenteral nutrition (PN) if deficiency and toxicity are to be avoided. While provision in line with the standard recommendations should suffice for most patients, Cu requirements may be higher in patients with increased gastrointestinal losses and severe burns and lower in those with cholestasis. The tests of Cu status that are currently available for clinical use are unreliable. Serum Cu concentration is the most commonly ordered test but is insensitive to Cu deficiency and toxicity and is misleadingly increased during the acute phase response. These limitations make it difficult for prescribers to assess Cu status and to decide how much Cu to provide. There is a need for better tests of Cu status to be developed to decrease uncertainty and improve individualization of Cu dosing. More information is needed on Cu requirements in disease and Cu contamination of PN components and other intravenous fluids. New multi-trace element products should be developed that provide Cu doses in line with the 2012 American Society for Parenteral and Enteral Nutrition recommendations. This article discusses the evaluation and treatment of Cu deficiency and toxicity in patients treated with PN.
Brooks, J P; Adeli, A; McLaughlin, M R; Miles, D M
2012-12-01
Increasing costs associated with inorganic fertilizer have led to widespread use of broiler litter. Proper land application, typically limiting nutrient loss, is essential to protect surface water. This study was designed to evaluate litter-borne microbial runoff (heterotrophic plate count bacteria, staphylococci, Escherichia coli, enterococci, and Clostridium perfringens) while applying typical nutrient-control methods. Field studies were conducted in which plots with high and low litter rates, inorganic fertilizer, AlCl(3)-treated litter, and controls were rained on five times using a rain generator. Overall, microbial runoff from poultry litter applied plots was consistently greater (2-5 log(10) plot(-1)) than controls. No appreciable effect on microbial runoff was noted from variable litter application rate or AlCl(3) treatments, though rain event, not time, significantly affected runoff load. C. perfringens and staphylococci runoff were consistently associated with poultry litter application, during early rain events, while other indicators were unreliable. Large microbial runoff pulses were observed, ranging from 10(2) to 10(10) CFU plot(-1); however, only a small fraction of litter-borne microbes were recoverable in runoff. This study indicated that microbial runoff from litter-applied plots can be substantial, and that methods intended to reduce nutrient losses do not necessarily reduce microbial runoff.
Is mandatory research ethics reviewing ethical?
Dyck, Murray; Allen, Gary
2013-08-01
Review boards responsible for vetting the ethical conduct of research have been criticised for their costliness, unreliability and inappropriate standards when evaluating some non-medical research, but the basic value of mandatory ethical review has not been questioned. When the standards that review boards use to evaluate research proposals are applied to review board practices, it is clear that review boards do not respect researchers or each other, lack merit and integrity, are not just and are not beneficent. The few benefits of mandatory ethical review come at a much greater, but mainly hidden, social cost. It is time that responsibility for the ethical conduct of research is clearly transferred to researchers, except possibly in that small proportion of cases where prospective research participants may be so intrinsically vulnerable that their well-being may need to be overseen.
Defining and evaluating perceptions of victim blame in antigay hate crimes.
Cramer, Robert J; Nobles, Matt R; Amacker, Amanda M; Dovoedo, Lisa
2013-09-01
Victimology research often hinges on attribution of blame toward victims despite a lack of conceptual agreement on the definition and measure of the construct. Drawing on established blame attribution and intent literature, the present study evaluates psychometric properties of the Perceptions of Victim Blame Scale (PVBS) using mock jury samples in a vignette-based capital murder antigay hate crime context. Factor analyses show support for a three-factor structure with the following perceptions of victim blame subscales: Malice, Recklessness, and Unreliability. All factors displayed expected positive associations with homonegativity and authoritarianism. Likewise, all factors displayed null relations with trait aggression and social desirability. Only the Malice factor predicted sentencing decisions after controlling for crime condition and support for the death penalty. Results are reviewed with respect to blame attribution theory and practical application of a revised PVBS.
Cook, Thomas D; Steiner, Peter M
2010-03-01
In this article, we note the many ontological, epistemological, and methodological similarities between how Campbell and Rubin conceptualize causation. We then explore 3 differences in their written emphases about individual case matching in observational studies. We contend that (a) Campbell places greater emphasis than Rubin on the special role of pretest measures of outcome among matching variables; (b) Campbell is more explicitly concerned with unreliability in the covariates; and (c) for analyzing the outcome, only Rubin emphasizes the advantages of using propensity score over regression methods. To explore how well these 3 factors reduce bias, we reanalyze and review within-study comparisons that contrast experimental and statistically adjusted nonexperimental causal estimates from studies with the same target population and treatment content. In this context, the choice of covariates counts most for reducing selection bias, and the pretest usually plays a special role relative to all the other covariates considered singly. Unreliability in the covariates also influences bias reduction but by less. Furthermore, propensity score and regression methods produce comparable degrees of bias reduction, though these within-study comparisons may not have met the theoretically specified conditions most likely to produce differences due to analytic method.
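A schematic comparison, on simulated data with a known treatment effect of 2.0, of a propensity-score nearest-neighbor match against regression adjustment on the same pretest covariate; it illustrates the two analytic options discussed above and is not the authors' reanalysis code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
pretest = rng.normal(size=n)                              # pretest measure of the outcome
z = rng.binomial(1, 1 / (1 + np.exp(-1.2 * pretest)))     # selection depends on the pretest
y = 2.0 * z + 1.5 * pretest + rng.normal(size=n)          # outcome with true effect 2.0

X = pretest.reshape(-1, 1)
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]

# 1-to-1 nearest-neighbor matching on the propensity score
treated, control = np.where(z == 1)[0], np.where(z == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
att_match = np.mean(y[treated] - y[control[idx.ravel()]])

# Regression adjustment using the same covariate
beta = LinearRegression().fit(np.column_stack([z, pretest]), y).coef_[0]

print(f"matching estimate: {att_match:.2f}, regression estimate: {beta:.2f} (true 2.0)")
```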
Danese, Elisa; Montagnana, Martina; Nouvenne, Antonio; Lippi, Giuseppe
2015-01-01
The efficient diagnosis and accurate monitoring of diabetic patients are cornerstones for reducing the risk of diabetic complications. The current diagnostic and prognostic strategies in diabetes are mainly based on two tests, plasma (or capillary) glucose and glycated hemoglobin (HbA1c). Nevertheless, these measures are not foolproof, and their clinical usefulness is biased by a number of clinical and analytical factors. The introduction of other indices of glucose homeostasis in clinical practice such as fructosamine and glycated albumin (GA) may be regarded as an attractive alternative, especially in patients in whom the measurement of HbA1c may be biased or even unreliable. These include patients with rapid changes of glucose homeostasis and larger glycemic excursions, and patients with red blood cell disorders and renal disease. According to available evidence, the overall diagnostic efficiency of GA seems superior to that of fructosamine throughout a broad range of clinical settings. The current method for measuring GA is also better standardized and less vulnerable to preanalytical variables than those used for assessing fructosamine. Additional advantages of GA over HbA1c are represented by lower reagent cost and being able to automate the GA analysis on many conventional laboratory instruments. Although further studies are needed to definitely establish that GA can complement or even replace conventional measures of glycemic control such as HbA1c, GA may help the clinical management of patients with diabetes in whom HbA1c values might be unreliable. PMID:25591856
Yarkoni, Tal
2012-01-01
Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize, including (1) open and transparent access to accumulated evaluation data, (2) personalized and highly customizable performance metrics, and (3) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting toward such models as soon as possible. PMID:23060783
Keidan, Ilan; Sidi, Avner; Ben-Menachem, Erez; Tene, Yael; Berkenstadt, Haim
2014-02-01
To determine the accuracy and precision of simultaneous noninvasive blood pressure (NIBP) measurement in the arm, forearm, and ankle in anesthetized children. Prospective, randomized study. University medical center. 101 ASA physical status 1 and 2 children (aged 1-8 yrs) scheduled for elective surgery with general anesthesia. Simultaneous NIBP measurements were recorded at the arm, forearm, and ankle at 5-minute intervals. The systolic blood pressure difference between the arm-forearm or the arm-ankle was within the ± 10% range in 63% and 29% of measurements, and within the ± 20% range in 85% and 67% of measurements, respectively. The diastolic blood pressure difference between the arm-forearm or the arm-ankle was within the ± 10% range in 42% and 44% and within the ± 20% range in 67% and 74% of measurements, respectively. In patients in whom the initial three NIBP measurements were within the ± 20% range between the forearm and arm, 86% of the subsequent measurements were also within that limit. Forearm and ankle NIBP measurements are unreliable and inconsistent with NIBP measured in the arm of anesthetized children. These alternative BP measurement sites lack both accuracy (agreement with the reference "gold" standard) and precision (reproducibility). Copyright © 2014 Elsevier Inc. All rights reserved.
Spatial cue reliability drives frequency tuning in the barn Owl's midbrain
Cazettes, Fanny; Fischer, Brian J; Pena, Jose L
2014-01-01
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability. DOI: http://dx.doi.org/10.7554/eLife.04854.001 PMID:25531067
Design and development of a simple UV fluorescence multi-spectral imaging system
NASA Astrophysics Data System (ADS)
Tovar, Carlos; Coker, Zachary; Yakovlev, Vladislav V.
2018-02-01
Healthcare access in low-resource settings is compromised by the availability of affordable and accurate diagnostic equipment. The four primary poverty-related diseases - AIDS, pneumonia, malaria, and tuberculosis - account for approximately 400 million annual deaths worldwide as of 2016 estimates. Current diagnostic procedures for these diseases are prolonged and can become unreliable under various conditions. We present the development of a simple low-cost UV fluorescence multi-spectral imaging system geared towards low resource settings for a variety of biological and in-vitro applications. Fluorescence microscopy serves as a useful diagnostic indicator and imaging tool. The addition of a multi-spectral imaging modality allows for the detection of fluorophores within specific wavelength bands, as well as the distinction between fluorophores possessing overlapping spectra. The developed instrument has the potential for a very diverse range of diagnostic applications in basic biomedical science and biomedical diagnostics and imaging. Performance assessment of the microscope will be validated with a variety of samples ranging from organic compounds to biological samples.
Nectar quality perception by honey bees (Apis mellifera ligustica).
Sanderson, Charlotte E; Cook, Peyton; Hill, Peggy S M; Orozco, Benjamin S; Abramson, Charles I; Wells, Harrington
2013-11-01
In exploring how foragers perceive rewards, we often find that well-motivated individuals are not too choosy and unmotivated individuals are unreliable and inconsistent. Nevertheless, when given a choice we see that individuals can clearly distinguish between rewards. Here we develop the logic of using responses to two-choice problems as a derivative function of perceived reward, and utilize this model to examine honey bee perception of nectar quality. Measuring the derivative allows us to deduce the perceived reward function. The derivative function of the perceived reward equation gives the rate of change of the reward perceived for each reward value. This approach depends on presenting free-flying foragers with a series of two different rewards presented simultaneously (i.e., two-choice, binomial tests). We also examine how honey bees integrate information from a range of reward qualities to formulate a functional response. Results suggest that honey bees overestimate higher quality rewards and that direct comparison is an important step in the integration of information from a range of rewards.
Korman, Josh; Yard, Mike
2017-01-01
Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
Silverman, Debra T.; Malats, Núria; Tardon, Adonina; Garcia-Closas, Reina; Serra, Consol; Carrato, Alfredo; Fortuny, Joan; Rothman, Nathaniel; Dosemeci, Mustafa; Kogevinas, Manolis
2009-01-01
The authors evaluated potential determinants of the quality of the interview in a case-control study of bladder cancer and assessed the effect of the interview quality on the risk estimates. The analysis included 1,219 incident bladder cancer cases and 1,271 controls recruited in Spain in 1998–2001. Information on etiologic factors for bladder cancer was collected through personal interviews, which were scored as unsatisfactory, questionable, reliable, or high quality by the interviewers. Eight percent of the interviews were unsatisfactory or questionable. Increasing age, lower socioeconomic status, and poorer self-perceived health led to higher proportions of questionable or unreliable interviews. The odds ratio for cigarette smoking, the main risk factor for bladder cancer, was 6.18 (95% confidence interval: 4.56, 8.39) overall, 3.20 (95% confidence interval: 1.13, 9.04) among unsatisfactory or questionable interviews, 6.86 (95% confidence interval: 4.80, 9.82) among reliable interviews, and 7.70 (95% confidence interval: 3.64, 16.30) among high-quality interviews. Similar trends were observed for employment in high-risk occupations, drinking water containing elevated levels of trihalomethanes, and use of analgesics. Higher quality interviews led to stronger associations compared with risk estimation that did not take the quality of interview into account. The collection of quality of interview scores and the exclusion of unreliable interviews probably reduce misclassification of exposure in observational studies. PMID:19478234
Strength of Intentional Effort Enhances the Sense of Agency
Minohara, Rin; Wen, Wen; Hamasaki, Shunsuke; Maeda, Takaki; Kato, Motoichiro; Yamakawa, Hiroshi; Yamashita, Atsushi; Asama, Hajime
2016-01-01
Sense of agency (SoA) refers to the feeling of controlling one’s own actions and the experience of controlling external events with one’s actions. The present study examined the effect of the strength of intentional effort on SoA. We manipulated the strength of intentional effort using three types of buttons that differed in the amount of force required to depress them. We used a self-attribution task as an explicit measure of SoA. The results indicate that the strength of intentional effort enhanced self-attribution when action-effect congruency was unreliable. We concluded that intentional effort importantly affects the integration of multiple cues underlying explicit judgments of agency when the causal relationship between action and effect is unreliable. PMID:27536267
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to system parameters are also investigated.
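The MTTF derivation can be illustrated generically: for a Markov reliability model, MTTF equals the initial distribution times the inverse of the negated generator restricted to the operational (transient) states, applied to a vector of ones. The sketch below uses a simple two-unit placeholder model with assumed rates, not the M primary / W standby / R station system of the paper.

```python
import numpy as np

lam, mu = 0.01, 0.5          # assumed failure and repair rates (per hour)
# States: 0 = both units up, 1 = one unit up, 2 = system failed (absorbing, omitted)
Q_TT = np.array([[-2 * lam,      2 * lam],
                 [      mu, -(mu + lam)]])   # generator restricted to operational states
pi0 = np.array([1.0, 0.0])                   # start with both units working

# MTTF = pi0 * (-Q_TT)^{-1} * 1
mttf = pi0 @ np.linalg.solve(-Q_TT, np.ones(2))
print(f"MTTF = {mttf:.0f} hours")
```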
What to Do With "Moderate" Reliability and Validity Coefficients?
Post, Marcel W
2016-07-01
Clinimetric studies may use criteria for test-retest reliability and convergent validity such that correlation coefficients as low as .40 are supportive of reliability and validity. It can be argued that moderate (.40-.60) correlations should not be interpreted in this way and that reliability coefficients <.70 should be considered indicative of unreliability. Convergent validity coefficients in the .40 to .60 or .40 to .70 range should be considered indications of validity problems, or as inconclusive at best. Studies on reliability and convergent validity should be designed in such a way that it is realistic to expect high reliability and validity coefficients. Multitrait-multimethod approaches are preferred for studying construct (convergent-divergent) validity. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
ARTiiFACT: a tool for heart rate artifact processing and heart rate variability analysis.
Kaufmann, Tobias; Sütterlin, Stefan; Schulz, Stefan M; Vögele, Claus
2011-12-01
The importance of appropriate handling of artifacts in interbeat interval (IBI) data must not be underestimated. Even a single artifact may cause unreliable heart rate variability (HRV) results. Thus, a robust artifact detection algorithm and the option for manual intervention by the researcher form key components for confident HRV analysis. Here, we present ARTiiFACT, a software tool for processing electrocardiogram and IBI data. Both automated and manual artifact detection and correction are available in a graphical user interface. In addition, ARTiiFACT includes time- and frequency-based HRV analyses and descriptive statistics, thus offering the basic tools for HRV analysis. Notably, all program steps can be executed separately and allow for data export, thus offering high flexibility and interoperability with a whole range of applications.
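As an illustration of why even a single artifact matters, here is a minimal sketch that flags implausible interbeat intervals with a simple median-deviation rule and computes RMSSD, a common time-domain HRV index, before and after exclusion. This is a simplified stand-in, not ARTiiFACT's actual detection or correction algorithm, and simple deletion (rather than interpolation) is used only to keep the example short.

```python
import numpy as np

def detect_ibi_artifacts(ibi_ms, threshold=0.25):
    """Flag interbeat intervals deviating more than `threshold` (fraction) from the median."""
    ibi = np.asarray(ibi_ms, dtype=float)
    med = np.median(ibi)
    return np.abs(ibi - med) > threshold * med

def rmssd(ibi_ms):
    """Root mean square of successive differences (time-domain HRV index)."""
    diffs = np.diff(np.asarray(ibi_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

ibi = [812, 805, 798, 1620, 801, 795, 790]      # 1620 ms represents a missed-beat artifact
mask = detect_ibi_artifacts(ibi)
clean = np.asarray(ibi, dtype=float)[~mask]     # crude deletion; real tools interpolate instead
print("artifacts at indices:", np.where(mask)[0])
print("RMSSD raw:", round(rmssd(ibi), 1), "ms | RMSSD after exclusion:", round(rmssd(clean), 1), "ms")
```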
Valente, Mariana; Araújo, Ana; Esteves, Tiago; Laundos, Tiago L; Freire, Ana G; Quelhas, Pedro; Pinto-do-Ó, Perpétua; Nascimento, Diana S
2015-12-02
Cardiac therapies are commonly tested preclinically in small-animal models of myocardial infarction. Following functional evaluation, post-mortem histological analysis is essential to assess morphological and molecular alterations underlying the effectiveness of treatment. However, non-methodical and inadequate sampling of the left ventricle often leads to misinterpretations and variability, making direct study comparisons unreliable. Protocols are provided for representative sampling of the ischemic mouse heart followed by morphometric analysis of the left ventricle. Extending the use of this sampling to other types of in situ analysis is also illustrated through the assessment of neovascularization and cellular engraftment in a cell-based therapy setting. This is of interest to the general cardiovascular research community as it details methods for standardization and simplification of histo-morphometric evaluation of emergent heart therapies. © 2015 by John Wiley & Sons, Inc.
Reliability of Craniofacial Superimposition Using Three-Dimension Skull Model.
Gaudio, Daniel; Olivieri, Lara; De Angelis, Danilo; Poppa, Pasquale; Galassi, Andrea; Cattaneo, Cristina
2016-01-01
Craniofacial superimposition is a technique potentially useful for the identification of unidentified human remains if a photo of the missing person is available. We have tested the reliability of the 2D-3D computer-aided nonautomatic superimposition techniques. Three-dimensional laser scans of five skulls and ten photographs were overlaid with imaging software. The resulting superimpositions were evaluated using three methods: craniofacial landmarks, morphological features, and a combination of the two. A 3D model of each skull without its mandible was tested for superimposition; we also evaluated whether separating skulls by sex would increase correct identifications. Results show that the landmark method employing the entire skull is the most reliable one (5/5 correct identifications, 40% false positives [FP]), regardless of sex. However, the persistence of a high percentage of FP in all the methods evaluated indicates that these methods are unreliable for positive identification, although the landmark-only method could be useful for exclusion. © 2015 American Academy of Forensic Sciences.
18 CFR 806.23 - Standards for water withdrawals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... of groundwater or stream flow levels; rendering competing supplies unreliable; affecting other water... reasonably foreseeable water needs from available groundwater or surface water without limitation: (i...
18 CFR 806.23 - Standards for water withdrawals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of groundwater or stream flow levels; rendering competing supplies unreliable; affecting other water... reasonably foreseeable water needs from available groundwater or surface water without limitation: (i...
18 CFR 806.23 - Standards for water withdrawals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... of groundwater or stream flow levels; rendering competing supplies unreliable; affecting other water... reasonably foreseeable water needs from available groundwater or surface water without limitation: (i...
Performance Assessment of Refractory Concrete Used on the Space Shuttle's Launch Pad
NASA Technical Reports Server (NTRS)
Trejo, David; Calle, Luz Marina; Halman, Ceki
2005-01-01
The John F. Kennedy Space Center (KSC) maintains several facilities for launching space vehicles. During recent launches it has been observed that the refractory concrete materials that protect the steel-framed flame duct are breaking away from this base structure and are being projected at high velocities. There is significant concern that these projected pieces can strike the launch complex or space vehicle during the launch, jeopardizing the safety of the mission. A qualification program is in place to evaluate the performance of different refractory concretes and data from these tests have been used to assess the performance of the refractory concretes. However, there is significant variation in the test results, possibly making the existing qualification test program unreliable. This paper will evaluate data from past qualification tests, identify potential key performance indicators for the launch complex, and will recommend a new qualification test program that can be used to better qualify refractory concrete.
Blank, Fidela S J; Miller, Moses; Nichols, James; Smithline, Howard; Crabb, Gillian; Pekow, Penelope
2009-04-01
The purpose of this study is to compare blood glucose levels measured by a point of care (POC) device to laboratory measurement using the same venous blood sample from patients with suspected diabetic ketoacidosis (DKA). A descriptive correlational design was used for this IRB-approved quality assurance project. The study site was the 50-bed BMC emergency department (ED), which has an annual census of over 100,000 patient visits. The convenience sample consisted of 54 blood samples from suspected DKA patients with orders for hourly blood draws for glucose measurement. Spearman correlations of the glucose POC values, reference lab values, and differences between the two were evaluated. A chi-square test was used to evaluate the association between acidosis status and FDA acceptability of POC values. Patient age range was 10-86 years; 63% were females; 46% had a final diagnosis of DKA. POC values underestimated glucose levels 93% of the time. There was a high correlation between the lab value and the magnitude of the difference (lab minus POC value), indicating that the higher the true glucose value, the greater the difference between the lab and the POC value. A chi-square test showed no overall association between acidosis and FDA acceptability. The POC values underestimated lab-reported glucose levels in 50 of 54 cases, even with the use of the same venous sample sent to the lab, which makes them highly unreliable for use in monitoring suspected DKA patients.
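A minimal sketch of the kind of agreement analysis described, using synthetic paired POC and laboratory glucose values rather than the study data; the scipy calls shown are standard library functions, and the contingency counts are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lab = rng.uniform(200, 800, size=54)                  # synthetic laboratory glucose (mg/dL)
poc = lab - (0.05 * lab + rng.normal(0, 10, 54))      # POC underestimates, more so at high values

# Does the lab-POC difference grow with the true (lab) value?
rho, p = stats.spearmanr(lab, lab - poc)
print(f"Spearman rho (lab vs. lab-minus-POC difference): {rho:.2f}, p = {p:.3g}")

# Chi-square test of association between acidosis status and FDA-acceptable POC error
table = np.array([[20, 5],     # hypothetical counts: acidotic, acceptable / not acceptable
                  [22, 7]])    # non-acidotic, acceptable / not acceptable
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
```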
Takeshita, Kazutaka; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko
2016-01-01
Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk for observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at a high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to a double count derived from increased deer movement and recovery of body condition secondary to the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer the deer abundance needs to be reconsidered. PMID:27711181
Geohazards and Poverty: An Ecosystem Services Approach in Bangladesh
NASA Astrophysics Data System (ADS)
Hutton, C.; Nicholls, R. J.; Lazar, A.
2014-12-01
The Ecosystem Services (ES) of river deltas often support high population densities, estimated at over 500 million people globally, with particular concentrations in South, South-East and East Asia and Africa. Further, a large proportion of delta populations experience extremes of poverty and are highly vulnerable to the environmental and ecological stress and degradation that is occurring. A systems dynamics approach is adopted to provide policy makers with the knowledge and tools to enable them to evaluate the effects of geohazards and environmental stressors and associated policy decisions on people's livelihoods (Ecosystem Services for Poverty Alleviation - ESPA Deltas). This is done by a multidisciplinary and multi-national team of policy analysts, social and natural scientists and engineers. The work presents a participatory approach to formally evaluating ecosystem services and poverty in the context of the wide range of environmental stressors and hazards. These changes include subsidence and sea-level rise, land degradation and population pressure in delta regions. The approach will be developed, tested and applied in coastal Bangladesh. Rural livelihoods are inextricably linked with the natural ecosystems, and low-income farmers are highly vulnerable to changes in ecosystem services as they are impacted by geohazards and environmental stressors. Their health, wellbeing and financial security are under threat from many directions, such as unreliable supplies of clean water, increasing salinisation of soils and flooding, while in the longer term they are threatened by subsidence and sea-level rise. This study will contribute to the understanding of this present vulnerability and help the people who develop the relevant policy to make more informed choices about how best to reduce this vulnerability.
Reisner, Andrew T; Chen, Liangyou; McKenna, Thomas M; Reifman, Jaques
2008-10-01
Prehospital severity scores can be used in routine prehospital care, mass casualty care, and military triage. If computers could reliably calculate clinical scores, new clinical and research methodologies would be possible. One obstacle is that vital signs measured automatically can be unreliable. We hypothesized that Signal Quality Indices (SQI's), computer algorithms that differentiate between reliable and unreliable monitored physiologic data, could improve the predictive power of computer-calculated scores. In a retrospective analysis of trauma casualties transported by air ambulance, we computed the Triage Revised Trauma Score (RTS) from archived travel monitor data. We compared the areas-under-the-curve (AUC's) of receiver operating characteristic curves for prediction of mortality and red blood cell transfusion for 187 subjects with comparable quantities of good-quality and poor-quality data. Vital signs deemed reliable by SQI's led to significantly more discriminatory severity scores than vital signs deemed unreliable. We also compared automatically-computed RTS (using the SQI's) versus RTS computed from vital signs documented by medics. For the subjects in whom the SQI algorithms identified 15 consecutive seconds of reliable vital signs data (n = 350), the automatically-computed scores' AUC's were the same as the medic-based scores' AUC's. Using the Prehospital Index in place of RTS led to very similar results, corroborating our findings. SQI algorithms improve automatically-computed severity scores, and automatically-computed scores using SQI's are equivalent to medic-based scores.
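For context, a sketch of the Triage Revised Trauma Score coding (the unweighted sum of 0-4 codes for the Glasgow Coma Scale, systolic blood pressure, and respiratory rate). The bins below follow the commonly published T-RTS table and are an assumption for illustration; they are not taken from this paper, and the SQI algorithms themselves are not reproduced.

```python
def code_gcs(gcs):
    return 4 if gcs >= 13 else 3 if gcs >= 9 else 2 if gcs >= 6 else 1 if gcs >= 4 else 0

def code_sbp(sbp):
    return 4 if sbp > 89 else 3 if sbp >= 76 else 2 if sbp >= 50 else 1 if sbp >= 1 else 0

def code_rr(rr):
    return 4 if 10 <= rr <= 29 else 3 if rr > 29 else 2 if rr >= 6 else 1 if rr >= 1 else 0

def triage_rts(gcs, sbp, rr):
    """Unweighted Triage RTS (0-12); lower values indicate higher physiologic severity."""
    return code_gcs(gcs) + code_sbp(sbp) + code_rr(rr)

print(triage_rts(gcs=14, sbp=110, rr=18))   # -> 12, physiologically normal
print(triage_rts(gcs=8, sbp=70, rr=34))     # -> 7, severely injured
```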
Sources of medicine information and their reliability evaluated by medicine users.
Närhi, Ulla
2007-12-01
To study the medicine users' sources of medicine information and the perceived reliability of these sources in different age groups. A computer-aided telephone interview (CATI) to Finnish consumers (n = 1,004). Those respondents (n = 714) who reported using any prescription or self-medication medicines more than once a month were included in the study. The respondents were interviewed about their use of sources of medicine information during the previous 6 months. The reliability of sources in different age groups was estimated using a 4-point scale: very reliable, somewhat reliable, somewhat unreliable and very unreliable. The respondents also had the option of being unable to make an appraisal. A proportion of respondents reporting using the source, number of mentioned sources and their reliability evaluated by respondents. About half of the respondents in each age group mentioned two to four sources. The most common sources of information were Patient Information Leaflets (PILs) (74%), doctors (68%) and pharmacists (60%). Next came television (40%), newspapers and magazines (40%), drug advertisements (32%), nurses (28%), drug information leaflets (27%), relatives and friends (24%), medicine guides and books (22%) and the Internet (20%). There was a significant difference between age groups in reporting the Internet as a source of medicine information (15-34-year-old respondents reported the greatest Internet use). The three most reliable sources in every age group were reported to be PILs, doctors and pharmacists. Nurses, drug regulatory authorities, drug information leaflets and medicine guides and books were considered next most reliable. Relatives and friends, television, newspapers and magazines were considered the least reliable. The respondents were most uncertain about the reliability of the Internet, patient organisations and telephone services. There was a significant difference between age groups in evaluating the reliability of telephone services (15-34-year-olds found them more reliable). Medicine users reported receiving medicine information from many sources. The most commonly used sources were perceived as the most reliable, but their reliability did not seem to depend on age. The counsellors should take into account that patients have many sources of medicine information, with varying validity.
Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng
2017-05-10
Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to the limitations in the existing methods which either lack solid probabilistic-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as the proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum levels based on the lowest common ancestors of multiple database hits for each query sequence, and further classification reliabilities are evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific for different taxonomic groups. Instead only a reference database is required for aligning to the query sequences, making our method easily applicable for different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools and we provide probabilistic-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences. Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
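A toy sketch of the general idea of weighting database hits by similarity-based confidence and reporting taxa rank by rank; this is a simplification for illustration and is not BLCA's actual Bayesian posterior weighting or bootstrap procedure, and the lineages and identities below are invented.

```python
from collections import defaultdict

# Hypothetical hits for one query: (lineage from phylum to species, % identity to the query)
hits = [
    (("Firmicutes", "Bacilli", "Lactobacillales", "Streptococcaceae", "Streptococcus", "S. mitis"), 99.2),
    (("Firmicutes", "Bacilli", "Lactobacillales", "Streptococcaceae", "Streptococcus", "S. oralis"), 98.7),
    (("Firmicutes", "Bacilli", "Lactobacillales", "Streptococcaceae", "Streptococcus", "S. pneumoniae"), 96.0),
]

def assign(hits, threshold=0.8):
    """Weight each hit by a crude similarity-based score and, at every rank, report the
    best-supported taxon, or 'unclassified' when its support falls below `threshold`."""
    weights = [(lineage, (identity / 100.0) ** 20) for lineage, identity in hits]   # sharp similarity weighting
    total = sum(w for _, w in weights)
    assignment = []
    for rank in range(len(hits[0][0])):
        support = defaultdict(float)
        for lineage, w in weights:
            support[lineage[rank]] += w / total
        taxon, conf = max(support.items(), key=lambda kv: kv[1])
        assignment.append((taxon if conf >= threshold else "unclassified", round(conf, 2)))
    return assignment

for taxon, conf in assign(hits):
    print(f"{taxon}: support {conf}")
# Higher ranks are assigned with full support, while the species level stays unclassified
# because no single species accumulates enough weight -- the reliability-aware behavior described above.
```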
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humberto E. Garcia
This paper illustrates the safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). Whereas the objective of NMA-based methods, in order to infer the possible existence of proliferation-driven activities, is often to statistically evaluate materials unaccounted for (MUF) computed by solving a given mass balance equation related to a material balance area (MBA) at every material balance period (MBP), a particular objective for a PM-based approach may be to statistically infer and evaluate anomalies unaccounted for (AUF) that may have occurred within a MBP. Although possibly indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable, while others may be observed unreliably. The proposed similarity between NMA- and PM-based approaches is important because performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied for assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected value for the alarm level (AL), appropriately chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream was selected based on incomplete fuel dissolution in a dissolver unit operation, as this diversion scenario is considered to be problematic for detection using NMA-based methods alone. Results demonstrate the benefits of conducting PM under a system-centric strategy that utilizes data collected from a system of sensors and that effectively exploits known characterizations of sensors and facility operations in order to significantly improve anomaly detection, reduce false alarms, and enhance assessment robustness under unreliable partial sensor information.
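The trade-off described between detection probability and false alarm probability can be sketched as follows, assuming, purely for illustration, that the estimated diverted mass over a balance period is Gaussian; the standard deviation, alarm level, and diverted quantity are hypothetical and do not correspond to the facility studied.

```python
from math import erf, sqrt

def normal_sf(x, mu, sigma):
    """Survival function P(X > x) of a normal distribution."""
    return 0.5 * (1.0 - erf((x - mu) / (sigma * sqrt(2.0))))

sigma = 0.8    # standard deviation of the mass estimate (kg), hypothetical
AL = 2.0       # alarm level (kg), hypothetical
SQ = 8.0       # quantity actually diverted under the alternative hypothesis (kg), hypothetical

FAP = normal_sf(AL, mu=0.0, sigma=sigma)   # alarm raised although nothing was diverted
DP = normal_sf(AL, mu=SQ, sigma=sigma)     # alarm raised when SQ has been diverted
print(f"FAP = {FAP:.3e}, DP = {DP:.3f}")
# Raising AL lowers both FAP and DP; the choice balances the two for the given sensors and MBA.
```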
Quality of online health information about oral contraceptives from Hebrew-language websites.
Neumark, Yehuda; Flum, Lior; Lopez-Quintero, Catalina; Shtarkshall, Ronny
2012-09-24
The Internet is a frequently used source of health information. Adolescents in particular seem to be receptive to online health information (OHI) and often incorporate such information in their decision-making processes. Yet, OHI is often incomplete, inaccurate, or unreliable. This study assessed the quality of Hebrew online (non-user-generated) content on oral contraceptives (OC), with regard to accuracy/completeness, credibility, and usability. Twenty-nine websites in Hebrew, including those of the four Israeli HMOs, were identified and evaluated. The websites were categorized as: HMO, health portal, contraception-specific, promotional-commercial, and life style and women's health. A set of established content parameters was selected by a family planning expert to assess accuracy/completeness. The Health on the Net Foundation Code of Conduct (HONcode) principles were used to assess the websites' reliability. Usability was assessed by applying items selected from the Minervation Validation and the University of Michigan's 'Website Evaluation checklist' scale. Mean scores, standard deviations (SD), and ranges were calculated for all websites and for category-specific websites. Correlation between dimensions and inter-rater reliability were also examined. The mean score for accuracy/completeness was 50.9% for all websites (SD=30.1%, range 8-100%). Many websites failed to provide complete information, or provided inaccurate information regarding what to do when a pill is missed and when to use back-up methods. The average credibility score for all websites was 70.6% (SD=15.1, range=38-98%). The credibility parameters that were most commonly absent were funding source, authoring, date of content creation and last modification, explicit reference to evidence-based information, and references and citations. The average usability score for all websites was 94.5% (SD=6.9%, range 79-100%). A weak correlation was found between the three quality parameters assessed. Wide variation was noted in the quality of Hebrew-language OC websites. HMOs' websites scored highest on credibility and usability, and contraceptive-specific websites exhibited the greatest accuracy/completeness. The findings highlight the need to establish quality guidelines for health website content, train health care providers in assisting their patients to seek high quality OHI, and strengthen e-health literacy skills among online-information seekers, including perhaps health professionals.
Quality of online health information about oral contraceptives from Hebrew-language websites
2012-01-01
Background The Internet is a frequently used source of health information. Adolescents in particular seem to be receptive to online health information (OHI) and often incorporate such information in their decision-making processes. Yet, OHI is often incomplete, inaccurate, or unreliable. This study assessed the quality of Hebrew online (non-user-generated) content on oral contraceptives (OC), with regard to accuracy/completeness, credibility, and usability. Methods Twenty-nine websites in Hebrew, including those of the four Israeli HMOs, were identified and evaluated. The websites were categorized as: HMO, health portal, contraception-specific, promotional-commercial, and life style and women’s health. A set of established content parameters was selected by a family planning expert to assess accuracy/completeness. The Health on the Net Foundation Code of Conduct (HONcode) principles were used to assess the websites’ reliability. Usability was assessed by applying items selected from the Minervation Validation and the University of Michigan’s ′Website Evaluation checklist′ scale. Mean scores, standard deviations (SD), and ranges were calculated for all websites and for category-specific websites. Correlation between dimensions and inter-rater reliability were also examined. Results The mean score for accuracy/completeness was 50.9% for all websites (SD=30.1%, range 8–100%). Many websites failed to provide complete information, or provided inaccurate information regarding what to do when a pill is missed and when to use back-up methods. The average credibility score for all websites was 70.6% (SD=15.1, range=38–98%). The credibility parameters that were most commonly absent were funding source, authoring, date of content creation and last modification, explicit reference to evidence-based information, and references and citations. The average usability score for all websites was 94.5% (SD=6.9%, range 79–100%). A weak correlation was found between the three quality parameters assessed. Conclusions Wide variation was noted in the quality of Hebrew-language OC websites. HMOs’ websites scored highest on credibility and usability, and contraceptive-specific websites exhibited the greatest accuracy/completeness. The findings highlight the need to establish quality guidelines for health website content, train health care providers in assisting their patients to seek high quality OHI, and strengthen e-health literacy skills among online-information seekers, including perhaps health professionals. PMID:23006798
Modern status of photonuclear data
NASA Astrophysics Data System (ADS)
Varlamov, V. V.; Ishkhanov, B. S.
2017-09-01
The reliability of experimental cross sections obtained for (γ, 1n), (γ, 2n), and (γ, 3n) partial photoneutron reactions using beams of quasimonoenergetic annihilation photons and bremsstrahlung is analyzed by employing data for a large number of medium-heavy and heavy nuclei, including those of 63,65Cu, 80Se, 90,91,94Zr, 115In, 112-124Sn, 133Cs, 138Ba, 159Tb, 181Ta, 186-192Os, 197Au, 208Pb, and 209Bi. The ratios of the cross sections of definite partial reactions to the cross section of the neutron-yield reaction, F_i = σ(γ, in)/σ(γ, xn), are used as criteria of experimental-data reliability. By definition, positive values of these ratios should not exceed the upper limits of 1.00, 0.50, 0.33, ... for i = 1, 2, 3, ..., respectively. For many nuclei, unreliable values of the ratios F_i were found to correlate clearly, in various photon-energy regions, with physically forbidden negative values of the cross sections of partial reactions. On this basis, one can conclude that the corresponding experimental data are unreliable. Significant systematic uncertainties of the methods used to determine photoneutron multiplicity are shown to be the main reason for this. New partial-reaction cross sections that satisfy the above data-reliability criteria were evaluated within an experimental-theoretical method [σ_eval(γ, in) = F_i^theor(γ, in) × σ_expt(γ, xn)] by employing the ratios F_i^theor(γ, in) calculated on the basis of a combined photonuclear-reaction model. Cross sections evaluated in this way were found to deviate substantially from the results of many experiments performed via neutron-multiplicity sorting but, at the same time, to agree with the results of alternative activation experiments. Prospects of employing methods that would provide, without recourse to photoneutron-multiplicity sorting, reliable data on cross sections of partial photoneutron reactions are discussed.
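A small sketch of the reliability criterion and the evaluation step described above: F_i = σ(γ,in)/σ(γ,xn) must stay within its physical upper limit of 1/i, and evaluated partial cross sections follow σ_eval(γ,in) = F_i^theor(γ,in) × σ_expt(γ,xn). The numerical values below are made up for illustration and are not data for any particular nucleus.

```python
# Hypothetical cross sections at one photon energy (mb).
sigma_xn_expt = 120.0                            # measured neutron-yield cross section sigma(gamma, xn)
sigma_in_expt = {1: 70.0, 2: 35.0, 3: -2.0}      # measured partial cross sections for i = 1, 2, 3
F_theor = {1: 0.62, 2: 0.30, 3: 0.02}            # ratios from a combined photonuclear-reaction model

for i, sigma in sigma_in_expt.items():
    F_expt = sigma / sigma_xn_expt
    reliable = 0.0 <= F_expt <= 1.0 / i          # reliability criterion: 0 <= F_i <= 1/i
    sigma_eval = F_theor[i] * sigma_xn_expt      # evaluated partial cross section
    print(f"i={i}: F_expt={F_expt:+.3f}  reliable={reliable}  sigma_eval={sigma_eval:.1f} mb")
```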
Miller, Patrick J O; Samarra, Filipa I P; Perthuison, Aurélie D
2007-06-01
This study investigates how particular received spectral characteristics of stereotyped calls of sexually dimorphic adult killer whales may be influenced by caller sex, orientation, and range. Calls were ascribed to individuals during natural behavior using a towed beamforming array. The fundamental frequency of both high-frequency and low-frequency components did not differ consistently by sex. The ratio of peak energy within the fundamental of the high-frequency component relative to summed peak energy in the first two low-frequency component harmonics, and the number of modulation bands off the high-frequency component, were significantly greater when whales were oriented towards the array, while range and adult sex had little effect. In contrast, the ratio of peak energy in the first versus second harmonics of the low-frequency component was greater in calls produced by adult females than adult males, while orientation and range had little effect. The dispersion of energy across harmonics has been shown to relate to body size or sex in terrestrial species, but pressure effects during diving are thought to make such a signal unreliable in diving animals. The observed spectral differences by signaler sex and orientation suggest that these types of information may be transmitted acoustically by freely diving killer whales.
A Black-Scholes Approach to Satisfying the Demand in a Failure-Prone Manufacturing System
NASA Technical Reports Server (NTRS)
Chavez-Fuentes, Jorge R.; Gonzalex, Oscar R.; Gray, W. Steven
2007-01-01
The goal of this paper is to use a financial model and a hedging strategy in a systems application. In particular, the classical Black-Scholes model, which was developed in 1973 to find the fair price of a financial contract, is adapted to satisfy an uncertain demand in a manufacturing system when one of two production machines is unreliable. This financial model, together with a hedging strategy, is used to develop a closed-form expression for the production strategies of each machine. The strategy guarantees that the uncertain demand will be met in probability at the final time of the production process. It is assumed that the production efficiency of the unreliable machine can be modeled as a continuous-time stochastic process. Two simple examples illustrate the result.
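For reference, the classical 1973 Black-Scholes call-price formula that the paper adapts; the adaptation to production planning itself is not reproduced here, and the parameter values are hypothetical.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Classical Black-Scholes price of a European call with spot S, strike K,
    risk-free rate r, volatility sigma, and maturity T (years)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(black_scholes_call(S=100, K=95, r=0.03, sigma=0.2, T=1.0), 2))
```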
Beyond Open Big Data: Addressing Unreliable Research
Moseley, Edward T; Hsu, Douglas J; Stone, David J
2014-01-01
The National Institutes of Health invests US $30.9 billion annually in medical research. However, the subsequent impact of this research output on society and the economy is amplified dramatically as a result of the actual medical treatments, biomedical innovations, and various commercial enterprises that emanate from and depend on these findings. It is therefore of great concern to discover that much of published research is unreliable. We propose extending the open data concept to the culture of the scientific research community. By dialing down unproductive features of secrecy and competition, while ramping up cooperation and transparency, we make a case that what is published would then be less susceptible to the sometimes corrupting and confounding pressures to be first or journalistically attractive, which can compromise the more fundamental need to be robustly correct. PMID:25405277
NASA Astrophysics Data System (ADS)
Tindall, Julia C.; Haywood, Alan M.; Thirumalai, Kaustubh
2017-08-01
The El Niño-Southern Oscillation (ENSO) drives interannual climate variability; hence, its behavior over a range of climates needs to be understood. It is therefore important to verify that the paleoarchives, used for preinstrumental ENSO studies, can accurately record ENSO signals. Here we use the isotope enabled Hadley Centre General Circulation Model, HadCM3, to investigate ENSO signals in paleoarchives from a warm paleoclimate, the mid-Pliocene Warm Period (mPWP: 3.3-3.0 Ma). Continuous (e.g., coral) and discrete (e.g., foraminifera) proxy data are simulated throughout the tropical Pacific, and ENSO events suggested by the pseudoproxy data are assessed using modeled ENSO indices. HadCM3 suggests that the ability to reconstruct ENSO from coral data is predominantly dependent on location. However, since modeled ENSO is slightly stronger in the mPWP than the preindustrial, ENSO is slightly easier to detect in mPWP aged coral. HadCM3 also suggests that using statistics from a number of individual foraminifera (individual foraminifera analysis, IFA) generally provides more accurate ENSO information for the mPWP than for the preindustrial, particularly in the western and central Pacific. However, a test case from the eastern Pacific showed that for some locations, the IFA method can work well for the preindustrial but be unreliable for a different climate. The work highlights that sites used for paleo-ENSO analysis should be chosen with extreme care in order to avoid unreliable results. Although a site with good skill for preindustrial ENSO will usually have good skill for assessing mPWP ENSO, this is not always the case.
Optimal Dispatch of Unreliable Electric Grid-Connected Diesel Generator-Battery Power Systems
NASA Astrophysics Data System (ADS)
Xu, D.; Kang, L.
2015-06-01
Diesel generator (DG)-battery power systems are often adopted by telecom operators, especially in semi-urban and rural areas of developing countries. Unreliable electric grids (UEG), which have frequent and lengthy outages, are typical of these regions, and the DG-UEG-battery power system is an important kind of hybrid power system. System dispatch is one of the key factors in hybrid power system integration. In this paper, the dispatch of a DG-UEG-lead acid battery power system is studied for a UEG with relatively ample electricity in the Central African Republic (CAR) and a UEG with poor electricity in the Congo Republic (CR). Mathematical models of the power system and the UEG are developed for the system operation simulation program. The net present cost (NPC) of the power system is the main evaluation index, and the state of charge (SOC) set points and the battery bank charging current are the optimization variables. For the UEG in CAR, the optimal dispatch solution is SOC start and stop points of 0.4 and 0.5, which belongs to the Micro-Cycling strategy, with a charging current of 0.1 C. For the UEG in CR, the optimal dispatch solution is SOC start and stop points of 0.1 and 0.8, which belongs to the Cycle-Charging strategy, again with a charging current of 0.1 C. A charging current of 0.1 C is suitable for both grid scenarios compared to 0.2 C. For both UEG scenarios in CAR and CR, a few candidate dispatch solutions have system NPC values close to that of the optimal solution, which makes dispatch strategy design easier in commercial practice.
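Net present cost, the evaluation index used above, is a straightforward discounted sum; a minimal sketch with hypothetical cash flows (the capital outlay, annual costs, and discount rate are not taken from the paper).

```python
def net_present_cost(capital, annual_costs, discount_rate):
    """Capital outlay plus discounted annual operating costs (fuel, grid energy, battery wear, ...)."""
    return capital + sum(c / (1.0 + discount_rate) ** (t + 1)
                         for t, c in enumerate(annual_costs))

# Hypothetical 5-year DG-UEG-battery site: capital 12,000 $ and yearly O&M plus fuel costs in $.
print(round(net_present_cost(12000, [3500, 3600, 3700, 3800, 3900], 0.08), 2))
```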
Thought-based row-column scanning communication board for individuals with cerebral palsy.
Scherer, Reinhold; Billinger, Martin; Wagner, Johanna; Schwarz, Andreas; Hettich, Dirk Tassilo; Bolinger, Elaina; Lloria Garcia, Mariano; Navarro, Juan; Müller-Putz, Gernot
2015-02-01
Impairment of an individual's ability to communicate is a major hurdle for active participation in education and social life. Many individuals with cerebral palsy (CP) have normal intelligence; however, due to their inability to communicate, they fall behind. Non-invasive electroencephalogram (EEG) based brain-computer interfaces (BCIs) have been proposed as potential assistive devices for individuals with CP. BCIs translate brain signals directly into action, so motor activity is no longer required. However, translation of EEG signals may be unreliable and requires months of training. Moreover, individuals with CP may exhibit high levels of spontaneous and uncontrolled movement, which has a large impact on EEG signal quality and results in incorrect translations. We introduce a novel thought-based row-column scanning communication board that was developed following user-centered design principles. Key features include an automatic online artifact reduction method and an evidence accumulation procedure for decision making. The latter allows robust decision making with unreliable BCI input. Fourteen users with CP participated in a supporting online study and helped to evaluate the performance of the developed system. Users were asked to select target items with the row-column scanning communication board. Three users were excluded because of insufficient EEG signal quality. The results suggest that seven of the eleven remaining users performed better than chance and were consequently able to communicate using the developed system. These results are very encouraging and represent a good foundation for the development of real-world BCI-based communication devices for users with CP. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
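A minimal sketch of the kind of evidence-accumulation rule that allows robust decisions from unreliable BCI input: noisy per-trial class probabilities are accumulated until one item of the scanning board exceeds a confidence threshold. This is a generic illustration, not the authors' implementation, and the simulated classifier outputs are synthetic.

```python
import numpy as np

def accumulate_decision(prob_stream, threshold=0.9):
    """Accumulate log-evidence over repeated, unreliable classifier outputs
    until one candidate's normalized evidence exceeds `threshold`."""
    log_evidence = np.zeros(prob_stream.shape[1])
    for t, probs in enumerate(prob_stream, start=1):
        log_evidence += np.log(np.clip(probs, 1e-6, None))
        posterior = np.exp(log_evidence - log_evidence.max())
        posterior /= posterior.sum()
        if posterior.max() >= threshold:
            return int(posterior.argmax()), t, float(posterior.max())
    return int(posterior.argmax()), t, float(posterior.max())

rng = np.random.default_rng(1)
true_item, n_items = 2, 4
# Simulated noisy classifier: the true item gets only a slight per-trial probability boost.
stream = rng.dirichlet(alpha=np.ones(n_items), size=30)
stream[:, true_item] += 0.15
stream /= stream.sum(axis=1, keepdims=True)
print("selected item, trials used, confidence:", accumulate_decision(stream))
```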
Cumulative False-Positive QuantiFERON-TB Interferon-γ Release Assay Results.
Gamsky, Thomas E; Lum, Thomas; Hung-Fan, Melody; Green, Jon A
2016-05-01
Despite reports of unreliability, the QuantiFERON-TB interferon-γ release assay is increasingly used for the annual screening of individuals at risk for latent tuberculosis. Continued use of the QuantiFERON-TB assay suggests the need for more definitive evidence of its reproducibility and accuracy. The objectives were to examine reproducibility and the accumulation of false-positive test results when the QuantiFERON-TB is repeated annually, and to examine the validity of confirming positive test results by performing a second QuantiFERON-TB. We performed a retrospective, longitudinal evaluation of results from serial screening of a cohort of emergency responders from 2001 to 2013. Results of tuberculin tests and QuantiFERON-TB tests performed annually as part of a mandated first responder examination were retrospectively reviewed. In this population, positive results occurred in new individuals each year. QuantiFERON-TB results were positive in 80 of 557 tuberculin test-negative individuals examined annually for a maximum of 7 years. Only 10 individuals with initially positive results remained positive when the test was repeated the next year, and 9 of these 10 were QuantiFERON-TB-negative within 3 years. The number of individuals with a positive result increased annually, and, after 7 years, 32 (27.4%) of 117 people had a positive result. When viewed in the context of the extensive literature documenting unreliable QuantiFERON-TB test performance, our findings of frequent, cumulative, sporadic, and irreproducible positive results support discontinuing the use of the QuantiFERON-TB assay for the diagnosis of latent tuberculosis in low-risk populations.
Improving the quantification of contrast enhanced ultrasound using a Bayesian approach
NASA Astrophysics Data System (ADS)
Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico
2017-03-01
Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique for assessing tissue vascularity that can be useful in the quantification of different perfusion patterns. This can be particularly important in the early detection and staging of arthritis. In a recent study we have shown that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. Moreover, we have shown that the quantitative information gathered through a pixel-by-pixel analysis characterizes the perfusion more effectively. However, the limited SNR of the data and the nonlinearity of the model make parameter estimation difficult. Using a classical non-linear least-squares (NLLS) approach, the number of unreliable estimates (those with an asymptotic coefficient of variation greater than a user-defined threshold) is significant, thus affecting the overall description of the perfusion kinetics and of its heterogeneity. In this work we propose to solve the parameter estimation at the pixel level within a Bayesian framework using Variational Bayes (VB) with an automatic, data-driven prior initialization. When evaluating the pixels for which both VB and NLLS provided reliable estimates, we demonstrated that the parameter values provided by the two methods are well correlated (Pearson's correlation between 0.85 and 0.99). Moreover, the mean percentage of unreliable pixels drops drastically from 54% (NLLS) to 26% (VB), without increasing the computational time (0.05 s/pixel for NLLS and 0.07 s/pixel for VB). When considering the efficiency of the algorithms as computational time per reliable estimate, VB outperforms NLLS (0.11 versus 0.25 seconds per reliable estimate, respectively).
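A sketch of the NLLS baseline described above: fitting a gamma-variate to a synthetic time-intensity curve with scipy and flagging the estimate as unreliable when the asymptotic coefficient of variation exceeds a user-defined threshold. The model form, noise level, and threshold are illustrative assumptions, and the Variational Bayes estimator itself is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate bolus model, zero before the arrival time t0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 120)
true = gamma_variate(t, A=5.0, t0=8.0, alpha=1.8, beta=6.0)
rng = np.random.default_rng(0)
signal = true + rng.normal(0, 0.8, t.size)          # noisy single-pixel enhancement curve

popt, pcov = curve_fit(gamma_variate, t, signal, p0=[1.0, 5.0, 1.0, 5.0],
                       bounds=([0, 0, 0.1, 0.5], [np.inf, 30, 10, 30]))
cv = np.sqrt(np.diag(pcov)) / np.abs(popt)           # asymptotic coefficient of variation
print("estimates [A, t0, alpha, beta]:", np.round(popt, 2))
print("parameters flagged unreliable (CV > 50%):", np.where(cv > 0.5)[0])
```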
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
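The kind of artifact described can be reproduced with a toy single-reservoir model dS/dt = P - kS: a coarse fixed-step explicit Euler scheme distorts a simple least-squares objective over k relative to a much finer integration. This sketch is purely illustrative and is not one of the models, schemes, or basins tested in the paper.

```python
import numpy as np

def simulate(k, dt, P=2.0, S0=10.0, T=50.0):
    """Fixed-step explicit Euler integration of dS/dt = P - k*S, sampled once per day."""
    n = round(T / dt)
    S = np.empty(n + 1)
    S[0] = S0
    for i in range(n):
        S[i + 1] = S[i] + dt * (P - k * S[i])
    return S[:: round(1.0 / dt)]

k_true = 0.8
obs = simulate(k_true, dt=0.001)        # fine-step reference ("truth")
for dt in (1.0, 0.01):
    sse = [np.sum((simulate(k, dt) - obs) ** 2) for k in np.linspace(0.5, 1.2, 8)]
    print(f"dt={dt}: SSE along the k grid ->", np.round(sse, 3))
# The coarse fixed step (dt=1.0) shifts and deforms the objective surface relative to dt=0.01,
# the kind of numerical artifact that contaminates sensitivity analysis and calibration.
```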
The poleward migration of the location of tropical cyclone maximum intensity.
Kossin, James P; Emanuel, Kerry A; Vecchi, Gabriel A
2014-05-15
Temporally inconsistent and potentially unreliable global historical data hinder the detection of trends in tropical cyclone activity. This limits our confidence in evaluating proposed linkages between observed trends in tropical cyclones and in the environment. Here we mitigate this difficulty by focusing on a metric that is comparatively insensitive to past data uncertainty, and identify a pronounced poleward migration in the average latitude at which tropical cyclones have achieved their lifetime-maximum intensity over the past 30 years. The poleward trends are evident in the global historical data in both the Northern and the Southern hemispheres, with rates of 53 and 62 kilometres per decade, respectively, and are statistically significant. When considered together, the trends in each hemisphere depict a global-average migration of tropical cyclone activity away from the tropics at a rate of about one degree of latitude per decade, which lies within the range of estimates of the observed expansion of the tropics over the same period. The global migration remains evident and statistically significant under a formal data homogenization procedure, and is unlikely to be a data artefact. The migration away from the tropics is apparently linked to marked changes in the mean meridional structure of environmental vertical wind shear and potential intensity, and can plausibly be linked to tropical expansion, which is thought to have anthropogenic contributions.
Reilly, John; Glisic, Branko
2018-01-01
Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature–deformation–displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University. PMID:29494496
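A minimal sketch of the first metric described, the range of raw temperatures across sensors: time stamps are flagged when the sensor-to-sensor temperature spread falls within a tolerance. The readings are synthetic, not Streicker Bridge data, and the second metric (distribution of local thermal gradients) is not shown.

```python
import numpy as np

def minimal_gradient_times(temps, tolerance=1.0):
    """temps: array of shape (n_times, n_sensors); return indices of time stamps where
    the spread across all sensors is within `tolerance` (degrees C)."""
    spread = temps.max(axis=1) - temps.min(axis=1)
    return np.where(spread <= tolerance)[0]

rng = np.random.default_rng(0)
hours = np.arange(48)
# Synthetic 48-hour record for 5 sensors: a shared daily cycle plus midday solar heating on one sensor.
base = 15 + 8 * np.sin(2 * np.pi * (hours - 9) / 24)
temps = base[:, None] + rng.normal(0, 0.2, (48, 5))
temps[:, 0] += 4 * np.clip(np.sin(2 * np.pi * (hours - 10) / 24), 0, None)   # sun-exposed sensor

print("hours with minimal thermal gradient:", minimal_gradient_times(temps, tolerance=1.0))
```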
Qiu, Yihong; Li, Xia; Duan, John Z
2014-02-01
The present study examines how a drug's inherent properties and product design influence the evaluation and applications of in vitro-in vivo correlation (IVIVC) for modified-release (MR) dosage forms consisting of extended-release (ER) and immediate-release (IR) components with bimodal drug release. Three analgesic drugs were used as model compounds, and simulations of in vivo pharmacokinetic profiles were conducted using different release rates of the ER component and various IR percentages. Plasma concentration-time profiles exhibiting a wide range of tmax and maximum observed plasma concentration (Cmax) were obtained from superposition of the simulated IR and ER profiles based on a linear IVIVC. It was found that, depending on the drug and dosage form design, direct use of the superposed IR and ER data for IVIVC modeling and prediction may (1) be acceptable within errors, (2) become unreliable and less meaningful because of the confounding effect of the non-negligible IR contribution to Cmax, or (3) be meaningless because of the insensitivity of Cmax to release-rate changes of the ER component. Therefore, understanding the drug, the design, and the drug release characteristics of the product is essential for assessing the validity, accuracy, and reliability of IVIVC of complex MR products obtained via direct modeling of in vivo data. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
NASA Technical Reports Server (NTRS)
Basu, J. P. (Principal Investigator); Dragich, S. M.; Mcguigan, D. P.
1978-01-01
The author has identified the following significant results. The stratification procedure in the new sampling strategy for LACIE included: (1) correlation test results indicating that an agrophysical stratum may be homogeneous with respect to agricultural density, but not with respect to wheat density; and (2) agrophysical unit homogeneity test results indicating that with respect to agricultural density many agrophysical units are not homogeneous, but removal of one or more refined strata from any such current agrophysical unit can make the strata homogeneous. The apportioning procedure results indicated that the current procedure is not performing well and that the apportioned estimates of refined strata wheat area are often unreliable.
Robust registration in case of different scaling
NASA Astrophysics Data System (ADS)
Gluhchev, Georgi J.; Shalev, Shlomo
1993-09-01
The problem of robust registration in the case of anisotropic scaling has been investigated. Registration of two images using corresponding sets of fiducial points is sensitive to inaccuracies in point placement due to poor image quality or non-rigid distortions, including possible out-of-plane rotations. An approach aimed at the detection of the most unreliable points has been developed. It is based on a priori knowledge of the sequential ordering of rotation and scaling. A measure of guilt derived from the anomalous geometric relationships is introduced. A heuristic decision rule allowing for deletion of the most guilty points is proposed. The approach allows for more precise evaluation of the translation vector. It has been tested on phantom images with known parameters and has shown satisfactory results.
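A sketch of the general residual-based idea of flagging the least reliable fiducial points: fit a least-squares 2D affine transform (which accommodates anisotropic scaling) and rank points by their residual as a crude "guilt" score. This is a generic illustration with synthetic points, not the paper's specific measure or heuristic decision rule.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (handles anisotropic scaling)."""
    A = np.hstack([src, np.ones((len(src), 1))])        # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) affine parameters
    return params

def point_guilt(src, dst):
    """Residual of each point under the best-fit affine transform; larger means less reliable."""
    params = fit_affine(src, dst)
    fitted = np.hstack([src, np.ones((len(src), 1))]) @ params
    return np.linalg.norm(fitted - dst, axis=1)

rng = np.random.default_rng(2)
src = rng.uniform(0, 100, (8, 2))
true = np.array([[1.10, 0.05], [-0.03, 0.85]])          # anisotropic scaling with slight rotation/shear
dst = src @ true.T + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (8, 2))
dst[3] += [6.0, -4.0]                                    # one badly placed fiducial point

guilt = point_guilt(src, dst)
print("guilt scores:", np.round(guilt, 2), "| most unreliable point:", int(np.argmax(guilt)))
```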
Fully automated adipose tissue measurement on abdominal CT
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Sussman, Daniel L.; Summers, Ronald M.
2011-03-01
Obesity has become widespread in America and is a risk factor for many illnesses. Adipose tissue (AT) content, especially visceral AT (VAT), is an important indicator of risk for many disorders, including heart disease and diabetes. Measuring AT with traditional means is often unreliable and inaccurate, whereas CT provides a means to measure AT accurately and consistently. We present a fully automated method to segment and measure abdominal AT in CT. Our method integrates image preprocessing, which attempts to correct for image artifacts and inhomogeneities. We use fuzzy c-means to cluster AT regions and active contour models to separate subcutaneous and visceral AT. We tested our method on 50 abdominal CT scans and evaluated the correlations between several measurements.
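A compact sketch of fuzzy c-means on 1-D CT attenuation values (Hounsfield units), the clustering step mentioned above; the HU distributions, cluster count, and fuzziness exponent are illustrative assumptions, and the preprocessing and active-contour separation of subcutaneous and visceral AT are not included.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means on a 1-D feature (e.g. HU values); returns centers and memberships."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False)        # initialize centers from data points
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # point-to-center distances, shape (n, c)
        ratio = d[:, :, None] / d[:, None, :]             # d_ik / d_ij, shape (n, c, c)
        u = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)   # membership update
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)   # center update
    return centers, u

rng = np.random.default_rng(1)
# Synthetic HU samples: adipose tissue around -100 HU, soft tissue around 40 HU, air around -1000 HU.
hu = np.concatenate([rng.normal(-100, 20, 500), rng.normal(40, 15, 500), rng.normal(-1000, 30, 100)])
centers, u = fuzzy_cmeans_1d(hu, c=3)
print("cluster centers (HU):", np.round(np.sort(centers)))
```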
Unreliability of modified inguinal lymphadenectomy for clinical staging of penile carcinoma.
Lopes, A; Rossi, B M; Fonseca, F P; Morini, S
1996-05-15
In 1988, Catalona proposed a modified bilateral inguinal lymphadenectomy for staging of lymph node metastasis from penile carcinoma. All three patients with penile carcinoma who underwent this procedure and had no histologically confirmed metastases were free of disease after a mean follow-up of 14.6 months. In a prospective study, the authors evaluated thirteen patients staged by the TNM system who underwent modified bilateral inguinal lymphadenectomy. None of the patients had histologic metastases in the medial quadrant lymph nodes. Two of these patients developed regional lymph node metastases within a mean follow-up of 13.2 months. Catalona's procedure was not reliable. We therefore recommend standard inguinal lymphadenectomy as the minimal treatment for patients with infiltrating carcinoma of the penis.
Rello, Luis; Aramendía, Maite; Belarra, Miguel A; Resano, Martín
2015-01-01
Dried blood spots (DBS) have become a clinical specimen especially well suited to home-based collection protocols. In this work, high-resolution continuum source graphite furnace atomic absorption spectrometry is evaluated for the direct monitoring of Pb in DBS, both as a quantitative tool and as a screening method. The development of the screening model is based on establishing an unreliability region around the threshold limits of 100 or 50 µg/l. More than 500 samples were analyzed to validate the model. The screening method demonstrated high sensitivity (the rate of true positives detected was always higher than 95%), an excellent LOD (1 µg/l) and high throughput (10 min per sample).
The condition-dependent transcriptional network in Escherichia coli.
Lemmens, Karen; De Bie, Tijl; Dhollander, Thomas; Monsieurs, Pieter; De Moor, Bart; Collado-Vides, Julio; Engelen, Kristof; Marchal, Kathleen
2009-03-01
Thanks to the availability of high-throughput omics data, bioinformatics approaches are able to hypothesize thus-far undocumented genetic interactions. However, due to the amount of noise in these data, inferences based on a single data source are often unreliable. A popular approach to overcome this problem is to integrate different data sources. In this study, we describe DISTILLER, a novel framework for data integration that simultaneously analyzes microarray and motif information to find modules that consist of genes that are co-expressed in a subset of conditions, and their corresponding regulators. By applying our method on publicly available data, we evaluated the condition-specific transcriptional network of Escherichia coli. DISTILLER confirmed 62% of 736 interactions described in RegulonDB, and 278 novel interactions were predicted.
The complexity of hair/blood mercury concentration ratios and its implications.
Liberda, Eric N; Tsuji, Leonard J S; Martin, Ian D; Ayotte, Pierre; Dewailly, Eric; Nieboer, Evert
2014-10-01
The World Health Organization (WHO) recommends a mercury (Hg) hair-to-blood ratio of 250 for the conversion of Hg hair levels to those in whole blood. This encouraged the selection of hair as the preferred analyte because it minimizes collection, storage, and transportation issues. In spite of these advantages, there is concern about inherent uncertainties in the use of this ratio. To evaluate the appropriateness of the WHO ratio, we investigated total hair and total blood Hg concentrations in 1333 individuals from 9 First Nations (Aboriginal) communities in northern Québec, Canada. We grouped participants by sex, age, and community and performed a 3-factor (M)ANOVA for total Hg in hair (0-2 cm), total Hg in blood, and their ratio. In addition, we calculated the percent error associated with the use of the WHO ratio in predicting blood Hg concentrations from hair Hg. For group comparisons, Estimated Marginal Means (EMMS) were calculated following ANOVA. At the community level, the error in blood Hg estimated from hair Hg ranged -25% to +24%. Systematic underestimation (-8.4%) occurred for females and overestimation for males (+5.8%). At the individual level, the corresponding error range was -98.7% to 1040%, with observed hair-to-blood ratios spanning 3 to 2845. The application of the ratio endorsed by the WHO would be unreliable for determining individual follow-up. We propose that Hg exposure be assessed by blood measurements when there are human health concerns, and that the singular use of hair and the hair-to-blood concentration conversion be discouraged in establishing individual risk. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
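The conversion and percent-error calculation described above is simple arithmetic; a sketch with hypothetical paired measurements, assuming the 250:1 ratio is applied with both media expressed in the same mass units (blood density taken as roughly 1 g/mL, so blood µg/g ≈ µg/mL ≈ mg/L).

```python
def blood_from_hair(hair_ug_per_g, ratio=250.0):
    """Estimate blood Hg (ug/L) from hair Hg (ug/g) using a hair-to-blood ratio,
    with blood density approximated as 1 g/mL."""
    return hair_ug_per_g / ratio * 1000.0

# Hypothetical paired measurements, not study data: (hair ug/g, measured blood ug/L)
pairs = [(1.0, 4.2), (2.5, 14.0), (0.8, 1.9)]
for hair, blood in pairs:
    est = blood_from_hair(hair)
    err = 100.0 * (est - blood) / blood
    print(f"hair {hair} ug/g -> estimated blood {est:.1f} ug/L (error vs. measured: {err:+.0f}%)")
```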
Simon, Jacob C; A Lucas, Seth; Lee, Robert C; Darling, Cynthia L; Staninec, Michal; Vaderhobli, Ram; Pelzner, Roger; Fried, Daniel
2016-04-01
Current clinical methods for diagnosing secondary caries are unreliable for identifying the early stages of decay around restorative materials. The objective of this study was to assess the integrity of restoration margins in natural teeth using near-infrared (NIR) reflectance and transillumination images at wavelengths between 1300 and 1700-nm and to determine the optimal NIR wavelengths for discriminating composite materials from dental hard tissues. Twelve composite margins (n=12) consisting of class I, II and V restorations were chosen from ten extracted teeth. The samples were imaged in vitro using NIR transillumination and reflectance, polarization sensitive optical coherence tomography (PS-OCT) and a high-magnification digital microscope. Samples were serially sectioned into 200-μm slices for histological analysis using polarized light microscopy (PLM) and transverse microradiography (TMR). Two independent examiners evaluated the presence of demineralization at the sample margin using visible detection with 10× magnification and NIR images presented digitally. Composite restorations were placed in sixteen sound teeth (n=16) and imaged at multiple NIR wavelengths ranging from λ=1300 to 1700-nm using NIR transillumination. The image contrast was calculated between the composite and sound tooth structure. Intensity changes in NIR images at wavelengths ranging from 1300 to 1700-nm correlate with increased mineral loss measured using TMR. NIR reflectance and transillumination at wavelengths coincident with increased water absorption yielded significantly higher (P<0.001) contrast between sound enamel and adjacent demineralized enamel. In addition, NIR reflectance exhibited significantly higher (P<0.01) contrast between sound enamel and adjacent composite restorations than visible reflectance. This study shows that NIR imaging is well suited for the rapid screening of secondary caries lesions. Copyright © 2016 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Belli, Renan; Wendler, Michael; de Ligny, Dominique; Cicconi, Maria Rita; Petschelt, Anselm; Peterlik, Herwig; Lohbauer, Ulrich
2017-01-01
A deeper understanding of the mechanical behavior of dental restorative materials requires insight into the materials' elastic constants and microstructure. Here we aim to use complementary methodologies to thoroughly characterize chairside CAD/CAM materials and discuss the benefits and limitations of different analytical strategies. Eight commercial CAD/CAM materials, ranging from polycrystalline zirconia (e.max ZirCAD, Ivoclar-Vivadent), reinforced glasses (Vitablocs Mark II, VITA; Empress CAD, Ivoclar-Vivadent) and glass-ceramics (e.max CAD, Ivoclar-Vivadent; Suprinity, VITA; Celtra Duo, Dentsply) to hybrid materials (Enamic, VITA; Lava Ultimate, 3M ESPE), were selected. Elastic constants were evaluated using three methods: Resonant Ultrasound Spectroscopy (RUS), Resonant Beam Technique (RBT) and Ultrasonic Pulse-Echo (PE). The microstructures were characterized using Scanning Electron Microscopy (SEM), Energy Dispersive X-ray Spectroscopy (EDX), Raman Spectroscopy and X-ray Diffraction (XRD). Young's modulus (E), shear modulus (G), bulk modulus (B) and Poisson's ratio (ν) were obtained for each material. E ranged from 10.9 GPa (Lava Ultimate) to 201.4 GPa (e.max ZirCAD), and ν from 0.173 (Empress CAD) to 0.47 (Lava Ultimate). RUS proved to be the most complex but most reliable method, while the PE method was the easiest to perform but the least reliable. All dynamic methods showed limitations in measuring the elastic constants of materials with high damping behavior (hybrid materials). SEM images, Raman spectra and XRD patterns were made available for each material, proving to be complementary tools in the characterization of their crystal phases. Here different methodologies are compared for the measurement of elastic constants and microstructural characterization of CAD/CAM restorative materials. The elastic properties and crystal phases of eight materials are herein fully characterized. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
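For context, in an isotropic solid any two of the constants reported above determine the others; a minimal sketch of the standard textbook relations follows (the Poisson's ratio used in the example is an illustrative assumption, not a value from the paper):

    # Standard isotropic elasticity relations; illustrative values only.
    def shear_modulus(E, nu):
        return E / (2.0 * (1.0 + nu))          # G = E / (2(1 + nu))

    def bulk_modulus(E, nu):
        return E / (3.0 * (1.0 - 2.0 * nu))    # B = E / (3(1 - 2nu))

    E_gpa, nu = 201.4, 0.31                    # E for e.max ZirCAD from the abstract; nu assumed
    print(round(shear_modulus(E_gpa, nu), 1), round(bulk_modulus(E_gpa, nu), 1))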
Simon, Jacob C.; Lucas, Seth; Lee, Robert; Darling, Cynthia L.; Staninec, Michal; Vaderhobli, Ram; Pelzner, Roger; Fried, Daniel
2016-01-01
Background and Objectives Current clinical methods for diagnosing secondary caries are unreliable for identifying the early stages of decay around restorative materials. The objective of this study was to assess the integrity of restoration margins in natural teeth using near-infrared (NIR) reflectance and transillumination images at wavelengths between 1300–1700 nm and to determine the optimal NIR wavelengths for discriminating composite materials from dental hard tissues. Materials and Methods Twelve composite margins (n=12) consisting of class I, II & V restorations were chosen from ten extracted teeth. The samples were imaged in vitro using NIR transillumination and reflectance, polarization sensitive optical coherence tomography (PS-OCT) and a high-magnification digital microscope. Samples were serially sectioned into 200-μm slices for histological analysis using polarized light microscopy (PLM) and transverse microradiography (TMR). Two independent examiners evaluated the presence of demineralization at the sample margin using visible detection with 10× magnification and NIR images presented digitally. Composite restorations were placed in sixteen sound teeth (n=16) and imaged at multiple NIR wavelengths ranging from λ = 1300–1700 nm using NIR transillumination. The image contrast was calculated between the composite and sound tooth structure. Results Intensity changes in NIR images at wavelengths ranging from 1300–1700 nm correlate with increased mineral loss measured using TMR. NIR reflectance and transillumination at wavelengths coincident with increased water absorption yielded significantly higher (P<0.001) contrast between sound enamel and adjacent demineralized enamel. In addition, NIR reflectance exhibited significantly higher (P<0.01) contrast between sound enamel and adjacent composite restorations than visible reflectance. Significance This study shows that NIR imaging is well suited for the rapid screening of secondary caries lesions. PMID:26876234
Reliable quantum communication over a quantum relay channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyongyosi, Laszlo, E-mail: gyongyosi@hit.bme.hu; Imre, Sandor
2014-12-04
We show that reliable quantum communication over an unreliable quantum relay channel is possible. The coding scheme combines results on the superadditivity of quantum channels with efficient quantum coding approaches.
Surface code quantum communication.
Fowler, Austin G; Wang, David S; Hill, Charles D; Ladd, Thaddeus D; Van Meter, Rodney; Hollenberg, Lloyd C L
2010-05-07
Quantum communication typically involves a linear chain of repeater stations, each capable of reliable local quantum computation and connected to their nearest neighbors by unreliable communication links. The communication rate of existing protocols is low as two-way classical communication is used. By using a surface code across the repeater chain and generating Bell pairs between neighboring stations with probability of heralded success greater than 0.65 and fidelity greater than 0.96, we show that two-way communication can be avoided and quantum information can be sent over arbitrary distances with arbitrarily low error at a rate limited only by the local gate speed. This is achieved by using the unreliable Bell pairs to measure nonlocal stabilizers and feeding heralded failure information into post-transmission error correction. Our scheme also applies when the probability of heralded success is arbitrarily low.
Reliability of CHAMP Anomaly Continuations
NASA Technical Reports Server (NTRS)
vonFrese, Ralph R. B.; Kim, Hyung Rae; Taylor, Patrick T.; Asgharzadeh, Mohammad F.
2003-01-01
CHAMP is recording state-of-the-art magnetic and gravity field observations at altitudes ranging over roughly 300-550 km. However, anomaly continuation is severely limited by the non-uniqueness of the process and satellite anomaly errors. Indeed, our numerical anomaly simulations from satellite to airborne altitudes show that effective downward continuations of the CHAMP data are restricted to within approximately 50 km of the observation altitudes, while upward continuations can be effective over a somewhat larger altitude range. The great unreliability of downward continuation requires that the satellite geopotential observations be analyzed at satellite altitudes if the anomaly details are to be exploited most fully. Given current anomaly error levels, joint inversion of satellite and near-surface anomalies is the best approach for implementing satellite geopotential observations for subsurface studies. We demonstrate the power of this approach using a crustal model constrained by joint inversions of near-surface and satellite magnetic and gravity observations for Maude Rise, Antarctica, in the southwestern Indian Ocean. Our modeling suggests that the dominant satellite altitude magnetic anomalies are produced by crustal thickness variations and remanent magnetization of the normal polarity Cretaceous Quiet Zone.
ERIC Educational Resources Information Center
Lumsden, James
1977-01-01
Person changes can be of three kinds: developmental trends, swells, and tremors. Person unreliability in the tremor sense (momentary fluctuations) can be estimated from person characteristic curves. Average person reliability for groups can be compared from item characteristic curves. (Author)
NASA Technical Reports Server (NTRS)
Landmann, A. E.; Tillema, H. F.; Marshall, S. E.
1989-01-01
The application of selected analysis techniques to low frequency cabin noise associated with advanced propeller engine installations is evaluated. Three design analysis techniques were chosen for evaluation including finite element analysis, statistical energy analysis (SEA), and a power flow method using element of SEA (computer program Propeller Aircraft Interior Noise). An overview of the three procedures is provided. Data from tests of a 727 airplane (modified to accept a propeller engine) were used to compare with predictions. Comparisons of predicted and measured levels at the end of the first year's effort showed reasonable agreement leading to the conclusion that each technique had value for propeller engine noise predictions on large commercial transports. However, variations in agreement were large enough to remain cautious and to lead to recommendations for further work with each technique. Assessment of the second year's results leads to the conclusion that the selected techniques can accurately predict trends and can be useful to a designer, but that absolute level predictions remain unreliable due to complexity of the aircraft structure and low modal densities.
Reliability of Lactation Assessment Tools Applied to Overweight and Obese Women.
Chapman, Donna J; Doughty, Katherine; Mullin, Elizabeth M; Pérez-Escamilla, Rafael
2016-05-01
The interrater reliability of lactation assessment tools has not been evaluated in overweight/obese women. This study aimed to compare the interrater reliability of 4 lactation assessment tools in this population. A convenience sample of 45 women (body mass index > 27.0) was videotaped while breastfeeding (twice daily on days 2, 4, and 7 postpartum). Three International Board Certified Lactation Consultants independently rated each videotaped session using 4 tools (Infant Breastfeeding Assessment Tool [IBFAT], modified LATCH [mLATCH], modified Via Christi [mVC], and Riordan's Tool [RT]). For each day and tool, we evaluated interrater reliability with 1-way repeated-measures analyses of variance, intraclass correlation coefficients (ICCs), and percentage absolute agreement between raters. Analyses of variance showed significant differences between raters' scores on day 2 (all scales) and day 7 (RT). Intraclass correlation coefficient values reflected good (mLATCH) to excellent reliability (IBFAT, mVC, and RT) on days 2 and 7. All day 4 ICCs reflected good reliability. The ICC for mLATCH was significantly lower than all others on day 2 and was significantly lower than IBFAT (day 7). Percentage absolute interrater agreement for scale components ranged from 31% (day 2: observable swallowing, RT) to 92% (day 7: IBFAT, fixing; and mVC, latch time). Swallowing scores on all scales had the lowest levels of interrater agreement (31%-64%). We demonstrated differences in the interrater reliability of 4 lactation assessment tools when applied to overweight/obese women, with the lowest values observed on day 4. Swallowing assessment was particularly unreliable. Researchers and clinicians using these scales should be aware of the differences in their psychometric behavior. © The Author(s) 2015.
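As a rough sketch of the interrater-reliability computation described above (the study does not state here which ICC form was used; a two-way random-effects, absolute-agreement, single-rater ICC(2,1) is assumed, and the ratings below are hypothetical):

    import numpy as np

    # ICC(2,1): two-way random effects, absolute agreement, single rater (Shrout & Fleiss).
    # Rows = subjects (breastfeeding sessions), columns = raters.
    def icc_2_1(scores):
        y = np.asarray(scores, dtype=float)
        n, k = y.shape
        grand = y.mean()
        ms_rows = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)
        ss_err = np.sum((y - y.mean(axis=1, keepdims=True)
                           - y.mean(axis=0, keepdims=True) + grand) ** 2)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                     + k * (ms_cols - ms_err) / n)

    ratings = [[8, 9, 8], [5, 6, 5], [7, 7, 8], [3, 4, 3], [9, 9, 9]]   # hypothetical scores
    print(round(icc_2_1(ratings), 3))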
A comparison of reliability of soil Cd determination by standard spectrometric methods
McBride, M.B.
2015-01-01
Inductively coupled plasma optical emission spectrometry (ICP-OES) is the most common method for determination of soil Cd, yet spectral and matrix interferences affect measurements at the available analytical wavelengths for this metal. This study evaluated the severity of the interference over a range of total soil Cd by comparing ICP-OES and ICP-MS measurements of Cd in acid digests. ICP-OES using the emission line at 226.5 nm was generally unable to quantify soil Cd at low (near-background) levels, and gave unreliable values compared to ICP-MS. Using the line at 228.8 nm, a marked positive bias in Cd measurement (relative to the 226.5 nm measurement) was attributable to As interference even at soil As concentrations below 10 mg/kg. This spectral interference in ICP-OES was severe in As-contaminated orchard soils, giving a false value for soil total Cd near 2 mg kg−1 when soil As was 100–150 mg kg−1. In attempting to avoid these ICP emission-specific interferences, we evaluated a method to estimate total soil Cd using 1 M HNO3 extraction followed by determination of Cd by flame atomic absorption (FAA), either with or without pre-concentration of Cd using an Aliquat-heptanone extractant. The 1 M HNO3 extracted an average of 82% of total soil Cd. The FAA method had no significant interferences, and estimated the total Cd concentrations in all soils tested with acceptable accuracy. For Cd-contaminated soils, the Aliquat-heptanone pre-concentration step was not necessary, as FAA sensitivity was adequate for quantification of extractable soil Cd and reliable estimation of total soil Cd. PMID:22031569
Assessment of the reliability of standard automated perimetry in regions of glaucomatous damage.
Gardiner, Stuart K; Swanson, William H; Goren, Deborah; Mansberger, Steven L; Demirel, Shaban
2014-07-01
Visual field testing uses high-contrast stimuli in areas of severe visual field loss. However, retinal ganglion cells saturate with high-contrast stimuli, suggesting that the probability of detecting perimetric stimuli may not increase indefinitely as contrast increases. Driven by this concept, this study examines the lower limit of perimetric sensitivity for reliable testing by standard automated perimetry. Evaluation of a diagnostic test. A total of 34 participants with moderate to severe glaucoma; mean deviation at their last clinic visit averaged -10.90 dB (range, -20.94 to -3.38 dB). A total of 75 of the 136 locations tested had a perimetric sensitivity of ≤ 19 dB. Frequency-of-seeing curves were constructed at 4 nonadjacent visual field locations by the Method of Constant Stimuli (MOCS), using 35 stimulus presentations at each of 7 contrasts. Locations were chosen a priori and included at least 2 with glaucomatous damage but a sensitivity of ≥ 6 dB. Cumulative Gaussian curves were fit to the data, first assuming a 5% false-negative rate and subsequently allowing the asymptotic maximum response probability to be a free parameter. The strength of the relation (R²) between perimetric sensitivity (mean of last 2 clinic visits) and MOCS sensitivity (from the experiment) for all locations with perimetric sensitivity within ± 4 dB of each selected value, at 0.5 dB intervals. Bins centered at sensitivities ≥ 19 dB always had R² > 0.1. All bins centered at sensitivities ≤ 15 dB had R² < 0.1, an indication that sensitivities are unreliable. No consistent conclusions could be drawn between 15 and 19 dB. At 57 of the 81 locations with perimetric sensitivity <19 dB, including 49 of the 63 locations ≤ 15 dB, the fitted asymptotic maximum response probability was <80%, consistent with the hypothesis of response saturation. At 29 of these locations the asymptotic maximum was <50%, and so contrast sensitivity (50% response rate) is undefined. Clinical visual field testing may be unreliable when visual field locations have sensitivity below approximately 15 to 19 dB because of a reduction in the asymptotic maximum response probability. Researchers and clinicians may have difficulty detecting worsening sensitivity in these visual field locations, and this difficulty may occur commonly in patients with glaucoma with moderate to severe glaucomatous visual field loss. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
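A minimal sketch of fitting a frequency-of-seeing curve with a free asymptotic maximum, as described above (the trial data, starting values, and bounds below are hypothetical; the study's exact fitting procedure and false-response handling are not reproduced):

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Cumulative Gaussian psychometric function with a free asymptotic maximum
    # response probability (p_max); p_max well below 1 suggests response saturation.
    def frequency_of_seeing(contrast, mean, sd, p_max):
        return p_max * norm.cdf(contrast, loc=mean, scale=sd)

    contrast = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])         # 7 stimulus levels
    p_seen = np.array([0.03, 0.06, 0.14, 0.31, 0.46, 0.57, 0.60])    # fraction of 35 trials seen
    (mean, sd, p_max), _ = curve_fit(frequency_of_seeing, contrast, p_seen,
                                     p0=[4.0, 1.5, 0.9],
                                     bounds=([0.0, 0.1, 0.0], [10.0, 10.0, 1.0]))
    print(round(p_max, 2))   # for these hypothetical data the asymptote falls well below 0.8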
Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.
2012-01-01
In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231
Traffic congestion and reliability : linking solutions to problems.
DOT National Transportation Integrated Search
2004-07-19
The Traffic Congestion and Reliability: Linking Solutions to Problems Report provides a snapshot of congestion in the United States by summarizing recent trends in congestion, highlighting the role of unreliable travel times in the effects of con...
Megatrends: Megahype, Megabad.
ERIC Educational Resources Information Center
Goldman, Louis
1983-01-01
Criticizes John Naisbitt's best-selling book, "Megatrends," for reifying constructs (industrial society and information society), treating these entities as mutually exclusive, and endowing them with a life cycle. In addition, claims the book is marred by faddish jargon and is statistically unreliable. (MLF)
QUALITY ASSESSMENT OF CONFOCAL MICROSCOPY SLIDE-BASED SYSTEMS: INSTABILITY
Background: All slide-based fluorescence cytometry detection systems basically include an excitation light source, intermediate optics, and a detection device (CCD or PMT). Occasionally, this equipment becomes unstable, generating unreliable and inferior data. Methods: A num...
The Reliability of Psychiatric Diagnosis Revisited
Rankin, Eric; France, Cheryl; El-Missiry, Ahmed; John, Collin
2006-01-01
Background: The authors reviewed the topic of reliability of psychiatric diagnosis from the turn of the 20th century to present. The objectives of this paper are to explore the reasons of unreliability of psychiatric diagnosis and propose ways to improve the reliability of psychiatric diagnosis. Method: The authors reviewed the literature on the concept of reliability of psychiatric diagnosis with emphasis on the impact of interviewing skills, use of diagnostic criteria, and structured interviews on the reliability of psychiatric diagnosis. Results: Causes of diagnostic unreliability are attributed to the patient, the clinician and psychiatric nomenclature. The reliability of psychiatric diagnosis can be enhanced by using diagnostic criteria, defining psychiatric symptoms and structuring the interviews. Conclusions: The authors propose the acronym ‘DR.SED,' which stands for diagnostic criteria, reference definitions, structuring the interview, clinical experience, and data. The authors recommend that clinicians use the DR.SED paradigm to improve the reliability of psychiatric diagnoses. PMID:21103149
A multistage motion vector processing method for motion-compensated frame interpolation.
Huang, Ai- Mei; Nguyen, Truong Q
2008-05-01
In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter to avoid selecting identical unreliable vectors. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
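As a loose illustration of the reliability-constrained vector median idea (the selection rule, distance metric, and fallback below are simplified assumptions, not the paper's exact filter):

    import numpy as np

    # Pick, among candidate motion vectors, the one minimizing the summed Euclidean
    # distance to all candidates, while excluding candidates flagged as unreliable;
    # if every candidate is unreliable, fall back to considering all of them.
    def vector_median(candidates, reliable):
        cands = np.asarray(candidates, dtype=float)      # shape (N, 2): (dx, dy)
        keep = [i for i, ok in enumerate(reliable) if ok] or list(range(len(cands)))
        best, best_cost = None, np.inf
        for i in keep:
            cost = np.sum(np.linalg.norm(cands[i] - cands, axis=1))
            if cost < best_cost:
                best, best_cost = cands[i], cost
        return best

    # The outlier (8, -7) is flagged unreliable and cannot be selected.
    print(vector_median([(1, 0), (1, 1), (8, -7), (1, 0)], [True, True, False, True]))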
NASA Astrophysics Data System (ADS)
Li, Cong; Jing, Hui; Wang, Rongrong; Chen, Nan
2018-05-01
This paper presents a robust control scheme for vehicle lateral motion regulation under unreliable communication links via a controller area network (CAN). The communication links between the system plant and the controller are assumed to be imperfect, and therefore data packet dropouts occur frequently. The paper takes the form of parallel distributed compensation and treats the dropouts as random binary numbers that follow a Bernoulli distribution. Both tire cornering stiffness uncertainty and external disturbances are considered to enhance the robustness of the controller. In addition, a robust H∞ static output-feedback control approach is proposed to realize lateral motion control with relatively low-cost sensors. The stochastic stability of the closed-loop system and preservation of the guaranteed H∞ performance are investigated. Simulation results based on the CarSim platform using a high-fidelity full-car model verify the effectiveness of the proposed control approach.
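A minimal sketch of the Bernoulli packet-dropout model on a toy discrete-time plant with a static output-feedback gain (all matrices, the gain, and the dropout handling below are hypothetical illustrations, not the paper's vehicle model or synthesized controller):

    import numpy as np

    # Toy plant under static output feedback; the control packet is lost with
    # probability p_drop, modelled as an i.i.d. Bernoulli sequence. Lost packets
    # are replaced by zero input here (one of several common dropout conventions).
    rng = np.random.default_rng(0)
    A = np.array([[1.0, 0.1], [0.0, 0.95]])
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])
    K = np.array([[-2.0]])            # illustrative static output-feedback gain
    p_drop = 0.2

    x = np.array([[1.0], [0.0]])
    for _ in range(200):
        u = K @ (C @ x)
        delivered = rng.random() > p_drop          # Bernoulli(1 - p_drop) delivery indicator
        x = A @ x + B @ (u if delivered else np.zeros_like(u))
    print(float(np.linalg.norm(x)))   # a small norm indicates the loop remained stable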
When Reputation Enforces Evolutionary Cooperation in Unreliable MANETs.
Tang, Changbing; Li, Ang; Li, Xiang
2015-10-01
In self-organized mobile ad hoc networks (MANETs), network functions rely on cooperation of self-interested nodes, where a challenge is to enforce their mutual cooperation. In this paper, we study cooperative packet forwarding in a one-hop unreliable channel which results from loss of packets and noisy observation of transmissions. We propose an indirect reciprocity framework based on evolutionary game theory, and enforce cooperation of packet forwarding strategies in both structured and unstructured MANETs. Furthermore, we analyze the evolutionary dynamics of cooperative strategies and derive the threshold of benefit-to-cost ratio that guarantees the convergence of cooperation. The numerical simulations verify that the proposed evolutionary game theoretic solution enforces cooperation when the benefit-to-cost ratio of the altruistic behavior exceeds the critical threshold. In addition, the network throughput performance of our proposed strategy in structured MANETs is measured and found to be in close agreement with that of the fully cooperative strategy.
Behavior-Based Cleaning for Unreliable RFID Data Sets
Fan, Hua; Wu, Quanyuan; Lin, Yisong
2012-01-01
Radio Frequency IDentification (RFID) technology promises to revolutionize the way we track items and assets, but in RFID systems misreading is a common phenomenon that poses an enormous challenge to RFID data management, so accurate data cleaning becomes an essential task for the successful deployment of such systems. In this paper, we present the design and development of an RFID data cleaning system, the first declarative, behavior-based unreliable RFID data smoothing system. We take advantage of the kinematic characteristics of tags to assist in RFID data cleaning. In order to establish the conversion relationship between RFID data and the kinematic parameters of the tags, we propose a movement behavior detection model. Moreover, a Reverse Order Filling Mechanism is proposed to ensure more complete access to the movement behavior characteristics of the tags. Finally, we validate our solution with a common RFID application and demonstrate the advantages of our approach through extensive simulations. PMID:23112595
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
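For intuition on why tail-dominated score distributions defeat the usual error estimate (the Pareto-distributed scores below are purely hypothetical, not the analytic distribution derived in the paper):

    import numpy as np

    # With heavy-tailed (here infinite-variance) scores, the sample-based relative
    # error estimate jumps whenever a rare large score is finally drawn and never
    # settles down, illustrating why such estimates can be unreliable.
    rng = np.random.default_rng(2)
    scores = rng.pareto(1.8, size=200_000) + 1.0
    for k in (1_000, 10_000, 100_000, 200_000):
        s = scores[:k]
        rel_err = s.std(ddof=1) / (s.mean() * np.sqrt(k))
        print(k, round(float(rel_err), 4))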
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
Safwat, Osama; Elkateb, Mona; Dowidar, Karin; El Meligy, Omar
To evaluate the clinical changes in dentin of deep carious lesions in young permanent molars, following ozone application with and without the use of a remineralizing solution, using the stepwise excavation. The sample included 162 first permanent immature molars, showing deep occlusal carious cavities that were indicated for indirect pulp capping. Teeth were divided into 2 main groups according to the method of ozone treatment. Each group was further subdivided equally into test and control subgroups. Following caries excavation, color, consistency and DIAGNOdent assessments of dentin were evaluated after 6 and 12 months. Regarding dentin color and consistency, no significant differences were observed following ozone application, with and without a remineralizing solution. There were no significant differences between ozone treatment, and calcium hydroxide during the different evaluation periods, except in group I cases after 6 months, concerning the dentin color. The DIAGNOdent values were significantly reduced following ozone application, with or without a remineralizing solution, as well as between test and control cases in group I after 6 months. Ozone application through the stepwise excavation had no significant effect on dentin color and consistency in young permanent molars. DIAGNOdent was unreliable in monitoring caries activity.
Therapeutic Management of Feline Chronic Gingivostomatitis: A Systematic Review of the Literature
Winer, Jenna N.; Arzi, Boaz; Verstraete, Frank J. M.
2016-01-01
Feline chronic gingivostomatitis (FCGS) is a disease characterized by protracted and potentially debilitating oral inflammation in cats, the etiology of which is currently unknown. The purpose of this review is to apply an evidence-based medicine approach to systematically review and critically evaluate the scientific literature reporting the outcome of medical and surgical management of FCGS. Those articles meeting inclusion criteria were reviewed and assigned an “Experimental Design Grade” (EDG) and an “Evidence Grade” (EG) in order to score relative strength of study design and produced data. Studies were evaluated and compared, especially highlighting the treatments, the outcomes, and the therapeutic success rates. This review found a lack of consistency between articles’ data, rendering direct comparison of results unreliable. The field of FCGS research, and ultimately patient care, would benefit from standardizing studies by adopting use of a consistent semi-quantitative scoring system and extending follow-up duration. Future researchers should commit to large prospective studies that compare existing treatments and demonstrate the promise of new treatments. PMID:27486584
Evaluation of wireless Local Area Networks
NASA Astrophysics Data System (ADS)
McBee, Charles L.
1993-09-01
This thesis is an in-depth evaluation of current wireless Local Area Network (LAN) technologies. Wireless LANs are based on three technologies: infrared light, microwave, and spread spectrum. When the first wireless LANs were introduced, they were unfavorably labeled slow, expensive, and unreliable. The wireless LANs of today are competitively priced, more secure, easier to install, and provide data throughput equal to or greater than that of unshielded twisted pair cable. Wireless LANs are best suited for organizations that move office staff frequently, buildings that have historical significance, or buildings that contain asbestos. Additionally, an organization may realize a cost savings of between $300 and $1,200 each time a node is moved. Current wireless LAN technologies have a positive effect on LAN standards being developed by the Defense Information Systems Agency (DISA). DoD as a whole is beginning to focus on wireless LANs and mobile communications. If system managers want to remain successful, they need to stay abreast of this technology.
NASA Astrophysics Data System (ADS)
Lam, C. Y.; Ip, W. H.
2012-11-01
A higher degree of reliability in a collaborative network can increase the competitiveness and performance of an entire supply chain. As supply chain networks grow more complex, the consequences of unreliable behaviour become increasingly severe in terms of cost, effort and time. Moreover, it is computationally difficult to calculate the network reliability of a Non-deterministic Polynomial-time hard (NP-hard) all-terminal network using state enumeration, as this may require a huge number of iterations for topology optimisation. Therefore, this paper proposes an improved spanning tree approach for reliability analysis to help evaluate and analyse the reliability of collaborative networks in supply chains effectively and to reduce the computational complexity of the algorithms involved. Set theory is employed to evaluate and model the all-terminal reliability of the improved spanning tree algorithm, and a case study of a supply chain used in lamp production illustrates the application of the proposed approach.
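For reference, the state-enumeration baseline that the abstract calls computationally difficult can be written directly for a tiny network; it is exponential in the number of edges, which is exactly the burden that spanning-tree-based approximations aim to avoid (the edge reliability and example ring below are illustrative):

    from itertools import product

    # Brute-force all-terminal reliability by enumerating every edge up/down state.
    def all_terminal_reliability(nodes, edges, p):
        def connected(up_edges):
            adj = {n: [] for n in nodes}
            for u, v in up_edges:
                adj[u].append(v); adj[v].append(u)
            seen, stack = {nodes[0]}, [nodes[0]]
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w); stack.append(w)
            return len(seen) == len(nodes)

        rel = 0.0
        for state in product([0, 1], repeat=len(edges)):
            up = [e for e, s in zip(edges, state) if s]
            if connected(up):
                rel += (p ** len(up)) * ((1 - p) ** (len(edges) - len(up)))
        return rel

    # 4-node ring with edge reliability 0.9: p^4 + 4*p^3*(1-p) = 0.9477
    print(round(all_terminal_reliability([0, 1, 2, 3],
                                         [(0, 1), (1, 2), (2, 3), (3, 0)], 0.9), 4))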
Casey, David G; Domijan, Katarina; MacNeill, Sarah; Rizet, Damien; O'Connell, Declan; Ryan, Jennifer
2017-05-01
The persistence of sperm using confirmatory microscopic analysis, the persistence of sperm with tails, time since intercourse (TSI) analysis, and results from the acid phosphatase (AP) reaction from approximately 5581 swabs taken from circa 1450 sexual assault cases are presented. The observed proportions of sperm in the vagina and anus decline significantly after 48 h TSI, and sperm on oral swabs were observed up to 15 h TSI. The AP reaction as a predictor of sperm on intimate swabs is questioned. All AP reaction times gave a low true positive rate; 23% of sperm-positive swabs gave a negative AP reaction time. We show that the AP reaction is an unsafe and unreliable predictor of sperm on intimate swabs. We propose that TSI, not AP, should inform precase assessment and the evaluative approach for sexual assault cases. To help inform an evaluative approach, TSI guidelines are presented. © 2016 American Academy of Forensic Sciences.
Effect of censoring trace-level water-quality data on trend-detection capability
Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.
1984-01-01
Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were most effectively detected in uncensored data as compared to censored data even when the data censored were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether or not valuable information for trend analysis is, in fact, eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analyses.
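A hedged sketch of the kind of experiment described above (simulated concentrations with a weak trend, a single detection limit, and Kendall's tau standing in for the Mann-Kendall-type trend test; the study's actual Monte Carlo design and censoring rules differ in detail):

    import numpy as np
    from scipy.stats import kendalltau

    # Simulate a weak upward trend in trace-level concentrations, then censor values
    # below a detection limit (replaced by the limit itself, producing ties) and
    # compare trend detection on the uncensored vs. censored series.
    rng = np.random.default_rng(1)
    t = np.arange(60)
    conc = 0.5 + 0.01 * t + rng.normal(0, 0.3, size=t.size)   # hypothetical trend + noise

    tau_full, p_full = kendalltau(t, conc)
    detection_limit = 0.8
    censored = np.where(conc < detection_limit, detection_limit, conc)
    tau_cens, p_cens = kendalltau(t, censored)

    print(p_full, p_cens)   # censoring typically inflates the p-value, weakening detection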
Early Flood Warning in Africa: Results of a Feasibility study in the JUBA, SHABELLE and ZAMBEZI
NASA Astrophysics Data System (ADS)
Pappenberger, F. P.; de Roo, A. D.; Buizza, Roberto; Bodis, Katalin; Thiemig, Vera
2009-04-01
Building on the experiences gained with the European Flood Alert System (EFAS), pilot studies are carried out in three river basins in Africa. The European Flood Alert System, pre-operational since 2003, provides early flood alerts for European rivers. At present, the experiences with the European EFAS system are used to evaluate the feasibility of flood early warning for Africa. Three case studies are carried out in the Juba and Shabelle rivers (Somalia and Ethiopia) and in the Zambezi river (southern Africa). Predictions in these data-scarce regions are extremely difficult to make, as records of observations are scarce and often unreliable. Meteorological and discharge observations are used to calibrate and test the model, as well as soil, land-use and topographic data available within the JRC African Observatory. ECMWF ERA-40 and ERA-Interim data and re-forecasts of flood events from January to March 1978 and in March 2001 are evaluated to examine the feasibility of early flood warning. First results will be presented.
Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures
Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.
2016-01-01
Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM. PMID:27512038
Preparation of PEMFC Electrodes from Milligram-Amounts of Catalyst Powder
Yarlagadda, Venkata; McKinney, Samuel E.; Keary, Cristin L.; ...
2017-06-03
Development of electrocatalysts with higher activity and stability is one of the highest priorities in enabling cost-competitive hydrogen-air fuel cells. Although the rotating disk electrode (RDE) technique is widely used to study new catalyst materials, it has often been shown to be an unreliable predictor of catalyst performance in actual fuel cell operation. Fabrication of membrane electrode assemblies (MEAs) for evaluation, which are more representative of actual fuel cells, generally requires relatively large amounts (>1 g) of catalyst material that are often not readily available in early stages of development. In this study, we present two MEA preparation techniques using as little as 30 mg of catalyst material, providing methods to conduct more meaningful MEA-based tests using research-level catalyst amounts.
Evaluation of a follow-up protocol for patients on chloroquine and hydroxychloroquine treatment.
Sanabria, M R; Toledo-Lucho, S C
2016-01-01
To review the problems found after a new follow-up protocol for patients on chloroquine and hydroxychloroquine treatment. A retrospective study was conducted between May 2012 and January 2013 on the clinical files, retinographies, fundus autofluorescence (FAF) images, and central 10-degree visual fields (VF) of patients who were referred to the Ophthalmology Department because they had started treatment with hydroxychloroquine. One hundred twenty-six patients were included; 94.4% were referred from the Rheumatology Department and 5.6% from Dermatology. Mean age was 59.7 years, and 73.8% were women. All of them were on hydroxychloroquine treatment, and 300 mg was the most frequent daily dose. Rheumatoid arthritis was the most common diagnosis (40.5%), followed by systemic lupus erythematosus (15.9%). The mean Snellen visual acuity was 0.76, and 26 patients had lens opacities. The VF were normal in 97 patients, 8 had mild to moderate defects with no definite pattern, and in 9 the results were unreliable. Of the 51 patients older than 65 years, 16 (31.4%) had altered or unreliable VF. The FAF was normal in 104 patients (82.5%), and abnormal, but consistent with ophthalmoscopic features, in 12 patients (pathological myopia, age-related changes, or early, middle or late age-related macular degeneration). Visual fields as a reference test for the diagnosis of AP toxicity are not quite reliable for patients over 65. Therefore, FAF is recommended as the primary test, perhaps combined with another objective test, such as SD-OCT, instead of VF. Copyright © 2015 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.
van Dijken, Bart R J; van Laar, Peter Jan; Holtman, Gea A; van der Hoorn, Anouk
2017-10-01
Treatment response assessment in high-grade gliomas uses contrast enhanced T1-weighted MRI, but is unreliable. Novel advanced MRI techniques have been studied, but the accuracy is not well known. Therefore, we performed a systematic meta-analysis to assess the diagnostic accuracy of anatomical and advanced MRI for treatment response in high-grade gliomas. Databases were searched systematically. Study selection and data extraction were done by two authors independently. Meta-analysis was performed using a bivariate random effects model when ≥5 studies were included. Anatomical MRI (five studies, 166 patients) showed a pooled sensitivity and specificity of 68% (95%CI 51-81) and 77% (45-93), respectively. Pooled apparent diffusion coefficients (seven studies, 204 patients) demonstrated a sensitivity of 71% (60-80) and specificity of 87% (77-93). DSC-perfusion (18 studies, 708 patients) sensitivity was 87% (82-91) with a specificity of 86% (77-91). DCE-perfusion (five studies, 207 patients) sensitivity was 92% (73-98) and specificity was 85% (76-92). The sensitivity of spectroscopy (nine studies, 203 patients) was 91% (79-97) and specificity was 95% (65-99). Advanced techniques showed higher diagnostic accuracy than anatomical MRI, the highest for spectroscopy, supporting the use in treatment response assessment in high-grade gliomas. • Treatment response assessment in high-grade gliomas with anatomical MRI is unreliable • Novel advanced MRI techniques have been studied, but diagnostic accuracy is unknown • Meta-analysis demonstrates that advanced MRI showed higher diagnostic accuracy than anatomical MRI • Highest diagnostic accuracy for spectroscopy and perfusion MRI • Supports the incorporation of advanced MRI in high-grade glioma treatment response assessment.
Peer-to-peer architectures for exascale computing : LDRD final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.
2010-09-01
The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms. P2P architectures give us a starting point for crafting applications and system software for exascale. In the context of the Internet, P2P applications (e.g., file sharing, botnets) have already solved this problem for 10^6-10^7 nodes. Usually based on a fractal distributed hash table structure, these systems have proven robust in practice to constant and unpredictable outages, failures, and even subversion. For example, a recent estimate of botnet turnover (i.e., the number of machines leaving and joining) is about 11% per week. Nonetheless, P2P networks remain effective despite these failures: The Conficker botnet has grown to ~5 x 10^6 peers. Unlike today's system software and applications, those for next-generation exascale machines cannot assume a static structure and, to be scalable over millions of nodes, must be decentralized. P2P architectures achieve both, and provide a promising model for 'fault-oblivious computing'. This project aimed to study the dynamics of P2P networks in the context of a design for exascale systems and applications. Having no single point of failure, the most successful P2P architectures are adaptive and self-organizing. While there has been some previous work applying P2P to message passing, little attention has been previously paid to the tightly coupled exascale domain. Typically, the per-node footprint of P2P systems is small, making them ideal for HPC use.
The implementation on each peer node cooperates en masse to 'heal' disruptions rather than relying on a controlling 'master' node. Understanding this cooperative behavior from a complex systems viewpoint is essential to predicting useful environments for the inextricably unreliable exascale platforms of the future. We sought to obtain theoretical insight into the stability and large-scale behavior of candidate architectures, and to work toward leveraging Sandia's Emulytics platform to test promising candidates in a realistic (ultimately ≥ 10^7 nodes) setting. Our primary example applications are drawn from linear algebra: a Jacobi relaxation solver for the heat equation, and the closely related technique of value iteration in optimization. We aimed to apply P2P concepts in designing implementations capable of surviving an unreliable machine of 10^6 nodes.
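As a toy illustration of the 'fault-oblivious' idea applied to the Jacobi example mentioned above, a 1-D steady-state heat solve in which randomly 'failed' nodes simply skip an update and the iteration proceeds anyway (purely illustrative and serial, not the project's implementation):

    import numpy as np

    # 1-D Laplace/heat steady state by Jacobi relaxation; each sweep, a random subset
    # of grid nodes "fails" and keeps its previous value instead of updating.
    rng = np.random.default_rng(4)
    n, p_fail, sweeps = 64, 0.1, 5000
    u = np.zeros(n); u[0], u[-1] = 1.0, 0.0          # fixed boundary temperatures

    for _ in range(sweeps):
        new = 0.5 * (np.roll(u, 1) + np.roll(u, -1))
        new[0], new[-1] = u[0], u[-1]                # keep boundaries fixed
        alive = rng.random(n) > p_fail               # failed nodes skip this update
        u = np.where(alive, new, u)

    print(round(float(u[n // 2]), 3))                # approaches the linear steady state (~0.5)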
Benefits of Imperfect Conflict Resolution Advisory Aids for Future Air Traffic Control.
Trapsilawati, Fitri; Wickens, Christopher D; Qu, Xingda; Chen, Chun-Hsien
2016-11-01
The aim of this study was to examine human-automation interaction issues and the interacting factors in the context of conflict detection and resolution advisory (CRA) systems. The issues of imperfect automation in air traffic control (ATC) have been well documented in previous studies, particularly for conflict-alerting systems. The extent to which the prior findings can be applied to an integrated conflict detection and resolution system in future ATC remains unknown. Twenty-four participants were evenly divided into two groups corresponding to a medium- and a high-traffic-density condition, respectively. In each traffic density condition, participants performed simulated ATC tasks under four automation conditions: reliable, unreliable with short time allowance to secondary conflict (TAS), unreliable with long TAS, and manual. Dependent variables were conflict resolution performance, workload, situation awareness, and trust in and dependence on the CRA aid. Imposing the CRA automation increased performance and reduced workload compared with manual performance. The CRA aid did not decrease situation awareness. The benefits of the CRA aid were manifest even when it was imperfectly reliable and were apparent across traffic loads. In the unreliable blocks, trust in the CRA aid was degraded but dependence was not influenced, and performance was not adversely affected. The use of the CRA aid would benefit ATC operations across traffic densities. The CRA aid offers benefits across traffic densities, regardless of its imperfection, as long as its reliability level is set above the threshold of assistance, suggesting its applicability to future ATC. © 2016, Human Factors and Ergonomics Society.
Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events
NASA Astrophysics Data System (ADS)
DeChant, C. M.; Moradkhani, H.
2014-12-01
Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
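A minimal sketch of the Poisson-Binomial reliability check described above (the forecast probabilities are hypothetical, and the two-sided p-value convention below is one of several reasonable choices):

    import numpy as np

    # Under a reliable forecast, the number of observed events is Poisson-Binomial
    # distributed with the issued probabilities, so an exact tail probability for the
    # observed count can serve as a reliability test.
    def poisson_binomial_pmf(probs):
        pmf = np.array([1.0])
        for p in probs:
            pmf = np.convolve(pmf, [1.0 - p, p])   # add one Bernoulli(p) trial
        return pmf

    def reliability_p_value(probs, n_observed):
        pmf = poisson_binomial_pmf(probs)
        # two-sided: total probability of counts at least as unlikely as the observed one
        return float(pmf[pmf <= pmf[n_observed] + 1e-12].sum())

    forecast_probs = [0.1, 0.3, 0.5, 0.7, 0.2, 0.9, 0.4]   # hypothetical issued probabilities
    print(reliability_p_value(forecast_probs, n_observed=6))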
Staugaard, Benjamin; Christensen, Peer Brehm; Mössner, Belinda; Hansen, Janne Fuglsang; Madsen, Bjørn Stæhr; Søholm, Jacob; Krag, Aleksander; Thiele, Maja
2016-11-01
Transient elastography (TE) is hampered in some patients by failures and unreliable results. We hypothesized that real time two-dimensional shear wave elastography (2D-SWE), the FibroScan XL probe, and repeated TE exams, could be used to obtain reliable liver stiffness measurements in patients with an invalid TE examination. We reviewed 1975 patients with 5764 TE exams performed between 2007 and 2014, to identify failures and unreliable exams. Fifty-four patients with an invalid TE at their latest appointment entered a comparative feasibility study of TE vs. 2D-SWE. The initial TE exam was successful in 93% (1835/1975) of patients. Success rate increased from 89% to 96% when the XL probe became available (OR: 1.07, 95% CI 1.06-1.09). Likewise, re-examining those with a failed or unreliable TE led to a reliable TE in 96% of patients. Combining availability of the XL probe with TE re-examination resulted in a 99.5% success rate on a per-patient level. When comparing the feasibility of TE vs. 2D-SWE, 96% (52/54) of patients obtained a reliable TE, while 2D-SWE was reliable in 63% (34/54, p < 0.001). The odds of a successful 2D-SWE exam decreased with higher skin-capsule distance (OR = 0.77, 95% CI 0.67-0.98). Transient elastography can be accomplished in nearly all patients by use of the FibroScan XL probe and repeated examinations. In difficult-to-scan patients, the feasibility of TE is superior to 2D-SWE.
A contact-free respiration monitor for smart bed and ambulatory monitoring applications.
Hart, Adam; Tallevi, Kevin; Wickland, David; Kearney, Robert E; Cafazzo, Joseph A
2010-01-01
The development of a contact-free respiration monitor has a broad range of clinical applications in the home and hospital setting. Current approaches suffer from a variety of problems including unreliability, low sensitivity, and high cost. This work describes a novel approach to contact-free respiration monitoring that addresses these shortcomings by employing a highly sensitive capacitance sensor to detect variations in capacitive coupling caused by breathing. A prototype system consisting of a synthetic-metallic pad, sensor electronics, and an iPhone interface was built, and its performance was compared experimentally to the gold standard technique (Respiratory Inductance Plethysmography) on both a healthy volunteer and a SimMan robotic mannequin. The prototype sensor effectively captured respiratory movements over breathing rates of 5-55 bpm, achieving an average spectral correlation with the gold standard of 0.88 (CI: 0.86-0.90) for the SimMan and 0.95 (CI: 0.95-0.96) for the healthy volunteer.
The flaws and human harms of animal experimentation.
Akhtar, Aysha
2015-10-01
Nonhuman animal ("animal") experimentation is typically defended by arguments that it is reliable, that animals provide sufficiently good models of human biology and diseases to yield relevant information, and that, consequently, its use provides major human health benefits. I demonstrate that a growing body of scientific literature critically assessing the validity of animal experimentation generally (and animal modeling specifically) raises important concerns about its reliability and predictive value for human outcomes and for understanding human physiology. The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice. Additionally, I show how animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods. The resulting evidence suggests that the collective harms and costs to humans from animal experimentation outweigh potential benefits and that resources would be better invested in developing human-based testing methods.
Fast Risetime Reverse Bias Pulse Failures in SiC PN Junction Diodes
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.; Fazi, Christian; Parsons, James D.
1996-01-01
SiC-based high temperature power devices are being developed for aerospace systems which will require high reliability. One behavior crucial to power device reliability is the reverse breakdown of pn junctions. To date, it has necessarily been assumed that the breakdown behavior of SiC pn junctions will be similar to that of highly reliable silicon-based pn junctions. Challenging this assumption, we report the observation of anomalous, unreliable reverse breakdown behavior in moderately doped (2-3 x 10(exp 17) cm(exp -3)) small-area 4H- and 6H-SiC pn junction diodes at temperatures ranging from 298 K (25 C) to 873 K (600 C). We propose a mechanism in which carrier emission from un-ionized dopants and deep level defects leads to this unstable behavior. The fundamental instability mechanism is applicable to all wide bandgap semiconductors whose dopants are significantly un-ionized at typical device operating temperatures.
Analysis of ICESat Data Using Kalman Filter and Kriging to Study Height Changes in East Antarctica
NASA Technical Reports Server (NTRS)
Herring, Thomas A.
2005-01-01
We analyze ICESat derived heights collected between February 2003 and November 2004 using a kriging/Kalman filtering approach to investigate height changes in East Antarctica. The model's parameters are the height change relative to an a priori static digital height model, a seasonal signal expressed as an amplitude Beta and phase Theta, and a height-change rate dh/dt for each (100 km)(exp 2) block. From the Kalman filter results, dh/dt has a mean of -0.06 m/yr in the flat interior of East Antarctica. Spatially correlated pointing errors in the current data releases give uncertainties on the order of 0.06 m/yr, making height change detection unreliable at this time. Our test shows that when using all available data with pointing knowledge equivalent to that of Laser 2a, height change detection with an accuracy level of 0.02 m/yr can be achieved over flat terrains in East Antarctica.
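The block-level parameterization described above (offset, seasonal amplitude and phase, height-change rate dh/dt) can be illustrated with an ordinary least-squares fit to synthetic data. This sketch only shows the parameterization, not the kriging/Kalman filtering machinery or pointing-error treatment used in the study, and all numbers are assumptions.

```python
import numpy as np

# synthetic height anomalies (m) for one (100 km)^2 block, t in years
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.75, 40)                      # roughly Feb 2003 - Nov 2004
h = 0.10 - 0.06 * t + 0.05 * np.cos(2 * np.pi * t - 0.8) + rng.normal(0, 0.03, t.size)

# linear model: h(t) = h0 + (dh/dt) t + a*cos(2*pi*t) + b*sin(2*pi*t)
A = np.column_stack([np.ones_like(t), t, np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
h0, dhdt, a, b = np.linalg.lstsq(A, h, rcond=None)[0]

amplitude = np.hypot(a, b)          # seasonal amplitude (Beta)
phase = np.arctan2(b, a)            # seasonal phase (Theta)
print(f"dh/dt = {dhdt:+.3f} m/yr, seasonal amplitude = {amplitude:.3f} m")
```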
A Statistical Framework for the Functional Analysis of Metagenomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharon, Itai; Pati, Amrita; Markowitz, Victor
2008-10-01
Metagenomic studies consider the genetic makeup of microbial communities as a whole, rather than their individual member organisms. The functional and metabolic potential of microbial communities can be analyzed by comparing the relative abundance of gene families in their collective genomic sequences (metagenome) under different conditions. Such comparisons require accurate estimation of gene family frequencies. The authors present a statistical framework for assessing these frequencies based on the Lander-Waterman theory developed originally for Whole Genome Shotgun (WGS) sequencing projects. They also provide a novel method for assessing the reliability of the estimations which can be used for removing seemingly unreliable measurements. They tested their method on a wide range of datasets, including simulated genomes and real WGS data from sequencing projects of whole genomes. Results suggest that their framework corrects inherent biases in accepted methods and provides a good approximation to the true statistics of gene families in WGS projects.
Tan, John W; Campbell, Dianne E
2013-09-01
Allergic reactions to insect bites and stings are common, and the severity of reactions range from local reaction to anaphylaxis. In children, large local reaction to bites and stings is the most common presentation. Stings from insects of the order Hymenoptera (bees, wasps and ants) are the most common cause of insect anaphylaxis; however, the proportion of insect allergic children who develop anaphylaxis to an insect sting is lower than that of insect allergic adults. History is most important in diagnosing anaphylaxis, as laboratory tests can be unreliable. Venom immunotherapy is effective, where suitable allergen extract is available, but is only warranted in children with systemic reactions to insect venom. Large local reactions are at low risk of progression to anaphylaxis on subsequent stings, and hence, venom immunotherapy is not necessary. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H
2013-08-01
Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
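A hedged sketch of the joint quantification and sizing idea: the standard Poisson correction converts the positive-droplet fraction into a concentration, and an assumed, purely illustrative linear calibration between mean droplet fluorescence and amplicon size converts fluorescence into a size estimate. The calibration values, droplet volume, and measurements below are assumptions, not the paper's data.

```python
import numpy as np

def ddpcr_concentration(positive, total, droplet_volume_ul=0.00085):
    """Target concentration (copies/uL) from the fraction of positive droplets,
    using the standard Poisson correction for multiple copies per droplet."""
    lam = -np.log(1.0 - positive / total)      # mean copies per droplet
    return lam / droplet_volume_ul

# assumed calibration: mean droplet fluorescence vs known amplicon size (bp)
cal_size = np.array([150, 300, 500, 800, 1000])
cal_fluor = np.array([4500, 5600, 7000, 8600, 9600])
slope, intercept = np.polyfit(cal_fluor, cal_size, 1)    # linear fit: size = f(fluorescence)

# one hypothetical library measurement
conc = ddpcr_concentration(positive=11000, total=18000)
size = slope * 7300 + intercept                          # from observed mean fluorescence
print(f"~{conc:.0f} copies/uL, estimated fragment size ~{size:.0f} bp")
```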
Synergistic Effects of Toxic Elements on Heat Shock Proteins
Mahmood, Khalid; Mahmood, Qaisar; Irshad, Muhammad; Hussain, Jamshaid
2014-01-01
Heat shock proteins show remarkable variations in their expression levels under a variety of toxic conditions. Research spanning five decades has revealed their molecular characterization, gene regulation, expression patterns, vast similarity in diverse groups, and broad range of functional capabilities. Their functions include protection and tolerance against cytotoxic conditions through their molecular chaperoning activity, maintaining cytoskeleton stability, and assisting in cell signaling. However, their role as biomarkers for environmental risk assessment is controversial due to a number of conflicting, validating, and nonvalidating reports. The current knowledge regarding the interpretation of HSP expression levels is discussed in the present review. The candidacy of heat shock proteins as biomarkers of toxicity is thus far unreliable due to synergistic effects of toxicants and other environmental factors. The adoption of heat shock proteins as a “suit of biomarkers in a set of organisms” requires further investigation. PMID:25136596
Mammalian species - Neotoma magister
Steven B. Castleberry; Michael T. Mengak; W. Mark Ford
2006-01-01
External morphology of N. magister (Fig. 1) is similar to that of N. floridana, the only parapatric Neotoma. Although N. magister generally is larger in mass and with longer vibrissae, identification based on single measurements is unreliable because of morphometric overlap (Ray 2000)....
Automatic Refraction: How It Is Done: Some Clinical Results
ERIC Educational Resources Information Center
Safir, Aran; And Others
1973-01-01
Compared are methods of determining the visual refraction needs of young children or other unreliable observers by means of retinoscopy or the Ophthalmetron, an automatic instrument which can be operated by a technician with no knowledge of refraction. (DB)
A generalized model for estimating the energy density of invertebrates
James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.
2012-01-01
Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2 = 0.96, p < 0.0001), where ED (as J/g wet mass) was estimated from pDM as ED = 22,960pDM − 174.2. Model evaluation showed that nearly all (98.8%) of the variability between observed and predicted values for invertebrate ED could be attributed to residual error in the model. Regression of observed on predicted values revealed that the 97.5% joint confidence region included the intercept of 0 (−103.0 ± 707.9) and slope of 1 (1.01 ± 0.12). Use of this model requires that only dry and wet mass measurements be obtained, resulting in significant time, sample size, and cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
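The reported regression can be applied directly; below is a minimal helper encoding ED = 22,960 pDM − 174.2, with example masses chosen purely for illustration.

```python
def energy_density(dry_mass_g, wet_mass_g):
    """Invertebrate energy density (J/g wet mass) from the reported regression
    ED = 22960 * pDM - 174.2, where pDM is the dry-to-wet mass proportion."""
    p_dm = dry_mass_g / wet_mass_g
    return 22960.0 * p_dm - 174.2

# example: 0.06 g dry mass from a 0.25 g wet sample (pDM = 0.24)
print(round(energy_density(0.06, 0.25), 1))   # ~5336.2 J/g wet mass
```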
32 CFR Appendix D to Part 154 - Reporting of Nonderogatory Cases
Code of Federal Regulations, 2010 CFR
2010-07-01
... abuse of drugs or alcohol, theft or dishonesty, unreliability, irresponsibility, immaturity, instability... promiscuity, aberrant, deviant, or bizarre sexual conduct or behavior, transvestitism, transsexualism, indecent exposure, rape, contributing to the delinquency of a minor, child molestation, wife-swapping...
Should Secondary Schools Buy Local Area Networks?
ERIC Educational Resources Information Center
Hyde, Hartley
1986-01-01
The advantages of microcomputer networks include resource sharing, multiple user communications, and integrating data processing and office automation. This article nonetheless favors stand-alone computers for Australian secondary school classrooms because of unreliable hardware, software design, and copyright problems, and individual progress…
Social media and health care professionals: benefits, risks, and best practices.
Ventola, C Lee
2014-07-01
Health care professionals can use a variety of social media tools to improve or enhance networking, education, and other activities. However, these tools also present some potential risks, such as unreliable information and violations of patients' privacy rights.
DOT National Transportation Integrated Search
2014-01-01
The second Strategic Highway Research Program (SHRP 2) Reliability program aims to improve trip time reliability by reducing the frequency and effects of events that cause travel times to fluctuate unpredictably. Congestion caused by unreliable, or n...
Morphology delimits more species than molecular genetic clusters of invasive Pilosella.
Moffat, Chandra E; Ensing, David J; Gaskin, John F; De Clerck-Floate, Rosemarie A; Pither, Jason
2015-07-01
• Accurate assessments of biodiversity are paramount for understanding ecosystem processes and adaptation to change. Invasive species often contribute substantially to local biodiversity; correctly identifying and distinguishing invaders is thus necessary to assess their potential impacts. We compared the reliability of morphology and molecular sequences to discriminate six putative species of invasive Pilosella hawkweeds (syn. Hieracium, Asteraceae), known for unreliable identifications and historical introgression. We asked (1) which morphological traits dependably discriminate putative species, (2) if genetic clusters supported morphological species, and (3) if novel hybridizations occur in the invaded range. • We assessed 33 morphometric characters for their discriminatory power using the randomForest classifier and, using AFLPs, evaluated genetic clustering with the program structure and subsequently with an AMOVA. The strength of the association between morphological and genotypic dissimilarity was assessed with a Mantel test. • Morphometric analyses delimited six species while genetic analyses defined only four clusters. Specifically, we found (1) eight morphological traits could reliably distinguish species, (2) structure suggested strong genetic differentiation but for only four putative species clusters, and (3) genetic data suggest both novel hybridizations and multiple introductions have occurred. • (1) Traditional floristic techniques may resolve more species than molecular analyses in taxonomic groups subject to introgression. (2) Even within complexes of closely related species, relatively few but highly discerning morphological characters can reliably discriminate species. (3) By clarifying patterns of morphological and genotypic variation of invasive Pilosella, we lay foundations for further ecological study and mitigation. © 2015 Botanical Society of America, Inc.
Gyöngy, Miklós; Kollár, Sára
2015-02-01
One method of estimating sound speed in diagnostic ultrasound imaging consists of choosing the speed of sound that generates the sharpest image, as evaluated by the lateral frequency spectrum of the squared B-mode image. In the current work, simulated and experimental data on a typical (47 mm aperture, 3.3-10.0 MHz response) linear array transducer are used to investigate the accuracy of this method. A range of candidate speeds of sound (1240-1740 m/s) was used, with a true speed of sound of 1490 m/s in simulations and 1488 m/s in experiments. Simulations of single point scatterers and two interfering point scatterers at various locations with respect to each other gave estimate errors of 0.0-2.0%. Simulations and experiments of scatterer distributions with a mean scatterer spacing of at least 0.5 mm gave estimate errors of 0.1-4.0%. In the case of lower scatterer spacing, the speed of sound estimates become unreliable due to a decrease in contrast of the sharpness measure between different candidate speeds of sound. This suggests that in estimating speed of sound in tissue, the region of interest should be dominated by a few, sparsely spaced scatterers. Conversely, the decreasing sensitivity of the sharpness measure to speed of sound errors for higher scatterer concentrations suggests a potential method for estimating mean scatterer spacing. Copyright © 2014 Elsevier B.V. All rights reserved.
18 CFR 806.23 - Standards for water withdrawals.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of groundwater or stream flow levels; rendering competing supplies unreliable; affecting other water..., at its own expense, an alternate water supply or other mitigating measures. (iii) Require the project... deficiencies, identify alternative water supply options, and support existing and proposed future withdrawals. ...
ERIC Educational Resources Information Center
Coen, Frank
1969-01-01
The unreliability of first impressions and subjective judgments is the subject of both Jane Austen's "Pride and Prejudice" and Lionel Trilling's "Of This Time, Of That Place"; consequently, the works are worthwhile parallel studies for high school students. Austen, by means of irony and subtle characterization, dramatizes the…
Morphology delimits more species than molecular genetic clusters of invasive Pilosella
USDA-ARS?s Scientific Manuscript database
Premise of the study: Reliable identifications of invasive species are essential for effective management. Several species of Pilosella (syn. Hieracium, Asteraceae) hawkweeds invade North America, where unreliable identification hinders their control. Here we ask (i) do morphological traits dependab...
Procedure for Failure Mode, Effects, and Criticality Analysis (FMECA)
NASA Technical Reports Server (NTRS)
1966-01-01
This document provides guidelines for the accomplishment of Failure Mode, Effects, and Criticality Analysis (FMECA) on the Apollo program. It is a procedure for analysis of hardware items to determine those items contributing most to system unreliability and crew safety problems.
Travel behavior of U.S. domestic airline passengers and its impacts on infrastructure utilization
DOT National Transportation Integrated Search
2009-09-30
Unexpected and unannounced delays and cancellations of flights have emerged as a quasi-normal phenomenon in recent months and years. The airline unreliability has become unbearable day by day. The volume of airline passengers on domestic routes in...
Software Prototyping: Designing Systems for Users.
ERIC Educational Resources Information Center
Spies, Phyllis Bova
1983-01-01
Reports on major change in computer software development process--the prototype model, i.e., implementation of skeletal system that is enhanced during interaction with users. Expensive and unreliable software, software design errors, traditional development approach, resources required for prototyping, success stories, and systems designer's role…
Second Thoughts at Women's Colleges.
ERIC Educational Resources Information Center
Gose, Ben
1995-01-01
Despite a rise in enrollments at women's colleges nationwide, there is concern that the applicant pool is weakening. Average college entrance test scores of freshmen have dropped considerably since 1968. Some see research comparing women's performance at single-sex and coeducational colleges as unreliable. (MSE)
Outcomes After Diagnostic Hip Injection.
Lynch, T Sean; Steinhaus, Michael E; Popkin, Charles A; Ahmad, Christopher S; Rosneck, James
2016-08-01
To provide a comprehensive review of outcomes associated with local anesthetic (LA) or LA and corticosteroid (CS) diagnostic hip injections, and how well response predicts subsequent operative success. A systematic review from database (PubMed, Medline, Scopus, Embase) inception to January 2015 for English-language articles reporting primary patient outcomes data was performed, excluding studies with >50% underlying osteoarthritis. Studies were assessed by 2 reviewers who collected pertinent data. Seven studies were included, reporting on a total 337 patients undergoing diagnostic hip injection. The mean age was 34.4 years, with 5 studies reporting 94 (35.2%) males and 173 (64.8%) females. One study examined the rate of pain relief with LA (92.5%); 2 CS studies reported relief on a scale from 0% to 100% (no to complete relief), ranging from 61% to 82.3%; and 3 studies used 10-point pain scales, with a CS study noting a pain score of 1.0, an LA study with a score of 3.03, and 1 study using either CS or LA scores of 3 to 5.6. Duration of pain relief was 9.8 (CS) and 2.35 days (LA). By pathology, greatest relief was achieved in acetabular chondral injury (93.3%) and least in cam impingement (81.6%), with clinical and imaging findings being unreliable predictors of relief. One study showed nonresponse to be a strong predictor of negative surgical outcome for femoroacetabular impingement. Diagnostic hip injections provide substantial pain relief for patients with various hip pathologies, with limited data to suggest greatest relief for those with chondral injury. Clinical and imaging findings are unreliable predictors of injection response, and nonresponse to injection is a strong negative predictor of surgical outcome. Future research should focus on elucidating differences by underlying pathology and predicting future operative success. Level IV, systematic review. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Reliability of self-reported antisocial personality disorder symptoms among substance abusers.
Cottler, L B; Compton, W M; Ridenour, T A; Ben Abdallah, A; Gallagher, T
1998-02-01
It is estimated that from 20 to 60% of substance abusers meet criteria for Antisocial Personality Disorder (APD). An accurate and reliable diagnosis is important because persons meeting criteria for APD, by the nature of their disorder, are less likely to change behaviors and more likely to relapse to both substance abuse and high risk behaviors. To understand more about the reliability of the disorder and symptoms of APD, the Diagnostic Interview Schedule Version III-R (DIS) was administered to 453 substance abusers ascertained from treatment programs and from the general population (St Louis Epidemiological Catchment Area (ECA) follow-up study). Estimates of the 1 week, test-retest reliability for the childhood conduct disorder criterion, the adult antisocial behavior criterion, and APD diagnosis fell in the good agreement range, as measured by kappa. The internal consistency of these DIS symptoms was adequate to acceptable. Individual DIS criteria designed to measure childhood conduct disorder ranged from fair to good for most items; reliability was slightly higher for the adult antisocial behavior symptom items. Finally, self-reported 'liars' were no more unreliable in their reports of their behaviors than 'non-liars'.
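A short sketch of the agreement statistic behind such test-retest estimates, Cohen's kappa, computed on hypothetical yes/no symptom reports from two interview occasions; the study itself used the DIS instrument and its own data.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two occasions' categorical ratings of the same cases."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1.0 - expected)

# hypothetical yes/no reports of one APD symptom at test and 1-week retest
test   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
retest = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(test, retest), 2))   # ~0.58, "good" agreement range
```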
Historical citizen science to understand and predict climate-driven trout decline
Ninyerola, Miquel; Hermoso, Virgilio; Filipe, Ana Filipa; Pla, Magda; Villero, Daniel; Brotons, Lluís; Delibes, Miguel
2017-01-01
Historical species records offer an excellent opportunity to test the predictive ability of range forecasts under climate change, but researchers often consider that historical records are scarce and unreliable, apart from the datasets collected by renowned naturalists. Here, we demonstrate the relevance of biodiversity records developed through citizen-science initiatives generated outside the natural sciences academia. We used a Spanish geographical dictionary from the mid-nineteenth century to compile over 10 000 freshwater fish records, including almost 4 000 brown trout (Salmo trutta) citations, and constructed a historical presence–absence dataset covering over 2 000 10 × 10 km cells, which is comparable to present-day data. There has been a clear reduction in trout range in the past 150 years, coinciding with a generalized warming. We show that current trout distribution can be accurately predicted based on historical records and past and present values of three air temperature variables. The models indicate a consistent decline of average suitability of around 25% between the 1850s and 2000s, which is expected to surpass 40% by the 2050s. We stress the largely unexplored potential of historical species records from non-academic sources to open new pathways for long-term global change science. PMID:28077766
Estimation of tiger densities in India using photographic captures and recaptures
Karanth, U.; Nichols, J.D.
1998-01-01
Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 - 1.00. The estimated mean tiger densities ranged from 4.1 (estimated SE = 1.31) to 11.7 (estimated SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
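The closed-population capture-recapture idea can be illustrated with the two-occasion Chapman (bias-corrected Lincoln-Petersen) estimator; the counts and sampled area below are hypothetical, and the study itself fit richer closed-population models to multi-occasion photo-capture histories.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed population
    size from two capture occasions.
    n1: animals photo-captured on occasion 1
    n2: animals photo-captured on occasion 2
    m2: animals on occasion 2 already identified on occasion 1
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var ** 0.5

# hypothetical camera-trap session over a 200 km^2 sampled area
n_hat, se = chapman_estimate(n1=14, n2=12, m2=9)
print(f"N = {n_hat:.1f} (SE {se:.1f}), density = {100 * n_hat / 200:.1f} tigers/100 km^2")
```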
Measuring trends in age at first sex and age at marriage in Manicaland, Zimbabwe.
Cremin, I; Mushati, P; Hallett, T; Mupambireyi, Z; Nyamukapa, C; Garnett, G P; Gregson, S
2009-04-01
To identify reporting biases and to determine the influence of inconsistent reporting on observed trends in the timing of age at first sex and age at marriage. Longitudinal data from three rounds of a population-based cohort in eastern Zimbabwe were analysed. Reports of age at first sex and age at marriage from 6837 individuals attending multiple rounds were classified according to consistency. Survival analysis was used to identify trends in the timing of first sex and marriage. In this population, women initiate sex and enter marriage at younger ages than men but spend much less time between first sex and marriage. Among those surveyed between 1998 and 2005, median ages at first sex and first marriage were 18.5 years and 21.4 years for men and 18.2 years and 18.5 years, respectively, for women aged 15-54 years. A high proportion of the reports of both age at first sex and age at marriage among those attending multiple surveys were found to be unreliable. Excluding reports identified as unreliable from these analyses did not alter the observed trends in either age at first sex or age at marriage. Tracing birth cohorts as they aged revealed reporting biases, particularly among the youngest cohorts. Comparisons by birth cohorts, which span a period of >40 years, indicate that median age at first sex has remained constant over time for women but has declined gradually for men. Although many reports of age at first sex and age at marriage were found to be unreliable, inclusion of such reports did not result in artificial generation or suppression of trends.
Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed
2013-03-01
Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clément, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and 4-year-olds were asked to endorse and imitate one of two actors performing an unfamiliar action, one actor who was unconventional but successful and one who was conventional but unsuccessful. These data demonstrated that children preferred endorsing and imitating the unconventional but successful actor. Results suggest that when the accuracy and conventionality of a source are put into conflict, children may give priority to accuracy over conventionality when estimating the source's reliability and, ultimately, when deciding who to trust.
A Reliability Estimation in Modeling Watershed Runoff With Uncertainties
NASA Astrophysics Data System (ADS)
Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.
1990-10-01
The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
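A toy Monte Carlo illustration of the framework's idea, propagating parameter uncertainty through a simple rational-method runoff model rather than HEC-1; all distributions and values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

# toy rational-method model Q = 0.278 * C * i * A (SI form), with uncertain inputs
C = rng.normal(0.45, 0.08, n).clip(0.05, 0.95)   # runoff coefficient
i = rng.lognormal(np.log(25), 0.30, n)           # rainfall intensity, mm/h
A = 12.0                                         # basin area, km^2
Q = 0.278 * C * i * A                            # peak discharge, m^3/s

threshold = 60.0                                 # e.g., assumed channel capacity
print(f"median Q = {np.median(Q):.1f} m^3/s, "
      f"P(Q > {threshold:.0f}) = {np.mean(Q > threshold):.3f}")
```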
Burton, Catherine E; Sester, Martina; Robinson, Joan L; Eurich, Dean T; Preiksaitis, Jutta K; Urschel, Simon
2018-05-24
Passive antibodies, maternal or transfusion-acquired, make serologic determination of pre-transplant cytomegalovirus (CMV) status unreliable. We evaluated 3 assays unaffected by passive antibodies for assignment of CMV infection status in children awaiting solid organ transplant and in controls: i) CMV Nucleic Acid Amplification Testing (NAAT), ii) quantification of CMV-specific CD4+ T-cells, and iii) quantification of CD27-CD28- CD4+ T-cells. Our results highlight that CMV NAAT, from urine and oropharynx, is useful in confirming positive CMV status. Detection of CMV-specific CD4+ T-cells was sensitive and specific in children >18 months but was less sensitive in children <12 months. CD27-CD28- CD4+ T-cells are not likely useful in CMV risk-stratification in children.
Performance evaluation of the use of photovoltaics to power a street light in Lowell
NASA Astrophysics Data System (ADS)
Crowell, Adam B.
Commercial, off-grid photovoltaic (PV) lighting systems present an attractive alternative to traditional outdoor lighting at sites where grid power is unavailable or unreliable. This study presents a comprehensive theoretical site analysis for the installation of standalone PV lighting systems at the Lowell National Historic Park in Lowell, MA. Detailed insolation studies are performed at the target site, resulting in expected daily Watt-hour totals available for battery charging for each month of the year. Illumination simulations are presented, detailing the expected lighting performance of the systems at night. Light levels are compared to those dictated by accepted standards. While it is acknowledged that the target site presents significant challenges to photovoltaics, such as severe shading, final system component specifications are provided, along with programming and positioning recommendations that will yield the best achievable performance.
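A rough first-pass estimate of the daily watt-hours available for battery charging, of the kind such a site analysis starts from; the inputs below are assumptions, and the study's detailed insolation and shading modeling is not reproduced here.

```python
def daily_watt_hours(insolation_kwh_m2_day, panel_area_m2, panel_eff, derate=0.75):
    """Rough daily energy (Wh) delivered to the battery from a PV panel.
    `derate` lumps wiring, charge-controller, temperature, and soiling losses."""
    return insolation_kwh_m2_day * 1000.0 * panel_area_m2 * panel_eff * derate

# assumed December conditions for a partially shaded New England site
print(round(daily_watt_hours(1.8, panel_area_m2=0.65, panel_eff=0.17), 1), "Wh/day")
```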
Ranking Reputation and Quality in Online Rating Systems
Liao, Hao; Zeng, An; Xiao, Rui; Ren, Zhuo-Ming; Chen, Duan-Bing; Zhang, Yi-Cheng
2014-01-01
How to design an accurate and robust ranking algorithm is a fundamental problem with wide applications in many real systems. It is especially significant in online rating systems due to the existence of some spammers. In the literature, many well-performed iterative ranking methods have been proposed. These methods can effectively recognize the unreliable users and reduce their weight in judging the quality of objects, and finally lead to a more accurate evaluation of the online products. In this paper, we design an iterative ranking method with high performance in both accuracy and robustness. More specifically, a reputation redistribution process is introduced to enhance the influence of highly reputed users and two penalty factors enable the algorithm resistance to malicious behaviors. Validation of our method is performed in both artificial and real user-object bipartite networks. PMID:24819119
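A generic sketch in the spirit of the iterative ranking methods described: object quality is a reputation-weighted mean of ratings, and user reputation shrinks with disagreement from the consensus. This is not the paper's exact reputation-redistribution or penalty scheme; the data are synthetic.

```python
import numpy as np

def iterative_ranking(R, n_iter=50, eps=1e-8):
    """R: users x objects rating matrix with np.nan for missing ratings.
    Returns (object quality, user reputation). Users whose ratings deviate
    more from the consensus get lower reputation, hence less weight."""
    mask = ~np.isnan(R)
    ratings = np.nan_to_num(R)
    reputation = np.ones(R.shape[0])
    for _ in range(n_iter):
        w = reputation[:, None] * mask
        quality = (w * ratings).sum(0) / (w.sum(0) + eps)        # weighted means
        err = ((ratings - quality) ** 2 * mask).sum(1) / (mask.sum(1) + eps)
        reputation = 1.0 / (err + eps) ** 0.5                    # inverse RMS error
        reputation /= reputation.max()
    return quality, reputation

rng = np.random.default_rng(3)
true_q = rng.uniform(1, 5, 30)
R = true_q + rng.normal(0, 0.3, (20, 30))      # 18 honest users
R[18:] = rng.uniform(1, 5, (2, 30))            # 2 spammers rating at random
quality, reputation = iterative_ranking(R)
print(np.round(reputation[-4:], 2))            # spammers end up with low reputation
```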
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
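One classic pitfall of the kind alluded to above, selecting features on all cases before cross-validating, can be demonstrated on pure-noise data; the correct pipeline refits the selection inside every training fold. This is an illustrative scikit-learn sketch, not the study's simulation design.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))          # pure noise: true accuracy ~ 0.5
y = rng.integers(0, 2, 60)

# Pitfall: select "informative" features using ALL cases, then cross-validate
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=10).mean()

# Correct: feature selection refit inside every training fold
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
unbiased = cross_val_score(pipe, X, y, cv=10).mean()

print(f"leaky CV accuracy ~ {biased:.2f}, proper CV accuracy ~ {unbiased:.2f}")
```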
Clinical Diagnosis among Diverse Populations: A Multicultural Perspective.
ERIC Educational Resources Information Center
Solomon, Alison
1992-01-01
Discusses four ways in which clinical diagnosis can be detrimental to minority clients: (1) cultural expressions of symptomatology; (2) unreliable research instruments; (3) clinician bias; and (4) institutional racism. Recommendations to avoid misdiagnosis begin with accurate assessment of a client's history and cultural background. (SLD)
Diesel Powered School Buses: An Update.
ERIC Educational Resources Information Center
Gresham, Robert
1984-01-01
Because diesel engines are more economical and longer-lasting than gasoline engines, school districts are rapidly increasing their use of diesel buses. Dependence on diesel power, however, entails vulnerability to cost increases due to the unreliability of crude oil supplies and contributes to air pollution. (MCG)
Simple Experiments in Psychology.
ERIC Educational Resources Information Center
Ray, Wilbert S.
This material, developed for use in secondary schools, is a programmed-type learning package consisting of an "Instructor's Manual", a "Student's Introduction", and a "Laboratory Manual". The general goal of the program is to teach students to distinguish between reliable and unreliable information. The "Laboratory Manual" contains nine simple…
Effect of Training on Reasoning in Moral Choice.
ERIC Educational Resources Information Center
Kaplan, Martin F.
Moral development is viewed as a matter of progression in the cognitive reasoning and rationale underlying choices and judgments. Traditionally, retrospective reports of rationales have been used to measure moral development levels, resulting in unreliable information. Information Integration Theory attempts to assess individual differences in…
USDA-ARS?s Scientific Manuscript database
Sparganothis sulfureana Clemens is a severe insect pest of cranberries in the Midwest and Northeast. Timing for insecticide applications has relied primarily on calendar dates and pheromone trap-catch. However, abiotic conditions can vary greatly, rendering such methods unreliable indicators of opt...
Disordered Eating among Female Adolescents: Prevalence, Risk Factors, and Consequences
ERIC Educational Resources Information Center
Bryla, Karen Y.
2003-01-01
Disordered eating among American adolescent females represents a significant health issue in our current cultural climate. Disordered eating receives insufficient attention, however, due to the public's unfamiliarity with symptoms and consequences, absence of treatment options, and unreliable instrumentation to detect disordered eating. Disordered…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-19
... is overfished. However, the SSC rejected as unreliable the absolute values that resulted in the... establish a stock ACL of zero, would result in the largest profit reductions to both the commercial sector...
IMPROVING WILLINGNESS-TO-ACCEPT RESPONSES USING ALTERNATE FORMS OF COMPENSATION
The purpose of this project is to design a pilot survey to investigate why surveys that ask willingness-to-accept compensation questions so often yield unreliable data and whether respondents would find alternate modes of compensation (specifically, public goods) more acceptab...
Reliable Radiation Hybrid Maps: An Efficient Scalable Clustering-based Approach
USDA-ARS?s Scientific Manuscript database
The process of mapping markers from radiation hybrid mapping (RHM) experiments is equivalent to the traveling salesman problem and, thereby, has combinatorial complexity. As an additional problem, experiments typically result in some unreliable markers that reduce the overall quality of the map. We ...
A pragmatic decision model for inventory management with heterogeneous suppliers
NASA Astrophysics Data System (ADS)
Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa
2018-05-01
For enterprises, it is imperative that the trade-off between the cost of inventory and risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler has partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost for the likely scenario in which the lead time of the unreliable suppliers extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of an inventory system.
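A brute-force toy version of the underlying trade-off (not the paper's customized integer program): split an order between a reliable, expensive supplier and a cheaper, unreliable one whose delivery may slip past the scheduling period, and pick the split with the lowest expected cost. All costs and probabilities are assumptions.

```python
import itertools

def expected_cost(q_reliable, q_unreliable, demand=100,
                  price_r=12.0, price_u=9.0,
                  p_late=0.35, shortage_cost=20.0, holding_cost=1.0):
    """Expected one-period cost when the unreliable supplier's order arrives
    after the scheduling period with probability p_late."""
    purchase = price_r * q_reliable + price_u * q_unreliable
    cost = 0.0
    for late, prob in ((True, p_late), (False, 1.0 - p_late)):
        arrived = q_reliable + (0 if late else q_unreliable)
        shortage = max(demand - arrived, 0)
        excess = max(arrived - demand, 0)
        cost += prob * (shortage * shortage_cost + excess * holding_cost)
    return purchase + cost

best = min(((expected_cost(qr, qu), qr, qu)
            for qr, qu in itertools.product(range(0, 151, 5), repeat=2)),
           key=lambda t: t[0])
print(f"min expected cost {best[0]:.0f} with reliable={best[1]}, unreliable={best[2]}")
```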
Foetal Alcohol Spectrum Disorders: A consideration of sentencing and unreliable confessions.
Douglas, Heather
2015-12-01
While Foetal Alcohol Spectrum Disorders (FASDs) are now a strong focus of policy-makers throughout Australia, they have received strikingly little consideration in Australian criminal courts. Many people who have an FASD are highly suggestible, have difficulty linking their actions to consequences, controlling impulses and remembering things, and thus FASD raises particular issues for appropriate sentencing and the admissibility of evidence. This article considers the approach of Australian criminal courts to FASD. It reviews the recent case of AH v Western Australia which exemplifies the difficulties associated with appropriate sentencing in cases where the accused is likely to have an FASD. The article also considers the implications for Australian courts of the New Zealand case of Pora v The Queen, recently heard by the Privy Council. In this case, the Privy Council accepted expert evidence that people with FASD may confabulate evidence, potentially making their testimony unreliable. The article concludes with an overview of developments in criminal policy and legal response in relation to FASD in the United States, Canada and Australia.
Multi stage unreliable retrial Queueing system with Bernoulli vacation
NASA Astrophysics Data System (ADS)
Radha, J.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
In this work we considered Bernoulli vacations in group-arrival retrial queues with an unreliable server. Here, the server provides service in k stages. If an arriving group of units finds the server free, one unit from the group enters the first stage of service and the rest join the orbit. After completion of the i th (i = 1,2,…,k) stage of service, the customer may go to the (i+1)th stage with probability θi, or leave the system with probability qi = 1 - θi, (i = 1,2,…,k - 1) and qi = 1, (i = k). After finishing a service, the server may take a vacation (whether the orbit is empty or not) with probability v, or continue serving with probability 1-v. After finishing the vacation, the server searches for a customer in the orbit with probability θ or remains idle awaiting a new arrival with probability 1-θ. We analyzed the system using the supplementary variable technique.
The (un)reliability of item-level semantic priming effects.
Heyman, Tom; Bruninx, Anke; Hutchison, Keith A; Storms, Gert
2018-04-05
Many researchers have tried to predict semantic priming effects using a myriad of variables (e.g., prime-target associative strength or co-occurrence frequency). The idea is that relatedness varies across prime-target pairs, which should be reflected in the size of the priming effect (e.g., cat should prime dog more than animal does). However, it is only insightful to predict item-level priming effects if they can be measured reliably. Thus, in the present study we examined the split-half and test-retest reliabilities of item-level priming effects under conditions that should discourage the use of strategies. The resulting priming effects proved extremely unreliable, and reanalyses of three published priming datasets revealed similar cases of low reliability. These results imply that previous attempts to predict semantic priming were unlikely to be successful. However, one study with an unusually large sample size yielded more favorable reliability estimates, suggesting that big data, in terms of items and participants, should be the future for semantic priming research.
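A sketch of the split-half computation described above: randomly split each item's trials, compute item-level priming effects in both halves, correlate them, and apply the Spearman-Brown correction. The synthetic data are assumptions chosen only to show how noisy item-level effects can be.

```python
import numpy as np

def split_half_reliability(rt_related, rt_unrelated, n_splits=200, seed=0):
    """rt_related / rt_unrelated: lists (one entry per item) of arrays of
    per-trial RTs. Returns the mean Spearman-Brown-corrected split-half
    correlation of item-level priming effects (unrelated minus related RT)."""
    rng = np.random.default_rng(seed)
    corrs = []
    for _ in range(n_splits):
        eff_a, eff_b = [], []
        for rel, unrel in zip(rt_related, rt_unrelated):
            ra, ua = rng.permutation(rel), rng.permutation(unrel)
            eff_a.append(ua[: len(ua) // 2].mean() - ra[: len(ra) // 2].mean())
            eff_b.append(ua[len(ua) // 2:].mean() - ra[len(ra) // 2:].mean())
        r = np.corrcoef(eff_a, eff_b)[0, 1]
        corrs.append(2 * r / (1 + r))                 # Spearman-Brown correction
    return float(np.mean(corrs))

# synthetic example: 40 items, 30 trials per condition, weak true item effects
rng = np.random.default_rng(1)
true_effect = rng.normal(20, 5, 40)
related = [rng.normal(600 - e, 80, 30) for e in true_effect]
unrelated = [rng.normal(600, 80, 30) for _ in true_effect]
print(round(split_half_reliability(related, unrelated), 2))   # low reliability expected
```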
Unreliable evoked responses in autism
Dinstein, Ilan; Heeger, David J.; Lorenzi, Lauren; Minshew, Nancy J.; Malach, Rafael; Behrmann, Marlene
2012-01-01
Autism has been described as a disorder of general neural processing, but the particular processing characteristics that might be abnormal in autism have mostly remained obscure. Here, we present evidence of one such characteristic: poor evoked response reliability. We compared cortical response amplitude and reliability (consistency across trials) in visual, auditory, and somatosensory cortices of high-functioning individuals with autism and controls. Mean response amplitudes were statistically indistinguishable across groups, yet trial-by-trial response reliability was significantly weaker in autism, yielding smaller signal-to-noise ratios in all sensory systems. Response reliability differences were evident only in evoked cortical responses and not in ongoing resting-state activity. These findings reveal that abnormally unreliable cortical responses, even to elementary non-social sensory stimuli, may represent a fundamental physiological alteration of neural processing in autism. The results motivate a critical expansion of autism research to determine whether (and how) basic neural processing properties such as reliability, plasticity, and adaptation/habituation are altered in autism. PMID:22998867
Multifuel industrial steam generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesko, J.E.
An inefficient, unreliable steam generation and distribution system at the Red River Army Depot (Texarkana, Tex.), a major industrial facility of the federal government, was replaced with a modern, multifuel-burning steam plant. In the new plant, steam is generated by three high-pressure field-erected boilers burning 100 percent coal, 100 percent refuse, or any combination of the two, while maintaining particulate emissions, SO{sub 2} concentration, and NO{sub x} and chlorine levels at or better than clean air standards. The plant, which has been in operation since 1986, is now part of the Army's Energy/Environment Showcase for demonstrating innovative technology to public and private operators. When the project began, the Red River depot faced several operational problems. Existing No. 2 oil- and gas- fired boilers in three separate boiler plants were inefficient, unreliable, and difficult to maintain. Extra boilers often had to be leased to provide for needed capacity. In addition, the facility had large quantities of waste to dispose of.
Unreliability of classic provocative tests for the diagnosis of growth hormone deficiency.
Mazzola, A; Meazza, C; Travaglino, P; Pagani, S; Frattini, D; Bozzola, E; Corneli, G; Aimaretti, G; Bozzola, M
2008-02-01
In this study we investigated 9 prepubertal children with blunted GH response to classic pharmacological stimuli in contrast with normal auxological evaluation. The children were followed to evaluate their growth velocity for a longer period before starting replacement GH therapy. To evaluate the pituitary reserve a supraphysiologic stimulus such as GHRH plus arginine was used. Serum GH levels were measured by a time-resolved immunofluorimetric assay before and after 1 microg/kg body weight iv injection of GHRH, while serum PRL, IGF-I, and insulin were evaluated only in basal conditions using an automatic immunometric assay. Out of 9 studied subjects, 7 underwent GHRH plus arginine administration and showed a normal GH response; the parents of the remaining 2 children refused the test. Normal serum levels of PRL, IGF-I, insulin, and a normal insulin sensitivity were observed in all children. After 1 yr, the growth rate in each patient was further improved and reached almost normal values. Our results further confirm that the decision to start replacement GH therapy should be based on both auxological parameters and laboratory findings. The GHRH plus arginine test appears to be useful to identify false GH deficiency in children showing a blunted GH response to classic stimuli in contrast with normal growth rate.
Gardiner, Riana Zanarivero; Doran, Erik; Strickland, Kasha; Carpenter-Bundhoo, Luke; Frère, Celine
2014-01-01
Ectothermic vertebrates face many challenges of thermoregulation. Many species rely on behavioral thermoregulation and move within their landscape to maintain homeostasis. Understanding the fine-scale nature of this regulation through tracking techniques can provide a better understanding of the relationships between such species and their dynamic environments. The use of animal tracking and telemetry technology has allowed the extensive collection of such data, which has enabled us to better understand the ways animals move within their landscape. However, such technologies do not come without certain costs: they are generally invasive, relatively expensive, can be too heavy for small-sized animals, and unreliable in certain habitats. This study provides a cost-effective and non-invasive method, based on photo-identification, to determine fine-scale movements of individuals. With our methodology, we have been able to find that male eastern water dragons (Intellagama lesueurii) have home ranges one and a half times larger than those of females. Furthermore, we found intraspecific differences in the size of home ranges depending on the time of the day. Lastly, we found that location mostly influenced females' home ranges, but not males', and we discuss why this may be so. Overall, we provide valuable information regarding the ecology of the eastern water dragon, but most importantly demonstrate that non-invasive photo-identification can be successfully applied to the study of reptiles.
Mechanisms for Robust Cognition
ERIC Educational Resources Information Center
Walsh, Matthew M.; Gluck, Kevin A.
2015-01-01
To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…
Excess biomass accumulation and activity loss in vapor-phase bioreactors (VPBs) can lead to unreliable long-term operation. In this study, temporal and spatial variations in biomass accumulation, distribution and activity in VPBs treating toluene-contaminated air were monitored o...
POSTERIOR PREDICTIVE MODEL CHECKS FOR DISEASE MAPPING MODELS. (R827257)
Disease incidence or disease mortality rates for small areas are often displayed on maps. Maps of raw rates, disease counts divided by the total population at risk, have been criticized as unreliable due to non-constant variance associated with heterogeneity in base population si...
Comparison of Maxilla Mandibular Transverse Ratios With Class II Anteroposterior Discrepancies
2014-03-20
the structure points has been shown to be at best unreliable (Jacobson 1995). "2D landmarks may be hindered by rotational, geometric, and head positioning...deficiency in Class II and Class III malocclusions: a cephalometric and morphometric study on postero-anterior films. Orthodontics & Craniofacial
Specification and Verification of Communication Protocols in AFFIRM Using State Transition Models.
1981-03-01
NewQueueOfPacket; theorem PendingInvariant, Remove(Pending(s)) = NewQueueOfPacket; Since the implementation is in keeping with the specification, its salp ...another communication line. The communication lines are unreliable; messages traveling in either direction can be lost, reordered, corrupted, or
Divorce: An Unreliable Predictor of Children's Emotional Predispositions.
ERIC Educational Resources Information Center
Bernard, Janine M.; Nesbitt, Sally
1981-01-01
Used the Children's Emotion Projection Instrument to investigate the emotional predispositions of children from divorce or disruption and children from intact families. Results indicated that children of divorce or disruption are not more hampered emotionally than children from intact families. Discusses implications for family therapists.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banic, A.; Kouris, K.; Lewis, D.H.
1990-10-01
The aim of the study was experimentally to evaluate the capability and reliability of laser Doppler flowmetry (LDF) in conditions of circulatory deficiency, by correlating it to flow-related parameters measured by a radionuclide-imaging technique and using 99m-Tc red blood cells (RBCs). For this purpose, a pedicle island flap in the sheep was used, with well-perfused proximal parts and with evident stasis in the distal third of the flap. No correlation was found between results obtained with the two techniques. In regions with evident stasis, falsely high LDF readings were recorded. This may be due to a back-and-forth motion of the RBCs under the probe, rather than to true flow. It was concluded that, while LDF seems reliable in detecting complete arterial occlusion, it is unreliable in predicting either complete venous occlusion or partial obstruction of the flow to and from the flap. Clinical use for this purpose cannot be recommended.
Breakwell, Lucy; Anga, Jenniffer; Dadari, Ibrahim; Sadr-Azodi, Nahad; Ogaoga, Divinal; Patel, Minal
2017-05-15
Monovalent Hepatitis B vaccine (HepB) is heat stable, making it suitable for storage outside cold chain (OCC) at 37°C for 1 month. We conducted an OCC project in the Solomon Islands to determine the feasibility of and barriers to national implementation and to evaluate impact on coverage. Healthcare workers at 13 facilities maintained monovalent HepB birth dose (HepB-BD) OCC for up to 28 days over 7 months. Vaccination data were recorded for children born during the project and those born during the 7 months before the project. Timely HepB-BD coverage among facility and home births increased from 30% to 68% and from 4% to 24%, respectively. Temperature excursions above 37°C were rare, but vaccine wastage was high and shortages common. Storing HepB OCC can increase HepB-BD coverage in countries with insufficient cold chain capacity or numerous home births. High vaccine wastage and unreliable vaccine supply must be addressed for successful implementation. Published by Elsevier Ltd.
Jamal, Wafaa; Saleem, Rola; Rotimi, Vincent O
2013-08-01
The use of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identification of microorganisms directly from blood culture offers an exciting new dimension for microbiologists. We evaluated the performance of the Bruker SepsiTyper kit™ (STK) for direct identification of bacteria from positive blood cultures. This was done in parallel with conventional methods. Nonrepetitive positive blood cultures from 160 consecutive patients were prospectively evaluated by both methods. Of 160 positive blood cultures, the STK identified 114 (75.6%) isolates and the routine conventional method 150 (93%). Thirty-six isolates were misidentified or not identified by the kit. Of these, 5 had a score of >2.000 and 31 had an unreliable low score of <1.7. Four of 8 yeasts were identified correctly. The average turnaround time was 35 min with the STK, including extraction steps, and 30:12 to 36:12 h with the routine method. The STK holds promise for timely management of bacteremic patients. Copyright © 2013 Elsevier Inc. All rights reserved.
AN EVALUATION OF INFRARED THERMOGRAPHY FOR DETECTION OF BUMBLEFOOT (PODODERMATITIS) IN PENGUINS.
Duncan, Ann E; Torgerson-White, Lauri L; Allard, Stephanie M; Schneider, Tom
2016-06-01
The objective of this study was to evaluate infrared thermography as a noninvasive screening tool for detection of pododermatitis during the developing and active stages of disease in three species of penguins: king penguin (Aptenodytes patagonicus), macaroni penguin (Eudyptes chrysolophus), and rockhopper penguin (Eudyptes chrysocome). In total, 67 penguins were examined every 3 mo over a 15-mo period. At each exam, bumblefoot lesions were characterized and measured, and a timed series of thermal images was collected over a 4-min period. Three different methods were compared for analysis of thermograms. Feet with active lesions that compromised the surface of the foot were compared to feet with inactive lesions or no lesions. The hypothesis was that feet with active lesions would have warmer surface temperatures than feet in the other conditions. Analysis of the data showed that although feet with active bumblefoot lesions are warmer than feet with inactive or no lesions, the variability seen in each individual penguin from one exam day to the next and the overlap seen between temperatures from each condition made thermal imaging an unreliable tool for detection of bumblefoot in the species studied.
Serum osmolality and effects of water deprivation in captive Asian elephants (Elephas maximus).
Hall, Natalie H; Isaza, Ramiro; Hall, James S; Wiedner, Ellen; Conrad, Bettina L; Wamsley, Heather L
2012-07-01
Serum from 21 healthy, captive Asian elephants (Elephas maximus) was evaluated by measured and calculated osmolality. Serum osmolality results for this population of Asian elephants had a median of 261 mOsm/kg and an interquartile interval of 258-269 mOsm/kg when measured by freezing point osmometry and a median of 264 mOsm/kg and an interquartile interval of 257-269 mOsm/kg when measured by vapor pressure osmometry. These values are significantly lower than values reported in other mammalian species and have important diagnostic and therapeutic implications. Calculated osmolality produced unreliable results and needs further study to determine an appropriate formula and its clinical application in this species. A 16-hr water deprivation test in 16 Asian elephants induced a small, subclinical, but statistically significant increase in measured serum osmolality. Serum osmolality, blood urea nitrogen, and total protein by refractometer were sensitive indicators of hydration status. Serum osmolality measurement by freezing point or vapor pressure osmometry is a useful adjunct to routine clinical tests in the diagnostic evaluation of elephants.
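For context, a commonly used calculated-osmolality formula in clinical chemistry is the following standard textbook form (the abstract does not state which formula the authors evaluated, so this is given only for orientation):

    Osm_calc ≈ 2[Na⁺] + [glucose]/18 + [BUN]/2.8   (result in mOsm/kg; glucose and BUN in mg/dL, Na⁺ in mEq/L)

The gap between such a calculated value and osmolality measured by freezing point or vapor pressure osmometry (the osmolal gap) is what makes an unreliable formula clinically consequential.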
On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal
NASA Astrophysics Data System (ADS)
Fortunelli, Alessandro; Painelli, Anna
1997-05-01
A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.
Web Page Content and Quality Assessed for Shoulder Replacement.
Matthews, John R; Harrison, Caitlyn M; Hughes, Travis M; Dezfuli, Bobby; Sheppard, Joseph
2016-01-01
The Internet has become a major source for obtaining health-related information. This study assesses and compares the quality of information available online for shoulder replacement using medical (total shoulder arthroplasty [TSA]) and nontechnical (shoulder replacement [SR]) terminology. Three evaluators reviewed 90 websites for each search term across 3 search engines (Google, Yahoo, and Bing). Websites were grouped into categories, identified as commercial or noncommercial, and evaluated with the DISCERN questionnaire. Total shoulder arthroplasty provided 53 unique sites compared to 38 websites for SR. Of the 53 TSA websites, 30% were health professional-oriented websites versus 18% of SR websites. Shoulder replacement websites provided more patient-oriented information at 48%, versus 45% of TSA websites. In total, SR websites provided 47% (42/90) noncommercial websites, with the highest number seen in Yahoo, compared with TSA at 37% (33/90), with Google providing 13 of the 33 websites (39%). Using the nonmedical terminology with Yahoo's search engine returned the most noncommercial and patient-oriented websites. However, the quality of information found online was highly variable, with most websites being unreliable and incomplete, regardless of search term.
Ending Conflicts and Vandalism in Knowledge Collaboration of Social Media
ERIC Educational Resources Information Center
Zhao, Haifeng
2013-01-01
Social media provide a multitude of opportunities for knowledge contribution and sharing. However, the content reliability issue has caused comprehensive attention, especially on credible social media, such as Wikipedia. Despite Wikipedia's success with the open editing model, dissenting voices give rise to unreliable content due to two…
Forensic Analysis of Cites-Protected Dalbergia Timber from the Americas
Edgard O. Espinoza; Michael C. Wiemann; Josefina Barajas-Morales; Gabriela D. Chavarria; Pamela J. McClure
2015-01-01
Species identification of logs, planks, and veneers is difficult because they lack the traditional descriptors such as leaves and flowers. An additional challenge is that many transnational shipments have unreliable geographic provenance. Therefore, frequently the lowest taxonomic determination is genus, which allows unscrupulous importers to evade the endangered...
Unreliable Retrial Queues in a Random Environment
2007-09-01
equivalent to the stochasticity of the matrix Ĝ. It is generally known from Perron-Frobenius theory that a given square matrix M is stochastic if and only if its maximum positive eigenvalue (i.e., its Perron eigenvalue) sp(M) is equal to unity. A simple analytical condition that guarantees the…
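As an illustrative aside (not taken from the report), the Perron-eigenvalue test described above is easy to check numerically: for an irreducible nonnegative matrix, the spectral radius equals 1 exactly when the matrix is stochastic and falls below 1 when it is strictly substochastic. A minimal sketch, assuming NumPy is available:

    import numpy as np

    def perron_eigenvalue(M):
        # Spectral radius of a nonnegative square matrix (its Perron eigenvalue).
        return max(abs(np.linalg.eigvals(np.asarray(M, dtype=float))))

    P = np.array([[0.9, 0.1], [0.4, 0.6]])    # stochastic: every row sums to 1
    Q = np.array([[0.9, 0.05], [0.4, 0.5]])   # strictly substochastic: row sums < 1
    print(perron_eigenvalue(P))               # ~1.0
    print(perron_eigenvalue(Q))               # < 1.0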
Temperature and humidity control in indirect calorimeter chambers
USDA-ARS?s Scientific Manuscript database
A three-chamber, indirect calorimeter has been a part of the Environmental Laboratory at the U.S. Meat Animal Research Center (MARC) for over 25 yr. Corrosion of the animal chambers and unreliable temperature control forced either major repairs or complete replacement. There is a strong demand for...
A Critique of Divorce Statistics and Their Interpretation.
ERIC Educational Resources Information Center
Crosby, John F.
1980-01-01
Increasingly, appeals to divorce statistics are employed to substantiate claims that the family is in a state of breakdown and marriage is passé. This article contains a consideration of reasons why the divorce statistics are invalid and/or unreliable as indicators of the present state of marriage and family. (Author)
ERIC Educational Resources Information Center
Graney, Christopher M.
2010-01-01
Is the phenomenon of magnification by a converging lens inconsistent and therefore unreliable? Can a lens magnify one part of an object but not another? Physics teachers and even students familiar with basic optics would answer "no," yet many answer "yes." Numerous telescope users believe that magnification is not a reliable phenomenon in that it…
Perceived Credibility and Eyewitness Testimony of Children with Intellectual Disabilities
ERIC Educational Resources Information Center
Henry, L.; Ridley, A.; Perry, J.; Crane, L.
2011-01-01
Background: Although children with intellectual disabilities (ID) often provide accurate witness testimony, jurors tend to perceive their witness statements to be inherently unreliable. Method: The current study explored the free recall transcripts of child witnesses with ID who had watched a video clip, relative to those of typically developing…
ERIC Educational Resources Information Center
Anthony, Michael A.; Caleb, Derry; Mitchell, Stanley G.
2012-01-01
When standards are absent, people soon notice. They care when products turn out to be of poor quality, are unreliable, or dangerous because of counterfeiting. By positioning their products in relation to a common standard, firms grow the total size of the market, and can focus their innovation efforts in areas where they have a comparative…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Design. 27.601 Section 27.601 Aeronautics... STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction General § 27.601 Design. (a) The rotorcraft may have no design features or details that experience has shown to be hazardous or unreliable. (b...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Design. 27.601 Section 27.601 Aeronautics... STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction General § 27.601 Design. (a) The rotorcraft may have no design features or details that experience has shown to be hazardous or unreliable. (b...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Design. 29.601 Section 29.601 Aeronautics... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction General § 29.601 Design. (a) The rotorcraft may have no design features or details that experience has shown to be hazardous or unreliable. (b...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Design. 29.601 Section 29.601 Aeronautics... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction General § 29.601 Design. (a) The rotorcraft may have no design features or details that experience has shown to be hazardous or unreliable. (b...
The complete project will greatly increase the sustainability of small gasoline and/or diesel powered generators that are currently used to supplement or replace an unreliable power grid. This phase will develop the feedstock processing equipment needed to produce syngas bio-...
The Role of Science in Behavioral Disorders.
ERIC Educational Resources Information Center
Kauffman, James M.
1999-01-01
A scientific, rule-governed approach to solving problems suggests the following assumptions: we need different rules for different purposes; rules are grounded in values; the origins and applications of rules are often misunderstood; personal experience and idea popularity are unreliable; and all truths are tentative. Each assumption is related to…
Inferential Procedures for Correlation Coefficients Corrected for Attenuation.
ERIC Educational Resources Information Center
Hakstian, A. Ralph; And Others
1988-01-01
A model and computation procedure based on classical test score theory are presented for determination of a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
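For reference, the classical correction for attenuation referred to in this entry has the standard form

    ρ̂_XY = r_xy / sqrt(r_xx · r_yy)

where r_xy is the observed correlation and r_xx, r_yy are the reliabilities of the two measures (the notation here is the usual textbook one, not necessarily the authors').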
Library Buildings 2009: The Constant Library
ERIC Educational Resources Information Center
Fox, Bette-Lee
2009-01-01
Can it be only two years, as Alan Jay Lerner once wrote, "since the whole [economic] rigmarole began"? Yet libraries have weathered to varying degrees the unreliability of funding, especially with regard to programming, materials, and hours. Money earmarked years ago is seeing construction through to conclusion; state support has helped out in…
Future Development of Instructional Television.
ERIC Educational Resources Information Center
Barnett, H. J.; Denzau, A. T.
Instructional television (ITV) has been little used in the nation's schools because ITV hardware and software have been unreliable and expensive, and teachers have yet to learn to use ITV. The perfection of inexpensive videotape recorders/players (VTR) and inexpensive tapes and cameras could remedy the problem. A package consisting of 10 mobile…
COMPARISON OF GESTATIONAL AGE AT DELIVERY BASED ON LAST MENSTRUAL PERIOD AND EARLY ULTRASOUND
Reported date of last menstrual period (LMP) is commonly used to estimate gestational age but may be unreliable if recall is inaccurate or time between menstruation and ovulation differs from the presumed 15-day interval. Early ultrasound is generally a more accurate method than ...
Enhancing the Internet of Things Architecture with Flow Semantics
ERIC Educational Resources Information Center
DeSerranno, Allen Ronald
2017-01-01
Internet of Things ("IoT") systems are complex, asynchronous solutions often comprised of various software and hardware components developed in isolation of each other. These components function with different degrees of reliability and performance over an inherently unreliable network, the Internet. Many IoT systems are developed within…
Canonical failure modes of real-time control systems: insights from cognitive theory
NASA Astrophysics Data System (ADS)
Wallace, Rodrick
2016-04-01
Newly developed necessary-conditions statistical models from cognitive theory are applied to a generalisation of the data-rate theorem for real-time control systems. Rather than degrading gracefully under stress, automatons and man/machine cockpits appear prone to characteristic sudden failure under demanding fog-of-war conditions. Critical dysfunctions span a spectrum of phase-transition analogues, ranging from a ground state of 'all targets are enemies' to more standard data-rate instabilities. Insidious pathologies also appear possible, akin to inattentional blindness consequent on overfocus on an expected pattern. Via no-free-lunch constraints, different equivalence classes of systems, having structure and function determined by 'market pressures' in a large sense, will be inherently unreliable under different but characteristic canonical stress landscapes, suggesting that deliberate induction of failure may often be relatively straightforward. Focusing on two recent military case histories, these results provide a caveat emptor against blind faith in the current path-dependent evolutionary trajectory of automation for critical real-time processes.
Bioinspired magnetic reception and multimodal sensing.
Taylor, Brian K
2017-08-01
Several animals use Earth's magnetic field in concert with other sensor modes to accomplish navigational tasks ranging from local homing to continental scale migration. However, despite extensive research, animal magnetic reception remains poorly understood. Similarly, the Earth's magnetic field offers a signal that engineered systems can leverage to navigate in environments where man-made positioning systems such as GPS are either unavailable or unreliable. This work uses a behavioral strategy inspired by the migratory behavior of sea turtles to locate a magnetic goal and respond to wind when it is present. Sensing is performed using a number of distributed sensors. Based on existing theoretical biology considerations, data processing is performed using combinations of circles and ellipses to exploit the distributed sensing paradigm. Agent-based simulation results indicate that this approach is capable of using two separate magnetic properties to locate a goal from a variety of initial conditions in both noiseless and noisy sensory environments. The system's ability to locate the goal appears robust to noise at the cost of overall path length.
Survey of Ultra-wideband Radar
NASA Astrophysics Data System (ADS)
Mokole, Eric L.; Hansen, Pete
The development of UWB radar over the last four decades is very briefly summarized. A discussion of the meaning of UWB is followed by a short history of UWB radar developments and discussions of key supporting technologies and current UWB radars. Selected UWB radars and the associated applications are highlighted. Applications include detecting and imaging buried mines, detecting and mapping underground utilities, detecting and imaging objects obscured by foliage, through-wall detection in urban areas, short-range detection of suicide bombs, and the characterization of the impulse responses of various artificial and naturally occurring scattering objects. In particular, the Naval Research Laboratory's experimental, low-power, dual-polarized, short-pulse, ultra-high resolution radar is used to discuss applications and issues of UWB radar. Some crucial issues that are problematic to UWB radar are spectral availability, electromagnetic interference and compatibility, difficulties with waveform control/shaping, hardware limitations in the transmission chain, and the unreliability of high-power sources for sustained use above 2 GHz.
NASA Astrophysics Data System (ADS)
Roussel, Sabine; Huchette, Sylvain; Clavier, Jacques; Chauvaud, Laurent
2011-02-01
The ormer, Haliotis tuberculata is the only European abalone species commercially exploited. The determination of growth and age in the wild is an important tool for fisheries and aquaculture management. However, the ageing technique used in the past in the field is unreliable. The stable oxygen isotope composition (¹⁸O/¹⁶O) of the shell depends on the temperature and oxygen isotope composition of the ambient sea water. The stable oxygen isotope technique, developed to study paleoclimatological changes in shellfish, was applied to three H. tuberculata specimens collected in north-west Brittany. For the specimens collected, the oxygen isotope ratios of the shell reflected the seasonal cycle in the temperature. From winter-to-winter cycles, estimates of the age and the annual growth increment, ranging from 13 to 55 mm per year were obtained. This study shows that stable oxygen isotopes can be a reliable tool for ageing and growth studies of this abalone species in the wild, and for validating other estimates.
Lewis, Nehama; Gray, Stacy W.; Freres, Derek R.; Hornik, Robert C.
2010-01-01
Patients may bring unreliable information to the physician, complicating the physician–patient relationship, or outside information seeking may complement physician information provision, reinforcing patients’ responsibility for their health. The current descriptive evidence base is weak and focuses primarily on the Internet's effects on physician–patient relations. This study describes how cancer patients bring information to their physicians from a range of sources and are referred by physicians to these sources; the study also examines explanations for these behaviors. Patients with breast, prostate, and colon cancer diagnosed in 2005 (N = 1,594) were randomly drawn from the Pennsylvania Cancer Registry; participants returned mail surveys in Fall 2006 (response rate = 64%). There is evidence that both bringing information to physicians and being referred to other sources reflects patients’ engagement with health information, preference for control in medical decision making, and seeking and scanning for cancer-related information. There is also evidence that patients who bring information from a source are referred back to that source. PMID:20183381
Prevalence of hepatitis A virus in bivalve molluscs sold in Granada (Spain) fish markets.
Moreno Roldán, Elena; Espigares Rodríguez, Elena; Espigares García, Miguel; Fernández-Crehuet Navajas, Milagros
2013-06-01
Viruses are the leading cause of foodborne illness associated with the consumption of raw or slightly cooked contaminated shellfish. The aim of this study was to evaluate the prevalence of hepatitis A virus in molluscs. Standard and real-time reverse transcription-polymerase chain reaction procedures were used to monitor bivalve molluscs from the Granada fish markets (southern Spain) for this human enteric virus. Between February 2009 and October 2010, we collected a total of 329 samples of different types of bivalve molluscs (mussels, smooth clams, striped venus, and grooved clams). The results showed the presence of hepatitis A virus in 8.5% of the 329 samples analyzed. We can therefore confirm that conventional fecal indicators are unreliable for demonstrating the presence or absence of viruses. The presence of hepatitis A virus in molluscs destined for human consumption is a potential health risk in southern Spain.
Carreiro, Stephanie; Chai, Peter R; Carey, Jennifer; Chapman, Brittany; Boyer, Edward W
2017-06-01
Rapid proliferation of mobile technologies in social and healthcare spaces creates an opportunity for advancement in research and clinical practice. The application of mobile, personalized technology in healthcare, referred to as mHealth, has not yet become routine in toxicology. However, key features of our practice environment, such as the frequent need for remote evaluation, unreliable historical data from patients, and sensitive subject matter, make mHealth tools appealing solutions in comparison to traditional methods that collect retrospective or indirect data. This manuscript describes the features, uses, and costs associated with several common sectors of mHealth research, including wearable biosensors, ingestible biosensors, head-mounted devices, and social media applications. The benefits and novel challenges associated with the study and use of these applications are then discussed. Finally, opportunities for further research and integration are explored with a particular focus on toxicology-based applications.
Queer(ed) risks: life insurance, HIV/AIDS, and the "gay question".
Cobb, Neil
2010-01-01
In 2004 the Association of British Insurers (ABI) issued its second Statement of Best Practice on HIV and Insurance. This prohibited use of the "gay question" (employed by some underwriters in application forms for life insurance to identify heightened risk of infection with HIV), in response to growing criticism that the practice was actuarially unreliable, unfair to gay men, and unnecessary, given the availability of alternative "behaviour-based" risk criteria. While the overhaul of this controversial practice is clearly a victory for gay (male) identity politics, this paper argues that the interests of gay men seem to have dominated at the expense of a more far-reaching critique of the industry's evaluation of infection risk. It contends that a more radical (or "queerer") challenge is needed which can better understand and address the injustices created by criteria for appraising risk of infection that still remain in place.
A stochastic inventory management model for a dual sourcing supply chain with disruptions
NASA Astrophysics Data System (ADS)
Iakovou, Eleftherios; Vlachos, Dimitrios; Xanthopoulos, Anastasios
2010-03-01
As companies continue to globalise their operations and outsource a significant portion of their value-chain activities, they often end up relying heavily on order replenishments from distant suppliers. The explosion in long-distance sourcing is exposing supply chains and shareholder value to ever-increasing operational and disruption risks. It is well established, both in academia and in real-world business environments, that resource flexibility is an effective method for hedging against supply chain disruption risks. In this contextual framework, we propose a single-period stochastic inventory decision-making model that can be employed to capture the trade-off between inventory policies and disruption risks for an unreliable dual-sourcing supply network, for both the capacitated and uncapacitated cases. Through the developed model, we obtain some important managerial insights and evaluate the merit of contingency strategies in managing uncertain supply chains.
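A minimal single-period (newsvendor-style) Monte Carlo sketch of the kind of trade-off described, with a cheap but unreliable primary supplier and a dearer, reliable backup, is given below; the model structure, parameter names, and numbers are illustrative assumptions, not the authors' formulation.

    import random

    def expected_profit(q_primary, q_backup, n_sims=20000, seed=1):
        # Single-period dual-sourcing profit. The primary supplier delivers nothing
        # with probability p_disrupt; the backup always delivers. Units are assumed
        # to be paid for only when delivered.
        rng = random.Random(seed)
        price, c_primary, c_backup, salvage = 10.0, 4.0, 6.0, 1.0
        p_disrupt = 0.2
        total = 0.0
        for _ in range(n_sims):
            demand = max(rng.gauss(100, 20), 0.0)
            delivered_primary = 0.0 if rng.random() < p_disrupt else q_primary
            delivered = delivered_primary + q_backup
            sales = min(delivered, demand)
            leftover = delivered - sales
            cost = c_primary * delivered_primary + c_backup * q_backup
            total += price * sales + salvage * leftover - cost
        return total / n_sims

    # Compare a primary-only policy with a dual-sourcing policy.
    print(expected_profit(q_primary=110, q_backup=0))
    print(expected_profit(q_primary=90, q_backup=30))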
Learning under uncertainty in smart home environments.
Zhang, Shuai; McClean, Sally; Scotney, Bryan; Nugent, Chris
2008-01-01
Technologies and services for the home environment can provide levels of independence for elderly people to support 'ageing in place'. Learning inhabitants' patterns of carrying out daily activities is a crucial component of these technological solutions with sensor technologies being at the core of such smart environments. Nevertheless, identifying high-level activities from low-level sensor events can be a challenge, as information may be unreliable resulting in incomplete data. Our work addresses the issues of learning in the presence of incomplete data along with the identification and the prediction of inhabitants and their activities under such uncertainty. We show via the evaluation results that our approach also offers the ability to assess the impact of various sensors in the activity recognition process. The benefit of this work is that future predictions can be utilised in a proposed intervention mechanism in a real smart home environment.
NetCoDer: A Retransmission Mechanism for WSNs Based on Cooperative Relays and Network Coding
Valle, Odilson T.; Montez, Carlos; Medeiros de Araujo, Gustavo; Vasques, Francisco; Moraes, Ricardo
2016-01-01
Some of the most difficult problems to deal with when using Wireless Sensor Networks (WSNs) are related to the unreliable nature of communication channels. In this context, the use of cooperative diversity techniques and the application of network coding concepts may be promising solutions to improve the communication reliability. In this paper, we propose the NetCoDer scheme to address this problem. Its design is based on merging cooperative diversity techniques and network coding concepts. We evaluate the effectiveness of the NetCoDer scheme through both an experimental setup with real WSN nodes and a simulation assessment, comparing NetCoDer performance against state-of-the-art TDMA-based (Time Division Multiple Access) retransmission techniques: BlockACK, Master/Slave and Redundant TDMA. The obtained results highlight that the proposed NetCoDer scheme clearly improves the network performance when compared with other retransmission techniques. PMID:27258280
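As a generic illustration of the network-coding idea the scheme builds on (a toy example, not NetCoDer's actual protocol): a relay that XORs two overheard packets lets any node already holding one of them recover the other from a single coded retransmission.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        # Bitwise XOR of two equal-length packets.
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"sensor-A:23.5C"
    p2 = b"sensor-B:47.1%"
    coded = xor_bytes(p1, p2)            # one coded retransmission from the relay

    # A sink that received p1 but lost p2 recovers it from the coded packet:
    assert xor_bytes(coded, p1) == p2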
Bias neglect: a blind spot in the evaluation of scientific results.
Strickland, Brent; Mercier, Hugo
2014-01-01
Experimenter bias occurs when scientists' hypotheses influence their results, even if involuntarily. Meta-analyses have suggested that in some domains, such as psychology, up to a third of the studies could be unreliable due to such biases. A series of experiments demonstrates that while people are aware of the possibility that scientists can be more biased when the conclusions of their experiments fit their initial hypotheses, they robustly fail to appreciate that they should also be more sceptical of such results. This is true even when participants read descriptions of studies that have been shown to be biased. Moreover, participants take other sources of bias-such as financial incentives-into account, showing that this bias neglect may be specific to theory-driven hypothesis testing. In combination with a common style of scientific reporting, bias neglect could lead the public to accept premature conclusions.
Gram staining in the diagnosis of acute septic arthritis.
Faraj, A A; Omonbude, O D; Godwin, P
2002-10-01
This study aimed at determining the sensitivity and specificity of Gram staining of synovial fluid as a diagnostic tool in acute septic arthritis. A retrospective study was made of 22 patients who had arthroscopic lavage following a provisional diagnosis of acute septic arthritis of the knee joint. Gram stains and cultures of the knee aspirates were compared with the clinical and laboratory parameters, to evaluate their usefulness in diagnosing acute arthritis. All patients who had septic arthritis had pain, swelling and limitation of movement. CRP was elevated in 90% of patients. The incidence of elevated white blood cell count was higher in the group of patients with a positive Gram stain study (60%) as compared to patients with a negative Gram stain study (33%). Gram staining sensitivity was 45%. Its specificity was however 100%. Gram staining is an unreliable tool in early decision making in patients requiring urgent surgical drainage and washout.
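For reference, the figures quoted above follow the standard definitions sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP): a sensitivity of 45% means fewer than half of the culture-confirmed infections had a positive Gram stain, while a specificity of 100% means no false-positive stains were observed in this series.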
Evaluating source separation of plastic waste using conjoint analysis.
Nakatani, Jun; Aramaki, Toshiya; Hanaki, Keisuke
2008-11-01
Using conjoint analysis, we estimated the willingness to pay (WTP) of households for source separation of plastic waste and the improvement of related environmental impacts, the residents' loss of life expectancy (LLE), the landfill capacity, and the CO2 emissions. Unreliable respondents were identified and removed from the sample based on their answers to follow-up questions. It was found that the utility associated with reducing LLE and with the landfill capacity were both well expressed by logarithmic functions, but that residents were indifferent to the level of CO2 emissions even though they approved of CO2 reduction. In addition, residents derived utility from the act of separating plastic waste, irrespective of its environmental impacts; that is, they were willing to practice the separation of plastic waste at home in anticipation of its "invisible effects", such as the improvement of citizens' attitudes toward solid waste issues.
NASA Astrophysics Data System (ADS)
Pattanayak, Subhrendu K.; Yang, Jui-Chen; Whittington, Dale; Bal Kumar, K. C.
2005-02-01
This paper investigates two complementary pieces of data on households' demand for improved water services, coping costs and willingness to pay (WTP), from a survey of 1500 randomly sampled households in Kathmandu, Nepal. We evaluate how coping costs and WTP vary across types of water users and income. We find that households in Kathmandu Valley engage in five main types of coping behaviors: collecting, pumping, treating, storing, and purchasing. These activities impose coping costs on an average household of as much as 3 U.S. dollars per month or about 1% of current incomes, representing hidden but real costs of poor infrastructure service. We find that these coping costs are almost twice as much as the current monthly bills paid to the water utility but are significantly lower than estimates of WTP for improved services. We find that coping costs are statistically correlated with WTP and several household characteristics.
Steiner, Markus; Harrer, Andrea; Himly, Martin
2016-01-01
Immediate drug hypersensitivity reactions (DHRs) resemble typical immunoglobulin E (IgE)-mediated symptoms. Clinical manifestations range from local skin reactions, gastrointestinal and/or respiratory symptoms to severe systemic involvement with potential fatal outcome. Depending on the substance group of the eliciting drug the correct diagnosis is a major challenge. Skin testing and in vitro diagnostics are often unreliable and not reproducible. The involvement of drug-specific IgE is questionable in many cases. The culprit substance (parent drug or metabolite) and potential cross-reacting compounds are difficult to identify, patient history and drug provocation testing often remain the only means for diagnosis. Hence, several groups proposed basophil activation test (BAT) for the diagnosis of immediate DHRs as basophils are well-known effector cells in allergic reactions. However, the usefulness of BAT in immediate DHRs is highly variable and dependent on the drug itself plus its capacity to spontaneously conjugate to serum proteins. Stimulation with pure solutions of the parent drug or metabolites thereof vs. drug-protein conjugates may influence sensitivity and specificity of the test. We thus, reviewed the available literature about the use of BAT for diagnosing immediate DHRs against drug classes such as antibiotics, radio contrast media, neuromuscular blocking agents, non-steroidal anti-inflammatory drugs, and biologicals. Influencing factors like the selection of stimulants or of the identification and activation markers, the stimulation protocol, gating strategies, and cut-off definition are addressed in this overview on BAT performance. The overall aim is to evaluate the suitability of BAT as biomarker for the diagnosis of immediate drug-induced hypersensitivity reactions. PMID:27378928
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.
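For orientation, the LRT referred to here compares nested mixed-effects fits through the drop in −2 log-likelihood; in common practice (a standard result, not specific to this paper) a single added covariate parameter is judged against a chi-squared distribution with one degree of freedom:

    Λ = −2 (log L_base − log L_covariate),   Λ ~ χ²(1) under the null of no covariate effect.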
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size--fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size--fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size--fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
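In the Smith-Fretwell framework the authors draw on, if w(s) is offspring fitness as a function of per-offspring investment s and R is the total reproductive budget, parental fitness is F(s) = (R/s)·w(s), and the optimal offspring size s* satisfies the tangency condition w'(s*) = w(s*)/s* (a sketch of the standard condition; the notation is ours, not the authors').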
Carvalho, Vitor Oliveira; Guimarães, Guilherme Veiga; Bocchi, Edimar Alcides
2008-01-01
BACKGROUND The relationship between the percentage of oxygen consumption reserve and percentage of heart rate reserve in heart failure patients either on non-optimized or off beta-blocker therapy is known to be unreliable. The aim of this study was to evaluate the relationship between the percentage of oxygen consumption reserve and percentage of heart rate reserve in heart failure patients receiving optimized and non-optimized beta-blocker treatment during a treadmill cardiopulmonary exercise test. METHODS A total of 27 sedentary heart failure patients (86% male, 50±12 years) on optimized beta-blocker therapy with a left ventricle ejection fraction of 33±8% and 35 sedentary non-optimized heart failure patients (75% male, 47±10 years) with a left ventricle ejection fraction of 30±10% underwent the treadmill cardiopulmonary exercise test (Naughton protocol). Resting and peak effort values of both the percentage of oxygen consumption reserve and percentage of heart rate reserve were, by definition, 0 and 100, respectively. RESULTS The heart rate slope for the non-optimized group was derived from the points 0.949±0.088 (0 intercept) and 1.055±0.128 (1 intercept), p<0.0001. The heart rate slope for the optimized group was derived from the points 1.026±0.108 (0 intercept) and 1.012±0.108 (1 intercept), p=0.47. Regression linear plots for the heart rate slope for each patient in the non-optimized and optimized groups revealed a slope of 0.986 (almost perfect) for the optimized group, but the regression analysis for the non-optimized group was 0.030 (far from perfect, which occurs at 1). CONCLUSION The relationship between the percentage of oxygen consumption reserve and percentage of heart rate reserve in patients on optimized beta-blocker therapy was reliable, but this relationship was unreliable in non-optimized heart failure patients. PMID:19060991
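For reference, the two reserve percentages compared in this abstract are conventionally defined (standard definitions, not quoted from the paper) as %HRR = 100·(HR − HR_rest)/(HR_peak − HR_rest) and %VO2R = 100·(VO2 − VO2_rest)/(VO2_peak − VO2_rest); a reliable relationship corresponds to a regression of one on the other with slope near 1 and intercept near 0, which is what the optimized beta-blocker group approached.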
Effect of a patient training video on visual field test reliability
Sherafat, H; Spry, P G D; Waldock, A; Sparrow, J M; Diamond, J P
2003-01-01
Aims: To evaluate the effect of a visual field test educational video on the reliability of the first automated visual field test of new patients. Methods: A prospective, randomised, controlled trial of an educational video on visual field test reliability of patients referred to the hospital eye service for suspected glaucoma was undertaken. Patients were randomised to either watch an educational video or a control group with no video. The video group was shown a 4.5 minute audiovisual presentation to familiarise them with the various aspects of visual field examination with particular emphasis on sources of unreliability. Reliability was determined using standard criteria of fixation loss rate less than 20%, false positive responses less than 33%, and false negative responses less than 33%. Results: 244 patients were recruited; 112 in the video group and 132 in the control group with no significant between group difference in age, sex, and density of field defects. A significant improvement in reliability (p=0.015) was observed in the group exposed to the video with 85 (75.9%) patients having reliable results compared to 81 (61.4%) in the control group. The difference was not significant for the right (first tested) eye with 93 (83.0%) of the visual fields reliable in the video group compared to 106 (80.0%) in the control group (p = 0.583), but was significant for the left (second tested) eye with 97 (86.6 %) of the video group reliable versus 97 (73.5%) of the control group (p = 0.011). Conclusions: The use of a brief, audiovisual patient information guide on taking the visual field test produced an improvement in patient reliability for individuals tested for the first time. In this trial the use of the video had most of its impact by reducing the number of unreliable fields from the second tested eye. PMID:12543740
Effect of a patient training video on visual field test reliability.
Sherafat, H; Spry, P G D; Waldock, A; Sparrow, J M; Diamond, J P
2003-02-01
To evaluate the effect of a visual field test educational video on the reliability of the first automated visual field test of new patients. A prospective, randomised, controlled trial of an educational video on visual field test reliability of patients referred to the hospital eye service for suspected glaucoma was undertaken. Patients were randomised to either watch an educational video or a control group with no video. The video group was shown a 4.5 minute audiovisual presentation to familiarize them with the various aspects of visual field examination with particular emphasis on sources of unreliability. Reliability was determined using standard criteria of fixation loss rate less than 20%, false positive responses less than 33%, and false negative responses less than 33%. 244 patients were recruited; 112 in the video group and 132 in the control group with no significant between group difference in age, sex, and density of field defects. A significant improvement in reliability (p=0.015) was observed in the group exposed to the video with 85 (75.9%) patients having reliable results compared to 81 (61.4%) in the control group. The difference was not significant for the right (first tested) eye with 93 (83.0%) of the visual fields reliable in the video group compared to 106 (80.0%) in the control group (p = 0.583), but was significant for the left (second tested) eye with 97 (86.6 %) of the video group reliable versus 97 (73.5%) of the control group (p = 0.011). The use of a brief, audiovisual patient information guide on taking the visual field test produced an improvement in patient reliability for individuals tested for the first time. In this trial the use of the video had most of its impact by reducing the number of unreliable fields from the second tested eye.
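The reliability criteria quoted in both versions of this abstract translate directly into a simple check; a minimal sketch (the function and argument names are ours):

    def is_reliable(fixation_loss_pct, false_pos_pct, false_neg_pct):
        # Criteria stated above: fixation losses < 20%, false positives < 33%, false negatives < 33%.
        return (fixation_loss_pct < 20.0
                and false_pos_pct < 33.0
                and false_neg_pct < 33.0)

    print(is_reliable(15.0, 10.0, 5.0))   # True: within all three limits
    print(is_reliable(25.0, 10.0, 5.0))   # False: fixation losses too high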
Peer-review for selection of oral presentations for conferences: Are we reliable?
Deveugele, Myriam; Silverman, Jonathan
2017-11-01
Although peer review for journal submissions, grant applications and conference submissions has been called 'a cornerstone of science', and even 'the gold standard for evaluating scientific merit', publications on this topic remain scarce. Research that has investigated peer review reveals several issues and criticisms concerning bias, poor quality review, unreliability and inefficiency. The most important weakness of the peer review process is the inconsistency between reviewers, leading to inadequate inter-rater reliability. To report the reliability of ratings for a large international conference and to suggest possible solutions to overcome the problem. In 2016, during the International Conference on Communication in Healthcare, organized by EACH: International Association for Communication in Healthcare, a calibration exercise was proposed and feedback was reported back to the participants of the exercise. Most abstracts, as well as most peer-reviewers, receive and give scores around the median. Contrary to the general assumption that there are high and low scorers, in this group only 3 peer-reviewers could be identified with a high mean score, while 7 had a low mean score. Only 2 reviewers gave only high ratings (4 and 5). Of the eight abstracts included in this exercise, only one abstract received a high mean score and one a low mean score. Nevertheless, both these abstracts received both low and high scores; all other abstracts received all possible scores. Peer review of submissions for conferences is, in accordance with the literature, unreliable. New and creative methods will be needed to give the participants of a conference what they really deserve: a more reliable selection of the best abstracts. More raters per abstract improves the inter-rater reliability; training of reviewers could be helpful; providing feedback to reviewers can lead to less inter-rater disagreement; fostering negative peer review (rejecting the inappropriate submissions) rather than positive peer review (accepting the best) could be fruitful for selecting abstracts for conferences. Copyright © 2017 Elsevier B.V. All rights reserved.
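One common way to quantify the inter-rater (in)consistency described here is a one-way random-effects intraclass correlation, ICC(1,1); the sketch below shows that statistic on toy data and is offered only as an illustration, not as the analysis the authors performed.

    import numpy as np

    def icc_1_1(ratings):
        # One-way random-effects ICC(1,1) for a (targets x raters) matrix of scores.
        Y = np.asarray(ratings, dtype=float)
        n, k = Y.shape                       # n rated abstracts, k raters each
        grand = Y.mean()
        row_means = Y.mean(axis=1)
        ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
        ms_within = ((Y - row_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Toy data: 4 abstracts each scored 1-5 by 3 reviewers.
    scores = [[3, 4, 3], [2, 5, 3], [4, 4, 5], [1, 3, 2]]
    print(round(icc_1_1(scores), 2))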
Hänse, Maria; Krautwald-Junghanns, Maria-Elisabeth; Reitemeier, Susanne; Einspanier, Almuth; Schmidt, Volker
2013-12-01
Knowledge of the reproductive cycle of male parrots is important for examining the male genital tract and for successful breeding, especially of endangered species. To evaluate different diagnostic methods and criteria concerning the classification of reproductive stages, we examined 20 testicular samples obtained at necropsy in psittacine birds of different species and testicular biopsy samples collected from 9 cockatiels (Nymphicus hollandicus) and 7 rose-ringed parakeets (Psittacula krameri) by endoscopy 4 times over a 12-month period. The testicular reproductive status was assessed histologically and then compared with the macroscopic appearance of the testicles and cytologic results. The histologic examination was nondiagnostic in 19 of 59 testicular biopsy samples. By contrast, the cytologic preparations were diagnostic in 57 of 59 biopsy samples. The results of the cytologic examination coincided with the histologic results in 34 of 38 biopsy samples and 18 of 20 necropsy samples. Macroscopic parameters displayed some differences between reproductive stages but provided an unreliable indication of the reproductive status. These results suggest that microscopic examination of a testicular biopsy sample is a reliable method for evaluating the reproductive status of male parrots and is preferable to the macroscopic evaluation of the testicle. Cytologic examination provides fast preliminary results, even when the histologic preparation is not sufficient for evaluation, but results may be erroneous. Thus, a combination of histologic and cytologic examination is recommended for evaluating testicular reproductive status.
Slowing down of alpha particles in ICF DT plasmas
NASA Astrophysics Data System (ADS)
He, Bin; Wang, Zhi-Gang; Wang, Jian-Guo
2018-01-01
With the effects of projectile recoil and plasma polarization considered, the slowing down of 3.54 MeV alpha particles is studied in inertial confinement fusion DT plasmas within the plasma density range from 10²⁴ to 10²⁶ cm⁻³ and the temperature range from 100 eV to 200 keV. This includes the rate of energy change and the range of the projectile, and the partition fraction of its energy deposition to the deuteron and triton. A comparison with other models is made and the reason for their differences is explored. It is found that the plasmas will not be heated by the alpha particle during its slowing-down process once the projectile energy becomes close to or less than the temperature of the electron or of the deuteron and triton in the plasmas. This leads to less energy deposition to the deuteron and triton than would be obtained if the recoil of the projectile were neglected, when the temperature is close to or higher than 100 keV. Our model is found to provide relevant, reliable data over the large range of density and temperature mentioned above, even if the density is around 10²⁶ cm⁻³ while the deuteron and triton temperature is below 500 eV. Meanwhile, the two important models [Phys. Rev. 126, 1 (1962) and Phys. Rev. E 86, 016406 (2012)] are found not to work in this case. Some unreliable data are found in the latter model, including the range of alpha particles and the electron-ion energy partition fraction when the electron is much hotter than the deuteron and triton in the plasmas.
USDA-ARS?s Scientific Manuscript database
Catfish propagation has for decades depended on random mating of male and female channel catfish in ponds. It is simple and has been fairly successful in fulfilling the needs of the US farm-raised catfish industry. However, natural pond spawning is unreliable, unpredictable, and incurs 30 t...
USDA-ARS?s Scientific Manuscript database
Volatile fatty acid concentrations ([VFA], mM) have long been used to assess impact of dietary treatments on ruminal fermentation in vivo. However, discrepancies in statistical results between VFA and VFA pool size (VFAmol), possibly related to ruminal digesta liquid amount (LIQ, kg), suggest issues...
How reliable are amphibian population metrics? A response to Kroll et al.
Hartwell H. Welsh; Karen L. Pope; Clara A. Wheeler
2009-01-01
Kroll et al. [Kroll, A.J., Runge, J.P., MacCracken, J.G., 2009. Unreliable amphibian population metrics may obfuscate more than they reveal. Biological Conservation] criticized our recent advocacy for combining readily attainable metrics of population status to gain insight about relationships between terrestrial plethodontid salamanders and forest succession [Welsh,...
Why Are Experts Correlated? Decomposing Correlations between Judges
ERIC Educational Resources Information Center
Broomell, Stephen B.; Budescu, David V.
2009-01-01
We derive an analytic model of the inter-judge correlation as a function of five underlying parameters. Inter-cue correlation and the number of cues capture our assumptions about the environment, while differentiations between cues, the weights attached to the cues, and (un)reliability describe assumptions about the judges. We study the relative…
Speaker Reliability Guides Children's Inductive Inferences about Novel Properties
ERIC Educational Resources Information Center
Kim, Sunae; Kalish, Charles W.; Harris, Paul L.
2012-01-01
Prior work shows that children can make inductive inferences about objects based on their labels rather than their appearance (Gelman, 2003). A separate line of research shows that children's trust in a speaker's label is selective. Children accept labels from a reliable speaker over an unreliable speaker (e.g., Koenig & Harris, 2005). In the…
Assessment and Placement: Supporting Student Success in College Gateway Courses
ERIC Educational Resources Information Center
Vandal, Bruce
2014-01-01
Evidence is mounting that the vast majority of students who are currently placed into prerequisite remedial education could be successful in gateway college-level courses if they receive additional academic support as a corequisite. Recent research on college placement exams reveals that the exams are unreliable at predicting college success, and…
Manipulating Public Opinion about Trying Juveniles as Adults: An Experimental Study
ERIC Educational Resources Information Center
Steinberg, Laurence; Piquero, Alex R.
2010-01-01
Public attitudes about juvenile crime play a significant role in fashioning juvenile justice policy; variations in the wording of public opinion surveys can produce very different responses and can result in inaccurate and unreliable assessments of public sentiment. Surveys that ask about policy alternatives in vague terms are especially…
ERIC Educational Resources Information Center
Malcarney, Mary-Beth; Horton, Katherine; Seiler, Naomi
2016-01-01
Background: School nurses can provide direct services for children with asthma, educate, and reinforce treatment recommendations to children and their families, and coordinate the school-wide response to students' asthma emergencies. Unfortunately, school-based health services today depend on an unreliable patchwork of funding. Limited state and…
Autocheck: Addressing the Problem of Rural Transportation.
ERIC Educational Resources Information Center
Payne, Guy A.
This paper describes a project implemented by a social worker from the Glynn County School District in rural Georgia to address transportation problems experienced by students and their families. The project aims to assist families who are unable to keep appointments or attend other important events due to unreliable transportation. A county needs…
Surge in Journal Retractions May Mask Decline in Actual Problems
ERIC Educational Resources Information Center
Basken, Paul
2012-01-01
Scientific journals have been retracting unreliable articles at rapidly escalating rates in the past few years, raising concern about whether research faces a burgeoning ethical crisis. Various causes have been suspected, with the common theme being that journals are seeing more cases of plagiarism and fudging of data as researchers and editors…
The Unreliability of References
ERIC Educational Resources Information Center
Barden, Dennis M.
2008-01-01
When search consultants, like the author, are invited to propose their services in support of a college or university seeking new leadership, they are generally asked a fairly standard set of questions. But there is one question that they find among the most difficult to answer: How do they check a candidate's references to ensure that they know…
Identifying Personality Disorders that are Security Risks: Field Test Results
2011-09-01
clinical personality disorders, namely psychopathy, malignant narcissism, and borderline personality organization, can increase the likelihood of...ratings indicated that three personality disorders, psychopathy, malignant narcissism, and borderline personality organization, were associated with...certain clinical personality disorders and unreliable and unsafe behavior in the workplace, disorders such as psychopathy and malignant narcissism
Forest Ecosystem Services As Production Inputs
Subhrendu Pattanayak; David T. Butry
2003-01-01
Are we cutting down tropical forests too rapidly and too extensively? If so, why? Answers to both questions are obscured in some ways by insufficient and unreliable data on the economic worth of forest ecosystem services. It is clear, however, that rapid, excessive cutting of forests can irreversibly and substantively impair ecosystem functions, thereby endangering the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-13
... Station Unit 7. The scrubber adds moisture to the exhaust gas, which condenses as the gas stream cools. According to Indiana Department of Environmental Management (IDEM), the condensation causes unreliable... impairment caused by particulate and light impairment caused by moisture. The scrubber also removes some PM...
The Use of Structure Coefficients to Address Multicollinearity in Sport and Exercise Science
ERIC Educational Resources Information Center
Yeatts, Paul E.; Barton, Mitch; Henson, Robin K.; Martin, Scott B.
2017-01-01
A common practice in general linear model (GLM) analyses is to interpret regression coefficients (e.g., standardized β weights) as indicators of variable importance. However, focusing solely on standardized beta weights may provide limited or erroneous information. For example, β weights become increasingly unreliable when predictor variables are…
A method for polycrystalline silicon delineation applicable to a double-diffused MOS transistor
NASA Technical Reports Server (NTRS)
Halsor, J. L.; Lin, H. C.
1974-01-01
Method is simple and eliminates requirement for unreliable special etchants. Structure is graded in resistivity to prevent punch-through and has very narrow channel length to increase frequency response. Contacts are on top to permit planar integrated circuit structure. Polycrystalline shield will prevent creation of inversion layer in isolated region.
An identifiable model for informative censoring
Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.
1988-01-01
The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.
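For reference, the product-limit estimator discussed above is sketched below in plain Python (toy data; the estimator itself is standard). The abstract's point is that this estimator is only trustworthy when censoring is uninformative about the lifetime under study.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimator. times: follow-up times; events: 1 if the
    event was observed, 0 if the observation was censored.
    Returns (distinct event times, estimated survival S(t))."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    event_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in event_times:
        n_at_risk = np.sum(times >= t)              # still under observation just before t
        d = np.sum((times == t) & (events == 1))    # events occurring at t
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return event_times, np.array(surv)

t, s = kaplan_meier([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 0, 1, 1])
print(dict(zip(t, np.round(s, 3))))
```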
ERIC Educational Resources Information Center
Barden, Dennis M.
2008-01-01
There are two kinds of references in administrative hires. The most customary is the "on list" reference, which a candidate asks one to provide. The second kind of reference is the "off list" variety, of which there are two types. Typical is the call one receives from an acquaintance at the hiring institution asking for the "dirt" on one's…
Speaker Reliability in Preschoolers' Inferences about the Meanings of Novel Words
ERIC Educational Resources Information Center
Sobel, David M.; Sedivy, Julie; Buchanan, David W.; Hennessy, Rachel
2012-01-01
Preschoolers participated in a modified version of the disambiguation task, designed to test whether the pragmatic environment generated by a reliable or unreliable speaker affected how children interpreted novel labels. Two objects were visible to children, while a third was only visible to the speaker (a fact known by the child). Manipulating…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-09
... to produce a descriptive database of existing ferry operations. Recently enacted MAP-21 legislation... Administration (FHWA) Office of Intermodal and Statewide Planning conducted a survey of approximately 250 ferry... designed to target ridership and terminal information that typically produce unreliable and/or incomplete...
Reported last menstrual period (LMP) is commonly used to estimate gestational age (GA) but may be unreliable. Ultrasound in the first trimester is generally considered a highly accurate method of pregnancy dating. The authors compared first trimester report of LMP and first trime...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-17
... likely to worsen, making travel times unreliable. In addition, space constraints limit the potential to... activity is expected to generate increased travel demand. By 2040, statewide population is expected to grow... continuing transportation challenges as evidenced by the following: Constrained Travel Options--While the...
Ghosts in the Machine: Incarcerated Students and the Digital University
ERIC Educational Resources Information Center
Hopkins, Susan
2015-01-01
Providing higher education to offenders in custody has become an increasingly complex business in the age of digital learning. Most Australian prisoners still have no direct access to the internet and relatively unreliable access to information technology. As incarceration is now a business, prisons, like universities, are increasingly subject to…
Preschoolers' Understanding of Subtraction-Related Principles
ERIC Educational Resources Information Center
Baroody, Arthur J.; Lai, Meng-lung; Li, Xia; Baroody, Alison E.
2009-01-01
Little research has focused on an informal understanding of subtractive negation (e.g., 3 - 3 = 0) and subtractive identity (e.g., 3 - 0 = 3). Previous research indicates that preschoolers may have a fragile (i.e., unreliable or localized) understanding of the addition-subtraction inverse principle (e.g., 2 + 1 - 1 = 2). Recognition of a small…
Design of interstellar digital communication links: Some insights from communication engineering
NASA Astrophysics Data System (ADS)
Messerschmitt, David G.; Morrison, Ian S.
2012-09-01
The design of an end-to-end digital interstellar communication system at radio frequencies is discussed, drawing on the disciplines of digital communication engineering and computer network engineering in terrestrial and near-space applications. One goal is a roadmap to the design of such systems, aimed at future designers of either receivers (SETI) or transmitters (METI). In particular we emphasize the implications arising from the impossibility of coordination between transmitter and receiver prior to a receiver's search for a signal. A system architecture based on layering, as commonly used in network and software design, assists in organizing and categorizing the various design issues and identifying dependencies. Implications of impairments introduced in the interstellar medium, such as dispersion, scattering, Doppler, noise, and signal attenuation are discussed. Less fundamental (but nevertheless influential) design issues are the motivations of the transmitter designers and associated resource requirements at both transmitter and receiver. Unreliability is inevitably imposed by non-idealities in the physical communication channel, and this unreliability will have substantial implications for those seeking to convey interstellar messages.
The N-policy for an unreliable server with delaying repair and two phases of service
NASA Astrophysics Data System (ADS)
Choudhury, Gautam; Ke, Jau-Chuan; Tadj, Lotfi
2009-09-01
This paper deals with an M[X]/G/1 queue with an additional second phase of optional service and an unreliable server, which involves a breakdown period and a delay period under N-policy. While the server is working on either phase of service, it may break down at any instant, and the service channel then fails for a short interval of time. The concept of a delay time is also introduced. If no customer arrives during the breakdown period, the server remains idle in the system until the queue size builds up to a threshold value N. As soon as the queue size reaches at least N, the server immediately begins the first phase of regular service for all waiting customers; after its completion, only some of them receive the second phase of optional service. We derive the queue size distribution at a random epoch and at a departure epoch, as well as various system performance measures. Finally, we derive a simple procedure for obtaining the optimal stationary policy under a suitable linear cost structure.
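A stripped-down simulation can illustrate the N-policy idea on its own. The sketch below is a toy under strong simplifying assumptions (single Poisson arrivals rather than batches, gamma-distributed service, no breakdowns, no delay period, no optional second phase), so it is not the authors' M[X]/G/1 model; it merely shows the server staying idle until N customers have accumulated and then serving until the queue empties.

```python
import random

def n_policy_sim(lam=1.0, mean_service=0.6, N=5, horizon=50_000.0, seed=1):
    """Toy M/G/1 queue under N-policy; returns the time-average number in system."""
    rng = random.Random(seed)
    service = lambda: rng.gammavariate(2.0, mean_service / 2.0)   # general service time
    t, last, area = 0.0, 0.0, 0.0
    queue, serving = 0, False
    next_arrival, next_departure = rng.expovariate(lam), float("inf")
    while t < horizon:
        t = min(next_arrival, next_departure)
        area += queue * (t - last)            # accumulate the queue-length integral
        last = t
        if t == next_arrival:
            queue += 1
            next_arrival = t + rng.expovariate(lam)
            if not serving and queue >= N:    # threshold reached: switch the server on
                serving = True
                next_departure = t + service()
        else:
            queue -= 1
            if queue == 0:                    # queue empties: server idles until N again
                serving, next_departure = False, float("inf")
            else:
                next_departure = t + service()
    return area / last

print(n_policy_sim(N=1), n_policy_sim(N=5))   # larger N -> longer average queue
```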
Augmented reality-based electrode guidance system for reliable electroencephalography.
Song, Chanho; Jeon, Sangseo; Lee, Seongpung; Ha, Ho-Gun; Kim, Jonghyun; Hong, Jaesung
2018-05-24
In longitudinal electroencephalography (EEG) studies, repeatable electrode positioning is essential for reliable EEG assessment. Conventional methods use anatomical landmarks as fiducial locations for electrode placement. Because the landmarks are identified manually, the EEG assessment is inevitably unreliable owing to individual variation among subjects and examiners. To overcome this unreliability, an augmented reality (AR) visualization-based electrode guidance system was proposed to replace manual electrode positioning. After the facial surface of a subject is scanned and registered with an RGB-D camera, an AR overlay of the initial electrode positions, serving as reference positions, is superimposed on the current electrode positions in real time. The system can therefore guide the placement of subsequent electrodes with high repeatability. Phantom experiments show that the repeatability of electrode positioning was improved compared with the conventional 10-20 positioning system. The proposed AR guidance system improves electrode positioning performance with a cost-effective setup that uses only an RGB-D camera, and it can serve as an alternative to the international 10-20 system.
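The registration step that such AR guidance relies on can be illustrated with the standard Kabsch/Procrustes solution for the least-squares rigid transform between corresponding 3-D point sets. This is a generic sketch under that assumption, not the authors' pipeline (which registers an RGB-D facial scan against a stored reference):

```python
import numpy as np

def rigid_register(source, target):
    """Kabsch/Procrustes: rotation R and translation t that best map the
    (n, 3) source points onto the corresponding (n, 3) target points."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# usage: transformed = scan_points @ R.T + t, after which reference electrode
# positions expressed in the target frame can be overlaid on the live view
```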
Anomaly detection for machine learning redshifts applied to SDSS galaxies
NASA Astrophysics Data System (ADS)
Hoyle, Ben; Rau, Markus Michael; Paech, Kerstin; Bonnett, Christopher; Seitz, Stella; Weller, Jochen
2015-10-01
We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantity. We select 2.5 million `clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 `anomalous' galaxies with spectroscopic redshift measurements which are flagged as unreliable. We contaminate the clean base galaxy sample with galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed `anomaly-removed' sample and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement on all measured statistics of up to 80 per cent when training on the anomaly removed sample as compared with training on the contaminated sample for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample.
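In outline, the anomaly-removal preprocessing can be reproduced with scikit-learn's EllipticEnvelope. The snippet below is a schematic sketch on synthetic data; the feature matrix, sample sizes, and contamination fraction are placeholders rather than the SDSS DR12 selection described above.

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(2000, 5))    # stand-in for reliable-redshift features
bad = rng.normal(3.0, 2.0, size=(150, 5))       # stand-in for unreliable-redshift galaxies
X = np.vstack([clean, bad])

detector = EllipticEnvelope(contamination=0.07, random_state=0).fit(X)
keep = detector.predict(X) == 1                 # +1 = inlier, -1 = flagged as anomalous
X_train = X[keep]                               # train the redshift estimators on this sample
print(f"kept {keep.sum()} of {len(X)} objects")
```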
Water system unreliability and diarrhea incidence among children in Guatemala.
Trudeau, Jennifer; Aksan, Anna-Maria; Vásquez, William F
2018-03-01
This article examines the effect of water system unreliability on diarrhea incidence among children aged 0-5 in Guatemala. We use secondary data from a nationally representative sample of 7579 children to estimate the effects of uninterrupted and interrupted water services on diarrhea incidence. The national scope of this study imposes some methodological challenges due to unobserved geographical heterogeneity. To address this issue, we estimate mixed-effects logit models that control for unobserved heterogeneity by estimating random effects of selected covariates that can vary across geographical areas (i.e. water system reliability). Compared to children without access to piped water, children with uninterrupted water services have a lower probability of diarrhea incidence by approximately 33 percentage points. Conversely, there is no differential effect between children without access and those with at least one day of service interruptions in the previous month. Results also confirm negative effects of age, female gender, Spanish language, and garbage disposal on diarrhea incidence. Public health benefits of piped water are realized through uninterrupted provision of service, not merely access. Policy implications are discussed.
Metabolic incentives for dishonest signals of strength in the fiddler crab Uca vomeris.
Bywater, Candice L; White, Craig R; Wilson, Robbie S
2014-08-15
To reduce the potential costs of combat, animals may rely upon signals to resolve territorial disputes. Signals also provide a means for individuals to appear better than they actually are, deceiving opponents and gaining access to resources that would otherwise be unattainable. However, other than resource gains, incentives for dishonest signalling remain unexplored. In this study, we tested the idea that unreliable signallers pay lower metabolic costs for their signals, and that energetic savings could represent an incentive for cheating. We focused on two-toned fiddler crabs (Uca vomeris), a species that frequently uses its enlarged claws as signals of dominance to opponents. Previously, we found that regenerated U. vomeris claws are often large but weak (i.e. unreliable). Here, we found that the original claws of male U. vomeris consumed 43% more oxygen than weaker, regenerated claws, suggesting that muscle quantity drives variation in metabolic costs. Therefore, it seems that metabolic savings could provide a powerful incentive for dishonesty within fiddler crabs. © 2014. Published by The Company of Biologists Ltd.
van der Westhuizen, J; Kuo, P Y; Reed, P W; Holder, K
2011-03-01
Gastric absorption of oral paracetamol (acetaminophen) may be unreliable perioperatively in the starved and stressed patient. We compared plasma concentrations of parenteral paracetamol given preoperatively and oral paracetamol when given as premedication. Patients scheduled for elective ear, nose and throat surgery or orthopaedic surgery were randomised to receive either oral or intravenous paracetamol as preoperative medication. The oral dose was given 30 minutes before induction of anaesthesia and the intravenous dose given pre-induction. All patients were given a standardised anaesthetic by the same specialist anaesthetist, who took blood for paracetamol concentrations 30 minutes after the first dose and then at 30-minute intervals for 240 minutes. Therapeutic concentrations of paracetamol were reached in 96% of patients who had received the drug parenterally, and 67% of patients who had received it orally. Maximum median plasma concentrations were 19 mg·l⁻¹ (interquartile range 15 to 23 mg·l⁻¹) and 13 mg·l⁻¹ (interquartile range 0 to 18 mg·l⁻¹) for the intravenous and oral groups, respectively. The difference between intravenous and oral groups was less marked after 150 minutes, but the intravenous preparation gave higher plasma concentrations throughout the study period. It can be concluded that paracetamol gives more reliable therapeutic plasma concentrations when given intravenously.
Noren, Shawn R.; Udevitz, Mark S.; Triggs, Lisa; Paschke, Jessa; Oland, Lisa; Jay, Chadwick V.
2015-01-01
Pacific walruses may be unable to meet caloric requirements in the changing Arctic ecosystem, which could affect body condition and have population-level consequences. Body condition has historically been monitored by measuring blubber thickness over the xiphoid process (sternum). This may be an unreliable condition index because blubber at other sites along the body may be preferentially targeted to balance energetic demands. Animals in aquaria provided an opportunity for controlled study of how blubber topography is altered by caloric intake. Morphology, body mass, blubber thickness (21 sites), and caloric intake of five mature, nonpregnant, nonlactating female walruses were measured monthly (12-month minimum). Body condition (mass × standard length⁻¹) was described by a model that included caloric intake and a seasonal effect, and scaled positively with estimates of total blubber mass. Blubber thicknesses (1.91–10.69 cm) varied topographically and were similar to values reported for free-ranging female walruses. Body condition was most closely related to blubber thickness measured dorsomedially in the region of the anterior insertion of the pectoral flippers (shoulders); sternum blubber thickness was a relatively poor indicator of condition. This study demonstrates the importance of validating condition metrics before using them to monitor free-ranging populations.
An Integrated Framework for Analysis of Water Supply Strategies in a Developing City: Chennai, India
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Gorelick, S.; Goulder, L.
2009-12-01
Indian cities are facing a severe water crisis: a rapidly growing population, low tariffs, high leakage rates, and inadequate reservoir storage are straining water supply systems, resulting in unreliable, intermittent piped supply. Conventional approaches to studying the problem of urban water supply have typically considered only centralized piped supply by the water utility. Specifically, they have tended to overlook decentralized actions by consumers such as groundwater extraction via private wells and aquifer recharge by rainwater harvesting. We present an innovative integrative framework for analyzing urban water supply in Indian cities. The framework is used in a systems model of water supply in the city of Chennai, India that integrates different components of the urban water system: water flows into the reservoir system, diversion and distribution by the public water utility, groundwater flow in the urban aquifer, informal water markets, and consumer behavior. Historical system behavior from 2002-2006 is used to calibrate the model. The historical system behavior highlights the buffering role of the urban aquifer, which stores water in periods of surplus for extraction by consumers via private wells. The model results show that in Chennai, distribution pipeline leaks result in the transfer of water from the inadequate reservoir system to the urban aquifer. The systems approach also makes it possible to evaluate and compare a wide range of centralized and decentralized policies. Three very different policies, Supply Augmentation (desalination), Efficiency Improvement (raising tariffs and fixing pipe leaks), and Rainwater Harvesting (recharging the urban aquifer by capturing rooftop and yard runoff), were evaluated using the model. The model results suggest that a combination of Rainwater Harvesting and Efficiency Improvement best meets our criteria of welfare maximization, equity, system reliability, and utility profitability. Importantly, the study shows that the combination policy emerges as optimal because of three conditions that are prevalent in Chennai: 1) widespread presence of private wells, 2) inadequate availability of reservoir storage to the utility, and 3) high cost of new supply sources.
Mkoka, Dickson Ally; Goicolea, Isabel; Kiwara, Angwara; Mwangu, Mughwira; Hurtig, Anna-Karin
2014-03-19
Provision of quality emergency obstetric care relies upon the presence of skilled health attendants working in an environment where drugs and medical supplies are available when needed and in adequate quantity and of assured quality. This study aimed to describe the experience of rural health facility managers in ensuring the timely availability of drugs and medical supplies for emergency obstetric care (EmOC). In-depth interviews were conducted with a total of 17 health facility managers: 14 from dispensaries and three from health centers. Two members of the Council Health Management Team and one member of the Council Health Service Board were also interviewed. A survey of health facilities was conducted to supplement the data. All the materials were analysed using a qualitative thematic analysis approach. Participants reported on the unreliability of obtaining drugs and medical supplies for EmOC; this was supported by the absence of essential items observed during the facility survey. The unreliability of obtaining drugs and medical supplies was reported to result in the provision of untimely and suboptimal EmOC services. An insufficient budget for drugs from central government, lack of accountability within the supply system and a bureaucratic process of accessing the locally mobilized drug fund were reported to contribute to the current situation. The unreliability of obtaining drugs and medical supplies compromises the timely provision of quality EmOC. Multiple approaches should be used to address challenges within the health system that prevent access to essential drugs and supplies for maternal health. There should be a special focus on improving the governance of the drug delivery system so that it promotes the accountability of key players, transparency in the handling of information and drug funds, and the participation of key stakeholders in decision making over the allocation of locally collected drug funds.
Evaluation of the reliability of maize reference assays for GMO quantification.
Papazova, Nina; Zhang, David; Gruden, Kristina; Vojvoda, Jana; Yang, Litao; Buh Gasparic, Meti; Blejec, Andrej; Fouilloux, Stephane; De Loose, Marc; Taverniers, Isabel
2010-03-01
A reliable PCR reference assay for relative genetically modified organism (GMO) quantification must be specific for the target taxon and amplify uniformly along the commercialised varieties within the considered taxon. Different reference assays for maize (Zea mays L.) are used in official methods for GMO quantification. In this study, we evaluated the reliability of eight existing maize reference assays, four of which are used in combination with an event-specific polymerase chain reaction (PCR) assay validated and published by the Community Reference Laboratory (CRL). We analysed the nucleotide sequence variation in the target genomic regions in a broad range of transgenic and conventional varieties and lines: MON 810 varieties cultivated in Spain and conventional varieties from various geographical origins and breeding history. In addition, the reliability of the assays was evaluated based on their PCR amplification performance. A single base pair substitution, corresponding to a single nucleotide polymorphism (SNP) reported in an earlier study, was observed in the forward primer of one of the studied alcohol dehydrogenase 1 (Adh1) (70) assays in a large number of varieties. The SNP presence is consistent with a poor PCR performance observed for this assay along the tested varieties. The obtained data show that the Adh1 (70) assay used in the official CRL NK603 assay is unreliable. Based on our results from both the nucleotide stability study and the PCR performance test, we can conclude that the Adh1 (136) reference assay (T25 and Bt11 assays) as well as the tested high mobility group protein gene assay, which also form parts of CRL methods for quantification, are highly reliable. Despite the observed uniformity in the nucleotide sequence of the invertase gene assay, the PCR performance test reveals that this target sequence might occur in more than one copy. Finally, although currently not forming a part of official quantification methods, zein and SSIIb assays are found to be highly reliable in terms of nucleotide stability and PCR performance and are proposed as good alternative targets for a reference assay for maize.
Novel approach for simultaneous wireless transmission and evaluation of optical sensors
NASA Astrophysics Data System (ADS)
Neumann, Niels; Schuster, Tobias; Plettemeier, Dirk
2014-11-01
Optical sensors can be used to measure various quantities such as pressure, strain, temperature, refractive index, pH value, and biochemical reactions. The interrogation of the sensor can be performed spectrally or using a simple power measurement. However, the evaluation of the sensor signal and the subsequent radio transmission of the results is complicated and costly: a sophisticated system setup comprising a large number of electro-optical components as well as a complete radio module is required. This is not only expensive and unreliable but also impractical in harsh environments, in limited space, and in inaccessible areas. Radio-over-Fiber (RoF) technology transmits signals modulated on an electrical carrier over fiber by using optical carriers. Combining RoF techniques and optical sensors, a new class of measurement devices readable by a radio interface is introduced in this paper. These sensors use a modulated input signal generated by a RoF transmitter that, after being influenced by the optical sensor, is directly converted into a radio signal and transmitted. This approach enables remote read-out of the sensor by means of wireless evaluation. Thus, costly, voluminous, power-hungry, and sensitive equipment in the vicinity of the measurement location is avoided; the equipment can instead be concentrated in a central location supporting existing radio transmission schemes (e.g. WiFi).
Loffing, Florian; Nickel, Stefanie; Hagemann, Norbert
2017-01-01
Left-to-right readers are assumed to demonstrate a left-to-right bias in aesthetic preferences and performance evaluation. Here we tested the hypothesis that such bias occurs in left-to-right reading laypeople and gymnastic judges (n = 48 each) when asked to select the more beautiful image from a picture pair showing gymnastic or non-gymnastic actions (Experiment 1) and to evaluate videos of gymnasts’ balance beam performances (Experiment 2). Overall, laypeople demonstrated a stronger left-to-right bias than judges. Unlike judges, laypeople rated images with left-to-right trajectory as more beautiful than content-wise identical images with right-to-left trajectory (Experiment 1). Also, laypeople tended to award slightly more points to videos showing left-to-right as opposed to right-to-left oriented actions (Experiment 2); however, in contrast to initial predictions the effect was weak and statistically unreliable. Collectively, judges, when considered as a group, seem less prone to directional bias than laypeople, thus tentatively suggesting that directionality may be an issue for unskilled but not for skilled judging. Possible mechanisms underlying the skill effect in Experiment 1 and the absence of clear bias in Experiment 2 are discussed alongside propositions for a broadening of perspectives in future research. PMID:29259568
Lestini, Giulia; Dumont, Cyrielle; Mentré, France
2015-10-01
In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e., when no adaptation is performed, using wrong prior parameters. We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with small first cohort, but not better than the balanced two-stage design. Two-stage ADs are useful when prior parameters are unreliable. In case of small first cohort, more adaptations are needed but these designs are complex to implement.
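The design criterion referred to above, choosing sampling times that maximise the determinant of the Fisher information matrix, can be illustrated on a deliberately simple linear model. The sketch below is only a toy analogue (the study optimised the population FIM of a nonlinear mixed-effects PK/PD model with PFIM; the candidate times here are hypothetical):

```python
import numpy as np
from itertools import combinations

def fim_linear(times):
    """FIM of y = th0 + th1*t + noise (unit variance): simply X'X for the design."""
    X = np.column_stack([np.ones(len(times)), np.asarray(times, float)])
    return X.T @ X

candidates = [0.5, 1, 2, 4, 8, 12, 24]      # hypothetical sampling times (hours)
best = max(combinations(candidates, 3),
           key=lambda d: np.linalg.det(fim_linear(d)))
print("D-optimal 3-point design from the grid:", best)
```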
Static ankle joint equinus: toward a standard definition and diagnosis.
Charles, James; Scutter, Sheila D; Buckley, Jonathan
2010-01-01
Equinus is characterized by reduced dorsiflexion of the ankle joint, but there is a lack of consensus regarding criteria for definition and diagnosis. This review examines the literature relating to the definition, assessment, diagnosis, prevalence, and complications of equinus. Articles on equinus and assessment of ankle joint range of motion were identified by searching the EMBASE, Medline, PubMed, EBSCOhost, Cinahl, and Cochrane databases and by examining the reference lists of the articles found. There is inconsistency regarding the magnitude of reduction in dorsiflexion required to constitute a diagnosis of equinus and no standard method for assessment; hence, the prevalence of equinus is unknown. Goniometric assessment of ankle joint range of motion was shown to be unreliable, whereas purpose-built tools demonstrated good reliability. Although reduced dorsiflexion is associated with alterations in gait, increased forefoot pressure, and ankle injury, the magnitude of reduction in range of motion required to predispose to foot or lower-limb abnormalities is not known. In the absence of definitive data, we propose a two-stage definition of equinus: the first stage would reflect dorsiflexion of less than 10 degrees with minor compensation and a minor increase in forefoot pressure, and the second stage would reflect dorsiflexion of less than 5 degrees with major compensation and a major increase in forefoot pressure. This proposed definition of equinus will assist with standardizing the diagnosis and will provide a basis for future studies of the prevalence, causes, and complications of this condition.
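The proposed two-stage definition amounts to a simple screening rule; a minimal sketch follows (thresholds taken from the abstract, function name hypothetical):

```python
def equinus_stage(dorsiflexion_deg: float) -> str:
    """Classify ankle dorsiflexion against the two-stage definition proposed above."""
    if dorsiflexion_deg < 5:
        return "stage 2 equinus (major compensation, major forefoot pressure increase)"
    if dorsiflexion_deg < 10:
        return "stage 1 equinus (minor compensation, minor forefoot pressure increase)"
    return "no equinus by this definition"

print(equinus_stage(7.5))   # -> stage 1 equinus (...)
```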
Gardiner, Riana Zanarivero; Doran, Erik; Strickland, Kasha; Carpenter-Bundhoo, Luke; Frère, Celine
2014-01-01
Ectothermic vertebrates face many challenges of thermoregulation. Many species rely on behavioral thermoregulation and move within their landscape to maintain homeostasis. Understanding the fine-scale nature of this regulation through tracking techniques can provide a better understanding of the relationships between such species and their dynamic environments. The use of animal tracking and telemetry technology has allowed the extensive collection of such data, which has enabled us to better understand the ways animals move within their landscape. However, such technologies do not come without certain costs: they are generally invasive, relatively expensive, can be too heavy for small-sized animals, and unreliable in certain habitats. This study provides a cost-effective and non-invasive method, through photo-identification, to determine fine-scale movements of individuals. With our methodology, we have been able to find that male eastern water dragons (Intellagama lesueurii) have home ranges one and a half times larger than those of females. Furthermore, we found intraspecific differences in the size of home ranges depending on the time of the day. Lastly, we found that location mostly influenced females' home ranges, but not males', and we discuss why this may be so. Overall, we provide valuable information regarding the ecology of the eastern water dragon, but most importantly demonstrate that non-invasive photo-identification can be successfully applied to the study of reptiles. PMID:24835073
NASA Astrophysics Data System (ADS)
Longinelli, Antonio; Wierzbowski, Hubert; Di Matteo, Antonella
2003-04-01
The oxygen isotopic composition of coexisting carbonate and phosphate from belemnite rostra was measured according to well established techniques in 42 samples of Early and Middle Jurassic age and in five samples of oyster shells. Most of the samples come from various locations in the Western Carpathians of Slovakia and Ukraine, and from central Poland. Three samples come from the Isle of Skye. The phosphate content of belemnite rostra, though variable, is systematically very low: consistently lower than about 0.3%. However, this phosphate concentration is close to that found in shells of modern marine organisms including pelecypods, gastropods and Sepia cuttlebones which, in some way, could be considered the modern belemnite counterpart. The measured oxygen isotopic composition of carbonate is within the normal range of values obtained from these fossils ranging from about -1.3 to about +0.6‰ (PDB-1) with the exception of three samples; the δ13C values range from about -0.8 to about +2.8‰ (PDB-1). With the single exception of one sample from the Isle of Skye, the oxygen isotopic composition of phosphate from belemnite rostra ranges from +19.8 to +24.9‰ (V-SMOW), 22 of the samples measured showing δ18O values equal to or heavier than +23.0‰. In contrast, the oyster values are considerably lighter, in the case of both carbonate and phosphate. 18O-enriched values can hardly be related to diagenetic processes that normally cause an oxygen isotope shift towards light values. If deposition temperatures are calculated from the heavily enriched values by means of the equation of Longinelli and Nuti [Earth Planet. Sci. Lett. 19 (1973) 373-376] and assuming the δ18O of Jurassic ocean water to be equal to -1‰ taking into account the lack of ice caps during the Jurassic, the obtained temperatures range from about 8°C to about zero. These temperatures are obviously unreliable when Mesozoic palaeoceanographic conditions and palaeoclimate are taken into account. Two different hypotheses are suggested to explain these results, other hypotheses being rejected as unreliable. (1) Phosphate derived from the decaying organic matter of belemnites might have been introduced into belemnite rostra by early diagenetic fluids. If the phosphate of belemnite organic matter was isotopically heavy as happens nowadays in the flesh of molluscs, the inflow of this phosphate into the rostra could be responsible for the very positive δ18O values shown by many belemnite rostra (this hypothesis is suggested by H.W.); (2) previous oxygen isotope measurements on Upper Cretaceous belemnites yielded δ18O values very close to the most positive values obtained from Lower Tertiary pelecypods and fish teeth which are known to precipitate their phosphate under isotopic equilibrium conditions with seawater. These data suggest the possibility that the phosphate in belemnite rostra is primary phosphate so that the very positive data reported here can be considered the result of good preservation of the pristine isotopic composition of primary phosphate. Consequently, the only way to explain the very positive δ18O values is to consider the oxygen isotopic composition of Jurassic ocean water to be more positive than nowadays by at least 3‰. This hypothesis is suggested by A.L. and A.D.M.
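The quoted temperatures of roughly 8°C down to about zero can be reproduced with the commonly cited form of the Longinelli and Nuti (1973) phosphate-water equation, T(°C) = 111.4 - 4.3(δ18Op - δ18Ow); the snippet below assumes that form and takes δ18Ow = -1‰ for the ice-free Jurassic ocean, as in the text.

```python
def phosphate_temperature(delta_p, delta_w=-1.0):
    """Phosphate-water oxygen isotope palaeotemperature (commonly cited
    Longinelli & Nuti 1973 form): T(degC) = 111.4 - 4.3*(delta_p - delta_w)."""
    return 111.4 - 4.3 * (delta_p - delta_w)

# 18O-enriched rostra (+23.0 to +24.9 permil V-SMOW) with delta_w = -1 permil:
for dp in (23.0, 24.9):
    print(f"{dp:+.1f} permil -> {phosphate_temperature(dp):.1f} degC")
# ~8 degC down to ~0 degC, the implausibly cold values discussed above
```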
Which test for CAD should be used in patients with left bundle branch block?
Xu, Bo; Cremer, Paul; Jaber, Wael; Moir, Stuart; Harb, Serge C; Rodriguez, L Leonardo
2018-03-01
Exercise stress electrocardiography is unreliable as a test for obstructive coronary artery disease (CAD) if the patient has left bundle branch block. The authors provide an algorithm for using alternative tests: exercise stress echocardiography, dobutamine echocardiography, computed tomographic (CT) angiography, and nuclear myocardial perfusion imaging. Copyright © 2018 Cleveland Clinic.
Our Vocational Training Can Guarantee You the Job of a Lifetime. Consumer Bulletin No. 13.
ERIC Educational Resources Information Center
Federal Trade Commission, Washington, DC. Bureau of Consumer Protection.
This guidebook cautions the potential vocational school student about the possibilities of false claims, poor training, and unreliable job promises from commercial trade, technical, business, and correspondence schools. It points out what sort of things to look for and which claims to take seriously. Defenses against an aggressive sales pitch are…
In Search of Truth, on the Internet
ERIC Educational Resources Information Center
Goldsborough, Reid
2004-01-01
Is it true? There's no more important question to ask when online. Truth telling has never been a requirement to provide information online. Standards for accuracy, to a large extent, don't exist. As a general rule, the "real time" communication that takes place in instant messaging sessions and chat rooms is the most unreliable. One level up in…
ERIC Educational Resources Information Center
Vacha-Haase, Tammi; Kogan, Lori R.; Tani, Crystal R.; Woodall, Renee A.
2001-01-01
Used reliability generalization to explore the variance of scores on 10 Minnesota Multiphasic Personality Inventory (MMPI) clinical scales drawing on 1,972 articles in the literature on the MMPI. Results highlight the premise that scores, not tests, are reliable or unreliable, and they show that study characteristics do influence scores on the…
Solar Electricity Generation: Issues of Development and Impact on ICT Implementation in Africa
ERIC Educational Resources Information Center
Damasen, Ikwaba Paul
2013-01-01
Purpose: The purpose of this paper is to examine and discuss, in-depth, how solar electricity can be developed and used to tackle grid electricity-related problems in African countries suffering from unreliable and inadequate grid electricity. Design/methodology/approach: The paper discusses in depth the current status of grid electricity in…
What's the Value of VAM (Value-Added Modeling)?
ERIC Educational Resources Information Center
Scherrer, Jimmy
2012-01-01
The use of value-added modeling (VAM) in school accountability is expanding, but deciding how to embrace VAM is difficult. Various experts say it's too unreliable, causes more harm than good, and has a big margin for error. Others assert VAM is imperfect but useful, and provides valuable feedback. A closer look at the models, and their use,…
I Wish I Could Believe You: The Frustrating Unreliability of Some Assessment Research
ERIC Educational Resources Information Center
Hunt, Tim; Jordan, Sally
2016-01-01
Many practitioner researchers strive to understand which assessment practices have the best impact on learning, but in authentic educational settings, it can be difficult to determine whether one intervention, for example the introduction of an online quiz to a course studied by diverse students, is responsible for the observed effect. This paper…
ERIC Educational Resources Information Center
Calam, John, Ed.
Alex Lord, a pioneer inspector of rural British Columbia (Canada) schools, shares in these recollections of his experiences in a province barely out of the stagecoach era. Traveling through vast northern territory, using unreliable transportation, and enduring climate extremes, Lord became familiar with the aspirations of remote communities and…
Random function theory revisited - Exact solutions versus the first order smoothing conjecture
NASA Technical Reports Server (NTRS)
Lerche, I.; Parker, E. N.
1975-01-01
We remark again that the mathematical conjecture known as first order smoothing or the quasi-linear approximation does not give the correct dependence on correlation length (time) in many cases, although it gives the correct limit as the correlation length (time) goes to zero. In this sense, then, the method is unreliable.
ERIC Educational Resources Information Center
Rakhlin, Natalia; Kornilov, Sergey A.; Reich, Jodi; Grigorenko, Elena L.
2015-01-01
We examined anaphora resolution in children with and without Developmental Language Disorder (DLD) to clarify whether (i) DLD is best understood as missing knowledge of certain linguistic operations/elements or as unreliable performance and (ii) if comprehension of sentences with anaphoric expressions as objects and exceptionally case marked (ECM)…
(Almost) Word for Word: As Voice Recognition Programs Improve, Students Reap the Benefits
ERIC Educational Resources Information Center
Smith, Mark
2006-01-01
Voice recognition software is hardly new--attempts at capturing spoken words and turning them into written text have been available to consumers for about two decades. But what was once an expensive and highly unreliable tool has made great strides in recent years, perhaps most recognized in programs such as Nuance's Dragon NaturallySpeaking…
Applications of the Dot Probe Task in Attentional Bias Research in Eating Disorders: A Review
ERIC Educational Resources Information Center
Starzomska, Malgorzata
2017-01-01
Recent years have seen an increasing interest in the cognitive approach to eating disorders, which postulates that patients selectively attend to information associated with eating, body shape, and body weight. The unreliability of self-report measures in eating disorders due to strong denial of illness gave rise to experimental studies inspired…
The Unreliability of Data in the California Community College System. AIR Forum 1979 Paper.
ERIC Educational Resources Information Center
Turner, John D.; Booth, Mary W.
A chronicle of the problems faced in an attempt to collect data on sociology curriculum trends in California's community college system is presented. The project was initiated in an effort to determine if other colleges in the system were experiencing the same difficulties with curriculum and enrollment in sociology courses being encountered by…
ERIC Educational Resources Information Center
Wall, Andrew; Frost, Robert; Smith, Ryan; Keeling, Richard
2008-01-01
Although datasets such as the Integrated Postsecondary Data System are available as inputs to higher education funding formulas, these datasets can be unreliable, incomplete, or unresponsive to criteria identified by state education officials. State formulas do not always match the state's economic and human capital goals. This article analyzes…
USDA-ARS?s Scientific Manuscript database
While seed harvested from remnant stands of grass can be used for restoration in temperate regions, seed recovery in semi-arid and arid environments is often unreliable and of low yield and quality. In addition, ongoing harvest of indigenous populations can be unsustainable, especially for those th...
Variations in Canonical Star-Forming Laws at Low Metallicity
NASA Astrophysics Data System (ADS)
Monkiewicz, Jacqueline; Bowman, Judd D.; Scowen, Paul
2018-01-01
Empirically-determined star formation relations link observed galaxy luminosities to extrapolated star formation rates at almost every observable wavelength range. These laws are a cornerstone of extragalactic astronomy, and will be critically important for interpreting upcoming observations of early high-redshift protogalaxies with JWST and WFIRST. There are indications at a variety of wavelengths that these canonical relations may become unreliable at the lowest metallicities observed. This potentially complicates interpretation of the earliest protogalaxies, which are expected to be pristine and largely unenriched by stellar nucleosynthesis. Using a sample of 15 local dwarf galaxies with 12 + log(O/H) < 8.2, I focus on two of these relations: the far-infrared/radio relation and the H-alpha/ultraviolet relation. The sample is chosen to have pre-existing far-IR and UV observations, and to span the full spread of the galaxy mass-metallicity relationship at low luminosity, so that luminosity and metallicity may be examined separately. Radio continuum observations of low metallicity dwarf galaxies I Zw 18 and SBS 0335-052E suggest that the far-IR/radio relation probably deviates at low metallicities, but the low luminosity end of the relation is not well sampled. The upgraded Jansky Very Large Array has the sensitivity to fill in this gap. I have obtained 45 hours of L- and C-band continuum data of my dwarf galaxy sample. I present radio continuum imaging of an initial sub-sample of Local Group dwarfs, some of which have never before been detected in radio continuum. The H-alpha/UV relationship is likewise known to become unreliable for dwarf galaxies, though this has been attributed to dwarf galaxy "bursty-ness" rather than metallicity effects. I have conducted a parallel survey of emission line imaging to study the underlying astrophysics of the H-alpha/UV relation. Using Balmer decrement imaging, I map out the pixel-to-pixel dust distribution and geometry within the nearest galaxies in my sample. I compare this to GALEX UV imaging. I discuss implications for UV escape fraction, and present initial results of the canonical star-forming relations at low galaxy luminosity and metallicity. THIS IS A POSTER AND WILL BE LOCATED IN THE AAS BOOTH.
Restier, Lioara; Duclos, Antoine; Jarri, Laura; Touzet, Sandrine; Denis, Angelique; Occelli, Pauline; Kassai-Koupai, Behrouz; Lachaux, Alain; Loras-Duclaux, Irene; Colin, Cyrille; Peretti, Noel
2015-10-01
Malnutrition screening is essential to detect and to treat patients with stunting or wasting. The aim was to evaluate the subjective perception of frequency and assessment of malnutrition by health care professionals. In a paediatric university hospital, a cross-sectional survey was conducted with a Likert scale approach to health care professionals and compared with objective measurements on a given day of frequency of malnutrition and of its screening. 279 health care professionals participated. The malnutrition rate, estimated versus measured, was 16.8% and 34.8%, respectively. Conversely, the estimated frequency of malnutrition screening versus measured frequency was 80.6% versus 43.1%, respectively. Furthermore, the perception of health care professionals did not differ depending on their professional category or speciality. In conclusion, health care staff underestimates the prevalence of malnutrition in children by half and overestimates the frequency of appropriate screening practices for detection of malnutrition. This flawed/unreliable perception may disrupt both screening and the management of malnourished children. There is an urgent need to find out the reasons behind these errors caused by subjective perception in order to develop appropriate educational training to remedy the situation. © 2015 John Wiley & Sons, Ltd.
Inconsistent identification of pit bull-type dogs by shelter staff.
Olson, K R; Levy, J K; Norby, B; Crandall, M M; Broadhurst, J E; Jacks, S; Barton, R C; Zimmerman, M S
2015-11-01
Shelter staff and veterinarians routinely make subjective dog breed identifications based on appearance, but their accuracy regarding pit bull-type breeds is unknown. The purpose of this study was to measure agreement among shelter staff in assigning pit bull-type breed designations to shelter dogs and to compare breed assignments with DNA breed signatures. In this prospective cross-sectional study, four staff members at each of four different shelters recorded their suspected breed(s) for 30 dogs; there was a total of 16 breed assessors and 120 dogs. The terms American pit bull terrier, American Staffordshire terrier, Staffordshire bull terrier, pit bull, and their mixes were included in the study definition of 'pit bull-type breeds.' Using visual identification only, the median inter-observer agreements and kappa values in pair-wise comparisons of each of the staff breed assignments for pit bull-type breed vs. not pit bull-type breed ranged from 76% to 83% and from 0.44 to 0.52 (moderate agreement), respectively. Whole blood was submitted to a commercial DNA testing laboratory for breed identification. Whereas DNA breed signatures identified only 25 dogs (21%) as pit bull-type, shelter staff collectively identified 62 (52%) dogs as pit bull-type. Agreement between visual and DNA-based breed assignments varied among individuals, with sensitivity for pit bull-type identification ranging from 33% to 75% and specificity ranging from 52% to 100%. The median kappa value for inter-observer agreement with DNA results at each shelter ranged from 0.1 to 0.48 (poor to moderate). Lack of consistency among shelter staff indicated that visual identification of pit bull-type dogs was unreliable. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
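The agreement statistics reported here, pair-wise kappa between staff and sensitivity/specificity against the DNA signature, are straightforward to compute; the sketch below uses scikit-learn on made-up toy labels, not the study data.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# 1 = "pit bull-type", 0 = "not pit bull-type" (toy labels for illustration only)
staff_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
staff_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
dna     = [1, 0, 0, 0, 0, 0, 1, 0, 0, 1]

print("inter-observer kappa:", round(cohen_kappa_score(staff_a, staff_b), 2))

tn, fp, fn, tp = confusion_matrix(dna, staff_a, labels=[0, 1]).ravel()
print("sensitivity vs DNA:", tp / (tp + fn), "specificity vs DNA:", round(tn / (tn + fp), 2))
```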
Evaluating the care of general medicine inpatients: how good is implicit review?
Hayward, R A; McMahon, L F; Bernard, A M
1993-04-01
Peer review often consists of implicit evaluations by physician reviewers of the quality and appropriateness of care. This study evaluated the ability of implicit review to measure reliably various aspects of care on a general medicine inpatient service. The design was a retrospective review of patients' charts, using structured implicit review, of a stratified random sample of consecutive admissions to a general medicine ward at a university teaching hospital. Twelve internists were trained in structured implicit review and reviewed 675 patient admissions (with 20% duplicate reviews for a total of 846 reviews). Although inter-rater reliabilities for assessments of overall quality of care and preventable deaths (kappa = 0.5) were adequate for aggregate comparisons (for example, comparing mean ratings on two hospital wards), they were inadequate for reliable evaluations of single patients using one or two reviewers. Reviewers' agreement about most focused quality problems (for example, timeliness of diagnostic evaluation and clinical readiness at time of discharge) and about the appropriateness of hospital ancillary resource use was poor (kappa ≤ 0.2). For most focused implicit measures, bias due to specific reviewers who were systematically more harsh or lenient (particularly for evaluation of resource-use appropriateness) accounted for much of the variation in reviewers' assessments, but this was not a substantial problem for the measure of overall quality. Reviewers rarely reported being unable to evaluate the quality of care because of deficiencies in documentation in the patient's chart. For assessment of overall quality and preventable deaths of general medicine inpatients, implicit review by peers had moderate degrees of reliability, but for most other specific aspects of care, physician reviewers could not agree. Implicit review was particularly unreliable at evaluating the appropriateness of hospital resource use and the patient's readiness for discharge, two areas where this type of review is often used.
ERIC Educational Resources Information Center
Weiss, Michael J.; May, Henry
2012-01-01
As test-based educational accountability has moved to the forefront of national and state education policy, so has the desire for better measures of school performance. No Child Left Behind's (NCLB) status and safe harbor measures have been criticized for being unfair and unreliable, respectively. In response to such criticism, in 2005 the federal…
USDA-ARS?s Scientific Manuscript database
Natural pond spawning of channel catfish is unreliable, unpredictable, and is dependent on environmental conditions. Male and female broodfish are typically held in the same pond for 2 or 3 years. Approximately 30-50% of the females and 10 percent of the males present in the pond participate in the...
Efficient Byzantine Fault Tolerance for Scalable Storage and Services
2009-07-01
most critical applications must survive in ever harsher environments. Less synchronous networking delivers packets unreliably and unpredictably, and... synchronous environments to allowing asynchrony, and from tolerating crashes to tolerating some corruptions through ad-hoc consistency checks. Ad-hoc... servers are responsive. To support this thesis statement, this dissertation takes the following steps. First, it develops a new cryptographic primitive
ERIC Educational Resources Information Center
Leming, Katie P.
2016-01-01
Previous qualitative research on educational practices designed to improve critical thinking has relied on anecdotal or student self-reports of gains in critical thinking. Unfortunately, student self-report data have been found to be unreliable proxies for measuring critical thinking gains. Therefore, in the current interpretivist study, five…
USDA-ARS?s Scientific Manuscript database
In the past, several techniques have been developed as diagnostic tools for the differential diagnosis of tumours produced by Marek’s disease virus (MDV) from those induced by avian leukosis virus (ALV) and reticuloendotheliosis virus (REV). However, most current techniques are unreliable using form...
Army Science Board 2001 AD HOC Study Knowledge Management
2001-11-01
dissemination, Army, Army culture, information dominance, knowledge dominance, information sharing, situational awareness, network-centric, infosphere... proposed effort and the emerging Army ICT for Information Dominance are all excellent foundation efforts for KM and Information Assurance. The panel's... level is critical to survivability and lethality. Unreliable information will quickly reverse the advantages of "Information Dominance" essential to
The cost of software fault tolerance
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1982-01-01
The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.
ERIC Educational Resources Information Center
Halpin, Patricia A.
2016-01-01
Nonscience majors often rely on general internet searches to locate science information. This practice can lead to misconceptions because the returned search information can be unreliable. In this article the authors describe how they used the social media site Twitter to address this problem in a general education course, BSCI 421 Diseases of the…
In Defense of Print: A Manifesto of Stories
ERIC Educational Resources Information Center
Mathieu, Paula
2017-01-01
The unreliability of preserving writing in any form seems apt: it is a physical reminder of writing's shaky, uncertain power. Sometimes words can change the world, but more often, the stark realities of an unjust world can fail to bend to even the most beautifully chosen words. In the face of long odds, the impulse to write and share words, in any…
Leadership Characteristics 1900-1982.
1983-04-01
complex of factors associated with leadership status (Bass 1981), the entertainment of this premise by contemporary researchers is viewed as unreliable... to direct their total energies in the formulation of a small business rather than support the missions of traditional corporate or military... effort is directed toward the development and management of personal sideline businesses. They eventually succeed in turning part-time sidelines
U.S. Navy Ships Food Service Divisions: Modernizing Inventory Management
2010-06-01
management procedures for receipt, inventory, stowage, and issue of provisions onboard ships have remained relatively unchanged for decades. Culinary ...improve the quality of life for Culinary Specialists 15. NUMBER OF PAGES 87 14. SUBJECT TERMS Inventory management, records keeper, stores onload...remained relatively unchanged for decades. Culinary Specialists are utilizing an antiquated and unreliable inventory management program (the Food
US Navy Ships Food Service Divisions: Modernizing Inventory Management
2010-05-31
relatively unchanged for decades. Culinary Specialists are utilizing an antiquated and unreliable inventory management program (the Food Management System...validities, reduce man-hours and improve the quality of life for Culinary Specialists).
Laboratory Assessment of Commercially Available Ultrasonic Rangefinders
2015-11-01
how the room was designed to prevent sound reflections (a combination of the wedges absorbing the waveforms and not having a flat wall). When testing... sound booth at 0.5 m. ...environments for sound measurements using a tape measure. This mapping method can be time-consuming and unreliable as objects frequently move around in
Effects of Analytical and Holistic Scoring Patterns on Scorer Reliability in Biology Essay Tests
ERIC Educational Resources Information Center
Ebuoh, Casmir N.
2018-01-01
Literature revealed that the patterns/methods of scoring essay tests have been criticized for not being reliable, and this unreliability is likely to be greater in internal examinations than in external examinations. The purpose of this study is to find out the effects of analytical and holistic scoring patterns on scorer reliability in…
Zectran fed orally to mice...cholinesterase levels in blood determined
Jean Marie Lang; Raymond R. Miskus
1967-01-01
Zectran, a carbamate insecticide, is being field-tested against the spruce budworm. Taken in sufficient quantity, it can induce cholinesterase (ChE) inhibition in mammals. In laboratory experiments, Zectran was fed orally to mice. Results indicated that maximum ChE inhibition occurred 15 to 30 minutes after ingestion of Zectran, and that a ChE test is unreliable in the...
Multidimensional Approach to the Development of a Mandarin Chinese-Oriented Sound Test
ERIC Educational Resources Information Center
Hung, Yu-Chen; Lin, Chun-Yi; Tsai, Li-Chiun; Lee, Ya-Jung
2016-01-01
Purpose: Because the Ling six-sound test is based on American English phonemes, it can yield unreliable results when administered to non-English speakers. In this study, we aimed to improve specifically the diagnostic palette for Mandarin Chinese users by developing an adapted version of the Ling six-sound test. Method: To determine the set of…
ERIC Educational Resources Information Center
West, Suzanne M.
2013-01-01
Course grades, which often include non-achievement factors such as effort and behavior and are subject to individual teacher grading philosophies, suffer from issues of unreliability. Yet, course grades continue to be utilized as a primary tool for reporting academic achievement to students and parents and are used by most colleges and…
Critiquing "Calypso": Authorial and Academic Bias in the Reading of a Young Adult Novel
ERIC Educational Resources Information Center
Butler, Catherine
2013-01-01
The position of authors of fiction in relation to critical discussion of their work is an unsettled one. While recognized as having knowledge and expertise regarding their texts, they are typically regarded as unreliable sources when it comes to critical analysis, and as partial witnesses whose personal association with the text is liable to…
ERIC Educational Resources Information Center
Foorman, Barbara R.; Petscher, Yaacov; Stanley, Christopher
2016-01-01
The idea of targeting reading instruction to profiles of students' strengths and weaknesses in component skills is central to teaching. However, these profiles are often based on unreliable descriptions of students' oral reading errors, text reading levels, or learning profiles. This research utilized latent profile analysis (LPA) to examine…
Perceived Benefits and Barriers to the Use of High-Speed Broadband in Ireland's Second-Level Schools
ERIC Educational Resources Information Center
Coyne, Bryan; Devitt, Niamh; Lyons, Seán; McCoy, Selina
2015-01-01
As part of Ireland's National Digital Strategy, high-speed broadband is being rolled out to all second-level schools to support greater use of information and communication technology (ICT) in education. This programme signals a move from slow and unreliable broadband connections for many schools to a guaranteed high-speed connection with…
A Simple Equation to Predict a Subscore's Value
ERIC Educational Resources Information Center
Feinberg, Richard A.; Wainer, Howard
2014-01-01
Subscores are often used to indicate test-takers' relative strengths and weaknesses and so help focus remediation. But a subscore is not worth reporting if it is too unreliable to believe or if it contains no information that is not already contained in the total score. It is possible, through the use of a simple linear equation provided in…
ERIC Educational Resources Information Center
McBride, Catherine Alexandra
2016-01-01
Some aspects of Chinese literacy development do not conform to patterns of literacy development in alphabetic orthographies. Four are highlighted here. First, semantic radicals are one aspect of Chinese characters that have no analogy to alphabetic orthographies. Second, the unreliability of phonological cues in Chinese along with the fact that…
Comparison of Attachment theory and Cognitive-Motivational Structure theory.
Malerstein, A J
2005-01-01
Attachment theory and Cognitive-Motivational Structure (CMS) are similar in most respects. They differ primarily in their proposal of when, during development, one's sense of the self and of the outside world are formed. I propose that the theories supplement each other after about age seven years--when Attachment theory's predictions of social function become unreliable, CMS theory comes into play.
Spatiotemporal access model based on reputation for the sensing layer of the IoT.
Guo, Yunchuan; Yin, Lihua; Li, Chao; Qian, Junyan
2014-01-01
Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints make the traditional access control techniques barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model.
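As a rough illustration of the kind of decision STRAC combines (not the paper's lattice-based model or its AUM/EUM update rules), the sketch below grants access only when spatial and temporal constraints hold and falls back to a stochastic authorization when a node's reputation is uncertain; all names, thresholds, and reputation values are hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class Request:
    node_id: str
    zone: str   # spatial attribute of the requesting sensing node
    hour: int   # local time of the request (0-23)

# Hypothetical policy and reputation store (not taken from the paper).
ALLOWED_ZONES = {"zone-a", "zone-b"}
ALLOWED_HOURS = range(8, 20)
REPUTATION_THRESHOLD = 0.6
reputation = {"sensor-01": 0.9, "sensor-02": 0.5}

def decide(req: Request) -> bool:
    """Deny if space/time constraints fail; grant deterministically for reputable
    nodes, otherwise grant stochastically in proportion to reputation."""
    if req.zone not in ALLOWED_ZONES or req.hour not in ALLOWED_HOURS:
        return False
    rep = reputation.get(req.node_id, 0.0)
    return True if rep >= REPUTATION_THRESHOLD else random.random() < rep

print(decide(Request("sensor-02", "zone-a", hour=10)))
```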
Aptamer-Mediated Delivery and Cell-Targeting Aptamers: Room for Improvement.
Yan, Amy C; Levy, Matthew
2018-06-01
Targeting cells with aptamers for the delivery of therapeutic cargoes, in particular oligonucleotides, represents one of the most exciting applications of the aptamer field. Perhaps nowhere has there been more excitement in the field than around the targeted delivery of siRNA or miRNA. However, when industry leaders in the field of siRNA delivery have tried to recapitulate aptamer-siRNA delivery results, they have failed. This problem stems from more than just the age-old problem of delivery to the cytoplasm, a challenge that has stymied the targeted delivery of therapeutic oligonucleotides since its inception. With aptamers, the problem is compounded further by the fact that many aptamers simply do not function as reported. This is distressing, as clearly, all published aptamers should be able to function as described. However, it is often challenging to recognize the details that might flag an unreliable aptamer from a viable one. As such, unreliable aptamers continue to be peer reviewed and published. We need to raise the bar and level of rigor in the field. Only then can we think about taking advantage of the unique attributes of these molecules and address the issues associated with their use as agents for targeted delivery.
Bilbao, Sonia; Martínez, Belén; Frasheri, Mirgita; Cürüklü, Baran
2017-01-01
Major challenges are presented when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass the ones found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, unreliable transport medium, data representation and hardware high heterogeneity). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined work between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed as mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer) where other technologies are also interweaved with middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and a deployment scenario, have been provided as a way to assess the quality of the system and its satisfactory performance. PMID:28783049
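A toy publish/subscribe sketch (deliberately not the DDS API) can make the idea of best-effort delivery over an unreliable medium concrete; the topic name, drop rate, and message format are invented for illustration.

```python
import random
from collections import defaultdict
from typing import Callable, DefaultDict, List

class LossyBus:
    """Toy topic-based publish/subscribe bus with a configurable drop rate,
    mimicking best-effort delivery over a lossy (e.g., acoustic) link."""
    def __init__(self, drop_rate: float = 0.3) -> None:
        self.drop_rate = drop_rate
        self.subscribers: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        for callback in self.subscribers[topic]:
            if random.random() >= self.drop_rate:  # message survives the link
                callback(message)

bus = LossyBus(drop_rate=0.3)
bus.subscribe("vehicle/position", lambda m: print("control entity received", m))
for seq in range(5):
    bus.publish("vehicle/position", {"vehicle": "auv-1", "seq": seq})
```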
Cooperation Survives and Cheating Pays in a Dynamic Network Structure with Unreliable Reputation
NASA Astrophysics Data System (ADS)
Antonioni, Alberto; Sánchez, Angel; Tomassini, Marco
2016-06-01
In a networked society like ours, reputation is an indispensable tool to guide decisions about social or economic interactions with individuals otherwise unknown. Usually, information about prospective counterparts is incomplete, often being limited to an average success rate. Uncertainty on reputation is further increased by fraud, which is increasingly becoming a cause of concern. To address these issues, we have designed an experiment based on the Prisoner’s Dilemma as a model for social interactions. Participants could spend money to have their observable cooperativeness increased. We find that the aggregate cooperation level is practically unchanged, i.e., global behavior does not seem to be affected by unreliable reputations. However, at the individual level we find two distinct types of behavior, one of reliable subjects and one of cheaters, where the latter artificially fake their reputation in almost every interaction. Cheaters end up being better off than honest individuals, who not only keep their true reputation but are also more cooperative. In practice, this results in honest subjects paying the costs of fraud as cheaters earn the same as in a truthful environment. These findings point to the importance of ensuring the truthfulness of reputation for a more equitable and fair society.
Rodríguez-Molina, Jesús; Bilbao, Sonia; Martínez, Belén; Frasheri, Mirgita; Cürüklü, Baran
2017-08-05
Major challenges are presented when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass the ones found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, unreliable transport medium, data representation and hardware high heterogeneity). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined work between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed as mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer) where other technologies are also interweaved with middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and a deployment scenario, have been provided as a way to assess the quality of the system and its satisfactory performance.
Distributed reconfigurable control strategies for switching topology networked multi-agent systems.
Gallehdari, Z; Meskin, N; Khorasani, K
2017-11-01
In this paper, distributed control reconfiguration strategies for directed switching topology networked multi-agent systems are developed and investigated. The proposed control strategies are invoked when the agents are subject to actuator faults and while the available fault detection and isolation (FDI) modules provide inaccurate and unreliable information on the estimation of fault severities. Our proposed strategies will ensure that the agents reach a consensus while an upper bound on the team performance index is satisfied. Three types of actuator faults are considered, namely: the loss of effectiveness fault, the outage fault, and the stuck fault. By utilizing quadratic and convex hull (composite) Lyapunov functions, two cooperative and distributed recovery strategies are designed and provided to select the gains of the proposed control laws such that the team objectives are guaranteed. Our proposed reconfigurable control laws are applied to a team of autonomous underwater vehicles (AUVs) under directed switching topologies and subject to simultaneous actuator faults. Simulation results demonstrate the effectiveness of our proposed distributed reconfiguration control laws in compensating for the effects of sudden actuator faults while subject to fault diagnosis module uncertainties and unreliabilities. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
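A much-simplified simulation (not the paper's reconfigurable control law) shows the underlying setting: single-integrator agents on a directed ring running a standard consensus protocol, with one agent's actuator suffering a hypothetical 60% loss of effectiveness.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],     # directed ring: agent i listens to agent i+1
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
effectiveness = np.array([0.4, 1.0, 1.0, 1.0])  # hypothetical fault on agent 0
x = np.array([0.0, 2.0, -1.0, 4.0])             # initial states
dt, gain = 0.05, 1.0

for _ in range(400):
    u = gain * (A @ x - A.sum(axis=1) * x)  # nominal consensus protocol
    x = x + dt * effectiveness * u          # faulty actuator scales the applied input

print(np.round(x, 3))  # in this toy case the agents still agree, only more slowly
```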
Great apes are sensitive to prior reliability of an informant in a gaze following task.
Schmid, Benjamin; Karg, Katja; Perner, Josef; Tomasello, Michael
2017-01-01
Social animals frequently rely on information from other individuals. This can be costly in case the other individual is mistaken or even deceptive. Human infants below 4 years of age show proficiency in their reliance on differently reliable informants. They can infer the reliability of an informant from few interactions and use that assessment in later interactions with the same informant in a different context. To explore whether great apes share that ability, in our study we confronted great apes with a reliable or unreliable informant in an object choice task, to see whether that would in a subsequent task affect their gaze following behaviour in response to the same informant. In our study, prior reliability of the informant and habituation during the gaze following task affected both great apes' automatic gaze following response and their more deliberate response of gaze following behind barriers. As habituation is very context specific, it is unlikely that habituation in the reliability task affected the gaze following task. Rather it seems that apes employ a reliability tracking strategy that results in a general avoidance of additional information from an unreliable informant.
Han, Kihwan; Kwon, Hyuk Joon; Choi, Tae Hyun; Kim, Jun Hyung; Son, Daegu
2010-03-01
The aim of this study was to standardize clinical photogrammetric techniques, and to compare anthropometry with photogrammetry. To standardize clinical photography, we have developed a photographic cephalostat and chair. We investigated the repeatability of the standardized clinical photogrammetric technique. Then, with 40 landmarks, a total of 96 anthropometric measurement items was obtained from 100 Koreans. Ninety six photogrammetric measurements from the same subjects were also obtained from standardized clinical photographs using Adobe Photoshop version 7.0 (Adobe Systems Corporation, San Jose, CA, USA). The photogrammetric and anthropometric measurement data (mm, degree) were then compared. A coefficient was obtained by dividing the anthropometric measurements by the photogrammetric measurements. The repeatability of the standardized photography was statistically significantly high (p=0.463). Among the 96 measurement items, 44 items were reliable; for these items the photogrammetric measurements were not different to the anthropometric measurements. The remaining 52 items must be classified as unreliable. By developing a photographic cephalostat and chair, we have standardized clinical photogrammetric techniques. The reliable set of measurement items can be used as anthropometric measurements. For unreliable measurement items, applying a suitable coefficient to the photogrammetric measurement allows the anthropometric measurement to be obtained indirectly.
Kidd, Celeste; Palmeri, Holly; Aslin, Richard N
2013-01-01
Children are notoriously bad at delaying gratification to achieve later, greater rewards (e.g., Piaget, 1970)-and some are worse at waiting than others. Individual differences in the ability-to-wait have been attributed to self-control, in part because of evidence that long-delayers are more successful in later life (e.g., Shoda, Mischel, & Peake, 1990). Here we provide evidence that, in addition to self-control, children's wait-times are modulated by an implicit, rational decision-making process that considers environmental reliability. We tested children (M=4;6, N=28) using a classic paradigm-the marshmallow task (Mischel, 1974)-in an environment demonstrated to be either unreliable or reliable. Children in the reliable condition waited significantly longer than those in the unreliable condition (p<0.0005), suggesting that children's wait-times reflected reasoned beliefs about whether waiting would ultimately pay off. Thus, wait-times on sustained delay-of-gratification tasks (e.g., the marshmallow task) may not only reflect differences in self-control abilities, but also beliefs about the stability of the world. Copyright © 2012 Elsevier B.V. All rights reserved.
Budischak, Sarah A; Hoberg, Eric P; Abrams, Art; Jolles, Anna E; Ezenwa, Vanessa O
2015-09-01
Most hosts are concurrently or sequentially infected with multiple parasites; thus, fully understanding interactions between individual parasite species and their hosts depends on accurate characterization of the parasite community. For parasitic nematodes, noninvasive methods for obtaining quantitative, species-specific infection data in wildlife are often unreliable. Consequently, characterization of gastrointestinal nematode communities of wild hosts has largely relied on lethal sampling to isolate and enumerate adult worms directly from the tissues of dead hosts. The necessity of lethal sampling severely restricts the host species that can be studied, the adequacy of sample sizes to assess diversity, the geographic scope of collections and the research questions that can be addressed. Focusing on gastrointestinal nematodes of wild African buffalo, we evaluated whether accurate characterization of nematode communities could be made using a noninvasive technique that combined conventional parasitological approaches with molecular barcoding. To establish the reliability of this new method, we compared estimates of gastrointestinal nematode abundance, prevalence, richness and community composition derived from lethal sampling with estimates derived from our noninvasive approach. Our noninvasive technique accurately estimated total and species-specific worm abundances, as well as worm prevalence and community composition when compared to the lethal sampling method. Importantly, the rate of parasite species discovery was similar for both methods, and only a modest number of barcoded larvae (n = 10) were needed to capture key aspects of parasite community composition. Overall, this new noninvasive strategy offers numerous advantages over lethal sampling methods for studying nematode-host interactions in wildlife and can readily be applied to a range of study systems. © 2015 John Wiley & Sons Ltd.
Cramer, Bradley D.; Kleffner, Mark A.; Brett, Carlton E.; McLaughlin, P.I.; Jeppsson, Lennart; Munnecke, Axel; Samtleben, Christian
2010-01-01
The Wenlock Epoch of the Silurian Period has become one of the chronostratigraphically best-constrained intervals of the Paleozoic. The integration of multiple chronostratigraphic tools, such as conodont and graptolite biostratigraphy, sequence stratigraphy, and δ13Ccarb chemostratigraphy, has greatly improved global chronostratigraphic correlation and portions of the Wenlock can now be correlated with precision better than ±100 kyr. Additionally, such detailed and integrated chronostratigraphy provides an opportunity to evaluate the fidelity of individual chronostratigraphic tools. Here, we use conodont biostratigraphy, sequence stratigraphy and carbon isotope (δ13Ccarb) chemostratigraphy to demonstrate that the conodont Kockelella walliseri, an important guide fossil for middle and upper Sheinwoodian strata (lower stage of the Wenlock Series), first appears at least one full stratigraphic sequence lower in Laurentia than in Baltica. Rather than serving as a demonstration of the unreliability of conodont biostratigraphy, this example serves to demonstrate the promise of high-resolution Paleozoic stratigraphy. The temporal difference between the two first occurrences was likely less than 1 million years, and although it is conceptually understood that speciation and colonization must have been non-instantaneous events, Paleozoic paleobiogeographic variability on such short timescales (tens to hundreds of kyr) traditionally has been ignored or considered to be of little practical importance. The expansion of high-resolution Paleozoic stratigraphy in the future will require robust biostratigraphic zonations that embrace the integration of multiple chronostratigraphic tools as well as the paleobiogeographic variability in ranges that they will inevitably demonstrate. In addition, a better understanding of the paleobiogeographic migration histories of marine organisms will provide a unique tool for future Paleozoic paleoceanography and paleobiology research. © 2010 Elsevier B.V.
Development of a perceptual hyperthermia index to evaluate heat strain during treadmill exercise.
Gallagher, Michael; Robertson, Robert J; Goss, Fredric L; Nagle-Stilley, Elizabeth F; Schafer, Mark A; Suyama, Joe; Hostler, David
2012-06-01
Fire suppression and rescue is a physiologically demanding occupation due to extreme external heat as well as the physical and thermal burden of the protective garments. These conditions challenge body temperature homeostasis and result in heat stress. Accurate field assessment of core temperature is complex and unreliable. The present investigation developed a perceptually based hyperthermia metric to measure physiologic exertional heat strain during treadmill exercise. Sixty-five (28.9 ± 6.8 years) female (n = 11) and male (n = 54) firefighters and non-firefighting volunteers participated in four related exertional heat stress investigations performing treadmill exercise in a heated room while wearing thermal protective clothing. Body core temperature, perceived exertion, and thermal sensation were assessed at baseline, at 20 min of exercise, and at termination. Perceived exertion increased from baseline (0.24 ± 0.42) to termination (7.43 ± 1.86). Thermal sensation increased from baseline (1.78 ± 0.77) to termination (4.50 ± 0.68). Perceived exertion and thermal sensation were measured concurrently with body core temperature to develop a two-dimensional graphical representation of three exertional heat strain zones representative of a range of mean body core temperature responses, such that low risk (green) incorporated 36.0-37.4°C, moderate risk (yellow) incorporated 37.5-37.9°C, and high risk (red) incorporated 38.0 to greater than 40.5°C. The perceptual hyperthermia index (PHI) may provide a quick and easy momentary assessment of the level of risk for exertional heat stress for firefighters engaged in fire suppression that may be beneficial in high-risk environments that threaten the lives of firefighters.
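The zone boundaries reported above map directly onto a simple classifier; the sketch below bins mean body core temperature into the three risk zones (the PHI itself reads the zone off perceived exertion and thermal sensation rather than temperature), and the example input is arbitrary.

```python
def core_temperature_zone(t_core_c: float) -> str:
    """Bin mean body core temperature (deg C) into the abstract's risk zones."""
    if t_core_c < 36.0:
        return "below the reported scale"
    if t_core_c <= 37.4:
        return "green (low risk)"
    if t_core_c <= 37.9:
        return "yellow (moderate risk)"
    return "red (high risk)"

print(core_temperature_zone(37.6))  # -> yellow (moderate risk)
```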
Bowie, Paul; Halley, Lyn; Blamey, Avril; Gillies, Jill; Houston, Neil
2016-01-29
To explore general practitioner (GP) team perceptions and experiences of participating in a large-scale safety and improvement pilot programme to develop and test a range of interventions that were largely new to this setting. Qualitative study using semistructured interviews. Data were analysed thematically. Purposive sample of multiprofessional study participants from 11 GP teams based in 3 Scottish National Health Service (NHS) Boards. 27 participants were interviewed. 3 themes were generated: (1) programme experiences and benefits, for example, a majority of participants referred to gaining new theoretical and experiential safety knowledge (such as how unreliable evidence-based care can be) and skills (such as how to search electronic records for undetected risks) related to the programme interventions; (2) improvements to patient care systems, for example, improvements in care systems reliability using care bundles were reported by many, but this was an evolving process strongly dependent on closer working arrangements between clinical and administrative staff; (3) the utility of the programme improvement interventions, for example, mixed views and experiences of participating in the safety climate survey and meeting to reflect on the feedback report provided were apparent. Initial theories on the utilisation and potential impact of some interventions were refined based on evidence. The pilot was positively received with many practices reporting improvements in safety systems, team working and communications with colleagues and patients. Barriers and facilitators were identified related to how interventions were used as the programme evolved, while other challenges around spreading implementation beyond this pilot were highlighted. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
In vivo PO2 imaging in the porcine model with perfluorocarbon F-19 NMR at low field.
Thomas, S R; Pratt, R G; Millard, R W; Samaratunga, R C; Shiferaw, Y; McGoron, A J; Tan, K K
1996-01-01
Quantitative pO2 imaging in vivo has been evaluated utilizing F-19 NMR in the porcine model at 0.14 T for the lungs, liver, and spleen following i.p. administration of the commercial perfluorotributylamine (FC-43)-based perfluorocarbon (PFC) emulsion, Oxypherol-ET. Calculated T1 maps obtained from a two spin-echo saturation recovery/inversion recovery (SR/IR) pulse protocol are converted into quantitative pO2 images through a temperature-dependent calibration curve relating longitudinal relaxation rate (1/T1) to pO2. The uncertainty in pO2 for a T1 measurement error of +/- 5% as encountered in establishing the calibration curves ranges from +/- 10 torr (+/- 40%) at 25 torr to +/- 16 torr (+/- 11%) at 150 torr for FC-43 (37 degrees C). However, additional uncertainties in T1 dependent upon the signal-to-noise ratio may be introduced through the SR/IR calculated T1 pulse protocol, which might severely degrade the pO2 accuracy. Correlation of the organ image calculated pO2 with directly measured pO2 in airway or blood pools in six pigs indicates that the PFC resident in lung is in near equilibrium with arterialized blood and not with airway pO2, suggesting a location distal to the alveolar epithelium. For the liver, the strongest correlation implying equilibrium was evident for venous blood (hepatic vein). For the spleen, arterial blood pO2 (aorta) was an unreliable predictor of pO2 for PFC resident in splenic tissue. The results have demonstrated the utility and defined the limiting aspects of quantitative pO2 imaging in vivo using F-19 MRI of sequestered PFC materials.
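The conversion step described above (T1 map to pO2 map through a linear 1/T1 versus pO2 calibration) can be sketched as follows; the intercept and slope are placeholder values, not the published FC-43 calibration constants.

```python
# Hedged sketch: convert a T1 map into a pO2 map via 1/T1 = A + B * pO2.
import numpy as np

A_INTERCEPT = 0.20   # hypothetical 1/T1 at zero pO2 (s^-1)
B_SLOPE = 0.0015     # hypothetical sensitivity (s^-1 per torr)

def po2_from_t1(t1_seconds: np.ndarray) -> np.ndarray:
    """Convert a T1 map (s) into a pO2 map (torr) using the linear calibration."""
    r1 = 1.0 / t1_seconds
    return (r1 - A_INTERCEPT) / B_SLOPE

t1_map = np.array([[4.0, 3.5], [3.0, 2.5]])  # toy 2x2 "image" of T1 values
print(np.round(po2_from_t1(t1_map), 1))
```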
Ustün, B; Compton, W; Mager, D; Babor, T; Baiyewu, O; Chatterji, S; Cottler, L; Göğüş, A; Mavreas, V; Peters, L; Pull, C; Saunders, J; Smeets, R; Stipec, M R; Vrasti, R; Hasin, D; Room, R; Van den Brink, W; Regier, D; Blaine, J; Grant, B F; Sartorius, N
1997-09-25
The WHO Study on the reliability and validity of the alcohol and drug use disorder instruments is an international study that has taken place in centres in ten countries, aiming to test the reliability and validity of three diagnostic instruments for alcohol and drug use disorders: the Composite International Diagnostic Interview (CIDI), the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) and a special version of the Alcohol Use Disorder and Associated Disabilities Interview schedule-alcohol/drug-revised (AUDADIS-ADR). The purpose of the reliability and validity (R&V) study is to further develop the alcohol and drug sections of these instruments so that a range of substance-related diagnoses can be made in a systematic, consistent, and reliable way. The study focuses on new criteria proposed in the tenth revision of the International Classification of Diseases (ICD-10) and the fourth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) for dependence, harmful use and abuse categories for alcohol and psychoactive substance use disorders. A systematic study including a scientifically rigorous measure of reliability (i.e. 1 week test-retest reliability) and validity (i.e. comparison between clinical and non-clinical measures) has been undertaken. Results have yielded useful information on reliability and validity of these instruments at diagnosis, criteria and question level. Overall the diagnostic concordance coefficients (kappa, κ) were very good for dependence disorders (0.7-0.9), but were somewhat lower for the abuse and harmful use categories. The comparisons among instruments and independent clinical evaluations and debriefing interviews gave important information about possible sources of unreliability, and provided useful clues on the applicability and consistency of nosological concepts across cultures.
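The concordance coefficients reported here are Cohen's kappa values; the sketch below shows the underlying test-retest calculation on fabricated diagnoses (the real study compared CIDI, SCAN, and AUDADIS-ADR interviews).

```python
# Cohen's kappa for test-retest diagnostic agreement; the diagnoses are made up.
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    n = len(ratings1)
    categories = set(ratings1) | set(ratings2)
    observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    p1, p2 = Counter(ratings1), Counter(ratings2)
    expected = sum((p1[c] / n) * (p2[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

test   = ["dependence", "dependence", "abuse", "none", "dependence", "abuse"]
retest = ["dependence", "dependence", "none",  "none", "dependence", "abuse"]
print(round(cohens_kappa(test, retest), 2))
```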
Martínez Lomakin, Felipe; Tobar, Catalina
2014-12-01
Contrast-induced nephropathy (CIN) is a common event in hospitals, with reported incidences ranging from 1 to 30%. Patients with underlying kidney disease have an increased risk of developing CIN. Point-of-care (POC) creatinine devices are handheld devices capable of providing quantitative data on a patient's kidney function that could be useful in stratifying preventive measures. This overview aims to synthesize the current evidence on diagnostic accuracy and clinical utility of POC creatinine devices in detecting patients at risk of CIN. Five databases were searched for diagnostic accuracy studies or clinical trials that evaluated the usefulness of POC devices in detecting patients at risk of CIN. Selected articles were critically appraised to assess their individual risk of bias by the use of standard criteria; 13 studies were found that addressed the diagnostic accuracy or clinical utility of POC creatinine devices. Most studies incurred a moderate to high risk of bias. Overall concordance between POC devices and reference standards (clinical laboratory procedures) was found to be moderate, with 95% limits of agreement often lying between -35.4 and +35.4 µmol/L (-0.4 and +0.4 mg/dL). Concordance was shown to decrease with worsening kidney function. Data on the clinical utility of these devices were limited, but a significant reduction in time to diagnosis was reported in two studies. Overall, POC creatinine devices showed a moderate concordance with standard clinical laboratory creatinine measurements. Several biases could have induced optimism in these estimations. Results obtained from these devices may be unreliable in cases of severe kidney failure. Randomized trials are needed to address the clinical utility of these devices.
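The "95% limits of agreement" quoted in this overview come from a Bland-Altman-style calculation; the sketch below applies that calculation to fabricated paired creatinine measurements.

```python
# Bland-Altman bias and 95% limits of agreement; the paired values are invented.
import numpy as np

poc = np.array([80, 95, 110, 150, 200, 260], dtype=float)   # POC device (umol/L)
lab = np.array([85, 90, 120, 145, 210, 240], dtype=float)   # laboratory reference

diff = poc - lab
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f} umol/L, 95% limits of agreement = ({loa_low:.1f}, {loa_high:.1f})")
```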
Corkeron, Peter; Rolland, Rosalind M; Hunt, Kathleen E; Kraus, Scott D
2017-01-01
Immunoassay of hormone metabolites extracted from faecal samples of free-ranging large whales can provide biologically relevant information on reproductive state and stress responses. North Atlantic right whales ( Eubalaena glacialis Müller 1776) are an ideal model for testing the conservation value of faecal metabolites. Almost all North Atlantic right whales are individually identified, most of the population is sighted each year, and systematic survey effort extends back to 1986. North Atlantic right whales number <500 individuals and are subject to anthropogenic mortality, morbidity and other stressors, and scientific data to inform conservation planning are recognized as important. Here, we describe the use of classification trees as an alternative method of analysing multiple-hormone data sets, building on univariate models that have previously been used to describe hormone profiles of individual North Atlantic right whales of known reproductive state. Our tree correctly classified the age class, sex and reproductive state of 83% of 112 faecal samples from known individual whales. Pregnant females, lactating females and both mature and immature males were classified reliably using our model. Non-reproductive [i.e. 'resting' (not pregnant and not lactating) and immature] females proved the most unreliable to distinguish. There were three individual males that, given their age, would traditionally be considered immature but that our tree classed as mature males, possibly calling for a re-evaluation of their reproductive status. Our analysis reiterates the importance of considering the reproductive state of whales when assessing the relationship between cortisol concentrations and stress. Overall, these results confirm findings from previous univariate statistical analyses, but with a more robust multivariate approach that may prove useful for the multiple-analyte data sets that are increasingly used by conservation physiologists.
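As an illustration of the classification-tree approach (with invented hormone values and class labels standing in for the study's immunoassay data), a minimal scikit-learn sketch:

```python
# Rough sketch of a multivariate classification tree on faecal hormone metabolites.
# Feature values and labels are fabricated, not the study's data.
from sklearn.tree import DecisionTreeClassifier

# columns: [progestagens, oestrogens, androgens, glucocorticoids] (arbitrary units)
X = [[900, 40, 10, 30],   # pregnant female
     [300, 60, 12, 25],   # lactating female
     [ 20,  8, 90, 15],   # mature male
     [ 15,  5, 20, 10],   # immature male
     [ 25, 12,  8, 12]]   # resting female
y = ["pregnant", "lactating", "mature male", "immature male", "resting female"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[850, 35, 9, 28]]))  # classify a new faecal sample
```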
Elliott, Rohan A; Goeman, Dianne; Beanland, Christine; Koch, Susan
2015-01-01
Impaired cognition has a significant impact on a person's ability to manage their medicines. The aim of this paper is to provide a narrative review of contemporary literature on medicines management by people with dementia or cognitive impairment living in the community, methods for assessing their capacity to safely manage medicines, and strategies for supporting independent medicines management. Studies and reviews addressing medicines management by people with dementia or cognitive impairment published between 2003 and 2013 were identified via searches of Medline and other databases. The literature indicates that as cognitive impairment progresses, the ability to plan, organise, and execute medicine management tasks is impaired, leading to increased risk of unintentional non-adherence, medication errors, preventable medication-related hospital admissions and dependence on family carers or community nursing services to assist with medicines management. Impaired functional capacity may not be detected by health professionals in routine clinical encounters. Assessment of patients' (or carers') ability to safely manage medicines is not undertaken routinely, and when it is there is variability in the methods used. Self-report and informant report may be helpful, but can be unreliable or prone to bias. Measures of cognitive function are useful, but may lack sensitivity and specificity. Direct observation, using a structured, standardised performance-based tool, may help to determine whether a person is able to manage their medicines and identify barriers to adherence such as inability to open medicine packaging. A range of strategies have been used to support independent medicines management in people with cognitive impairment, but there is little high-quality research underpinning these strategies. Further studies are needed to develop and evaluate approaches to facilitate safe medicines management by older people with cognitive impairment and their carers.
Elliott, Rohan A.; Goeman, Dianne; Beanland, Christine; Koch, Susan
2015-01-01
Impaired cognition has a significant impact on a person’s ability to manage their medicines. The aim of this paper is to provide a narrative review of contemporary literature on medicines management by people with dementia or cognitive impairment living in the community, methods for assessing their capacity to safely manage medicines, and strategies for supporting independent medicines management. Studies and reviews addressing medicines management by people with dementia or cognitive impairment published between 2003 and 2013 were identified via searches of Medline and other databases. The literature indicates that as cognitive impairment progresses, the ability to plan, organise, and execute medicine management tasks is impaired, leading to increased risk of unintentional non-adherence, medication errors, preventable medication-related hospital admissions and dependence on family carers or community nursing services to assist with medicines management. Impaired functional capacity may not be detected by health professionals in routine clinical encounters. Assessment of patients’ (or carers’) ability to safely manage medicines is not undertaken routinely, and when it is there is variability in the methods used. Self-report and informant report may be helpful, but can be unreliable or prone to bias. Measures of cognitive function are useful, but may lack sensitivity and specificity. Direct observation, using a structured, standardised performance-based tool, may help to determine whether a person is able to manage their medicines and identify barriers to adherence such as inability to open medicine packaging. A range of strategies have been used to support independent medicines management in people with cognitive impairment, but there is little high-quality research underpinning these strategies. Further studies are needed to develop and evaluate approaches to facilitate safe medicines management by older people with cognitive impairment and their carers. PMID:26265487
Bowie, Paul; Halley, Lyn; Blamey, Avril; Gillies, Jill; Houston, Neil
2016-01-01
Objectives To explore general practitioner (GP) team perceptions and experiences of participating in a large-scale safety and improvement pilot programme to develop and test a range of interventions that were largely new to this setting. Design Qualitative study using semistructured interviews. Data were analysed thematically. Subjects and setting Purposive sample of multiprofessional study participants from 11 GP teams based in 3 Scottish National Health Service (NHS) Boards. Results 27 participants were interviewed. 3 themes were generated: (1) programme experiences and benefits, for example, a majority of participants referred to gaining new theoretical and experiential safety knowledge (such as how unreliable evidence-based care can be) and skills (such as how to search electronic records for undetected risks) related to the programme interventions; (2) improvements to patient care systems, for example, improvements in care systems reliability using care bundles were reported by many, but this was an evolving process strongly dependent on closer working arrangements between clinical and administrative staff; (3) the utility of the programme improvement interventions, for example, mixed views and experiences of participating in the safety climate survey and meeting to reflect on the feedback report provided were apparent. Initial theories on the utilisation and potential impact of some interventions were refined based on evidence. Conclusions The pilot was positively received with many practices reporting improvements in safety systems, team working and communications with colleagues and patients. Barriers and facilitators were identified related to how interventions were used as the programme evolved, while other challenges around spreading implementation beyond this pilot were highlighted. PMID:26826149
Shimada, Kenshu; Egi, Naoko; Tsubamoto, Takehisa; Maung-Maung, Maung-Maung; Thaung-Htike, Thaung-Htike; Zin-Maung-Maung-Thein, Zin-Maung-Maung-Thein; Nishioka, Yuichiro; Sonoda, Teppei; Takai, Masanaru
2016-09-05
We redescribe an extinct river shark, Glyphis pagoda (Noetling), on the basis of 20 teeth newly collected from three different Miocene localities in Myanmar. One locality is a nearshore marine deposit (Obogon Formation) whereas the other two localities represent terrestrial freshwater deposits (Irrawaddy sediments), suggesting that G. pagoda from the Irrawaddy sediments was capable of tolerating low salinity like the extant Glyphis. Glyphis pagoda likely reached up to at least 185 cm in total body length and was probably piscivorous. The fossil species occurs in rocks of Myanmar and eastern and western India and stratigraphically ranges at least from the Lower Miocene (Aquitanian) to the lower Upper Miocene (mid-Tortonian). It has been classified under at least eight other genera to date, along with numerous taxonomic synonyms largely stemming from the lack of understanding of the heterodonty in extant Glyphis in the original description. Our literature review suggests that known Miocene shark faunas, particularly those in India, are manifested with unreliable taxonomic identifications and outdated classifications that warrant the need for a comprehensive taxonomic review in order to evaluate the evolutionary history and diversity pattern of Miocene shark faunas. The genus Glyphis has a roughly 23-million-year-long history, and its success may be related to the evolution of its low salinity tolerance. While extant Glyphis spp. are considered to be particularly vulnerable to habitat degradation and overfishing, the fossil record of G. pagoda provides renewed perspective on the natural history of the genus that can be taken into further consideration for conservation biology of the extant forms.
Effective Temperatures for Young Stars in Binaries
NASA Astrophysics Data System (ADS)
Muzzio, Ryan; Avilez, Ian; Prato, Lisa A.; Biddle, Lauren I.; Allen, Thomas; Wright-Garba, Nuria Meilani Laure; Wittal, Matthew
2017-01-01
We have observed about 100 multi-star systems, within the star forming regions Taurus and Ophiuchus, to investigate the individual stellar and circumstellar properties of both components in young T Tauri binaries. Near-infrared spectra were collected using the Keck II telescope’s NIRSPEC spectrograph and imaging data were taken with Keck II’s NIRC2 camera, both behind adaptive optics. Some properties are straightforward to measure; however, determining effective temperature is challenging as the standard method of estimating spectral type and relating spectral type to effective temperature can be subjective and unreliable. We explicitly looked for a relationship between effective temperatures empirically determined in Mann et al. (2015) and equivalent width ratios of H-band Fe and OH lines for main sequence spectral type templates common to both our infrared observations and to the sample of Mann et al. We find a fit for a wide range of temperatures and are currently testing the validity of using this method as a way to determine effective temperature robustly. Support for this research was provided by an REU supplement to NSF award AST-1313399.
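A schematic version of the described calibration, with placeholder equivalent-width ratios and template temperatures (not the Mann et al. values), fits a low-order polynomial and applies it to a target:

```python
# Hypothetical calibration: effective temperature as a function of an H-band
# Fe/OH equivalent-width ratio, fit on template stars and applied to a target.
import numpy as np

ew_ratio = np.array([0.4, 0.7, 1.0, 1.4, 1.9, 2.5])          # EW(Fe)/EW(OH), invented
teff     = np.array([3200, 3400, 3600, 3850, 4100, 4400.0])  # template Teff (K), invented

coeffs = np.polyfit(ew_ratio, teff, deg=2)   # low-order polynomial calibration
calibration = np.poly1d(coeffs)
print(int(calibration(1.2)))                 # estimated Teff for a target's measured ratio
```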
Karim, A.K.M. Rezaul; Proulx, Michael J.; Likova, Lora T.
2016-01-01
Reviewing the relevant literature in visual psychophysics and visual neuroscience we propose a three-stage model of directionality bias in visuospatial functioning. We call this model the ‘Perception-Action-Laterality’ (PAL) hypothesis. We analyzed the research findings for a wide range of visuospatial tasks, showing that there are two major directionality trends: clockwise versus anticlockwise. It appears these preferences are combinatorial, such that a majority of people fall in the first category demonstrating a preference for stimuli/objects arranged from left-to-right rather than from right-to-left, while people in the second category show an opposite trend. These perceptual biases can guide sensorimotor integration and action, creating two corresponding turner groups in the population. In support of PAL, we propose another model explaining the origins of the biases: how the neurogenetic factors and the cultural factors interact in a biased competition framework to determine the direction and extent of biases. This dynamic model can explain not only the two major categories of biases, but also the unbiased, unreliably biased or mildly biased cases in visuospatial functioning. PMID:27350096
Pretty, Iain A; Maupomé, Gerardo
2004-04-01
Dentists are involved in diagnosing disease in every aspect of their clinical practice. A range of tests, systems, guides and equipment--which can be generally referred to as diagnostic procedures--are available to aid in diagnostic decision making. In this era of evidence-based dentistry, and given the increasing demand for diagnostic accuracy and properly targeted health care, it is important to assess the value of such diagnostic procedures. Doing so allows dentists to weight appropriately the information these procedures supply, to purchase new equipment if it proves more reliable than existing equipment or even to discard a commonly used procedure if it is shown to be unreliable. This article, the first in a 6-part series, defines several concepts used to express the usefulness of diagnostic procedures, including reliability and validity, and describes some of their operating characteristics (statistical measures of performance), in particular, specificity and sensitivity. Subsequent articles in the series will discuss the value of diagnostic procedures used in daily dental practice and will compare today's most innovative procedures with established methods.
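The operating characteristics named in the article reduce to simple ratios from a 2×2 table; a worked example with made-up counts:

```python
# Sensitivity and specificity from a hypothetical 2x2 table comparing a
# diagnostic procedure against a gold standard.
true_pos, false_neg = 45, 5     # diseased teeth: detected vs missed
false_pos, true_neg = 10, 140   # sound teeth: false alarms vs correctly cleared

sensitivity = true_pos / (true_pos + false_neg)   # proportion of disease detected
specificity = true_neg / (true_neg + false_pos)   # proportion of sound teeth cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```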
From Stochastic Foam to Designed Structure: Balancing Cost and Performance of Cellular Metals
Lehmhus, Dirk; Vesenjak, Matej
2017-01-01
Over the past two decades, a large number of metallic foams have been developed. In recent years research on this multi-functional material class has further intensified. However, despite their unique properties only a limited number of large-scale applications have emerged. One important reason for this sluggish uptake is their high cost. Many cellular metals require expensive raw materials, complex manufacturing procedures, or a combination thereof. Some attempts have been made to decrease costs by introducing novel foams based on cheaper components and new manufacturing procedures. However, this has often yielded materials with unreliable properties that inhibit utilization of their full potential. The resulting balance between cost and performance of cellular metals is probed in this editorial, which attempts to consider cost not in absolute figures, but in relation to performance. To approach such a distinction, an alternative classification of cellular metals is suggested which centers on structural aspects and the effort of realizing them. The range thus covered extends from fully stochastic foams to cellular structures designed-to-purpose. PMID:28786935
Yeast identification: reassessment of assimilation tests as sole universal identifiers.
Spencer, J; Rawling, S; Stratford, M; Steels, H; Novodvorska, M; Archer, D B; Chandra, S
2011-11-01
To assess whether assimilation tests in isolation remain a valid method of identification of yeasts, when applied to a wide range of environmental and spoilage isolates. Seventy-one yeast strains were isolated from a soft drinks factory. These were identified using assimilation tests and by D1/D2 rDNA sequencing. When compared to sequencing, assimilation test identifications (MicroLog™) were 18·3% correct, a further 14·1% correct within the genus and 67·6% were incorrectly identified. The majority of the latter could be attributed to the rise in newly reported yeast species. Assimilation tests alone are unreliable as a universal means of yeast identification, because of numerous new species, variability of strains and increasing coincidence of assimilation profiles. Assimilation tests still have a useful role in the identification of common species, such as the majority of clinical isolates. It is probable, based on these results, that many yeast identifications reported in older literature are incorrect. This emphasizes the crucial need for accurate identification in present and future publications. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
NASA Astrophysics Data System (ADS)
Guillong, M.; Schmitt, A. K.; Bachmann, O.
2015-04-01
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) of eight zircon reference materials and synthetic zircon-hafnon end-members indicate that corrections for abundance sensitivity and molecular zirconium sesquioxide ions (Zr2O3+) are critical for reliable determination of 230Th abundances in zircon. Other polyatomic interferences in the mass range 223-233 amu are insignificant. When corrected for abundance sensitivity and interferences, activity ratios of (230Th)/(238U) for the zircon reference materials we used average 1.001 ± 0.010 (1σ error; mean square of weighted deviates MSWD = 1.45; n = 8). This includes the 91500 and Plešovice zircons, which were deemed unsuitable for calibration of (230Th)/(238U) by Ito (2014). Uranium series zircon ages generated by LA-ICP-MS without mitigating (e.g., by high mass resolution) or correcting for abundance sensitivity and molecular interferences on 230Th such as those presented by Ito (2014) are potentially unreliable.
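The quoted summary statistics (an error-weighted mean activity ratio and its MSWD) follow from standard formulas; the sketch below applies them to eight invented ratios with 1σ uncertainties of 0.010.

```python
# Error-weighted mean and MSWD of (230Th)/(238U) activity ratios; values invented.
import numpy as np

ratios = np.array([0.995, 1.010, 0.998, 1.004, 0.990, 1.012, 1.000, 1.006])
sigma  = np.full(ratios.size, 0.010)            # 1-sigma uncertainties

w = 1.0 / sigma ** 2
wmean = np.sum(w * ratios) / np.sum(w)
mswd = np.sum(w * (ratios - wmean) ** 2) / (ratios.size - 1)
print(round(wmean, 3), round(mswd, 2))
```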
Albin, Thomas J; Vink, Peter
2014-11-01
Designers and ergonomists may occasionally be limited to using tables of percentiles of anthropometric data to model users. Design models that add or subtract percentiles produce unreliable estimates of the proportion of users accommodated, in part because they assume a perfect correlation between variables. Percentile data do not allow the use of more reliable modeling methods such as Principal Component Analysis. A better method is needed. A new method for modeling with limited data is described. First, using a measure of central tendency (median or mean) of the range of possible correlation values to estimate the combined variance is shown to reduce error compared to combining percentiles. Second, use of the Chebyshev inequality allows the designer to estimate the percentage of users accommodated more reliably than combining percentiles does when the distributions of the underlying anthropometric data are unknown. This paper describes a modeling method that is more accurate than combining percentiles when only limited data are available. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
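The two ideas can be sketched with hypothetical numbers: combine two body dimensions using the mid-range of plausible correlations rather than a perfect correlation, then bound the accommodated fraction with the Chebyshev inequality when the distribution is unknown.

```python
# Illustrative numbers only; the means, SDs, and correlation are placeholders.
import math

mean_a, sd_a = 600.0, 30.0     # e.g., buttock-knee length (mm), hypothetical
mean_b, sd_b = 250.0, 15.0     # e.g., a second dimension added to it (mm), hypothetical
rho = 0.5                      # midpoint of a plausible correlation range

mean_sum = mean_a + mean_b
sd_sum = math.sqrt(sd_a ** 2 + sd_b ** 2 + 2 * rho * sd_a * sd_b)

# Chebyshev: at least 1 - 1/k^2 of any distribution lies within k SDs of the mean.
k = 2.0
lower, upper = mean_sum - k * sd_sum, mean_sum + k * sd_sum
print(f"combined SD = {sd_sum:.1f} mm; at least {1 - 1 / k**2:.0%} of users fall in "
      f"[{lower:.0f}, {upper:.0f}] mm, whatever the distribution")
```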
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Yanfeng; Hui, Fei; Shi, Yuanyuan
The conductive atomic force microscope (CAFM) has become an essential tool for the nanoscale electronic characterization of many materials and devices. When studying photoactive samples, the laser used by the CAFM to detect the deflection of the cantilever can generate photocurrents that perturb the current signals collected, leading to unreliable characterization. In metal-coated semiconductor samples, this problem is further aggravated, and large currents above the nanometer range can be observed even without the application of any bias. Here we present the first characterization of the photocurrents introduced by the laser of the CAFM, and we quantify the amount of light arriving at the surface of the sample. The mechanisms for current collection when placing the CAFM tip on metal-coated photoactive samples are also analyzed in depth. Finally, we successfully avoided the laser-induced perturbations using a two-pass technique: the first scan collects the topography (laser ON) and the second collects the current (laser OFF). We also demonstrate that CAFMs without a laser (using a tuning fork for detecting the deflection of the tip) do not have this problem.
Conserved queen pheromones in bumblebees: a reply to Amsalem et al.
Holman, Luke; van Zweden, Jelle S; Oliveira, Ricardo C; van Oystaeyen, Annette; Wenseleers, Tom
2017-01-01
In a recent study, Amsalem, Orlova & Grozinger (2015) performed experiments with Bombus impatiens bumblebees to test the hypothesis that saturated cuticular hydrocarbons are evolutionarily conserved signals used to regulate reproductive division of labor in many Hymenopteran social insects. They concluded that the cuticular hydrocarbon pentacosane (C25), previously identified as a queen pheromone in a congeneric bumblebee, does not affect worker reproduction in B. impatiens. Here we discuss some shortcomings of Amsalem et al.'s study that make its conclusions unreliable. In particular, several confounding effects may have affected the results of both experimental manipulations in the study. Additionally, the study's low sample sizes (mean n per treatment = 13.6, range: 4-23) give it low power, not 96-99% power as claimed, such that its conclusions may be false negatives. Inappropriate statistical tests were also used, and our reanalysis found that C25 substantially reduced and delayed worker egg laying in B. impatiens. We review the evidence that cuticular hydrocarbons act as queen pheromones, and offer some recommendations for future queen pheromone experiments.
The Magnetic Origins of Solar Activity
NASA Technical Reports Server (NTRS)
Antiochos, S. K.
2012-01-01
The defining physical property of the Sun's corona is that the magnetic field dominates the plasma. This property is the genesis for all solar activity ranging from quasi-steady coronal loops to the giant magnetic explosions observed as coronal mass ejections/eruptive flares. The coronal magnetic field is also the fundamental driver of all space weather; consequently, understanding the structure and dynamics of the field, especially its free energy, has long been a central objective in Heliophysics. The main obstacle to achieving this understanding has been the lack of accurate direct measurements of the coronal field. Most attempts to determine the magnetic free energy have relied on extrapolation of photospheric measurements, a notoriously unreliable procedure. In this presentation I will discuss what measurements of the coronal field would be most effective for understanding solar activity. Not surprisingly, the key process for driving solar activity is magnetic reconnection. I will discuss, therefore, how next-generation measurements of the coronal field will allow us to understand not only the origins of space weather, but also one of the most important fundamental processes in cosmic and laboratory plasmas.
von Rohden, Christoph; Kreuzer, Andreas; Chen, Zongyu; Aeschbach-Hertig, Werner
2010-09-01
We employed environmental tracers (3H-3He, SF6) in a study investigating the groundwater recharge in the North China Plain (NCP), a sedimentary aquifer system consisting of fluvial and alluvial river deposits near the city of Shijiazhuang. The 3H-3He dating method revealed reasonable results for the young groundwater with ages covering the range of recent to ~40 a. SF6 samples were taken in parallel for independent dating and to compare the applicability of both methods. However, the SF6 results are influenced and, in part, dominated by a systematic non-atmospheric component, revealing that the dating with SF6 is unreliable in this region. A correlation of non-atmospheric SF6 and 3H-3He ages suggests a continuous accumulation of natural SF6 in the groundwater of the NCP aquifers. Although terrigenic SF6 has previously been associated with crystalline or igneous rocks, our results indicate that it can also be accumulated in sandy aquifers on the timescale relevant for SF6 dating.
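The tritium-helium apparent age used in such studies follows the standard relation t = (T_half / ln 2) * ln(1 + [3He_tritiogenic] / [3H]); the sketch below applies it to illustrative tracer concentrations in tritium units.

```python
# Standard 3H-3He apparent-age relation; the input concentrations are illustrative.
import math

T_HALF_TRITIUM_YEARS = 12.32  # tritium half-life

def tritium_helium3_age(he3_tritiogenic_tu: float, tritium_tu: float) -> float:
    """Apparent groundwater age (years) from tritiogenic 3He and tritium (both in TU)."""
    return (T_HALF_TRITIUM_YEARS / math.log(2)) * math.log(1 + he3_tritiogenic_tu / tritium_tu)

print(round(tritium_helium3_age(he3_tritiogenic_tu=30.0, tritium_tu=10.0), 1))  # about 24.6 a
```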
Thurman, E.M.; Zimmerman, L.R.; Aga, D.S.; Gilliom, R.J.
2001-01-01
Gas chromatography with isotope dilution mass spectrometry (GC-MS) and enzyme-linked immunosorbent assay (ELISA) were used in regional National Water Quality Assessment studies of the herbicides 2,4-D and dicamba in river water across the United States. The GC-MS method involved solid-phase extraction, derivatization with deuterated 2,4-D, and analysis by selected ion monitoring. The ELISA method was applied after preconcentration with solid-phase extraction. The ELISA method proved unreliable because of interference from humic substances that were also isolated by solid-phase extraction. Therefore, GC-MS was used to analyze 80 river-water samples from 14 basins. The frequency of detection of dicamba (28%) was higher than that of 2,4-D (16%). Concentrations were also higher for dicamba than for 2,4-D, ranging from less than the detection limit (<0.05 µg/L) to 3.77 µg/L, despite annual use of 2,4-D being roughly five times that of dicamba. These results suggest that 2,4-D degrades more rapidly in the environment than dicamba.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Generality and efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Lessons Learned from Monitoring Drought in Data Sparse Regions in the United States
NASA Astrophysics Data System (ADS)
Edwards, L. M.; Redmond, K. T.
2011-12-01
Drought monitoring in the geographic domain represented by the Western Regional Climate Center (WRCC) in the United States can serve as an example of many of the challenges that face a global drought early warning system (GDEWS). The WRCC area includes numerous climate regions, such as the Pacific coast of the continental U.S., the lowest elevation in North America, arid and alpine environments, temperate rainforest, Alaska, Hawaii, and the Pacific territories of the U.S. in the tropics. This area is quite diverse in its climatological regimes, from rainforest to high desert to tundra, and covers a large area of land and water. Drought in the WRCC domain affects a wide range of constituents and interests, and the complex interplay between "human-caused" and natural drought cannot be overstated. Data to support a GDEWS, as in the WRCC region, are often nonexistent or unreliable in remote locations. Even in the continental U.S., data are not as dense as the topography and climate zones demand for accurate drought assessment. Challenges and efforts to address drought monitoring at the WRCC will be presented.
Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting
2018-03-18
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is performed via a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated from the spatial sparsity of the targets using a compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots, and more accurate detection of weak targets.
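As a rough illustration of the sparse-recovery step only (not the authors' CSS focusing pipeline), the sketch below recovers two hypothetical narrowband source directions from a single snapshot of a uniform line array using a grid steering matrix and orthogonal matching pursuit; the array geometry, noise level, source angles, and amplitudes are all assumptions.

```python
# Toy grid-based sparse DOA recovery for one narrowband snapshot of a
# uniform line array (half-wavelength spacing). Not the CSS/focusing method
# from the paper; it only illustrates the sparse-reconstruction idea.
import numpy as np

M, d_over_lambda = 16, 0.5                         # sensors, spacing in wavelengths
grid = np.deg2rad(np.arange(-90, 90.5, 0.5))       # DOA search grid
A = np.exp(-2j * np.pi * d_over_lambda *
           np.outer(np.arange(M), np.sin(grid)))   # steering matrix (M x G)

true_doas = np.deg2rad([-20.0, 5.0])               # hypothetical source directions
a_true = np.exp(-2j * np.pi * d_over_lambda *
                np.outer(np.arange(M), np.sin(true_doas)))
rng = np.random.default_rng(0)
y = a_true @ np.array([1.0, 0.7]) + 0.05 * (rng.standard_normal(M) +
                                            1j * rng.standard_normal(M))

def omp(A, y, k):
    """Orthogonal matching pursuit: select k grid atoms that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        A_s = A[:, support]
        coeffs, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        residual = y - A_s @ coeffs
    return support

support = omp(A, y, k=2)
print(np.sort(np.rad2deg(grid[support])).round(1))  # should be close to [-20, 5]
```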
Field measurement of alkalinity and pH
Barnes, Ivan
1964-01-01
The behavior of electrometric pH equipment under field conditions departs from the behavior predicted from Nernst's law. The response is nevertheless a linear function of pH, and hence measured pH values may be corrected to true pH if the instrument is calibrated with two reference solutions for each measurement. Alkalinity titrations may also be made in terms of true pH. Standard methods, such as colorimetric titrations, were rejected as unreliable or too cumbersome for rapid field use. The true pH of the end point of the alkalinity titration as a function of temperature, ionic strength, and total alkalinity has been calculated. Total alkalinity in potable waters is the most important factor influencing the end point pH, which varies from 5.38 (0 °C, 5 ppm (parts per million) HCO₃⁻) to 4.32 (300 ppm HCO₃⁻, 35 °C) for the ranges of variables considered. With proper precautions, the pH may be determined to ±0.02 pH and the alkalinity to ±0.6 ppm HCO₃⁻ for many naturally occurring bodies of fresh water.
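The two-reference-solution correction amounts to a two-point linear calibration of the electrode response; a minimal sketch is below, with hypothetical buffer pH values and millivolt readings.

```python
# Two-point calibration: the electrode reading (mV) is assumed linear in pH,
# so two reference buffers fix the line and a field reading maps to "true" pH.
# All numeric values below are hypothetical.
def two_point_ph(mv_sample, mv_buf1, ph_buf1, mv_buf2, ph_buf2):
    """Return the sample pH interpolated from two buffer readings."""
    slope = (ph_buf2 - ph_buf1) / (mv_buf2 - mv_buf1)   # pH per mV
    return ph_buf1 + slope * (mv_sample - mv_buf1)

# Example: buffers at pH 4.01 and 7.00 read 165 mV and -12 mV; sample reads 60 mV
print(round(two_point_ph(mv_sample=60.0, mv_buf1=165.0, ph_buf1=4.01,
                         mv_buf2=-12.0, ph_buf2=7.00), 2))   # ~5.78
```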
Analysis of human scream and its impact on text-independent speaker verification.
Hansen, John H L; Nandwana, Mahesh Kumar; Shokouhi, Navid
2017-04-01
Screams are defined as sustained, high-energy vocalizations that lack phonological structure; this lack of phonological structure is what distinguishes a scream from other forms of loud vocalization, such as a "yell." This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i.e., speech recognition, speaker identification, diarization, etc.). However, previous research in the general area of speaker variability has concentrated on human speech production, whereas less is known about non-speech vocalizations. The UT-NonSpeech corpus is developed here to investigate speaker verification from scream samples. This study considers a detailed analysis in terms of fundamental frequency, spectral peak shift, frame energy distribution, and spectral tilt. It is shown that traditional speaker recognition based on the Gaussian mixture model-universal background model (GMM-UBM) framework is unreliable when evaluated with screams.
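For context, the GMM-UBM framework scores a test utterance by the log-likelihood ratio between a target-speaker model and a universal background model. The toy sketch below illustrates only that scoring idea, with random vectors standing in for acoustic features; it omits MAP adaptation and all front-end processing used in real systems.

```python
# Toy GMM-UBM style verification score. Random vectors stand in for acoustic
# features (e.g., MFCC frames); real systems MAP-adapt the UBM instead of
# training a separate target model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(size=(2000, 20))        # pooled background features
enroll = rng.normal(loc=0.3, size=(300, 20))    # target-speaker enrollment features
test = rng.normal(loc=0.3, size=(100, 20))      # test-utterance features

ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)
target = GaussianMixture(n_components=8, covariance_type="diag",
                         random_state=0).fit(enroll)

# Verification score: average per-frame log-likelihood ratio
score = target.score(test) - ubm.score(test)
print(f"LLR score: {score:.3f}  (accept if above a tuned threshold)")
```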
Quality of Computationally Inferred Gene Ontology Annotations
Škunca, Nives; Altenhoff, Adrian; Dessimoz, Christophe
2012-01-01
Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon—an important outcome given that >98% of all annotations are inferred without direct curation. PMID:22693439
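As a toy illustration of the release-comparison idea, an electronic annotation in an earlier release can be counted as confirmed if a later release carries an experimentally supported annotation for the same protein and GO term, and rejected if it was removed. The data and the exact confirmation criterion below are simplified assumptions, not the authors' pipeline.

```python
# Toy reliability estimate from successive releases: confirmed vs. rejected
# electronic (protein, GO term) annotations. All data are made up.
electronic_t1 = {("P1", "GO:0001"), ("P2", "GO:0002"), ("P3", "GO:0003")}
experimental_t2 = {("P1", "GO:0001"), ("P2", "GO:0002")}   # later experimental support
removed_by_t2 = {("P3", "GO:0003")}                        # dropped in the later release

confirmed = electronic_t1 & experimental_t2
rejected = electronic_t1 & removed_by_t2
reliability = len(confirmed) / (len(confirmed) + len(rejected))
print(f"confirmed: {len(confirmed)}, rejected: {len(rejected)}, reliability: {reliability:.2f}")
```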
Brown, Alexandra E; Okayasu, Hiromasa; Nzioki, Michael M; Wadood, Mufti Z; Chabot-Couture, Guillaume; Quddus, Arshad; Walker, George; Sutter, Roland W
2014-11-01
Monitoring the quality of supplementary immunization activities (SIAs) is a key tool for polio eradication. Regular monitoring data, however, are often unreliable, showing high coverage levels in virtually all areas, including those with ongoing virus circulation. To address this challenge, lot quality assurance sampling (LQAS) was introduced in 2009 as an additional tool to monitor SIA quality. Now used in 8 countries, LQAS provides a number of programmatic benefits: identifying areas of weak coverage quality with statistical reliability, differentiating areas of varying coverage with greater precision, and allowing for trend analysis of campaign quality. LQAS also accommodates changes to survey format, interpretation thresholds, evaluations of sample size, and data collection through mobile phones to improve timeliness of reporting and allow for visualization of campaign quality. LQAS becomes increasingly important to address remaining gaps in SIA quality and help focus resources on high-risk areas to prevent the continued transmission of wild poliovirus. © Crown copyright 2014.
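For illustration, an LQAS rule classifies a lot from a small sample and a decision value; the sketch below uses the commonly cited 60-child sample with a decision value of 3 for a roughly 90% coverage threshold, but these parameters are assumptions for illustration and are not taken from the text.

```python
# Illustrative LQAS classification: sample n children per lot and pass the lot
# if the number found unvaccinated does not exceed the decision value d.
# The n = 60 / d = 3 pairing is assumed here, not quoted from the article.
from math import comb

def lot_passes(n_unvaccinated, d=3):
    return n_unvaccinated <= d

def prob_pass(true_coverage, n=60, d=3):
    """Probability that a lot with the given true coverage passes the rule."""
    p_miss = 1.0 - true_coverage
    return sum(comb(n, k) * p_miss**k * (1 - p_miss)**(n - k) for k in range(d + 1))

print(lot_passes(2), lot_passes(5))          # True False
for cov in (0.95, 0.90, 0.80):
    print(f"coverage {cov:.0%}: P(pass) = {prob_pass(cov):.2f}")
```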
Nhan, Tu-Xuan; Parienti, Jean-Jacques; Badiou, Guillaume; Leclercq, Roland; Cattoir, Vincent
2012-11-01
The purpose of this retrospective study was to evaluate the pathogenic role of Corynebacterium species in lower respiratory tract infections, as well as their routine laboratory investigation. From April 2007 to August 2009, 27 clinical isolates were significantly recovered from respiratory specimens of 27 different patients clinically suspected of having lower respiratory tract infections. The average age of the patients was 65 years, and 22 (81%) presented with at least one predisposing condition. Of the 27 patients, 15 (56%) were classified as infected according to Centers for Disease Control and Prevention/National Healthcare Safety Network criteria, with 93% of infections being hospital acquired. All isolates were accurately identified to the species level using molecular methods (i.e., 17 Corynebacterium pseudodiphtheriticum, 7 Corynebacterium striatum, and 3 Corynebacterium accolens), whereas phenotypic methods were frequently unreliable for identifying C. striatum and C. accolens strains. All tested isolates were susceptible to amoxicillin, imipenem, vancomycin, linezolid, and tigecycline, whereas most were resistant to erythromycin. Copyright © 2012 Elsevier Inc. All rights reserved.
Yes, but how do we know it's true? Knowledge claims in massage and aromatherapy.
Vickers, A
1997-06-01
While there is evidence that both massage and aromatherapy can be of benefit, practitioners make a great number of claims about the clinical effects of their treatments. These are presented in the literature as simple statements of fact, often with no attempt to explain the basis upon which the claim is made. Though authors do occasionally make reference to the scientific literature, they often do so inadequately, and in many cases the cited papers do not support the claims being made. Some authors have been explicit in giving personal experience as the source of their knowledge. However, there are several reasons why it can be difficult to make general statements based on individual experience. The many inconsistencies found in the massage and aromatherapy literature--such as different properties being attributed to the same oil--provide further evidence that the knowledge base of these therapies is unreliable. Practitioners need to develop a critical discourse by which they can evaluate knowledge claims.
The Role of Intuition in Risk/Benefit Decision-Making in Human Subjects Research
Resnik, David B.
2016-01-01
One of the key principles of ethical research involving human subjects is that the risks of research to subjects should be acceptable in relation to expected benefits. Institutional review board (IRB) members often rely on intuition to make risk/benefit decisions concerning proposed human studies. Some have objected to using intuition to make these decisions because intuition is unreliable and biased and lacks transparency. In this paper, I examine the role of intuition in IRB risk/benefit decision-making and argue that there are practical and philosophical limits to our ability to reduce our reliance on intuition in this process. The fact that IRB risk/benefit decision-making involves intuition need not imply that it is hopelessly subjective or biased, however, since there are strategies that IRBs can employ to improve their decisions, such as using empirical data to estimate the probability of potential harms and benefits, developing classification systems to guide the evaluation of harms and benefits, and engaging in moral reasoning concerning the acceptability of risks. PMID:27294429
Health Worker mHealth Utilization: A Systematic Review
White, Alice; Thomas, Deborah S.K.; Ezeanochie, Nnamdi; Bull, Sheana
2016-01-01
This systematic review describes mHealth interventions directed at healthcare workers in low resource settings from the PubMed database from March, 2009 to May, 2015. Thirty-one articles were selected for final review. Four categories emerged from the reviewed articles: data collection during patient visits; communication between health workers and patients; communication between health workers; and public health surveillance. Most studies used a combination of quantitative and qualitative methods to assess acceptability of use, barriers to use, changes in healthcare delivery, and improved health outcomes. Few papers included theory explicitly to guide development and evaluation of their mHealth programs. Overall, evidence indicated that mobile technology tools, such as smartphones and tablets, substantially benefit healthcare workers, their patients, and health care delivery. Limitations to mHealth tools included insufficient program use and sustainability, unreliable Internet and electricity, and security issues. Despite these limitations, this systematic review demonstrates the utility of using mHealth in low-resource settings and the potential for widespread health system improvements using technology. PMID:26955009
Pricing health care services: applications to the health maintenance organization.
Sweeney, R E; Franklin, S P
1986-01-01
This article illustrates how management in one type of service industry, the health maintenance organization (HMO), has attempted to formalize pricing. This effort is complicated both by the intangibility of the service delivered and by the relatively greater influence in service industries of non-cost price factors such as accessibility, psychology, and delays. The presentation describes a simple computerized approach that allows the marketing manager to formally estimate the effect of incremental changes in rates on the firm's projected patterns of enrollment growth and net revenues. The changes in turn reflect underlying variations in the mix of pricing influences, including psychological and other factors. Enrollment projections are crucial to the firm's financial planning and staffing. In the past, most HMO enrollment and revenue projections of this kind were notoriously unreliable. The approach described here makes it possible for HMOs to fine-tune their pricing policies. It also provides a formal and easily understood mechanism by which management can evaluate and reach consensus on alternative scenarios for enrollment growth, staff recruitment, and capacity expansion.
Analysis of an ethanol precipitate from ileal digesta: evaluation of a method to determine mucin.
Miner-Williams, Warren M; Moughan, Paul J; Fuller, Malcolm F
2013-11-06
The precipitation of mucin using high concentrations of ethanol has been used by many researchers, while others have questioned the validity of the technique. In this study, analysis of an ethanol precipitate from the soluble fraction of ileal digesta of pigs was undertaken using molecular weight profiling and polyacrylamide gel electrophoresis. The precipitate contained 201 mg·g⁻¹ protein, 87% of which had a molecular weight >20 kDa. Polyacrylamide gel electrophoresis stained with Coomassie blue and periodic acid/Schiff revealed that most of the glycoprotein had a molecular weight between 37 and 100 kDa. The molecular weight of the glycoprotein in the precipitate was therefore lower than that of intact mucin. These observations indicated that the glycoprotein in the ethanol precipitate was significantly degraded. The large amount of protein and carbohydrate in the supernatant from ethanol precipitation indicated that the precipitation of glycoprotein was incomplete. As a method for determining the concentration of mucin in digesta, ethanol precipitation is unreliable.
A Mechanism for Reliable Mobility Management for Internet of Things Using CoAP
Chun, Seung-Man; Park, Jong-Tae
2017-01-01
Under unreliable constrained wireless networks for Internet of Things (IoT) environments, the loss of the signaling message may frequently occur. Mobile Internet Protocol version 6 (MIPv6) and its variants do not consider this situation. Consequently, as a constrained device moves around different wireless networks, its Internet Protocol (IP) connectivity may be frequently disrupted and power can be drained rapidly. This can result in the loss of important sensing data or a large delay for time-critical IoT services such as healthcare monitoring and disaster management. This paper presents a reliable mobility management mechanism in Internet of Things environments with lossy low-power constrained device and network characteristics. The idea is to use the Internet Engineering Task Force (IETF) Constrained Application Protocol (CoAP) retransmission mechanism to achieve both reliability and simplicity for reliable IoT mobility management. Detailed architecture, algorithms, and message extensions for reliable mobility management are presented. Finally, performance is evaluated using both mathematical analysis and simulation. PMID:28085109
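The reliability in this design rests on CoAP's confirmable-message retransmission, which retries with exponentially increasing timeouts. The sketch below reproduces the default RFC 7252 schedule (ACK_TIMEOUT = 2 s, ACK_RANDOM_FACTOR = 1.5, MAX_RETRANSMIT = 4); it illustrates only that base mechanism, not the paper's mobility-management extensions.

```python
# CoAP confirmable-message retransmission schedule with RFC 7252 defaults.
# The initial timeout is drawn uniformly from
# [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR] and doubles after each attempt.
import random

ACK_TIMEOUT = 2.0          # seconds
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4

def retransmission_schedule(rng=random.Random(0)):
    timeout = rng.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
    schedule = []
    for attempt in range(MAX_RETRANSMIT + 1):   # initial send + 4 retransmissions
        schedule.append((attempt, round(timeout, 2)))
        timeout *= 2
    return schedule

print(retransmission_schedule())
```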
A Bayesian Approach to More Stable Estimates of Group-Level Effects in Contextual Studies.
Zitzmann, Steffen; Lüdtke, Oliver; Robitzsch, Alexander
2015-01-01
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
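To see why the observed group mean can be an unreliable measure of the latent group mean, note that its reliability is roughly τ²/(τ² + σ²/n), which is small when groups are small and the ICC is low. The sketch below is a simple empirical-Bayes shrinkage illustration of that point, not the multilevel latent covariate model or the Bayesian estimator evaluated in the article; all variance components are assumed.

```python
# Illustration of the reliability problem: with few raters per group and a low
# ICC, the observed group mean is a noisy stand-in for the latent group mean.
# Simple shrinkage toward the (known, zero) grand mean reduces the error.
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per_group = 30, 5
tau2, sigma2 = 0.1, 0.9        # between- and within-group variance (ICC = 0.1)

latent = rng.normal(0.0, np.sqrt(tau2), n_groups)
ratings = latent[:, None] + rng.normal(0.0, np.sqrt(sigma2), (n_groups, n_per_group))
observed_means = ratings.mean(axis=1)

reliability = tau2 / (tau2 + sigma2 / n_per_group)   # reliability of observed mean
shrunk_means = reliability * observed_means          # shrink toward the grand mean

print(f"reliability of observed mean: {reliability:.2f}")
print("RMSE observed:", round(float(np.sqrt(np.mean((observed_means - latent) ** 2))), 3))
print("RMSE shrunk:  ", round(float(np.sqrt(np.mean((shrunk_means - latent) ** 2))), 3))
```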
The sensitivity of radiography of the postoperative stomach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ott, D.J.; Munitz, H.A.; Gelfand, D.W.
1982-09-01
The results of radiology and endoscopy were compared in 140 patients who had undergone gastric surgery for ulcer disease. Of 74 patients who were examined with single-contrast radiography, 37 had abnormalities that were demonstrated endoscopically. The radiographic sensitivities in these patients were: gastritis 2/22 (9%); ulcer 3/5 (60%); obstruction 8/8 (100%); and miscellaneous abnormalities 2/2 (100%). The predictive accuracy of a diagnosis of ulcer was 38%. Of the 66 patients who were examined with double-contrast radiography, 33 abnormalities were found with endoscopy. The radiographic sensitivities were: gastritis 3/13 (23%); ulcer 7/10 (70%); obstruction 4/4 (100%); and miscellaneous abnormalities 6/6 (100%). The predictive accuracy of a diagnosis of ulcer was 44%. Radiology appears to be unreliable in diagnosing gastritis and recurrent ulceration in the postoperative stomach. The double-contrast technique does not offer significant improvement over the single-contrast method in evaluating these postoperative problems.
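The quoted figures are simple ratios: sensitivity is true positives over endoscopically confirmed cases, and the predictive accuracy (positive predictive value) is true positives over positive radiographic calls. A quick check is below; the 3/8 split behind the 38% figure is illustrative, since the abstract reports only the final percentage.

```python
# Quick check of the ratios quoted above. The counts for the PPV example are
# hypothetical, chosen only to reproduce the reported percentage.
def sensitivity(true_positives, confirmed_cases):
    return true_positives / confirmed_cases

def positive_predictive_value(true_positives, positive_calls):
    return true_positives / positive_calls

print(f"gastritis (single-contrast): {sensitivity(2, 22):.0%}")                 # 9%
print(f"ulcer (double-contrast):     {sensitivity(7, 10):.0%}")                 # 70%
print(f"PPV example:                 {positive_predictive_value(3, 8):.0%}")    # 38%
```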