Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 1: Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems, those with large hyper-volumes and multi-mode search spaces containing a large number of genes, require a large number of function evaluations for GA convergence, but they always converge.
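To make the idea concrete, the following minimal sketch evolves a population toward the peaks of a generic multi-modal "hills" function. The test function, operators, and parameters are illustrative assumptions, not the model problem or GA settings of the report.

```python
# Minimal single-objective GA sketch (illustrative only; the landscape,
# operators, and parameters are assumptions, not those of the NASA report).
import math
import random

def fitness(x):
    # A simple multi-modal "hills" landscape on [0, 1]^n (hypothetical stand-in).
    return sum(math.sin(5 * math.pi * xi) ** 2 for xi in x)

def evolve(n_genes=4, pop_size=50, generations=200, p_mut=0.1):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                                    # binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_genes)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < p_mut:                # uniform mutation of one gene
                child[random.randrange(n_genes)] = random.random()
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```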
ERIC Educational Resources Information Center
Conn, Katharine
2014-01-01
In the last three decades, there has been a large increase in the number of rigorous experimental and quasi-experimental evaluations of education programs in developing countries. These impact evaluations have taken place all over the globe, including a large number in Sub-Saharan Africa (SSA). The fact that the developing world is socially and…
ERIC Educational Resources Information Center
Landy, David; Silbert, Noah; Goldin, Aleah
2013-01-01
Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2: Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems, multi-mode search spaces with a large number of genes and convoluted Pareto fronts, require a large number of function evaluations for GA convergence, but always converge.
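As an illustration of the Pareto optimality concept used above, the sketch below filters a set of objective vectors down to its non-dominated subset (minimization of all objectives assumed). It is a generic dominance check, not the binning selection algorithm or gene-space transformation of the report.

```python
# Extract the non-dominated (Pareto) subset of a list of objective vectors.
# Assumes all objectives are minimized; generic sketch, not the report's method.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

print(pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)]))  # -> [(1, 5), (2, 2), (3, 1)]
```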
Challenges in evaluating cancer as a clinical outcome in postapproval studies of drug safety
Pinheiro, Simone P.; Rivera, Donna R.; Graham, David J.; Freedman, Andrew N.; Major, Jacqueline M.; Penberthy, Lynne; Levenson, Mark; Bradley, Marie C.; Wong, Hui-Lee; Ouellet-Hellstrom, Rita
2017-01-01
Pharmaceuticals approved in the United States are largely not known human carcinogens. However, cancer signals associated with pharmaceuticals may be hypothesized or arise after product approval. There are many study designs that can be used to evaluate cancer as an outcome in the postapproval setting. Because prospective systematic collection of cancer outcomes from a large number of individuals may be lengthy, expensive, and challenging, leveraging data from large existing databases is an integral approach. Such studies have the capability to evaluate the clinical experience of a large number of individuals, yet there are unique methodological challenges involved in their use to evaluate cancer outcomes. To discuss methodological challenges and potential solutions, the Food and Drug Administration and the National Cancer Institute convened a two-day public meeting in 2014. This commentary summarizes the most salient issues discussed at the meeting. PMID:27663208
ERIC Educational Resources Information Center
Peurach, Donald J.; Lenhoff, Sarah Winchell; Glazer, Joshua L.
2016-01-01
Recognizing school improvement networks as a leading strategy for large-scale high school reform, this analysis examines developmental evaluation as an approach to examining school improvement networks as "learning systems" able to produce, use, and refine practical knowledge in large numbers of schools. Through a case study of one…
Using Large-Scale Databases in Evaluation: Advances, Opportunities, and Challenges
ERIC Educational Resources Information Center
Penuel, William R.; Means, Barbara
2011-01-01
Major advances in the number, capabilities, and quality of state, national, and transnational databases have opened up new opportunities for evaluators. Both large-scale data sets collected for administrative purposes and those collected by other researchers can provide data for a variety of evaluation-related activities. These include (a)…
Landy, David; Silbert, Noah; Goldin, Aleah
2013-07-01
Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions predict a log-to-linear shift: People will either place numbers linearly or will place numbers according to a compressive logarithmic or power-shaped function (Barth & Paladino, ; Siegler & Opfer, ). While about half of people did estimate numbers linearly over this range, nearly all the remaining participants placed 1 million approximately halfway between 1 thousand and 1 billion, but placed numbers linearly across each half, as though they believed that the number words "thousand, million, billion, trillion" constitute a uniformly spaced count list. Participants in this group also tended to be optimistic in evaluations of largely ineffective political strategies, relative to linear number-line placers. The results indicate that the surface structure of number words can heavily influence processes for dealing with numbers in this range, and it can amplify the possibility that analogous surface regularities are partially responsible for parallel phenomena in children. In addition, these results have direct implications for lawmakers and scientists hoping to communicate effectively with the public. Copyright © 2013 Cognitive Science Society, Inc.
Categories of Large Numbers in Line Estimation
ERIC Educational Resources Information Center
Landy, David; Charlesworth, Arthur; Ottmar, Erin
2017-01-01
How do people stretch their understanding of magnitude from the experiential range to the very large quantities and ranges important in science, geopolitics, and mathematics? This paper empirically evaluates how and whether people make use of numerical categories when estimating relative magnitudes of numbers across many orders of magnitude. We…
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
Aegerter, Philippe; Bendersky, Noelle; Tran, Thi-Chien; Ropers, Jacques; Taright, Namik; Chatellier, Gilles
2014-01-01
Recruitment of large samples of patients is crucial for the evidence level and efficacy of clinical trials (CT). Clinical Trial Recruitment Support Systems (CTRSS) used to estimate patient recruitment are generally specific to Hospital Information Systems, and few have been evaluated on a large number of trials. Our aim was to assess, on a large number of CT, the usefulness of commonly available data such as Diagnosis Related Groups (DRG) databases in order to estimate potential recruitment. We used the DRG database of a large French multicenter medical institution (1.2 million inpatient stays and 400 new trials each year). Eligibility criteria of protocols were broken down into atomic entities (diagnosis, procedures, treatments...) then translated into codes and operators recorded in a standardized form. A program parsed the forms and generated requests on the DRG database. A large majority of selection criteria could be coded and final estimates of the number of eligible patients were close to the observed ones (median difference = 25). Such a system could be part of the feasibility evaluation and center selection process before the start of the clinical trial.
Credit Risk Evaluation of Large Power Consumers Considering Power Market Transaction
NASA Astrophysics Data System (ADS)
Fulin, Li; Erfeng, Xu; ke, Sun; Dunnan, Liu; Shuyi, Shen
2018-03-01
Large power users will participate in the power market in various forms after power system reform. Meanwhile, great importance has always been attached to the construction of the credit system in the power industry. Due to the difference between the awareness of performance and the ability to perform, credit risk of power customers will emerge accordingly. Therefore, it is critical to evaluate the credit risk of large power customers in the new power market situation. Firstly, this paper constructs an index system for the credit risk of large power customers, and establishes an evaluation model based on interval numbers and the AHP-entropy weight method.
Test techniques for evaluating flight displays
NASA Technical Reports Server (NTRS)
Haworth, Loran A.; Newman, Richard L.
1993-01-01
The rapid development of graphics technology allows for greater flexibility in aircraft displays, but display evaluation techniques have not kept pace. Historically, display evaluation has been based on subjective opinion and not on the actual aircraft/pilot performance. Existing electronic display specifications and evaluation techniques are reviewed. A display rating technique analogous to handling qualities ratings was developed and is recommended for future evaluations. The choice of evaluation pilots is also discussed and the use of a limited number of trained evaluators is recommended over the use of a large number of operational pilots.
77 FR 71574 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-03
... Test. OMB Control Number: None. Form Number(s): The automated survey instrument has no form number. Type... have been developed and are now slated for a large-scale field test to evaluate the questions and the... reference period and timing of data collection. Qualitative research has...
Customer Overview of Pulsed Laser Heating for Evaluation of Gun Bore Materials
2015-05-01
Technical Report ARWSB-TR-15003: Customer Overview of Pulsed Laser Heating for Evaluation of Gun Bore Materials. Mark E. Todaro. ...thermomechanical effects that occur at the bore of large and medium caliber guns during firing. Hence, PLH has been used not only to gain insight into the erosion...
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
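For context, OSEM is a subset-accelerated variant of the standard MLEM update, which in generic notation (not taken from the article) reads

$$\lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij}\,\frac{y_i}{\sum_{j'} a_{ij'}\,\lambda_{j'}^{(k)}},$$

where $y_i$ are the measured counts in detector bin $i$, $\lambda_j$ the voxel intensities, and $a_{ij}$ the system matrix elements. Storing and sweeping $a_{ij}$ over roughly 10^6 channels in every iteration is what makes OSEM-type updates costly for the Compton camera case discussed above.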
A Strategy for Detection of Inconsistency in Evaluation of Essay Type Answers
ERIC Educational Resources Information Center
Shukla, Archana; Chaudhary, Banshi D.
2014-01-01
The quality of evaluation of essay type answer books involving multiple evaluators for courses with large number of enrollments is likely to be affected due to heterogeneity in experience, expertise and maturity of evaluators. In this paper, we present a strategy to detect anomalies in evaluation of essay type answers by multiple evaluators based…
A numerical algorithm with preference statements to evaluate the performance of scientists.
Ricker, Martin
Academic evaluation committees have been increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, the market prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, which uses relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data from 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignment and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
Neves, Justin; Lavis, John N; Ranson, M Kent
2012-08-02
Large multi-day conferences have often been criticized as ineffective ways to improve social outcomes and to influence policy or practice. Unfortunately, many conference evaluations have also been inadequate in determining the impact of a conference on its associated social sector, with little evidence gathered or analyzed to substantiate or refute these criticisms. The aim of this scoping review is to investigate and report stakeholders' objectives for planning or participating in large multi-day conferences and how these objectives are being evaluated. We conducted a scoping review supplemented by a small number of key informant interviews. Eight bibliographic databases were systematically searched to identify papers describing conference objectives and/or evaluations. We developed a conference evaluation framework based on theoretical models and empirical findings, which structured the descriptive synthesis of the data. We identified 3,073 potential papers for review, of which 44 were included in this study. Our evaluation framework connects five key elements in planning a conference and its evaluation (number in brackets refers to number of themes identified): conference objectives (8), purpose of evaluation (7), evaluation methods (5), indicators of success (9) and theories/models (8). Further analysis of indicators of success identified three categories of indicators with differing scopes (i.e. immediate, prospective or follow-up) as well as empirical links between the purpose of evaluations and these indicators. Conference objectives and evaluations were largely correlated with the type of conference (i.e. academic, political/governmental or business) but diverse overall. While much can be done to improve the quality and usefulness of conference evaluations, there are innovative assessments that are currently being utilized by some conferences and warrant further investigation. This review provides conference evaluators and organizers a simple resource to improve their own assessments by highlighting and categorizing potential objectives and evaluation strategies.
An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers
NASA Technical Reports Server (NTRS)
Wallace, James M.; Ong, L.; Balint, J.-L.
1993-01-01
The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow with a sufficiently high Reynolds number to provide the necessary separation of scales and thus unambiguously allow for the possibility of local isotropy at large wave-numbers. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation is presented.
Effectiveness of crack sealing on pavement serviceability and life.
DOT National Transportation Integrated Search
2011-06-01
This report presents the details of a study to evaluate the effectiveness of the Ohio Department of Transportation's prevailing crack sealing program. Evaluation was performed through field monitoring of a large number of crack sealed and control sections. Fi...
Leveraging Rigorous Local Evaluations to Understand Contradictory Findings
ERIC Educational Resources Information Center
Boulay, Beth; Martin, Carlos; Zief, Susan; Granger, Robert
2013-01-01
Contradictory findings from "well-implemented" rigorous evaluations invite researchers to identify the differences that might explain the contradictions, helping to generate testable hypotheses for new research. This panel will examine efforts to ensure that the large number of local evaluations being conducted as part of four…
HierarchicalTopics: visually exploring large text collections using topic hierarchies.
Dou, Wenwen; Yu, Li; Wang, Xiaoyu; Ma, Zhiqiang; Ribarsky, William
2013-12-01
Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate at which more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to address the challenge of analyzing large text corpora. As the text corpora and vocabulary grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most current topic-based visualizations to represent a large number of topics without being cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system, HierarchicalTopics (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as the temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aids expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of the hierarchical topic structure. The study results reveal that HT leads to faster identification of a large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.
Sustainable Assessment and Evaluation Strategies for Open and Distance Learning
ERIC Educational Resources Information Center
Okonkwo, Charity Akuadi
2010-01-01
This paper first presents an overview of the concepts of assessment and evaluation in the Open and Distance Learning (ODL) environment. The large numbers of students and numerous courses make assessment and evaluation very difficult and an administrative nightmare at Distance Learning (DL) institutions. These challenges informed exploring issues relating…
Predicting developmental neurotoxicity in rodents from larval zebrafish - - and vice versa
The complexity of standard mammalian developmental neurotoxicity tests limits evaluation of large numbers of chemicals. Less complex, more rapid assays using larval zebrafish are gaining popularity for evaluating the developmental neurotoxicity of chemicals; there remains, howeve...
Gas-Centered Swirl Coaxial Liquid Injector Evaluations
NASA Technical Reports Server (NTRS)
Cohn, A. K.; Strakey, P. A.; Talley, D. G.
2005-01-01
Development of Liquid Rocket Engines is expensive. Extensive testing at large scales is usually required. In order to verify engine lifetime, a large number of tests is required. Limited resources are available for development. Sub-scale cold-flow and hot-fire testing is extremely cost effective. It could be a necessary (but not sufficient) condition for long engine lifetime. It reduces overall costs and risk of large scale testing. Goal: Determine knowledge that can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors. Determine relationships between cold-flow and hot-fire data.
LeSage, G D; Glaser, S S; Marucci, L; Benedetti, A; Phinizy, J L; Rodgers, R; Caligiuri, A; Papa, E; Tretjak, Z; Jezequel, A M; Holcomb, L A; Alpini, G
1999-05-01
Bile duct damage and/or loss is limited to a range of duct sizes in cholangiopathies. We tested the hypothesis that CCl4 damages only large ducts. CCl4 or mineral oil was given to bile duct-ligated (BDL) rats, and 1, 2, and 7 days later small and large cholangiocytes were purified and evaluated for apoptosis, proliferation, and secretion. In situ, we measured apoptosis by morphometric and TUNEL analysis and the number of small and large ducts by morphometry. Two days after CCl4 administration, we found an increased number of small ducts and a reduced number of large ducts. In vitro apoptosis was observed only in large cholangiocytes, and this was accompanied by loss of proliferation and secretion in large cholangiocytes and loss of the choleretic effect of secretin. Small cholangiocytes de novo express the secretin receptor gene and secretin-induced cAMP response. Consistent with damage of large ducts, we detected cytochrome P-450 2E1 (which converts CCl4 to its radicals) only in large cholangiocytes. CCl4 induces selective apoptosis of large ducts associated with loss of large cholangiocyte proliferation and secretion.
Activity profiles of 309 ToxCast™ chemicals evaluated across 292 biochemical targets
Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The present study is a performance evaluation and critical ...
Biodynamic evaluation of air traffic control students between 1960-1963.
DOT National Transportation Integrated Search
1971-03-01
Between 1960-1963, a large number of ATC students in training at the FAA Aeronautical Center in Oklahoma City underwent a broad spectrum of biomedical evaluations conducted by the Civil Aeromedical Research Institute (CARI). Approximately 1,270 of th...
Lessons from SMD experience with approaches to the evaluation of fare changes
DOT National Transportation Integrated Search
1980-01-01
Over the past several years UMTA's Service and Methods Demonstration Program (SMD) has undertaken a large number of studies of the effects of fare changes, both increases and decreases. Some of these studies have been large scale efforts directed at ...
Cho, Youngsuk; Je, Sangmo; Yoon, Yoo Sang; Roh, Hye Rin; Chang, Chulho; Kang, Hyunggoo; Lim, Taeho
2016-07-04
In group training, students largely provide feedback to one another when the instructor facilitates peer feedback rather than teaching directly. The number of students in a group affects the learning of students in the group training. We aimed to investigate whether a larger group size increases students' test scores on a post-training test with peer feedback facilitated by an instructor after video-guided basic life support (BLS) refresher training. Students' one-rescuer adult BLS skills were assessed by a 2-min checklist-based test 1 year after the initial training. A cluster randomized controlled trial was conducted to evaluate the effect of student number in a group on BLS refresher training. Participants included 115 final-year medical students undergoing their emergency medicine clerkship. The median number of students was 8 in the large groups and 4 in the standard groups. The primary outcome was to examine group differences in post-training test scores after video-guided BLS training. Secondary outcomes included the feedback time, number of feedback topics, and results of end-of-training evaluation questionnaires. Scores on the post-training test increased over three consecutive tests with instructor-led peer feedback, but did not differ between large and standard groups. The feedback time was longer and the number of feedback topics generated by students was higher in standard groups compared to large groups on the first and second tests. The end-of-training questionnaire revealed that the students in large groups preferred a smaller group size compared to their actual group size. In this BLS refresher training, the instructor-led group feedback increased test scores after tutorial video-guided BLS learning, irrespective of group size. A smaller group size allowed more participation in peer feedback.
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic structure calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing high-energy bases. We demonstrate numerically that a significant speedup of eigenstate evaluation is achieved without losing accuracy.
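One standard way to renormalize high-energy bases into a low-energy block is Löwdin partitioning; the generic expression below illustrates the idea and is not necessarily the exact reduction used in the paper. With the basis split into low-energy (L) and high-energy (H) subsets,

$$H_{\mathrm{eff}}(E) \;=\; H_{LL} + H_{LH}\,(E\,I - H_{HH})^{-1} H_{HL},$$

so that the low-energy eigenvectors satisfy $H_{\mathrm{eff}}(E)\,c_L = E\,c_L$, a much smaller (though energy-dependent) eigenproblem than the original one.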
NASA Astrophysics Data System (ADS)
Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe
2017-08-01
A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry. The matrix-fill stage of the Method of Moments is then performed efficiently because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
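The ray-casting classification mentioned above follows the usual parity rule: a point is interior if a ray cast from it crosses the boundary an odd number of times. A minimal 2D sketch is shown below, with a polygon standing in for the parametric surfaces of the paper (hypothetical example only).

```python
# Parity (even-odd) ray-casting test: 2D polygon stand-in for the
# point-in-volume classification described above (illustrative only).
def point_in_polygon(pt, poly):
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray toward +x cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside   # odd number of crossings => inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon((0.5, 0.5), square), point_in_polygon((1.5, 0.5), square))
# -> True False
```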
High Throughput Exposure Estimation Using NHANES Data (SOT)
In the ExpoCast project, high throughput (HT) exposure models enable rapid screening of large numbers of chemicals for exposure potential. Evaluation of these models requires empirical exposure data and due to the paucity of human metabolism/exposure data such evaluations includ...
R. James Barbour; Ryan Singleton; Douglas A. Maguire
2007-01-01
As landscape-scale assessments and modeling become a more common method for evaluating alternatives in integrated resource management, new techniques are needed to display and evaluate outcomes for large numbers of stands over long periods. In this proof of concept, we evaluate the potential to provide financial support for silvicultural treatments by selling timber...
Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Zhou, Ye
1999-01-01
Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field programmable gate arrays (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four of the RNGs used in previous FPGA-based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
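For reference, an additive lagged Fibonacci generator follows x_n = (x_{n-r} + x_{n-s}) mod 2^m. The sketch below uses the common lags (55, 24) and m = 32 as illustrative assumptions; they are not necessarily the parameters of the FPGA implementation evaluated above.

```python
# Additive lagged Fibonacci generator sketch: x_n = (x_{n-r} + x_{n-s}) mod 2^m.
# Lags (r, s) = (55, 24) and m = 32 are illustrative choices only.
import random

class ALFG:
    def __init__(self, r=55, s=24, m=32, seed=1):
        rng = random.Random(seed)
        self.r, self.s = r, s
        self.mask = (1 << m) - 1
        # Seed r words; at least one odd word is needed for the maximal period.
        self.state = [rng.getrandbits(m) | 1 for _ in range(r)]
        self.i = 0

    def next(self):
        j = self.i % self.r                 # slot holding x_{n-r}, overwritten in place
        k = (self.i - self.s) % self.r      # slot holding x_{n-s}
        val = (self.state[j] + self.state[k]) & self.mask
        self.state[j] = val
        self.i += 1
        return val

g = ALFG()
print([g.next() for _ in range(5)])
```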
Neurotoxicity in Aquatic Systems: Evaluation of Anthropogenic Trace Substances
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity, as well as acute and developmental neurotoxicity. In this endeavor, one of our focuses is on contaminants found in drinking water. To exp...
Characterization of Human Neural Progenitor Cell Models for Developmental Neurotoxicity Screening
Current testing methods for developmental neurotoxicity (DNT) make evaluation of the effects of large numbers of chemicals impractical and prohibitively expensive. As such, we are evaluating two different human neural progenitor cell (hNPC) models for their utility in screens for...
Ebola Virus Disease Candidate Vaccines Under Evaluation in Clinical Trials
Martins, Karen A.; Jahrling, Peter B.; Bavari, Sina; Kuhn, Jens H.
2016-01-01
Filoviruses are the etiological agents of two human illnesses: Ebola virus disease and Marburg virus disease. Until 2013, medical countermeasure development against these afflictions was limited to only a few research institutes worldwide as both infections were considered exotic due to very low case numbers. Together with the high case-fatality rate of both diseases, evaluation of any candidate countermeasure in properly controlled clinical trials seemed impossible. However, in 2013, Ebola virus was identified as the etiological agent of a large disease outbreak in Western Africa including almost 30,000 infections and more than 11,000 deaths, including case exportations to Europe and North America. These large case numbers resulted in medical countermeasure development against Ebola virus disease becoming a global public-health priority. This review summarizes the status quo of candidate vaccines against Ebola virus disease, with a focus on those that are currently under evaluation in clinical trials. PMID:27160784
Current testing methods for developmental neurotoxicity (DNT) make evaluation of the effects of large numbers of chemicals impractical and prohibitively expensive. As such, we are evaluating human neural progenitor cells (NPCs) as a screen for DNT. ReNcell CX (ReN CX) cells are a...
An Integrated On-Line Transfer Credit Evaluation System-Admissions through Graduation Audit.
ERIC Educational Resources Information Center
Schuman, Chester D.
This document discusses a computerized transfer evaluation system designed by Pennsylvania College of Technology, a comprehensive two-year institution with an enrollment of over 4,800 students. It is noted that the Admissions Office processes approximately 500 transfer applications for a fall semester, as well as a large number of evaluations for…
NASA Technical Reports Server (NTRS)
Lee, Sam; Addy, Harold; Broeren, Andy P.; Orchard, David M.
2017-01-01
A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for a thermal ice protection system. Two scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel. The Weber number based scaling methods resulted in smaller runback ice mass than the Reynolds number based scaling method. The ice accretions from the Weber number based scaling method also formed farther upstream. However, there were large differences in the accreted ice mass between the two Weber number based scaling methods. The difference became greater when the speed was increased. This indicated that there may be some Reynolds number effects that aren't fully accounted for, which warrants further study.
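For context, the two similarity parameters being compared are conventionally defined as follows (generic forms; the particular length and velocity scales used by each scaling method are not specified in the abstract):

$$\mathrm{We} = \frac{\rho V^2 L}{\sigma}, \qquad \mathrm{Re} = \frac{\rho V L}{\mu},$$

with ρ the density, V a characteristic speed, L a characteristic length (for example, a droplet diameter or model chord), σ the surface tension, and μ the dynamic viscosity.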
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ye; Thornber, Ben
2016-04-12
Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
A Roadmap for the Development of Alternative (Non-Animal) Methods for Systemic Toxicity Testing
Systemic toxicity testing forms the cornerstone for the safety evaluation of substances. Pressures to move from traditional animal models to novel technologies arise from various concerns, including: the need to evaluate large numbers of previously untested chemicals and new prod...
D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO RAY MIXTURE.
Risk assessors are becoming increasingly aware of the importance of assessing interactions between chemicals in a mixture. Most traditional designs for evaluating interactions are prohibitive when the number of chemicals in the mixture is large. However, evaluation of interacti...
D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.
Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...
Child Development and Childcare in Japan
ERIC Educational Resources Information Center
Anme, Tokie; Segal, Uma A.
2010-01-01
With increasing numbers of women joining the workforce, there is a need for quality childcare. This project, conducted in Japan and using a large number of participants, sought to standardize an evaluation scale to measure the development of children. The development of children under six years of age (N = 22,819) who are enrolled in childcare…
Educating Doctors on Evaluation of Fitness to Drive: Impact of a Case-Based Workshop
ERIC Educational Resources Information Center
Dow, Jamie; Jacques, Andre
2012-01-01
Introduction: In 2004, faced with demographic data predicting large increases in the number of older drivers within a relatively short period combined with the realization that screening for driver fitness was largely dependent on health professionals, principally physicians, the Societe de l'assurance automobile du Quebec (SAAQ) initiated…
Development and validation of a low-density SNP panel related to prolificacy in sheep
USDA-ARS?s Scientific Manuscript database
High-density SNP panels (e.g., 50,000 and 600,000 markers) have been used in exploratory population genetic studies with commercial and minor breeds of sheep. However, routine genetic diversity evaluations of large numbers of samples with large panels are in general cost-prohibitive for gene banks. ...
Evaluating Comparative Judgment as an Approach to Essay Scoring
ERIC Educational Resources Information Center
Steedle, Jeffrey T.; Ferrara, Steve
2016-01-01
As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…
Development of Leaf Spectral Models for Evaluating Large Numbers of Sugarcane Genotypes
USDA-ARS?s Scientific Manuscript database
Leaf reflectance has been used to estimate crop leaf chemical and physiological characters. Sugarcane (Saccharum spp.) leaf N, C, and chlorophyll levels are important traits for high yields and perhaps useful for genotype evaluation. The objectives of this study were to identify sugarcane genotypic ...
Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise
NASA Astrophysics Data System (ADS)
Kocheemoolayil, Joseph; Lele, Sanjiva
2014-11-01
Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.
Diagnosis and management of carotid stenosis: a review.
Nussbaum, E S
2000-01-01
Since its introduction in the 1950s, carotid endarterectomy has become one of the most frequently performed operations in the United States. The tremendous appeal of a procedure that decreases the risk of stroke, coupled with the large number of individuals in the general population with carotid stenosis, has contributed to its popularity. To provide optimal patient care, the practicing physician must have a firm understanding of the proper evaluation and management of carotid stenosis. Nevertheless, because of the large number of clinical trials performed over the last decade addressing the treatment of stroke and carotid endarterectomy, the care of patients with carotid stenosis remains a frequently misunderstood topic. This review summarizes the current evaluation and treatment options for carotid stenosis and provides a rational management algorithm for this prevalent disease process.
Presidential Search: Selecting the Top Candidates.
ERIC Educational Resources Information Center
Stead, Ronald S.
1988-01-01
The winnowing of a large number of presidential candidates to a smaller, more manageable group is discussed. Initial screening, candidate information, evaluation form, reference checking, and selecting interviewees are described. (MLW)
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2012-04-01
By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without the underflow problem. The new method enables us to evaluate ALFs of extremely high degree such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedure of floating point numbers with the exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation realizes meaningful execution of the spherical harmonic synthesis and/or analysis of arbitrary degree and order.
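A minimal sketch of the exponent-extension idea: each value is stored as an ordinary double times an integer power of a large radix, and the recursion renormalizes the pair whenever the double part drifts too far. The radix 2^960 below is an assumed choice of "suitable large power of 2", not necessarily the one used in the paper.

```python
# Sketch of an extended-exponent representation: a value is stored as
# x * BIG**i, where x is an ordinary double and i an integer.
# The radix 2**960 is an illustrative (assumed) choice.
BIG = 2.0 ** 960
BIGI = 2.0 ** -960   # 1 / BIG

def xnorm(x, i):
    """Keep |x| within [BIGI, BIG) by shifting the integer exponent i."""
    while abs(x) >= BIG:
        x, i = x * BIGI, i + 1
    while 0.0 < abs(x) < BIGI:
        x, i = x * BIG, i - 1
    return x, i

def xmul(a, b):
    """Multiply two extended numbers (xa, ia) * (xb, ib)."""
    (xa, ia), (xb, ib) = a, b
    return xnorm(xa * xb, ia + ib)

def to_float(a):
    x, i = a
    return x * BIG ** i   # may under/overflow for large |i|; exact only near i = 0

tiny = xnorm(1e-300, 0)
print(xmul(tiny, tiny))   # ~1e-600 stays representable, though a plain double underflows
```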
Specht, A; Montezano, D G; Sosa-Gómez, D R; Paula-Moraes, S V; Roque-Specht, V F; Barros, N M
2016-06-01
This study aimed to evaluate the effect of keeping three couples in the same cage, and of the size of adults that emerged from small, medium-sized and large pupae (278.67 mg, 333.20 mg and 381.58 mg, respectively), on the reproductive potential of S. eridania (Stoll, 1782) adults, under controlled conditions (25 ± 1 °C, 70% RH and 14 hour photophase). We evaluated the survival, number of copulations, fecundity and fertility of the adult females. The survival of females from these different pupal sizes did not differ statistically, but the survival of males from large pupae was statistically shorter than that of males from small pupae. Fecundity differed significantly and correlated positively with size. The number of effective copulations (spermatophores) and fertility did not vary significantly with pupal size. Our results emphasize the importance of indicating the number of copulations and the size of the insects when reproductive parameters are compared.
Evaluation of Flush-Mounted, S-Duct Inlets With Large Amounts of Boundary Layer Ingestion
NASA Technical Reports Server (NTRS)
Berrier, Bobby L.; Morehouse, Melissa B.
2003-01-01
A new high Reynolds number test capability for boundary layer ingesting inlets has been developed for the NASA Langley Research Center 0.3-Meter Transonic Cryogenic Tunnel. Using this new capability, an experimental investigation of four S-duct inlet configurations with large amounts of boundary layer ingestion (nominal boundary layer thickness of about 40% of inlet height) was conducted at realistic operating conditions (high subsonic Mach numbers and full-scale Reynolds numbers). The objectives of this investigation were to 1) develop a new high Reynolds number, boundary-layer ingesting inlet test capability, 2) evaluate the performance of several boundary layer ingesting S-duct inlets, 3) provide a database for CFD tool validation, and 4) provide a baseline inlet for future inlet flow-control studies. Tests were conducted at Mach numbers from 0.25 to 0.83, Reynolds numbers (based on duct exit diameter) from 5.1 million to a fullscale value of 13.9 million, and inlet mass-flow ratios from 0.39 to 1.58 depending on Mach number. Results of this investigation indicate that inlet pressure recovery generally decreased and inlet distortion generally increased with increasing Mach number. Except at low Mach numbers, increasing inlet mass-flow increased pressure recovery and increased distortion. Increasing the amount of boundary layer ingestion (by decreasing inlet throat height and increasing inlet throat width) or ingesting a boundary layer with a distorted profile decreased pressure recovery and increased distortion. Finally, increasing Reynolds number had almost no effect on inlet distortion but increased inlet recovery by about one-half percent at a Mach number near cruise.
Efficacy of Web-Based Personalized Normative Feedback: A Two-Year Randomized Controlled Trial
ERIC Educational Resources Information Center
Neighbors, Clayton; Lewis, Melissa A.; Atkins, David C.; Jensen, Megan M.; Walter, Theresa; Fossos, Nicole; Lee, Christine M.; Larimer, Mary E.
2010-01-01
Objective: Web-based brief alcohol interventions have the potential to reach a large number of individuals at low cost; however, few controlled evaluations have been conducted to date. The present study was designed to evaluate the efficacy of gender-specific versus gender-nonspecific personalized normative feedback (PNF) with single versus…
ERIC Educational Resources Information Center
Jordi Nebot, Lluïsa; Pàmies-Vilà, Rosa; Català Calderon, Pau; Puig-Ortiz, Joan
2013-01-01
This article examines new tutoring evaluation methods to be adopted in the course, Machine Theory, in the Escola Tècnica Superior d'Enginyeria Industrial de Barcelona (ETSEIB, Universitat Politècnica de Catalunya). These new methods have been developed in order to facilitate teaching staff work and include students in the evaluation process.…
Djuricic, Slavisa M; Grebeldinger, Slobodan; Kafka, Dejan I; Djan, Igor; Vukadin, Miroslav; Vasiljevic, Zorica V
2010-06-01
Cystic echinococcosis (CE) is a public health problem in countries having such endemic areas. Epidemiological studies of CE, especially pediatric, are rare. The aim of this study was to evaluate epidemiological and clinical characteristics of CE in children in Serbia. Data were obtained retrospectively from the case records of patients under the age of 18 years admitted for surgical treatment of CE at two large pediatric medical institutions in the period 1990-2006. Patients' age, number of cysts and their anatomic location were evaluated in relation to differences by patients' gender and socio-geographic status (urban or rural origin). The study included 149 children with 272 hydatid cysts. The mean age of patients was 10.1+/-3.8 years. There were no significant differences in the number of patients in relation to gender and urban:rural origin. There were no significant differences in patients' age at the time of surgery or the number of cysts per patient when patients' gender or socio-geographic status was evaluated. The anatomic location of cysts was as follows: liver (N=165; 60.7%), lungs (N=82; 30.1%), and other locations (N=25; 9.2%). Multiple cysts, and combined liver/lung involvement were identified in 34.2% (N=51), and 6.0% (N=9) of patients, respectively. Hepatic cysts were significantly more common in girls than in boys. There were no significant differences in anatomic location of cysts between socio-geographic groups. The large number of infected children during a long period of investigation indicates an active transmission of disease and a lack of program for control and prevention of CE in Serbia.
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2016-12-01
Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, PC surrogate construction strongly suffers from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate with very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
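The ingredients of a sparse PC surrogate can be illustrated with ordinary L1-regularized regression on a total-order Legendre basis. The sketch below uses scikit-learn's Lasso as a simple stand-in for the sparsity-promoting solver; it is not the WIBCS algorithm of the abstract, and the model, basis order, and data are hypothetical.

```python
# Sparse polynomial-chaos-style surrogate via L1 regression (Lasso stand-in;
# the model, basis order, and data here are illustrative assumptions).
import numpy as np
from itertools import product
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

def model(x):                      # hypothetical expensive model, 5 inputs on [-1, 1]
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] * x[:, 2]

dim, order, n_train = 5, 2, 200
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (n_train, dim))
y = model(X)

# All multi-indices of total order <= 2 and the corresponding Legendre basis matrix.
multi = [m for m in product(range(order + 1), repeat=dim) if sum(m) <= order]
def basis(X):
    return np.column_stack([
        np.prod([legendre.legval(X[:, d], np.eye(order + 1)[k])
                 for d, k in enumerate(m)], axis=0)
        for m in multi])

surrogate = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(basis(X), y)
print("nonzero PC coefficients:", np.count_nonzero(surrogate.coef_), "of", len(multi))
```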
ERIC Educational Resources Information Center
Skemer, Melanie; Valentine, Erin Jacobs
2016-01-01
Large numbers of young people in the United States were in foster care or in juvenile justice custody as teenagers, and many of them have a difficult time making a successful transition to independent adulthood as they leave these systems. Most of them faced a number of disadvantages during childhood and often have poor outcomes across several…
Runoff curve numbers for 10 small forested watersheds in the mountains of the eastern United States
Negussie H. Tedela; Steven C. McCutcheon; Todd C. Rasmussen; Richard H. Hawkins; Wayne T. Swank; John L. Campbell; Mary Beth Adams; C. Rhett Jackson; Ernest W. Tollner
2012-01-01
Engineers and hydrologists use the curve number method to estimate runoff from rainfall for different land use and soil conditions; however, large uncertainties occur for estimates from forested watersheds. This investigation evaluates the accuracy and consistency of the method using rainfall-runoff series from 10 small forested-mountainous watersheds in the eastern...
Segmentation of Object Outlines into Parts: A Large-Scale Integrative Study
ERIC Educational Resources Information Center
De Winter, Joeri; Wagemans, Johan
2006-01-01
In this study, a large number of observers (N=201) were asked to segment a collection of outlines derived from line drawings of everyday objects (N=88). This data set was then used as a benchmark to evaluate current models of object segmentation. All of the previously proposed rules of segmentation were supported by our results. For example,…
ERIC Educational Resources Information Center
Rutkowski, David J.; Prusinski, Ellen L.
2011-01-01
The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…
Vianna, Juliana A.; Noll, Daly; Mura-Jornet, Isidora; Valenzuela-Guerra, Paulina; González-Acuña, Daniel; Navarro, Cristell; Loyola, David E.; Dantas, Gisele P. M.
2017-01-01
Microsatellites are valuable molecular markers for evolutionary and ecological studies. Next generation sequencing is responsible for the increasing number of microsatellites for non-model species. The genus Pygoscelis comprises three species: Adélie (P. adeliae), Chinstrap (P. antarcticus) and Gentoo penguin (P. papua), all distributed around Antarctica and the sub-Antarctic. The species have been affected differently by climate change, and the use of microsatellite markers will be crucial to monitor population dynamics. We characterized a large set of genome-wide microsatellites and evaluated polymorphisms in all three species. SOLiD reads were generated from the libraries of each species, identifying a large number of microsatellite loci: 33,677, 35,265 and 42,057 for P. adeliae, P. antarcticus and P. papua, respectively. A large number of dinucleotide (66,139), trinucleotide (29,490) and tetranucleotide (11,849) microsatellites are described. Microsatellite abundance, diversity and orthology were characterized in penguin genomes. We evaluated polymorphisms in 170 tetranucleotide loci, obtaining 34 polymorphic loci in at least one species and 15 polymorphic loci in all three species, which allows comparative studies to be performed. The polymorphic markers presented here enable a range of ecological, population, individual-identification, parentage and evolutionary studies of Pygoscelis, with potential use in other penguin species. PMID:28898354
A Systematic Review of Challenging Behaviors in Children Exposed Prenatally to Substances of Abuse
ERIC Educational Resources Information Center
Dixon, Dennis R.; Kurtz, Patricia F.; Chin, Michelle D.
2008-01-01
A review of the existing literature on the occurrence of challenging behavior among children with prenatal drug exposure was conducted. While a large number of studies were identified that evaluated various outcomes of prenatal drug exposure, only 37 were found that directly evaluated challenging behaviors. Of the 37 studies, 23 focused on…
ERIC Educational Resources Information Center
Lesaux, Nonie K.; Harris, Julie Russ; Sloane, Phoebe
2012-01-01
In a large urban district's ELA classrooms, an academic vocabulary intervention designed to improve linguistically diverse 6th-graders' reading and language skills was implemented and evaluated. These classrooms were characterized by high numbers of struggling readers, and linguistic diversity was the norm. As part of the evaluation, this study…
The parallel algorithm for the 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel
2018-04-01
The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of points at which data are exchanged; consequently, these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluating on multi-core CPUs, we consistently outperform the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
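For concreteness, here is a minimal single-level 1-D lifting implementation of the CDF 5/3 (LeGall) wavelet; the separable 2-D transform applies the same predict/update steps to rows and then columns. This is an illustrative sketch assuming an even-length signal with periodic borders, not the rearranged parallel scheme proposed in the paper.

    import numpy as np

    def cdf53_forward(signal):
        """One level of the CDF 5/3 wavelet via lifting (even length, periodic borders)."""
        s = np.asarray(signal, dtype=float)
        even, odd = s[0::2].copy(), s[1::2].copy()
        # predict step: detail = odd sample minus average of neighbouring even samples
        odd -= 0.5 * (even + np.roll(even, -1))
        # update step: approximation = even sample plus quarter of neighbouring details
        even += 0.25 * (odd + np.roll(odd, 1))
        return even, odd                        # approximation and detail coefficients

    def cdf53_inverse(even, odd):
        """Invert the lifting steps in reverse order."""
        even = even - 0.25 * (odd + np.roll(odd, 1))
        odd = odd + 0.5 * (even + np.roll(even, -1))
        out = np.empty(even.size + odd.size)
        out[0::2], out[1::2] = even, odd
        return out

    x = np.arange(16, dtype=float)
    approx, detail = cdf53_forward(x)
    assert np.allclose(cdf53_inverse(approx, detail), x)   # lifting is exactly invertible

Because each lifting step only needs its immediate neighbours, the boundaries between steps are exactly the data-exchange points that, per the abstract, become a bottleneck on multi-core hardware.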
Evaluation of roadway sites for queue management.
DOT National Transportation Integrated Search
1991-01-01
This study addresses the problem of queueing on highway facilities, for which a large number of computerized methods for analyzing different queueing situations are available. A three-tier classification system of the methodologies was used with ...
Forward Collision Warning Systems (CWS)
DOT National Transportation Integrated Search
2005-07-01
The Federal Motor Carrier Safety Administration's (FMCSA's) safety goal is to reduce the number and severity of large truck fatalities and crashes. During the last several years, FMCSA has collaborated with the trucking industry to test and evalu...
LOX/hydrocarbon auxiliary propulsion system study
NASA Technical Reports Server (NTRS)
Orton, G. F.; Mark, T. D.; Weber, D. D.
1982-01-01
Liquid oxygen/hydrocarbon propulsion systems applicable to a second generation orbiter OMS/RCS were compared, and major system/component options were evaluated. A large number of propellant combinations and system concepts were evaluated. The ground rules were defined in terms of candidate propellants, system/component design options, and design requirements. System and engine component math models were incorporated into existing computer codes for system evaluations. The detailed system evaluations and comparisons were performed to identify the recommended propellant combination and system approach.
Castiel, D; Herve, C
1992-01-01
In general, a large number of patients is needed to conclude whether the results of a therapeutic strategy are significant or not. One can lower this number with a logit. The method has been proposed in an article published recently (Cost-utility analysis of early thrombolytic therapy, Pharmaco Economics, 1992). The present article is an essay aimed at validating the method, both from the econometric and ethical points of view.
2015-10-01
Report on a study in a sheep model (grant number W81XWH-13-1-0324; author: Christopher H. Evans, Ph.D.). The work concerns interfragmentary movement (IFM) through the separated bone cortices (fracture gap) and builds on research funded by a CDMRP Idea Development Award. Cited references include a "practical examination of current methods," J. Biomech., Oct. 2015, and work by R. J. Nesbitt, S. T. Herfat, D. V. Boguszewski, A. J. Engel, M. T. Galloway, and colleagues.
Andrew M. Minnis; Daniel L. Lindner
2013-01-01
White-nose syndrome (WNS) of bats, caused by the fungus previously known as Geomyces destructans, has decimated populations of insectivorous bats in eastern North America. Recent work on fungi associated with bat hibernacula uncovered a large number of species of Geomyces and allies, far exceeding the number of described species....
Process service quality evaluation based on Dempster-Shafer theory and support vector machine.
Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei
2017-01-01
Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability, and limited predictive power. This paper proposes a method, called SVMs-DS, that employs support vector machines (SVMs) and Dempster-Shafer evidence theory to evaluate the service quality of a production process, handling a large number of input features with a small sampling data set. Features that can affect production quality are extracted by a large number of sensors, and preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, basic probability assignments (BPAs) are constructed, which support the evaluation in both a qualitative and a quantitative way. The process service quality evaluation results are validated by Dempster's rules; the decision threshold used to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
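A minimal sketch of the fusion idea follows: each SVM's class-probability output is turned into a basic probability assignment (here by reserving some mass for the full frame of discernment), and the BPAs are fused with Dempster's rule of combination. The mass-assignment scheme, class names, and numbers are illustrative assumptions, not the paper's exact construction.

    from itertools import product

    FRAME = ("good", "poor")    # frame of discernment for process service quality

    def to_bpa(p_good, discount=0.1):
        """Turn one SVM's probability of 'good' into a BPA, keeping mass on the full frame."""
        return {("good",): (1 - discount) * p_good,
                ("poor",): (1 - discount) * (1 - p_good),
                FRAME: discount}

    def dempster(m1, m2):
        """Dempster's rule of combination for BPAs over subsets of FRAME."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = tuple(sorted(set(a) & set(b)))
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb             # mass assigned to the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # hypothetical outputs of three SVM models for one production-process sample
    svm_probabilities = [0.82, 0.74, 0.60]
    fused = to_bpa(svm_probabilities[0])
    for p in svm_probabilities[1:]:
        fused = dempster(fused, to_bpa(p))
    print(fused)    # fused belief masses for "good", "poor", and the full frame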
Real-time fast physical random number generator with a photonic integrated circuit.
Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu
2017-03-20
Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
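For illustration, here is a sketch of the simplest check in NIST SP 800-22, the frequency (monobit) test, applied to a software-generated bit sequence; the hardware generator and the full NIST/TestU01 batteries used in the paper are of course far more extensive, and the use of the operating system's entropy source is only a stand-in.

    import math
    import secrets

    def monobit_test(bits):
        """NIST SP 800-22 frequency (monobit) test; returns the p-value."""
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)     # +1 for each one bit, -1 for each zero bit
        s_obs = abs(s) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    # hypothetical 1-Mbit sequence drawn from the OS entropy source
    raw = secrets.token_bytes(125_000)
    bits = [(byte >> i) & 1 for byte in raw for i in range(8)]
    p_value = monobit_test(bits)
    print(f"monobit p-value = {p_value:.4f} ({'pass' if p_value >= 0.01 else 'fail'})")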
Kelleher, Maureen E; Puchalski, Sarah M; Drake, Christiana; le Jeune, Sarah S
2014-07-01
To evaluate the sensitivity and specificity of direct digital abdominal radiography for the diagnosis of enterolithiasis in equids and to assess the effect of the number and anatomic location of enteroliths and gas distention of the gastrointestinal tract on diagnostic sensitivity of the technique. Retrospective case series. 238 horses and ponies ≥ 1 year old that underwent digital abdominal radiography with subsequent exploratory celiotomy or postmortem examination. For each case, 3 reviewers independently evaluated radiographic views. Radiographic images were evaluated for presence or absence and location of enteroliths and the degree of gas distention. Signalment, definitive diagnosis based on exploratory celiotomy or postmortem examination findings, and number and anatomic location of enteroliths were obtained from the medical records. 70 of the 238 (29.4%) equids had confirmed enterolithiasis. With regard to diagnosis of enterolithiasis via digital radiography, overall sensitivity and specificity for the 3 reviewers were 84% and 96%, respectively. Sensitivity was lower for small colon enteroliths (61.5%) than for large colon enteroliths (88.9%) and was negatively affected by gas distention of the gastrointestinal tract. Sensitivity was not affected by the number of enteroliths. Sensitivity and specificity of digital radiography for the diagnosis of large colon enterolithiasis in equids was high. Sensitivity of digital radiography for detection of small colon enteroliths was lower than that for large colon enteroliths, but was higher than that typically associated with computed radiography. In geographic regions in which enterolithiasis in equids is endemic, digital abdominal radiography could be used as a diagnostic test for equids with colic.
Willemsen, Marjolein H; de Leeuw, Nicole; de Brouwer, Arjan P M; Pfundt, Rolph; Hehir-Kwa, Jayne Y; Yntema, Helger G; Nillesen, Willy M; de Vries, Bert B A; van Bokhoven, Hans; Kleefstra, Tjitske
2012-11-01
Genome-wide array studies are now routinely being used in the evaluation of patients with cognitive disorders (CD) and/or congenital anomalies (CA). Each clinician is therefore inevitably confronted with the challenging task of interpreting copy number variations detected by genome-wide array platforms in a diagnostic setting. Clinical interpretation of autosomal copy number variations is already challenging, but assessment of the clinical relevance of copy number variations of the X chromosome is even more complex. This study provides an overview of the X-chromosome copy number variations that we have identified by genome-wide array analysis in a large cohort of 4407 male and female patients. We have interpreted the clinical relevance of each of these copy number variations based on well-defined criteria and previous reports in the literature and databases. The prevalence of X-chromosome copy number variations in this cohort was 57/4407 (∼1.3%), of which 15 (0.3%) were interpreted as (likely) pathogenic. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
An evaluation of some alternative approaches for reducing fan tone noise
NASA Astrophysics Data System (ADS)
Dittmar, James H.; Woodward, Richard P.
1992-02-01
The potential of two alternative approaches for reducing fan tone noise was investigated in this study. One of these approaches increases the number of rotor blades to shift the tone noise to higher frequencies that are not rated as strongly by the perceived noise scale. This alternative fan would also have a small number of long-chord stator vanes, which would reduce the stator response and lower rotor-stator interaction noise. Comparison of the conventional and alternative fan concepts showed that this alternative approach has a perceived tone noise reduction potential as large as, or larger than, that of the conventional approach. The other alternative, a high Mach number inlet, is evaluated both for its noise attenuation and for its change in noise directivity.
Can Zebrafish be used to Identify Developmentally Neurotoxic Chemicals
Can Zebrafish be Used to Identify Developmentally Neurotoxic Chemicals? The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental neurotoxicity. We are exploring behavioral methods using zebrafish by desig...
Global Statistics of Bolides in the Terrestrial Atmosphere
NASA Astrophysics Data System (ADS)
Chernogor, L. F.; Shevelyov, M. B.
2017-06-01
Purpose: Evaluation and analysis of the distribution of the number of meteoroid (mini asteroid) falls as a function of glow energy, velocity, the region of maximum glow altitude, and geographic coordinates. Design/methodology/approach: The satellite database on the glow of 693 mini asteroids, which were decelerated in the terrestrial atmosphere, has been used for evaluating basic meteoroid statistics. Findings: A rapid decrease in the number of asteroids with increasing glow energy is confirmed. The average speed of the celestial bodies is about 17.9 km/s. The altitude of maximum glow is most often 30-40 km. The distribution law for the number of meteoroids entering the terrestrial atmosphere in longitude and latitude (after excluding the component of the latitudinal dependence due to the geometry) is approximately uniform. Conclusions: Using a large enough database of measurements, the meteoroid (mini asteroid) statistics have been evaluated.
Visual counts as an index of White-Tailed Prairie Dog density
Menkens, George E.; Biggins, Dean E.; Anderson, Stanley H.
1990-01-01
Black-footed ferrets (Mustela nigripes) depend on prairie dogs (Cynomys spp.) for food and shelter and were historically restricted to prairie dog towns (Anderson et al. 1986). Because ferrets and prairie dogs are closely associated, successful ferret management and conservation depend on successful prairie dog management. A critical component of any management program for ferrets will be monitoring prairie dog population dynamics on towns containing ferrets or on towns proposed as ferret reintroduction sites. Three techniques for estimating prairie dog population size and density are counts of plugged and reopened burrows (Tietjen and Matschke 1982), mark-recapture (Otis et al. 1978; Seber 1982, 1986; Menkens and Anderson 1989), and visual counts (Fagerstone and Biggins 1986, Knowles 1986). The technique of plugging burrows and counting the number reopened by prairie dogs is too time and labor intensive for population evaluation on a large number of towns or over large areas. Total burrow counts are not correlated with white-tailed prairie dog (C. leucurus) densities and thus cannot be used for population evaluation (Menkens et al. 1988). Mark-recapture requires trapping that is expensive and time and labor intensive, so monitoring a large number of prairie dog populations using mark-recapture would be difficult. Alternatively, a large number of populations could be monitored in short periods of time using the visual count technique (Fagerstone and Biggins 1986, Knowles 1986). However, the accuracy of visual counts has been evaluated in only a few locations, so it is not known whether the relationship between counts and prairie dog density is consistent throughout the prairie dog's range. Our objective was to evaluate the potential of using visual counts as a rapid means of estimating white-tailed prairie dog density in prairie dog towns throughout Wyoming. We studied 18 white-tailed prairie dog towns in 4 white-tailed prairie dog complexes in Wyoming near Laramie (105°40'W, 41°20'N, 3 grids), Pathfinder reservoir (106°55'W, 42°30'N, 6 grids), Shirley Basin (106°10'W, 42°20'N, 6 grids), and Meeteetse (108°10'W, 44°10'N, 3 grids). All towns were dominated by grasses, forbs, and shrubs (details in Collins and Lichvar 1986). Topography of towns ranged from flat to gently rolling hills.
How do tablet properties influence swallowing behaviours?
Yamamoto, Shinya; Taniguchi, Hiroshige; Hayashi, Hirokazu; Hori, Kazuhiro; Tsujimura, Takanori; Nakamura, Yuki; Sato, Hideaki; Inoue, Makoto
2014-01-01
Behavioural performance of tablet swallowing was evaluated with different tablet conditions in terms of size, number and surface coating. Four different types of tablets were prepared: small or large, and with or without a surface coating. Fourteen normal male adults were instructed to swallow the prepared tablets with 15 ml of water. The number of tablets in one trial was changed from one to three. To evaluate swallowing and tablet transport, electromyographic activity was recorded in the left suprahyoid muscles, and videofluorographic images were examined. All tablet conditions (size, number and surface coating) affected the swallowing performance in terms of total number of swallows, electromyographic burst patterns and location of remaining tablets. Increases in the size and number of tablets increased the number of swallows and electromyographic burst area and duration. In addition, all of these parameters increased while swallowing tablets without a coating compared with tablets with a coating. Location of the remaining tablets was mainly within the mouth. This study only clarified the normal pattern of tablet swallowing under several conditions in healthy subjects, but the results may facilitate comprehensive evaluation and treatment planning in terms of administering medication to dysphagic patients. © 2013 Royal Pharmaceutical Society.
Streamlining Building Efficiency Evaluation with DOE's Asset Score Preview
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goel, Supriya; Wang, Nora; Gonzalez, Juan
2016-08-26
Building Energy Asset Score (Asset Score), developed by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE), is a tool to help building owners and managers assess the efficiency of a building's energy-related systems and encourage investment in cost-effective improvements. The Asset Score uses an EnergyPlus model to provide a quick assessment of building energy performance with minimum user inputs of building characteristics and identifies upgrade opportunities. Even with a reduced set of user inputs, data collection remains a challenge for wide-spread adoption, especially when evaluating a large number of buildings. To address this, Asset Score Preview was developed to allow users to enter as few as seven building characteristics to quickly assess their buildings before a more in-depth analysis. A streamlined assessment from Preview to full Asset Score provides an easy entry point and also enables users who manage a large number of buildings to screen and prioritize buildings that can benefit most from a more detailed evaluation and possible energy efficiency upgrades without intensive data collection.
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
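A small sketch of the core idea described above: take the current approximation set (the nondominated designs found so far), run a principal components' analysis on their design vectors, and use the leading components as re-aligned directions for the next pattern moves of the local search. The synthetic archive, retained-variance threshold, and step size are illustrative assumptions.

    import numpy as np

    def realigned_directions(archive, variance_kept=0.95):
        """PCA of the approximation set; returns re-aligned search directions as rows."""
        x = np.asarray(archive, dtype=float)
        x_centred = x - x.mean(axis=0)
        _, sing_vals, vt = np.linalg.svd(x_centred, full_matrices=False)
        explained = sing_vals ** 2 / np.sum(sing_vals ** 2)
        keep = np.searchsorted(np.cumsum(explained), variance_kept) + 1
        return vt[:keep]                # reduced, re-aligned basis for the design space

    def pattern_moves(x_current, directions, step=0.05):
        """Candidate points: +/- one step along each re-aligned direction."""
        return np.vstack([x_current + step * directions,
                          x_current - step * directions])

    # hypothetical archive of 50 nondominated designs in a 10-variable space
    rng = np.random.default_rng(1)
    archive = rng.normal(size=(50, 10)) @ np.diag(np.linspace(2.0, 0.1, 10))
    directions = realigned_directions(archive)
    candidates = pattern_moves(archive[0], directions)
    print(directions.shape, candidates.shape)   # fewer directions means fewer evaluations per step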
Automatic trajectory measurement of large numbers of crowded objects
NASA Astrophysics Data System (ADS)
Li, Hui; Liu, Ye; Chen, Yan Qiu
2013-06-01
Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain optimal segmentation results. For tracking, the cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
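A minimal sketch of the frame-to-frame data-association step: build a cost matrix between detections in consecutive frames (here plain Euclidean distance rather than the learned random-forest costs in the paper) and solve the linear assignment problem. SciPy's solver, the toy centroids, and the gating threshold are illustrative choices.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def associate(prev_pts, curr_pts, max_cost=20.0):
        """Match detections between consecutive frames by solving a linear assignment problem."""
        cost = cdist(prev_pts, curr_pts)        # Euclidean cost; a trained classifier
                                                # could supply learned costs here instead
        rows, cols = linear_sum_assignment(cost)
        matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
        unmatched_prev = set(range(len(prev_pts))) - {r for r, _ in matches}
        unmatched_curr = set(range(len(curr_pts))) - {c for _, c in matches}
        return matches, unmatched_prev, unmatched_curr

    # hypothetical centroids of detected fish in two consecutive frames
    prev_pts = np.array([[10.0, 12.0], [40.0, 35.0], [70.0, 20.0]])
    curr_pts = np.array([[12.0, 13.0], [69.0, 22.0], [41.0, 37.0], [95.0, 80.0]])
    print(associate(prev_pts, curr_pts))        # the unmatched detection starts a new track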
Kim, Augustine Yongwhi; Choi, Hoduk
2018-01-01
The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies require a taste-descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these word vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation. PMID:29606960
Kim, Augustine Yongwhi; Ha, Jin Gwan; Choi, Hoduk; Moon, Hyeonjoon
2018-01-01
The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies require a taste-descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these word vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation.
2016-10-01
Annual report prepared for the U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. Work during the reporting period included characterization of a novel external fixator for dynamizing ovine osseous defects (Poster No. 2185, Orthopedic Research Society Annual Meeting, Orlando).
The Shock and Vibration Digest. Volume 14, Number 12
1982-12-01
Abstracts in this issue include an evaluation of the uses of statistical energy analysis for determining sound transmission performance, in which coupling loss factors were measured and compared; measurements for artificial cracks in mild-steel test pieces (also see No. 2623); and an improvement of the method of statistical energy analysis that uses a large number of free-response time histories simultaneously in one analysis (82-2676).
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1982-01-01
The Jet Propulsion Laboratory has developed a number of photovoltaic test and measurement specifications to guide the development of modules toward the requirements of future large-scale applications. Experience with these specifications and the extensive module measurement and testing that has accompanied their use is examined. Conclusions are drawn relative to three aspects of product certification: performance measurement, endurance testing and safety evaluation.
Characterization and prediction of residues determining protein functional specificity.
Capra, John A; Singh, Mona
2008-07-01
Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular functional specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolutionary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/. Supplementary data are available at Bioinformatics online.
Eosinophilic Esophagitis (EoE)
... the main cause of EoE in a large number of patients. Allergists are experts in evaluating and treating EoE related to food allergies. However the relationship between food allergy and EoE is complex. In many types of food allergy, the triggers ...
ERIC Educational Resources Information Center
Vander Weele, Maribeth
1992-01-01
Thomas Hehir, special education chief of Chicago Public Schools, is an evangelist of integrating children with disabilities into regular classrooms. By completely reorganizing a department viewed as a political patronage dumping ground, Hehir has made remarkable progress in handling the large number of children awaiting evaluation and placement in special…
Dahling, Daniel R
2002-01-01
Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.
Rauscher, Larissa; Kohn, Juliane; Käser, Tanja; Mayer, Verena; Kucian, Karin; McCaskey, Ursina; Esser, Günter; von Aster, Michael
2016-01-01
Calcularis is a computer-based training program which focuses on basic numerical skills, spatial representation of numbers and arithmetic operations. The program includes a user model allowing flexible adaptation to the child's individual knowledge and learning profile. The study design to evaluate the training comprises three conditions (Calcularis group, waiting control group, spelling training group). One hundred and thirty-eight children from second to fifth grade participated in the study. Training duration comprised a minimum of 24 training sessions of 20 min within a time period of 6-8 weeks. Compared to the group without training (waiting control group) and the group with an alternative training (spelling training group), the children of the Calcularis group demonstrated a higher benefit in subtraction and number line estimation with medium to large effect sizes. Therefore, Calcularis can be used effectively to support children in arithmetic performance and spatial number representation.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li
2009-09-28
A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula which describes the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived based on the calculated atmospheric turbulence wavefronts using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and it is an exponential function of the atmosphere coherence length. These results are useful for people using DLCWFCs in atmospheric turbulence correction for large-aperture telescopes.
DOT National Transportation Integrated Search
2001-02-19
The Global Positioning System (GPS) is a satellite based radio-navigation system. A relatively large number of vehicles are already equipped with GPS devices. This project evaluated the application of Global Positioning System (GPS) technology in collis...
Phenotypic screening for developmental neurotoxicity: mechanistic data at the level of the cell
There are large numbers of environmental chemicals with little or no available information on their toxicity, including developmental neurotoxicity. Because of the resource-intensive nature of traditional animal tests, high-throughput (HTP) methods that can rapidly evaluate chemi...
Studies on the Behavior of Larval Zebrafish for Developmental Neurotoxicity Screening
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. We are exploring methods to detect developmentally neurotoxic chemicals using zebrafish behavior at 6 days of age. The behavioral paradig...
NASA Technical Reports Server (NTRS)
Arnaiz, H. H.; Peterson, J. B., Jr.; Daugherty, J. C.
1980-01-01
A program was undertaken by NASA to evaluate the accuracy of a method for predicting the aerodynamic characteristics of large supersonic cruise airplanes. This program compared predicted and flight-measured lift, drag, angle of attack, and control surface deflection for the XB-70-1 airplane for 14 flight conditions with a Mach number range from 0.76 to 2.56. The predictions were derived from the wind-tunnel test data of a 0.03-scale model of the XB-70-1 airplane fabricated to represent the aeroelastically deformed shape at a 2.5 Mach number cruise condition. Corrections for shape variations at the other Mach numbers were included in the prediction. For most cases, differences between predicted and measured values were within the accuracy of the comparison. However, there were significant differences at transonic Mach numbers. At a Mach number of 1.06 differences were as large as 27 percent in the drag coefficients and 20 deg in the elevator deflections. A brief analysis indicated that a significant part of the difference between drag coefficients was due to the incorrect prediction of the control surface deflection required to trim the airplane.
Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.
2017-01-01
Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
Durkin, Michael J; Feng, Qianxi; Warren, Kyle; Lockhart, Peter B; Thornhill, Martin H; Munshi, Kiraat D; Henderson, Rochelle R; Hsueh, Kevin; Fraser, Victoria J
2018-05-01
The purpose of this study was to assess dental antibiotic prescribing trends over time, to quantify the number and types of antibiotics dentists prescribe inappropriately, and to estimate the excess health care costs of inappropriate antibiotic prescribing, using a large cohort of general dentists in the United States. We used a quasi-Poisson regression model to analyze antibiotic prescription trends among general dentists between January 1, 2013, and December 31, 2015, with data from Express Scripts Holding Company, a large pharmacy benefits manager. We evaluated antibiotic duration and appropriateness for general dentists. Appropriateness was evaluated by reviewing the antibiotic prescribed and the duration of the prescription. Overall, the number and rate of antibiotics prescribed by general dentists remained stable in our cohort. During the 3-year study period, approximately 14% of antibiotic prescriptions were deemed inappropriate, based on the antibiotic prescribed, the treatment duration, or both indicators. The quasi-Poisson regression model, which adjusted for the number of beneficiaries covered, revealed a small but statistically significant decrease in the monthly rate of inappropriate antibiotic prescriptions of 0.32% (95% confidence interval, 0.14% to 0.50%; P = .001). Overall antibiotic prescribing practices among general dentists in this cohort remained stable over time, while the rate of inappropriate antibiotic prescriptions decreased slightly. By these authors' definition of appropriate antibiotic choice and duration, inappropriate antibiotic prescriptions are common (14% of all antibiotic prescriptions) among general dentists. Further analyses with the use of chart review, administrative data sets, or other approaches are needed to better evaluate antibiotic prescribing practices among dentists. Copyright © 2018 American Dental Association. Published by Elsevier Inc. All rights reserved.
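As a hedged sketch of the kind of trend analysis described, the following fits a Poisson GLM with a Pearson-based scale (the usual quasi-Poisson approximation in statsmodels), regressing monthly counts of inappropriate prescriptions on time with the number of covered beneficiaries as an exposure offset. The simulated monthly data and rates are purely illustrative, not the study's data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    months = np.arange(36)                                    # Jan 2013 through Dec 2015
    beneficiaries = rng.integers(900_000, 1_100_000, size=36)
    # simulated monthly counts of inappropriate prescriptions with a slight downward trend
    rate = 0.0004 * np.exp(-0.0032 * months)
    counts = rng.poisson(rate * beneficiaries)

    X = sm.add_constant(months.astype(float))
    model = sm.GLM(counts, X, family=sm.families.Poisson(),
                   offset=np.log(beneficiaries))
    result = model.fit(scale="X2")                            # quasi-Poisson: Pearson-based scale
    monthly_change = (np.exp(result.params[1]) - 1) * 100
    print(f"estimated monthly change in the inappropriate-prescription rate: {monthly_change:.2f}%")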
A large-scale photonic node architecture that utilizes interconnected OXC subsystems.
Iwai, Yuto; Hasegawa, Hiroshi; Sato, Ken-ichi
2013-01-14
We propose a novel photonic node architecture that is composed of interconnected small-scale optical cross-connect subsystems. We also developed an efficient dynamic network control algorithm that complies with a restriction on the number of intra-node fibers used for subsystem interconnection. Numerical evaluations verify that the proposed architecture offers almost the same performance as the equivalent single large-scale cross-connect switch, while enabling substantial hardware scale reductions.
Community-acquired pneumonia: identification and evaluation of nonresponders.
Gonçalves-Pereira, João; Conceição, Catarina; Póvoa, Pedro
2013-02-01
Community acquired pneumonia (CAP) is a relevant public health problem, constituting an important cause of morbidity and mortality. It accounts for a significant number of adult hospital admissions, and a large number of those patients ultimately die, especially among those who need mechanical ventilation or vasopressor support. Thus, early identification of CAP patients and their rapid and appropriate treatment are important, with impact on hospital resource consumption and overall mortality. Although CAP diagnosis may sometimes be straightforward, the diagnostic criteria commonly used are highly sensitive but largely unspecific. Biomarkers and microbiological documentation may be useful but have important limitations. Evaluation of clinical response is also critical, especially to identify patients who fail to respond to initial treatment, since these patients have a high risk of in-hospital death. However, the criteria defining non-response in CAP are largely empirical and frequently differ markedly between studies. In this review, we aim to identify criteria defining nonresponse in CAP and the pitfalls associated with this diagnosis. We also aim to overview the main causes of treatment failure, especially in severe CAP, and the possible strategies to identify and reassess non-responders, trying to change the dismal prognosis associated with this condition.
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
Importance sampling large deviations in nonequilibrium steady states. I
NASA Astrophysics Data System (ADS)
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.
2018-03-01
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
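To make the quantity concrete, here is a direct-sampling sketch for the simplest example the abstract mentions, a biased Brownian walker: the scaled cumulant generating function psi(lambda) = (1/T) log <exp(lambda * A_T)> of the time-integrated displacement is estimated by brute-force Monte Carlo. As the abstract notes, such direct estimates degrade quickly as |lambda| grows, which is precisely what importance sampling of trajectories is meant to cure; all parameters are illustrative.

    import numpy as np

    def scgf_direct(lmbda, drift=0.5, dt=0.01, total_time=10.0, n_traj=20000, seed=0):
        """Brute-force estimate of psi(lambda) for A_T = net displacement of the walker."""
        rng = np.random.default_rng(seed)
        n_steps = int(total_time / dt)
        # biased Brownian walker: dx = drift*dt + sqrt(dt)*noise (unit-variance diffusion)
        dx = drift * dt + np.sqrt(dt) * rng.standard_normal((n_traj, n_steps))
        a_T = dx.sum(axis=1)                      # time-integrated observable
        return np.log(np.mean(np.exp(lmbda * a_T))) / total_time

    for lam in (0.1, 0.5, 1.0):
        estimate = scgf_direct(lam)
        exact = lam * 0.5 + 0.5 * lam ** 2        # psi(lambda) for drifted Brownian motion
        print(f"lambda={lam:.1f}  estimate={estimate:.4f}  exact={exact:.4f}")

The growing gap between estimate and exact value at larger lambda illustrates the exponentially rare events that motivate the guided, trajectory-based samplers compared in the paper.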
Large-scale machine learning and evaluation platform for real-time traffic surveillance
NASA Astrophysics Data System (ADS)
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 and about 78% for 19/20 of the time when tested on ˜7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi
2017-03-01
Digital histopathology images with more than 1 gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading. As a result, it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report a human-verified segmentation with thousands of nuclei, whereas a single whole-slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate more quantitatively validated studies for the current and future histopathology image analysis field.
Neuronal models for evaluation of proliferation in vitro using high content screening
In vitro test methods can provide a rapid approach for the screening of large numbers of chemicals for their potential to produce toxicity (hazard identification). In order to identify potential developmental neurotoxicants, a battery of in vitro tests for neurodevelopmental proc...
Economics of residue harvest: Regional partnership evaluation
USDA-ARS's Scientific Manuscript database
Economic analyses on the viability of corn (Zea mays, L.) stover harvest for bioenergy production have largely been based on simulation modeling. While some studies have utilized field research data, most field-based analyses have included a limited number of sites and a narrow geographic distributi...
DOT National Transportation Integrated Search
1971-04-01
During 1960-1963, the Civil Aeromedical Research Institute (CARI) conducted a broad spectrum of biomedical evaluations on a large number of air traffic control (ATC) students. Approximately 1270 of these students (20-50 years of age) underwent biodyn...
DOT National Transportation Integrated Search
2017-12-01
Traditionally, highway agencies relied mainly on a man-entry approach for assessing the in-service condition of their culverts, and this direct approach left many drainage structures unapproachable and uninspected. This is because a large number of drain...
Assessing Locomotor Activity in Larval Zebrafish: Influence of Extrinsic and Intrinsic Variables
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. We are exploring methods to detect developmentally neurotoxic chemicals using zebrafish behavior at 6 days of age. The behavioral paradig...
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. As such, we are exploring a behavioral testing paradigm, which can assess the effect of sublethal and subteratogenic concentrations of de...
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. We are exploring methods to detect developmentally neurotoxic chemicals using zebrafish behavior at 6 days of age. The behavioral paradig...
The U.S. Environmental Protection Agency is developing and evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. Towards this goal, we are exploring methods to detect developmental neurotoxicants in very young larval zebrafish. We have...
Understanding potential health risks posed by environmental chemicals is a significant challenge elevated by large numbers of diverse chemicals with generally uncharacterized exposures, mechanisms and toxicities. The present study is a performance evaluation and critical analysis...
Statistical and clustering analysis for disturbances: A case study of voltage dips in wind farms
Garcia-Sanchez, Tania; Gomez-Lazaro, Emilio; Muljadi, Eduard; ...
2016-01-28
This study proposes and evaluates an alternative statistical methodology to analyze a large number of voltage dips. For a given voltage dip, a set of lengths is first identified to characterize the root mean square (rms) voltage evolution along the disturbance, deduced from partial linearized time intervals and trajectories. Principal component analysis and K-means clustering processes are then applied to identify rms-voltage patterns and propose a reduced number of representative rms-voltage profiles from the linearized trajectories. This reduced group of averaged rms-voltage profiles enables the representation of a large number of disturbances, offering a visual and graphical representation of their evolution along the events, aspects that were not previously considered in other contributions. The complete process is evaluated on real voltage dips collected in intense field-measurement campaigns carried out in a wind farm in Spain over several years. The results are included in this paper.
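A compact sketch of the pattern-extraction step described above: rms-voltage trajectories are projected with principal component analysis and grouped with K-means, and each cluster average serves as a representative profile. The synthetic dip shapes, number of components, and number of clusters are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 60)                       # normalised time axis

    def synthetic_dip(depth, start, length):
        """Very rough per-unit rms-voltage dip profile."""
        v = np.ones_like(t)
        v[(t >= start) & (t <= start + length)] = depth
        return v + 0.01 * rng.standard_normal(t.size)

    # hypothetical set of recorded voltage dips
    profiles = np.array([synthetic_dip(rng.uniform(0.4, 0.9),
                                       rng.uniform(0.1, 0.4),
                                       rng.uniform(0.1, 0.5)) for _ in range(500)])

    scores = PCA(n_components=5).fit_transform(profiles)        # dimensionality reduction
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    representative = np.array([profiles[labels == k].mean(axis=0) for k in range(4)])
    print(representative.shape)    # 4 averaged rms-voltage profiles representing 500 dips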
Flow-aggregated traffic-driven label mapping in label-switching networks
NASA Astrophysics Data System (ADS)
Nagami, Kenichi; Katsube, Yasuhiro; Esaki, Hiroshi; Nakamura, Osamu
1998-12-01
Label switching technology enables high-performance, flexible, layer-3 packet forwarding based on fixed-length label information mapped to the layer-3 packet stream. A Label Switching Router (LSR) forwards layer-3 packets based on the label information mapped to their layer-3 address information, as well as on the layer-3 address information itself. This paper evaluates the required number of labels under a traffic-driven label mapping policy using real backbone traffic traces. The evaluation shows that this label mapping policy requires a large number of labels. In order to reduce the required number of labels, we propose a label mapping policy that is traffic-driven for the traffic toward the same destination network. The evaluation shows that the proposed label mapping policy requires only about one tenth as many labels compared with the traffic-driven label mapping for host-pair packet streams and the topology-driven label mapping for destination-network packet streams.
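A toy sketch of the comparison being made: given a list of observed flows, count how many labels a traffic-driven host-pair mapping would consume versus a mapping that aggregates flows toward the same destination network. The flow records and the /24 aggregation rule are illustrative assumptions, not the paper's trace data.

    from ipaddress import ip_network

    # hypothetical flow records: (source host, destination host)
    flows = [
        ("10.0.0.1", "192.168.1.10"),
        ("10.0.0.2", "192.168.1.20"),
        ("10.0.0.1", "192.168.1.30"),
        ("10.0.0.3", "192.168.2.10"),
        ("10.0.0.1", "192.168.2.10"),
    ]

    # host-pair traffic-driven mapping: one label per active (source, destination) pair
    host_pair_labels = {(src, dst) for src, dst in flows}

    # flow-aggregated mapping: one label per destination network (/24 here)
    destination_net_labels = {ip_network(f"{dst}/24", strict=False) for _, dst in flows}

    print(f"host-pair labels:           {len(host_pair_labels)}")
    print(f"destination-network labels: {len(destination_net_labels)}")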
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. On true supercomputers, however, the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
The Toxicant-Target Paradigm for Toxicity Screening – Pharmacophore Based Constraints
There is a compelling need to develop information for the screening and prioritization of the health and environmental effects of large numbers of man-made chemicals. Knowledge of the potential pathways for activity provides a rational basis for the preliminary evaluation of ris...
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. As part of this approach, it is important to be able to separate overt toxicity (Le., malformed larvae) from the more specific neurotoxic...
High-throughput and high-content screens are attractive approaches for prioritizing nanomaterial hazards and informing targeted testing due to the impracticality of using traditional toxicological testing on the large numbers and varieties of nanomaterials. The ToxCast program a...
Evaluating the Quality of Transfer versus Nontransfer Accounting Principles Grades.
ERIC Educational Resources Information Center
Colley, J. R.; And Others
1996-01-01
Using 1989-92 student records from three colleges accepting large numbers of transfers from junior schools into accounting, regression analyses compared grades of transfer and nontransfer students. Quality of accounting principle grades of transfer students was not equivalent to that of nontransfer students. (SK)
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...
ASSESSMENT OF SYNAPSE FORMATION IN RAT PRIMARY NEURAL CELL CULTURE USING HIGH CONTENT MICROSCOPY.
Cell-based assays can model neurodevelopmental processes including neurite growth and synaptogenesis, and may be useful for screening and evaluation of large numbers of chemicals for developmental neurotoxicity. This work describes the use of high content screening (HCS) to dete...
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. We are exploring methods to screen for developmentally neurotoxic chemicals using zebrafish behavior at 6 days of age. The behavioral par...
Experimental Studies on Hypersonic Stagnation Point Chemical Environment
2006-02-01
conditions [60]. Having this complete definition, we will focus on the chemical environment produced in the SPR. 3.2 Chemical environment evaluation. Flow chemistry involves a very large number of processes and microscopic phenomena; they are usually summarized in a set of chemical reactions, with their own
Traditional toxicity testing provides insight into the mechanisms underlying toxicological responses but requires a high investment in a large number of resources. The new paradigm of testing approaches involves rapid screening studies able to evaluate thousands of chemicals acro...
Species sensitivity distributions (SSD) require a large number of measured toxicity values to define a chemical’s toxicity to multiple species. This investigation comprehensively evaluated the accuracy of SSDs generated from toxicity values predicted from interspecies correlation...
NASA Technical Reports Server (NTRS)
Beasley, W. D.; Mcghee, R. J.
1977-01-01
Exploratory wind tunnel tests were conducted on a large chord aircraft wing panel to evaluate the potential for drag reduction resulting from the application of a thin plastic film cover. The tests were conducted at a Mach number of 0.15 over a Reynolds number range from about 7 x 10^6 to 63 x 10^6.
Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.
Chen, Shizhi; Yang, Xiaodong; Tian, Yingli
2015-09-01
A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. Learning-based classifiers achieve state-of-the-art accuracies, but have been criticized for computational complexity that grows linearly with the number of classes. Nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree grows only sublinearly with the number of categories, which is much better than recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than that of recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
Maret, Terry R.; Ott, D.S.
2004-01-01
width was determined to be sufficient for collecting an adequate number of fish to estimate species richness and evaluate biotic integrity. At most sites, about 250 fish were needed to effectively represent 95 percent of the species present. Fifty-three percent of the sites assessed, using an IBI developed specifically for large Idaho rivers, received scores of less than 50, indicating poor biotic integrity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalak, Gregory; Grimes, Joshua; Fletcher, Joel
2016-01-15
Purpose: The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Methods: Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Results: Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. Conclusions: The authors' report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.
Michalak, Gregory; Grimes, Joshua; Fletcher, Joel; Halaweish, Ahmed; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia
2016-01-01
The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. The authors' report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.
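The study does not spell out how "average deviation" is computed beyond the quoted percentages, so the sketch below is only a minimal illustration assuming it means the mean absolute percent deviation of the measured CT numbers from a size-independent reference; the HU values are placeholders, not measurements from the paper.

```python
def average_deviation(ct_numbers, reference):
    """Mean absolute percent deviation of measured CT numbers (HU) from a
    reference value, taken across phantom sizes (one measurement per size)."""
    return 100.0 * sum(abs(ct - reference) for ct in ct_numbers) / (len(ct_numbers) * abs(reference))

# Placeholder readings of a test object as phantom diameter grows (hypothetical).
mono_40kev    = [305, 301, 298, 296, 294]   # monoenergetic: small spread -> low deviation
single_energy = [310, 295, 280, 265, 250]   # single-energy: large spread -> high deviation
print(average_deviation(mono_40kev, reference=300))
print(average_deviation(single_energy, reference=300))
```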
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a, LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate our variants of LSH achieve the robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
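As a concrete illustration of the general LSH idea the abstract relies on (not the authors' Hadoop implementation or their specific variants), the sketch below hashes vectors with random hyperplanes and treats items whose signatures collide in at least one band as candidate neighbors; the band/row counts and the toy data are assumptions.

```python
import numpy as np
from collections import defaultdict

def signatures(X, n_bits, seed=0):
    """Random-hyperplane (SimHash-style) signatures: one bit per hyperplane."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, X.shape[1]))
    return (X @ planes.T > 0).astype(np.uint8)

def candidate_pairs(sigs, bands, rows):
    """Band the signatures; items colliding in any band become candidate pairs
    to be verified with an exact distance computation."""
    buckets = defaultdict(list)
    for i, sig in enumerate(sigs):
        for b in range(bands):
            key = (b, sig[b * rows:(b + 1) * rows].tobytes())
            buckets[key].append(i)
    pairs = set()
    for items in buckets.values():
        for a in range(len(items)):
            for c in range(a + 1, len(items)):
                pairs.add((items[a], items[c]))
    return pairs

X = np.random.default_rng(1).normal(size=(200, 64))   # toy "complex objects"
sigs = signatures(X, n_bits=32)
print(len(candidate_pairs(sigs, bands=8, rows=4)))    # candidates, not final neighbors
```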
A large-scale evaluation of computational protein function prediction
Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-01-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin
2015-02-01
When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
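The contrast drawn in the abstract can be made concrete with a toy simulation: a concentration-based model carries fractional cells through inactivation and growth, while a count-based model draws integer survivors, so a contaminated serving can go extinct after a drastic kill step and later growth cannot undo that. This is a minimal sketch with arbitrary parameters and a toy dose-response, not the authors' risk assessment model.

```python
import random

def dose_response(dose, r=1e-3):
    """Toy exponential dose-response: P(illness) for an ingested dose (CFU)."""
    return 1.0 - (1.0 - r) ** dose

def risk_concentration(c0, log_kill, log_growth, serving_ml):
    """Concentration model: inactivation and growth applied to CFU/ml directly."""
    c = c0 * 10 ** (log_growth - log_kill)
    return dose_response(c * serving_ml)

def risk_counts(c0, log_kill, log_growth, serving_ml, n_servings=50_000, seed=2):
    """Count model: integer bacteria per serving; a large kill can leave zero
    survivors, which subsequent growth cannot resurrect."""
    rng = random.Random(seed)
    p_survive = 10 ** (-log_kill)
    total = 0.0
    for _ in range(n_servings):
        n0 = rng.randint(0, int(2 * c0 * serving_ml))      # toy initial count
        survivors = sum(rng.random() < p_survive for _ in range(n0))
        total += dose_response(survivors * 10 ** log_growth)
    return total / n_servings

args = (0.5, 6, 5, 100)   # 0.5 CFU/ml, 6-log kill, 5-log growth, 100 ml serving
print(risk_concentration(*args), risk_counts(*args))   # concentration model overestimates
```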
Moores, Carly Jane; Miller, Jacqueline; Perry, Rebecca Anne; Chan, Lily Lai Hang; Daniels, Lynne Allison; Vidgen, Helen Anna; Magarey, Anthea Margaret
2017-11-29
Translation encompasses the continuum from clinical efficacy to widespread adoption within the healthcare service and ultimately routine clinical practice. The Parenting, Eating and Activity for Child Health (PEACH™) program has previously demonstrated clinical effectiveness in the management of child obesity, and has been recently implemented as a large-scale community intervention in Queensland, Australia. This paper aims to describe the translation of the evaluation framework from a randomised controlled trial (RCT) to a large-scale community intervention (PEACH™ QLD). Tensions between the RCT paradigm and implementation research will be discussed along with lived evaluation challenges, responses to overcome these, and key learnings for future evaluation conducted at scale. The translation of evaluation from the PEACH™ RCT to the large-scale community intervention PEACH™ QLD is described. While the CONSORT Statement was used to report findings from two previous RCTs, the RE-AIM framework was more suitable for the evaluation of upscaled delivery of the PEACH™ program. Evaluation of PEACH™ QLD was undertaken during the project delivery period from 2013 to 2016. Experiential learnings from conducting the evaluation of PEACH™ QLD to the described evaluation framework are presented for the purposes of informing the future evaluation of upscaled programs. Evaluation changes in response to real-time changes in the delivery of the PEACH™ QLD Project were necessary at stages during the project term. Key evaluation challenges encountered included the collection of complete evaluation data from a diverse and geographically dispersed workforce and the systematic collection of process evaluation data in real time to support program changes during the project. Evaluation of large-scale community interventions in the real world is challenging and divergent from RCTs, which are rigorously evaluated within a more tightly-controlled clinical research setting. Constructs explored in an RCT are inadequate for describing the enablers and barriers of upscaled community program implementation. Methods for data collection, analysis and reporting also require consideration. We present a number of experiential reflections and suggestions for the successful evaluation of future upscaled community programs, which are scarcely reported in the literature. PEACH™ QLD was retrospectively registered with the Australian New Zealand Clinical Trials Registry on 28 February 2017 (ACTRN12617000315314).
ERIC Educational Resources Information Center
Sperazi, Laura; And Others
The Massachusetts Workplace Education Initiative (MWEI) was begun in 1985 as an inter-agency effort to bring adult basic education and English-as-a-Second-Language (ESL) instruction directly into workplaces throughout the state. The impetus for the program was a concern for large numbers of workers who did not have the skills necessary to compete…
Antolini, Ermete
2017-02-13
Combinatorial chemistry and high-throughput screening represent an innovative and rapid tool to prepare and evaluate a large number of new materials, saving time and expense for research and development. Considering that the activity and selectivity of catalysts depend on complex kinetic phenomena, making their development largely empirical in practice, they are prime candidates for combinatorial discovery and optimization. This review presents an overview of recent results of combinatorial screening of low-temperature fuel cell electrocatalysts for methanol oxidation. Optimum catalyst compositions obtained by combinatorial screening were compared with those of bulk catalysts, and the effect of the library geometry on the screening of catalyst composition is highlighted.
NASA Technical Reports Server (NTRS)
Rock, M.; Kunigahalli, V.; Khan, S.; Mcnair, A.
1984-01-01
Sealed nickel-cadmium cells having undergone a large number of cycles were discharged using the Hg/HgO reference electrode. The negative electrode exhibited the second plateau. SEM of negative plates of such cells shows clusters of large crystals of cadmium hydroxide. These large crystals on the negative plates disappear after continuous overcharging in flooded cells. Atomic Absorption Spectroscopy and standard wet chemical methods are being used to determine the cell materials, viz: nickel, cadmium, cobalt, potassium and carbonate. The anodes and cathodes are analyzed after careful examination and the condition of the separator material is evaluated.
Mosquito repellent attracts Culicoides imicola (Diptera: Ceratopogonidae).
Braverman, Y; Chizov-Ginzburg, A; Mullens, B A
1999-01-01
A plant-derived mosquito repellent, based on the oil of Eucalyptus maculata var. citriodora Hook, was evaluated against the biting midge Culicoides imicola Kieffer. Suction black light-traps covered with repellent-impregnated polyester mesh and deployed near horses attracted large numbers of C. imicola, which were seen near the treated net within a few minutes of the start of the experiment. Initial collections in the traps were approximately 3 times as large as those in control traps with untreated mesh. Numbers collected in treated traps were similar to untreated control traps after 4 h. Traps with mesh treated with DEET or another plant-derived (Meliaceae) proprietary product, AG1000, acted as repellents relative to the control. The differential activity of repellents against blood-feeding Diptera is discussed.
Exact diagonalization of quantum lattice models on coprocessors
NASA Astrophysics Data System (ADS)
Siro, T.; Harju, A.
2016-10-01
We implement the Lanczos algorithm on an Intel Xeon Phi coprocessor and compare its performance to a multi-core Intel Xeon CPU and an NVIDIA graphics processor. The Xeon and the Xeon Phi are parallelized with OpenMP and the graphics processor is programmed with CUDA. The performance is evaluated by measuring the execution time of a single step in the Lanczos algorithm. We study two quantum lattice models with different particle numbers, and conclude that for small systems, the multi-core CPU is the fastest platform, while for large systems, the graphics processor is the clear winner, reaching speedups of up to 7.6 compared to the CPU. The Xeon Phi outperforms the CPU with sufficiently large particle number, reaching a speedup of 2.5.
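For reference, a single Lanczos iteration, the kernel whose execution time the study measures, is essentially one sparse matrix-vector product plus a few vector updates. The sketch below is a plain NumPy/SciPy version on a small random symmetric matrix; it is only meant to show the operation being timed, not the CPU, GPU, or Xeon Phi implementations compared in the paper.

```python
import numpy as np
import scipy.sparse as sp

def lanczos_step(H, v_prev, v_curr, beta_prev):
    """One Lanczos step: w = H v_k - beta_{k-1} v_{k-1}; alpha_k = v_k . w;
    orthogonalize against v_k and normalize to obtain v_{k+1}."""
    w = H @ v_curr - beta_prev * v_prev
    alpha = np.dot(v_curr, w)
    w -= alpha * v_curr
    beta = np.linalg.norm(w)
    return alpha, beta, w / beta

n = 4096
H = sp.random(n, n, density=1e-3, random_state=0)
H = (H + H.T) * 0.5                                  # toy symmetric "Hamiltonian"
v_prev = np.zeros(n)
v_curr = np.random.default_rng(0).normal(size=n)
v_curr /= np.linalg.norm(v_curr)
alpha, beta, v_next = lanczos_step(H, v_prev, v_curr, 0.0)
print(alpha, beta)
```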
NASA Technical Reports Server (NTRS)
Thompson, D. R.; Wehmanen, O. A. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The Green Number Index technique, which uses LANDSAT digital data from 5 x 6 nautical mile sampling frames, was expanded to evaluate its usefulness in detecting and monitoring vegetative water stress over the Great Plains. At known growth stages for wheat, segments were classified as drought or non-drought. Good agreement was found between the 18-day remotely sensed data and a weekly ground-based crop moisture index. Operational monitoring of the 1977 U.S.S.R. and Australian wheat crops indicated drought conditions. Drought isoline maps produced by the Green Number Index technique were in good agreement with conventional sources.
Simulations Using Random-Generated DNA and RNA Sequences
ERIC Educational Resources Information Center
Bryce, C. F. A.
1977-01-01
Using a very simple computer program written in BASIC, a very large number of random-generated DNA or RNA sequences are obtained. Students use these sequences to predict complementary sequences and translational products, evaluate base compositions, determine frequencies of particular triplet codons, and suggest possible secondary structures.…
A Profile of Bullying at School.
ERIC Educational Resources Information Center
Olweus, Dan
2003-01-01
Describes basic facts and common myths about bullying. Covers the key principles of the Olweus Bullying Prevention Program. Discusses the results of research-based evaluations of the Olweus Program. Describes Norway's recent national initiative against bullying, which includes the use of the Olweus Program in a large number of elementary and junior…
Evaluating Perspectives on Westward Expansion: Weighing the Evidence
ERIC Educational Resources Information Center
Greenhut, Stephanie
2011-01-01
When Americans from the eastern part of the United States began moving west in large numbers in the mid-nineteenth century, tensions escalated and conflicts erupted between and among settlers, railroad workers, ranchers, the United States military, and numerous Native American tribes. Incorporating balanced consideration of these diverse and…
EVALUATION OF FUEL CELL AUXILIARY POWER UNITS FOR HEAVY-DUTY DIESEL TRUCKS
A large number of heavy-duty trucks idle a significant amount. Heavy-duty line-haul truck engines idle about 30-50% of the time the engine is running. Drivers idle engines to power climate control devices (e.g., heaters and air conditioners) and sleeper compartment accessories (e...
Qualitative Meta-Analysis on the Hospital Task: Implications for Research
ERIC Educational Resources Information Center
Noll, Jennifer; Sharma, Sashi
2014-01-01
The "law of large numbers" indicates that as sample size increases, sample statistics become less variable and more closely estimate their corresponding population parameters. Different research studies investigating how people consider sample size when evaluating the reliability of a sample statistic have found a wide range of…
15 CFR 290.6 - Proposal evaluation and selection criteria.
Code of Federal Regulations, 2011 CFR
2011-01-01
... qualified proposals in accordance with the following criteria, assigning equal weight to each of the four... population of manufacturers and the technology to be addressed justify it. (2) Technology resources. Does the... organizations, and state governments who will amplify the Center's technology delivery to reach a large number...
A Comparison of Missing-Data Procedures for Arima Time-Series Analysis
ERIC Educational Resources Information Center
Velicer, Wayne F.; Colby, Suzanne M.
2005-01-01
Missing data are a common practical problem for longitudinal designs. Time-series analysis is a longitudinal method that involves a large number of observations on a single unit. Four different missing-data methods (deletion, mean substitution, mean of adjacent observations, and maximum likelihood estimation) were evaluated. Computer-generated…
Group to Use Chemistry to Solve Developing Countries' Ills.
ERIC Educational Resources Information Center
O'Sullivan, Dermot A.
1983-01-01
Chemical engineers have begun savoring the first fruits of a massive effort to gather, determine, and evaluate data of physical properties and predictive methods for large numbers of compounds and mixtures processed in the chemical industry. The use of this centralized data source is highlighted. (Author/JN)
Principal Appraisals Get a Remake
ERIC Educational Resources Information Center
Zubrzycki, Jaclyn
2013-01-01
A growing number of school districts--including large ones like those in Chicago, Dallas, Los Angeles, and Hawaii--have become recent converts to new principal-evaluation systems that tie school leaders' appraisals to student test scores. As of this school year, student achievement accounts for 40 percent to 50 percent of principals' evaluations…
Prison Volunteers: Profiles, Motivations, Satisfaction
ERIC Educational Resources Information Center
Tewksbury, Richard; Dabney, Dean
2004-01-01
Large numbers of correctional institutions rely on volunteers to assist staff in various programs and tasks. At present there exists a paucity of literature describing these programs and/or subjecting them to systematic evaluation. The present study uses self-report data from a sample of active volunteers at a medium-security Southern prison to…
DOT National Transportation Integrated Search
2017-06-01
The long-term performance of pothole patches largely depends on the selection of the patching method. A number of pothole patching methods are in practice in Minnesota and other nearby states. However, pavement maintenance crews often encounter probl...
Testing of environmental and industrial chemicals for toxicity potential is a daunting task because of the wide range of possible toxicity mechanisms. Although animal testing is one means of achieving broad toxicity coverage, evaluation of large numbers of chemicals is challengin...
Evaluation of PLS, LS-SVM, and LWR for quantitative spectroscopic analysis of soils
USDA-ARS?s Scientific Manuscript database
Soil testing requires the analysis of large numbers of samples in laboratory that are often time consuming and expensive. Mid-infrared spectroscopy (mid-IR) and near-infrared spectroscopy (NIRS) are fast, non-destructive, and inexpensive analytical methods that have been used for soil analysis, in l...
Chemical regulation is challenged by the large number of chemicals requiring assessment for potential human health and environmental impacts. Current approaches are too resource intensive in terms of time, money and animal use to evaluate all chemicals under development or alread...
A CONCEPTUAL MODEL FOR EVALUATING RELATIVE POTENCY DATA FOR USE IN ECOLOGICAL RISK ASSESSMENTS
For chemicals with a common mechanism of toxicity, relative potency factors (RPFs) allow dose and exposure measures to be normalized to an equivalent toxicity amount of a model chemical... In ecological risk assessments the large number of possible target species, variety of expo...
USDA-ARS?s Scientific Manuscript database
Molecular field topology analysis, scaffold hopping, and molecular docking were used as complementary computational tools for the design of repellents for Aedes aegypti, the insect vector for yellow fever, West Nile fever, and dengue fever. A large number of analogues were evaluated by virtual scree...
How Effective Is the Multidisciplinary Approach? A Follow-Up Study.
ERIC Educational Resources Information Center
Hochstadt, Neil J.; Harwicke, Neil J.
1985-01-01
The effectiveness of the multidisciplinary approach was assessed by examining the number of recommended services obtained by 180 children one year after multidisciplinary evaluation. Results indicated that a large percentage of services recommended were obtained, compared with the low probability reported in samples of abused and neglected…
Evaluating Instructional Design Models: A Proposed Research Approach
ERIC Educational Resources Information Center
Gropper, George L.
2015-01-01
Proliferation of prescriptive models in an "engineering" field is not a sign of its maturity. Quite the opposite. Materials engineering, for example, meets the criterion of parsimony. Sadly, the very large number of models in "instructional design," putatively an engineering field, raises questions about its status. Can the…
Testing of environmental and industrial chemicals for toxicity potential is a daunting task because of the wide range of possible toxicity mechanisms. Although animal testing is one means of achieving broad toxicity coverage, evaluation of large numbers of chemicals is challengin...
The rationale for this research is: i) Protein expression changes with life stage, disease, tissue type and environmental stressors; ii) Technology allows rapid analysis of large numbers of proteins to provide protein expression profiles; iii) Protein profiles are used as specifi...
The US EPAs ToxCast Program for the Prioritization and Prediction of Environmental Chemical Toxicity
To meet the need for evaluating large numbers of chemicals for potential toxicity, the U.S. Environmental Protection Agency has initiated a research project called ToxCast that makes use of recent advances in molecular biology and high-throughput screening. These technologies have ...
Gore's Controversial Priorities for Higher Education.
ERIC Educational Resources Information Center
Gose, Ben
2000-01-01
Evaluates presidential candidate Al Gore's priorities for higher education, noting criticism by some educators of his emphasis on benefits for the middle class and the large number of specific proposals he has offered, including the College Opportunity Tax Cut, 21st Century Teachers' Corps, 401(j) Educational Savings Accounts, the National Tuition…
The role of colonic mast cells and myenteric plexitis in patients with diverticular disease.
Bassotti, Gabrio; Villanacci, Vincenzo; Nascimbeni, Riccardo; Antonelli, Elisabetta; Cadei, Moris; Manenti, Stefania; Lorenzi, Luisa; Titi, Amin; Salerni, Bruno
2013-02-01
Gut mast cells represent an important cell population involved in intestinal homeostasis and inflammatory processes. However, their possible role has not to date been investigated in colonic diverticular disease. This study aims to evaluate colonic mast cells in patients undergoing surgery for diverticular disease. Surgical resection samples from 27 patients undergoing surgery for diverticular disease (12 emergency procedures for severe disease and 15 elective procedures) were evaluated. The number of mast cells was assessed in the various layers by means of a specific antibody (tryptase) and compared with those evaluated in ten controls. In patients with mast cell degranulation, double immunohistochemistry, also assessing nerve fibres, was carried out. In addition, the presence of myenteric plexitis was sought. Compared with controls, the number of mast cells in diverticular patients was significantly increased, both as an overall figure and in the various layers of the large bowel. In patients in whom mast cell degranulation was present, the degranulating cells were always close to nerve fibres. No differences were found between the two subgroups of patients with respect to the number and distribution of mast cells; however, all patients undergoing emergency surgery (but none of those undergoing elective procedures) had myenteric plexitis, represented by lymphocytic infiltration in 67% and eosinophilic infiltration in 33% of cases. Patients with diverticular disease display an increase of mast cells in the large bowel. The presence of myenteric plexitis in those with complicated, severe disease suggests that this could represent a histopathologic marker of more aggressive disease.
Between-individual comparisons in performance evaluation: a perspective from prospect theory.
Wong, Kin Fai Ellick; Kwong, Jessica Y Y
2005-03-01
This article examines how between-individual comparisons influence performance evaluations in rating tasks. The authors demonstrated a systematic change in the perceived difference across ratees as a result of changing the way performance information is expressed. Study 1 found that perceived performance difference between 2 individuals was greater when their objective performance levels were presented with small numbers (e.g., absence rates of 2% vs. 5%) than when they were presented with large numbers (e.g., attendance rates of 98% vs. 95%). Extending this finding to situations involving trade-offs between multiple performance attributes across ratees, Study 2 showed that the relative preference for 1 ratee over another actually reversed when the presentation format of the performance information changed. The authors draw upon prospect theory to offer a theoretical framework describing the between-individual comparison aspect of performance evaluation.
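A quick arithmetic illustration of why the small-number frame feels more different (an interpretation consistent with, though not spelled out in, the abstract): the same three-percentage-point gap gives

$$\frac{5\%}{2\%} = 2.5 \qquad \text{versus} \qquad \frac{98\%}{95\%} \approx 1.03,$$

so the weaker ratee looks 2.5 times worse when framed as absences but only about 3% worse when framed as attendance.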
Scotland, Kymora B; Rudnick, Benjamin; Healy, Kelly A; Hubosky, Scott G; Bagley, Demetrius H
2018-06-06
Advances in flexible ureteroscope design and accessory instrumentation have allowed for more challenging cases to be treated ureteroscopically. Here, we evaluate our experience with ureteroscopy (URS) for the management of large renal calculi (≥2 cm) and provide a concise review of recent reports. A retrospective review was undertaken of all URS cases between 2004 and 2014 performed by the endourologic team at a single academic tertiary care institution. We identified patients with at least one stone ≥2 cm managed with retrograde URS. Stone size was defined as the largest linear diameter of the index stone. Small diameter flexible ureteroscopes were used primarily with holmium laser. Patient demographics, intraoperative data, and postoperative outcomes were evaluated. We evaluated 167 consecutive patients who underwent URS for large renal stones ≥2 cm. The initial reason for choosing URS included patient preference (29.5%), failure of other therapies (8.2%), anatomic considerations/body habitus (30.3%), and comorbidities (28.8%). Mean patient age was 55.5 years (22-84). The mean stone size was 2.75 cm with mean number of procedures per patient of 1.65 (1-6). The single session stone-free rate was 57.1%, two-stage procedure stone-free rate was 90.2% and three-stage stone-free rate was 94.0%. Access sheaths were used in 47% of patients. An association was identified between stone size and patient outcomes; smaller stones correlated with decreased number of procedures. Postoperative complications were minor. Single or multi-stage retrograde ureteroscopic lithotripsy is a safe and effective mode of surgical management of large renal calculi. Total stone burden is a reliable predictor of the need for a staged procedure and of stone-free rate.
Pattern recognition methods and air pollution source identification. [based on wind direction
NASA Technical Reports Server (NTRS)
Leibecki, H. F.; King, R. B.
1978-01-01
Directional air samplers, used for resolving suspended particulate matter on the basis of time and wind direction were used to assess the feasibility of characterizing and identifying emission source types in urban multisource environments. Filters were evaluated for 16 elements and X-ray fluorescence methods yielded elemental concentrations for direction, day, and the interaction of direction and day. Large numbers of samples are necessary to compensate for large day-to-day variations caused by wind perturbations and/or source changes.
TOXCAST, A TOOL FOR CATEGORIZATION AND ...
Across several EPA Program Offices (e.g., OPPTS, OW, OAR), there is a clear need to develop strategies and methods to screen large numbers of chemicals for potential toxicity, and to use the resulting information to prioritize the use of testing resources towards those entities and endpoints that present the greatest likelihood of risk to human health and the environment. This need could be addressed using the experience of the pharmaceutical industry in the use of advanced modern molecular biology and computational chemistry tools for the development of new drugs, with appropriate adjustment to the needs and desires of environmental toxicology. A conceptual approach named ToxCast has been developed to address the needs of EPA Program Offices in the area of prioritization and screening. Modern computational chemistry and molecular biology tools bring enabling technologies forward that can provide information about the physical and biological properties of large numbers of chemicals. The essence of the proposal is to conduct a demonstration project based upon a rich toxicological database (e.g., registered pesticides, or the chemicals tested in the NTP bioassay program), select a fairly large number (50-100 or more chemicals) representative of a number of differing structural classes and phenotypic outcomes (e.g., carcinogens, reproductive toxicants, neurotoxicants), and evaluate them across a broad spectrum of information domains that modern technology has pro
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulfan, R.M.; Vachal, J.D.
1978-02-01
A Preliminary Design Study of large turbulent flow military transport aircraft has been made. The study airplanes were designed to carry a heavy payload (350,000 lb) for a long range (10,000 nmi). The study tasks included: Wing geometry/cruise speed optimization of a large cantilever wing military transport airplane; Preliminary design and performance evaluation of a strut-braced wing transport airplane; and Structural analyses of large-span cantilever and strut-braced wings of graphite/epoxy sandwich construction (1985 technology). The best cantilever wing planform for minimum takeoff gross weight, and minimum fuel requirements, as determined using statistical weight evaluations, has a high aspect ratio, low sweep, low thickness/chord ratio, and a cruise Mach number of 0.76. A near optimum wing planform with greater speed capability (M = 0.78) has an aspect ratio = 12, quarter chord sweep = 20 deg, and thickness/chord ratio of 0.14/0.08 (inboard/outboard).
Ferreira, Rodrigo B; Coelli, Fernando C; Pereira, Wagner C A; Almeida, Renan M V R
2008-12-01
This study used discrete-event computer simulation to model a large hospital surgical centre (SC), in order to analyse the impact of increases in the number of post-anaesthetic beds (PABs), of changes in surgical room scheduling strategies and of increases in surgery numbers. The inputs used were: number of surgeries per day, type of surgical room scheduling, anaesthesia and surgery duration, surgical teams' specialty and number of PABs; the main outputs were: number of surgeries per day, surgical rooms' use rate and blocking rate, surgical teams' use rate, patients' blocking rate, surgery delays (minutes) and the occurrence of postponed surgeries. Two basic strategies were implemented: in the first strategy, the number of PABs was increased under two assumptions: (a) following the scheduling plan actually used by the hospital (the 'rigid' scheduling - surgical rooms were previously assigned and assignments could not be changed) and (b) following a 'flexible' scheduling (surgical rooms, when available, could be freely used by any surgical team). In the second, the same analysis was performed, increasing the number of patients (up to the system's 'feasible maximum') but fixing the number of PABs, in order to evaluate the impact of the number of patients on surgery delays. It was observed that the introduction of flexible scheduling and an increase in PABs would lead to a significant improvement in SC productivity.
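The mechanism the study quantifies, operating rooms blocked because no post-anaesthetic bed is free, is easy to reproduce in a toy discrete-event model. The sketch below uses SimPy only as a convenient simulation framework; all capacities, durations and arrival rates are placeholders, not the hospital's data or the authors' model.

```python
import random
import simpy

SURGERY_MIN, RECOVERY_MIN = 90, 60   # placeholder mean durations (minutes)

def patient(env, rooms, pab_beds, stats):
    """Surgery occupies a room; the room is released only after a post-anaesthetic
    bed (PAB) is obtained, so a full PAB area blocks the operating room."""
    room_req = rooms.request()
    yield room_req
    yield env.timeout(random.expovariate(1.0 / SURGERY_MIN))
    ready = env.now
    bed_req = pab_beds.request()
    yield bed_req                                   # waiting here keeps the room blocked
    stats["room_blocked_min"] += env.now - ready
    rooms.release(room_req)
    yield env.timeout(random.expovariate(1.0 / RECOVERY_MIN))
    pab_beds.release(bed_req)
    stats["done"] += 1

def run(n_rooms=8, n_pabs=6, arrivals_per_hour=4, horizon_min=12 * 60, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    rooms = simpy.Resource(env, capacity=n_rooms)
    pab_beds = simpy.Resource(env, capacity=n_pabs)
    stats = {"room_blocked_min": 0.0, "done": 0}

    def arrivals():
        while True:
            env.process(patient(env, rooms, pab_beds, stats))
            yield env.timeout(random.expovariate(arrivals_per_hour / 60.0))

    env.process(arrivals())
    env.run(until=horizon_min)
    return stats

print(run(n_pabs=4))   # fewer PABs -> more operating-room blocking time
print(run(n_pabs=8))
```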
Evaluation of computed tomography numbers for treatment planning of lung cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mira, J.G.; Fullerton, G.D.; Ezekiel, J.
1982-09-01
Computerized tomography numbers (CTN) were evaluated in 32 computerized tomography scans performed on patients with carcinoma of the lung, with the aim of evaluating CTN in normal (lung, blood, muscle, etc.) and pathologic tissues (tumor, atelectasis, effusion, post-radiation fibrosis). Our main findings are: 1. Large individual CTN variations are encountered in both normal and pathologic tissues, above and below mean values. Hence, absolute numbers are meaningless. Measurements of any abnormal intrathoracic structure should be compared in relation to normal tissue CTN values in the same scan. 2. Tumor and complete atelectasis have CTN basically similar to soft tissue. Hence, these numbers are not useful for differential diagnosis. 3. Effusions usually have lower CTN and can be distinguished from the previous situations. 4. Dosimetry based on uniform lung density assumptions (i.e., 300 mg/cm^3) might produce substantial dose errors as lung CTN exhibit variations indicating densities well above and below this value. 5. Preliminary information indicates that partial atelectasis and incipient post-radiation fibrosis can have very low CTN. Hence, they can be differentiated from solid tumors in certain cases, and help in the differential diagnosis of post-radiation recurrence within the radiotherapy field versus fibrosis.
On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2012-08-01
A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and reasoning about the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is shown to be capable of taking larger time steps (higher CFL numbers) than the original one. Following the modified strategy, LTS TVD schemes for Yee's upwind TVD scheme and the Yee-Roe-Davis symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time-splitting procedure, and the associated boundary condition treatment suitable for the LTS scheme is also imposed. Numerical experiments on Sod's shock tube problem and inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies for the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single-time-step schemes, especially for CFL numbers ranging from 1.0 to 4.0.
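For readers unfamiliar with the efficiency metric, the CFL number referred to above is, in one dimension and in standard notation rather than the paper's own symbols,

$$\mathrm{CFL} = \frac{\Delta t \, \max_i |\lambda_i|}{\Delta x},$$

where the $\lambda_i$ are the characteristic wave speeds. A conventional explicit TVD scheme is restricted to $\mathrm{CFL} \le 1$, whereas a large time step scheme remains stable for $\mathrm{CFL} > 1$, which is why efficiency is compared over the range 1.0 to 4.0.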
Valatas, Vassilis; Bamias, Giorgos; Kolios, George
2015-07-15
Inflammatory bowel diseases, ulcerative colitis and Crohn's disease, are characterized by chronic relapsing inflammation of the gastrointestinal tract of unknown etiology that seems to be the consequence of a genetically driven dysregulated immune response against various local and environmental triggers through a defective epithelial barrier. During the last decades, a large number of animal experimental models of intestinal inflammation have been generated and have provided valuable insights into the mechanisms that either maintain mucosal homeostasis or drive intestinal inflammation. Their study enabled the identification of various treatment targets and the development of a large pipeline of new drugs, mostly biologics. Safety and therapeutic efficacy of these agents have been evaluated in a large number of clinical trials, but only a minority has reached the clinic so far. Translational successes, but mostly translational failures, have prompted a re-evaluation of the efficacy and safety results generated by pre-clinical testing and a re-examination of the way experimental in vivo data are interpreted. This review examines the contribution of the most popular experimental colitis models to our understanding of the pathogenesis of human inflammatory bowel diseases and their translational input in drug development, and discusses ways to improve translational outcomes. Copyright © 2015 Elsevier B.V. All rights reserved.
Yarkoni, Tal
2012-01-01
Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize, including (1) open and transparent access to accumulated evaluation data, (2) personalized and highly customizable performance metrics, and (3) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting toward such models as soon as possible. PMID:23060783
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a, LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate our variants of LSH achieve the robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection
NASA Astrophysics Data System (ADS)
Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro
We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control, or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited-sized memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data for a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
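The paper's cardinality summary is not reproduced here, but the underlying idea of estimating per-host cardinality in fixed memory can be illustrated with a k-minimum-values (KMV) sketch, used below purely as a stand-in: each host keeps only the k smallest hash values of the peers (or flows) it is seen with, and the k-th smallest value yields a cardinality estimate.

```python
import hashlib
import heapq

class KMVSketch:
    """k-minimum-values cardinality estimator: keep the k smallest normalized
    hashes of observed items; estimate N ~ (k - 1) / (k-th smallest hash)."""
    def __init__(self, k=64):
        self.k = k
        self.heap = []      # stores negated hashes, so heap[0] is -(k-th smallest)
        self.seen = set()   # hashes currently kept, to skip duplicates

    @staticmethod
    def _hash01(item):
        h = int.from_bytes(hashlib.blake2b(item.encode(), digest_size=8).digest(), "big")
        return h / 2**64

    def add(self, item):
        x = self._hash01(item)
        if x in self.seen:
            return
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, -x)
            self.seen.add(x)
        elif x < -self.heap[0]:
            evicted = -heapq.heapreplace(self.heap, -x)
            self.seen.discard(evicted)
            self.seen.add(x)

    def estimate(self):
        if len(self.heap) < self.k:
            return float(len(self.heap))
        return (self.k - 1) / (-self.heap[0])

# One sketch per host; hosts with unusually high estimates are cardinality heavy-hitters.
sketch = KMVSketch()
for peer in (f"10.0.{i // 256}.{i % 256}" for i in range(5000)):
    sketch.add(peer)
print(sketch.estimate())   # roughly 5000, within the sketch's statistical error
```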
Uemoto, Yoshinobu; Sasaki, Shinji; Kojima, Takatoshi; Sugimoto, Yoshikazu; Watanabe, Toshio
2015-11-19
Genetic variance that is not captured by single nucleotide polymorphisms (SNPs) is due to imperfect linkage disequilibrium (LD) between SNPs and quantitative trait loci (QTLs), and the extent of LD between SNPs and QTLs depends on differences in minor allele frequency (MAF) between them. To evaluate the impact of QTL MAF on genomic evaluation, we performed a simulation study using real cattle genotype data. In total, 1368 Japanese Black cattle and 592,034 SNPs (Illumina BovineHD BeadChip) were used. We simulated phenotypes using real genotypes under different scenarios, varying the MAF categories, QTL heritability, number of QTLs, and distribution of QTL effects. After generating true breeding values and phenotypes, QTL heritability was estimated and the prediction accuracy of the genomic estimated breeding value (GEBV) was assessed under different SNP densities, prediction models, and population sizes using a reference-test validation design. The extent of LD between SNPs and QTLs in this population was higher for QTLs with high MAF than for those with low MAF. The effect of QTL MAF on genomic evaluation depended on the genetic architecture, evaluation strategy, and population size. With respect to genetic architecture, genomic evaluation was affected by QTL MAF in combination with QTL heritability and the distribution of QTL effects. The number of QTLs did not affect genomic evaluation when it exceeded 50. With respect to evaluation strategy, we showed that different SNP densities and prediction models affect heritability estimation and genomic prediction, and that this depends on QTL MAF. In addition, accurate QTL heritability and GEBV were obtained using denser SNP information and a prediction model that accounted for SNPs with low and high MAFs. With respect to population size, a large sample size is needed to increase the accuracy of GEBV. QTL MAF had an impact on heritability estimation and prediction accuracy. Most genetic variance can be captured using denser SNPs and a prediction model that accounts for MAF, but a large sample size is needed to increase the accuracy of GEBV under all QTL MAF categories.
Wuelfing, W Peter; Daublain, Pierre; Kesisoglou, Filippos; Templeton, Allen; McGregor, Caroline
2015-04-06
In the drug discovery setting, the ability to rapidly identify drug absorption risk in preclinical species at high doses from easily measured physical properties is desired. This is due to the large number of molecules being evaluated and their high attrition rate, which make resource-intensive in vitro and in silico evaluation unattractive. High-dose in vivo data from rat, dog, and monkey are analyzed here, using a preclinical dose number (PDo) concept based on the dose number described by Amidon and other authors (Pharm. Res., 1993, 10, 264-270). PDo, as described in this article, is simply calculated as dose (mg/kg) divided by compound solubility in FaSSIF (mg/mL) and approximates the volume of biorelevant media per kilogram of animal that would be needed to fully dissolve the dose. High PDo values were found to be predictive of difficulty in achieving drug exposure (AUC)-dose proportionality in in vivo studies, as could be expected; however, this work analyzes a large data set (>900 data points) and provides quantitative guidance to identify drug absorption risk in preclinical species based on a single solubility measurement commonly carried out in drug discovery. Above the PDo values defined, >50% of all in vivo studies exhibited poor AUC-dose proportionality in rat, dog, and monkey, and these values can be utilized as general guidelines in discovery and early development to rapidly assess risk of solubility-limited absorption for a given compound. A preclinical dose number generated by biorelevant dilutions of formulated compounds (formulated PDo) was also evaluated and defines solubility targets predictive of suitable AUC-dose proportionality in formulation development efforts. Application of these guidelines can serve to efficiently identify compounds in discovery that are likely to present extreme challenges with respect to solubility-limited absorption in preclinical species as well as reduce the testing of poor formulations in vivo, which is a key ethical and resource matter.
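The dose-number calculation described above is simple enough to show directly. The sketch below computes PDo exactly as defined in the abstract (dose in mg/kg divided by FaSSIF solubility in mg/mL); the example compound, its solubility, and the risk cutoff are hypothetical placeholders, not values from the study.

```python
def preclinical_dose_number(dose_mg_per_kg, fassif_solubility_mg_per_ml):
    """PDo = dose / solubility: the volume (mL) of biorelevant medium per kg of
    animal that would be needed to fully dissolve the dose."""
    return dose_mg_per_kg / fassif_solubility_mg_per_ml

# Hypothetical compound: 100 mg/kg rat dose, FaSSIF solubility 0.02 mg/mL -> PDo = 5000 mL/kg.
pdo = preclinical_dose_number(100, 0.02)
RISK_CUTOFF = 1000   # illustrative threshold only; the paper derives its own guidelines
print(pdo, "flag: solubility-limited absorption risk" if pdo > RISK_CUTOFF else "ok")
```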
An Evaluation of Systematic Tuberculosis Screening at Private Facilities in Karachi, Pakistan
Creswell, Jacob; Khowaja, Saira; Codlin, Andrew; Hashmi, Rabia; Rasheed, Erum; Khan, Mubashir; Durab, Irfan; Mergenthaler, Christina; Hussain, Owais; Khan, Faisal; Khan, Aamir J.
2014-01-01
Background: In Pakistan, like many Asian countries, a large proportion of healthcare is provided through the private sector. We evaluated a systematic screening strategy to identify people with tuberculosis in private facilities in Karachi and assessed the approaches' ability to diagnose patients earlier in their disease progression. Methods and Findings: Lay workers at 89 private clinics and a large hospital outpatient department screened all attendees for tuberculosis using a mobile phone-based questionnaire during one year. The number needed to screen to detect a case of tuberculosis was calculated. To evaluate early diagnosis, we tested for differences in cough duration and smear grading by screening facility. 529,447 people were screened, 1,010 smear-positive tuberculosis cases were detected and 942 (93.3%) started treatment, representing 58.7% of all smear-positive cases notified in the intervention area. The number needed to screen to detect a smear-positive case was 124 (prevalence 806/100,000) at the hospital and 763 (prevalence 131/100,000) at the clinics; however, ten times the number of individuals were screened in clinics. People with smear-positive TB detected at the hospital were less likely to report cough lasting 2–3 weeks (RR 0.66 95%CI [0.49–0.90]) and more likely to report cough duration >3 weeks (RR 1.10 95%CI [1.03–1.18]). Smear-positive cases at the clinics were less likely to have a +3 grade (RR 0.76 95%CI [0.63–0.92]) and more likely to have +1 smear grade (RR 1.24 95%CI [1.02–1.51]). Conclusions: Tuberculosis screening at private facilities is acceptable and can yield large numbers of previously undiagnosed cases. Screening at general practitioner clinics may find cases earlier than at hospitals although more people must be screened to identify a case of tuberculosis. Limitations include lack of culture testing, therefore underestimating true TB prevalence. Using more sensitive and specific screening and diagnostic tests such as chest x-ray and Xpert MTB/RIF may improve results. PMID:24705600
An evaluation of systematic tuberculosis screening at private facilities in Karachi, Pakistan.
Creswell, Jacob; Khowaja, Saira; Codlin, Andrew; Hashmi, Rabia; Rasheed, Erum; Khan, Mubashir; Durab, Irfan; Mergenthaler, Christina; Hussain, Owais; Khan, Faisal; Khan, Aamir J
2014-01-01
In Pakistan, like many Asian countries, a large proportion of healthcare is provided through the private sector. We evaluated a systematic screening strategy to identify people with tuberculosis in private facilities in Karachi and assessed the approaches' ability to diagnose patients earlier in their disease progression. Lay workers at 89 private clinics and a large hospital outpatient department screened all attendees for tuberculosis using a mobile phone-based questionnaire during one year. The number needed to screen to detect a case of tuberculosis was calculated. To evaluate early diagnosis, we tested for differences in cough duration and smear grading by screening facility. 529,447 people were screened, 1,010 smear-positive tuberculosis cases were detected and 942 (93.3%) started treatment, representing 58.7% of all smear-positive cases notified in the intervention area. The number needed to screen to detect a smear-positive case was 124 (prevalence 806/100,000) at the hospital and 763 (prevalence 131/100,000) at the clinics; however, ten times the number of individuals were screened in clinics. People with smear-positive TB detected at the hospital were less likely to report cough lasting 2-3 weeks (RR 0.66 95%CI [0.49-0.90]) and more likely to report cough duration >3 weeks (RR 1.10 95%CI [1.03-1.18]). Smear-positive cases at the clinics were less likely to have a +3 grade (RR 0.76 95%CI [0.63-0.92]) and more likely to have +1 smear grade (RR 1.24 95%CI [1.02-1.51]). Tuberculosis screening at private facilities is acceptable and can yield large numbers of previously undiagnosed cases. Screening at general practitioner clinics may find cases earlier than at hospitals although more people must be screened to identify a case of tuberculosis. Limitations include lack of culture testing, therefore underestimating true TB prevalence. Using more sensitive and specific screening and diagnostic tests such as chest x-ray and Xpert MTB/RIF may improve results.
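The "number needed to screen" figures quoted above follow directly from the detection prevalences: NNS is the number of people screened per smear-positive case found, i.e. 100,000 divided by the prevalence per 100,000. A minimal check using the abstract's own numbers:

```python
def number_needed_to_screen(prevalence_per_100k):
    """People screened per smear-positive case detected."""
    return 100_000 / prevalence_per_100k

print(round(number_needed_to_screen(806)))   # hospital outpatient department -> ~124
print(round(number_needed_to_screen(131)))   # general practitioner clinics    -> ~763
```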
When Stakeholders Rebel: Lessons from a Safe Schools Program
ERIC Educational Resources Information Center
Gastic, Billie; Irby, Decoteau J.; Zdanis, Maureen
2008-01-01
In this essay, we describe our experiences working with a rebellious primary stakeholder, Sylvia, as evaluators of a district-wide safe schools program. Given the breadth of the program and its multiple target constituencies, we were confronted with the challenges of managing a large number of stakeholders, or those individuals and groups that…
The need to assess large numbers of chemicals for their potential toxicities has resulted in increased emphasis on medium- and high-throughput in vitro screening approaches. For such approaches to be useful, efficient and reliable data analysis and hit detection methods are also ...
Semi-Automatic Grading of Students' Answers Written in Free Text
ERIC Educational Resources Information Center
Escudeiro, Nuno; Escudeiro, Paula; Cruz, Augusto
2011-01-01
The correct grading of free text answers to exam questions during an assessment process is time consuming and subject to fluctuations in the application of evaluation criteria, particularly when the number of answers is high (in the hundreds). In consequence of these fluctuations, inherent to human nature, and largely determined by emotional…
The Environmental Protection Agency has implemented a high throughput screening program, ToxCast, to quickly evaluate large numbers of chemicals for their effects on hundreds of different biological targets. To understand how these measurements relate to adverse effects in an or...
DOT National Transportation Integrated Search
2005-02-01
Accelerated load testing of paved and unpaved roads is the application of a large number of load repetitions in a short period of time. This type of testing is an economic way to determine the behavior of roads and compare different materials, struct...
Community Values as the Context for Interpreting Social Impacts.
ERIC Educational Resources Information Center
Canan, Penelope; Hennessy, Michael
A social impact assessment which focused on a Hawaiian community's evaluation of social change and development is reported. The research occurred on the island of Moloka'i, which depends largely on imports for its energy sources, although it has a number of natural sources (biomass, wind, solar, and water power). Specifically, the study identified…
ERIC Educational Resources Information Center
Colligan, Robert C.
Almost all preschool screening programs depend entirely on information and observations obtained during a brief evaluative session with the child. However, the logistics involved in managing large numbers of parents and children, the use of volunteers having varying degrees of sophistication or competency in assessment, the reliability and…
We are evaluating methods to screen/prioritize large numbers of chemicals using 6 day old zebrafish (Danio rerio) as an alternative model for detecting neurotoxic effects. Our behavioral testing paradigm simultaneously tests individual larval zebrafish under sequential light and...
Cottonwood Breeding Strategies for the Future
D. T. Cooper
1976-01-01
A large number of genotypes of eastern cottonwood of diverse parentage should be evaluated, followed by multiple-stage selection for the most important characters, to obtain substantial gains per sexual cycle while retaining genetic diversity. More intensive testing should be practiced in selecting clones for commercial use than for use as parents of the next generation...
Survival and growth of hardwood seedlings following preplanting-root treatments and treeshelters
Ponder, Felix, Jr.
1997-01-01
The study evaluated the influence of root collar diameter, number of large lateral roots, preplanting-root treatments (a biostimulant called Roots and a moisture-loss retardant called Supersorb), and tree shelters on 1-0 black walnut (Juglans nigra L.) and northern red oak (Quercus rubra L.) seedlings. Four years after outplanting,...
USDA-ARS?s Scientific Manuscript database
Controlling for spatial variability is important in high-throughput phenotyping studies that enable large numbers of genotypes to be evaluated across time and space. In the current study, we compared the efficacy of different experimental designs and spatial models in the analysis of canopy spectral...
This work represents the technical and editorial contributions of a large number of U.S. Environmental Protection Agency (EPA) employees and others familiar with or interested in the demonstration and evaluation of innovative site characterization and monitoring technologies. In ...
Evaluating ToxCast™ High-Throughput Assays For Their Ability To Detect Direct-Acting Genotoxicants
A standard battery of tests has been in use for several decades to screen chemicals for genotoxicity. However, the large number of environmental and industrial chemicals that need to be tested overwhelms our ability to test them. ToxCast™ is a multi-year effort to develop a ...
ERIC Educational Resources Information Center
Wei, Ruth Chung; Pecheone, Raymond L.; Wilczak, Katherine L.
2015-01-01
Since the passage of No Child Left Behind, large-scale assessments have come to play a central role in federal and state education accountability systems. Teachers and parents have expressed a number of concerns about their state testing programs, such as too much time devoted to testing and the high-stakes use of testing for teacher evaluation.…
An evaluation of FIA's stand age variable
John D. Shaw
2015-01-01
The Forest Inventory and Analysis Database (FIADB) includes a large number of measured and computed variables. The definitions of measured variables are usually well-documented in FIA field and database manuals. Some computed variables, such as live basal area of the condition, are equally straightforward. Other computed variables, such as individual tree volume,...
Lesion Analysis of the Brain Areas Involved in Language Comprehension
ERIC Educational Resources Information Center
Dronkers, Nina F.; Wilkins, David P.; Van Valin, Robert D., Jr.; Redfern, Brenda B.; Jaeger, Jeri J.
2004-01-01
The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which…
A Statistical Evaluation of the Effects of a Structured Postdoctoral Programme
ERIC Educational Resources Information Center
Bessudnov, Alexey; Guardiancich, Igor; Marimon, Ramon
2015-01-01
Postdoctoral programmes have recently become an important step leading from doctoral education to permanent academic careers in the social sciences. This paper investigates the effects of a large and structured postdoctoral programme in the social sciences on a number of academic and non-academic outcomes of fellows. Propensity score matching is…
Attracting Females to Science Careers: How Well Do Special Initiatives Work?
ERIC Educational Resources Information Center
Madill, Helen M.; Montgomerie, T. Craig; Armour, Margaret-Ann; Fitzsimmons, George W.; Stewin, Leonard L.; Tovell, Dorothy R.
Although there is considerable anecdotal evidence concerning the success of a large number of programs for women in science in Canada, no well-controlled studies had been conducted. This publication reports on results from an outcome evaluation of the Women in Scholarship, Engineering, Science and Technology (WISEST) Summer Research Program for…
Testing of transition-region models: Test cases and data
NASA Technical Reports Server (NTRS)
Singer, Bart A.; Dinavahi, Surya; Iyer, Venkit
1991-01-01
Mean flow quantities in the laminar turbulent transition region and in the fully turbulent region are predicted with different models incorporated into a 3-D boundary layer code. The predicted quantities are compared with experimental data for a large number of different flows and the suitability of the models for each flow is evaluated.
USDA-ARS?s Scientific Manuscript database
Production and recycling of recombinant sweetener peptides in industrial biorefineries involves the evaluation of large numbers of genes and proteins. High-throughput integrated robotic molecular biology platforms that have the capacity to rapidly synthesize, clone, and express heterologous gene ope...
The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals using 6 day old zebrafish (Danio rerio) as an alternative test model for detecting neurotoxic chemicals. We use a behavioral testing paradigm that simultaneously tes...
ENVIRONMENTAL JUSTICE ASSESSMENT OF INDUSTRIAL PORK PRODUCTION IN MISSISSIPPI AND NORTH CAROLINA
There are large numbers of industrial swine operations that may pose a risk to the health and quality of life of surrounding communities if the industry continues to grow at its current pace. This study intends to evaluate this situation as one of the major and emerging environment...
The Evaluation Design: Summer Courses, 1974. Technical Report Number Four.
ERIC Educational Resources Information Center
Bramble, William J., Ed.; Ausness, Claudine, Ed.
The Appalachian Education Satellite Project (AESP) was conceptualized in 1973 (1) to develop courses in reading and career-education instruction for teachers in the Appalachian region, and (2) to determine the feasibility of conducting such courses over a large geographical area via communication satellites. During the summer of 1974 nearly 600…
ERIC Educational Resources Information Center
Bramble, William J., Ed.; Ausness, Claudine, Ed.
The Appalachian Education Satellite Project (AESP) was conceptualized in 1973 (1) to develop courses in reading and career-education instruction for teachers in the Appalachian region, and (2) to determine the feasibility of conducting such courses over a large geographical area via communications satellites. This report describes the formative…
Comprehensive benefit evaluation of direct power-purchase for large consumers
NASA Astrophysics Data System (ADS)
Liu, D. N.; Li, Z. H.; Zhou, H. M.; Zhao, Q.; Xu, X. F.
2017-06-01
Based on the 2015 "Several Opinions of the CPC Central Committee and the State Council on Further Deepening the Reform of the Electric Power System", this paper analyses the influence of direct power-purchase for large consumers on the operating benefit of the power grid. An index system covering three aspects, economic benefit, cleaning benefit, and social benefit, is proposed. Within it, the benefits of reduced coal consumption, reduced carbon emissions, and reduced pollutant emissions are calculated quantitatively. The subjective and objective weights and index scores are then obtained using the analytic hierarchy process, the entropy weight method, and the interval number method. Finally, the comprehensive benefit is evaluated in a case study, and some suggestions are made.
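For readers unfamiliar with the entropy weight method mentioned above, a minimal sketch of how such objective criterion weights are typically computed (generic textbook formulation; the index values below are invented for illustration and are not the paper's index system):

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from a decision matrix X (alternatives x criteria),
    following the standard entropy weight method."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                       # normalize each criterion column
    k = 1.0 / np.log(X.shape[0])                # entropy scaling constant
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -k * np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=0)
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()                          # more diverse criterion -> larger weight

# toy example: 4 alternatives scored on 3 benefit indices
scores = [[0.6, 0.8, 0.7], [0.9, 0.4, 0.5], [0.7, 0.7, 0.9], [0.5, 0.6, 0.6]]
print(entropy_weights(scores))
```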
Coordinating Multi-Rover Systems: Evaluation Functions for Dynamic and Noisy Environments
NASA Technical Reports Server (NTRS)
Turner, Kagan; Agogino, Adrian
2005-01-01
This paper addresses the evolution of control strategies for a collective: a set of entities that collectively strives to maximize a global evaluation function that rates the performance of the full system. Directly addressing such problems by having a population of collectives and applying the evolutionary algorithm to that population is appealing, but the search space is prohibitively large in most cases. Instead, we focus on evolving control policies for each member of the collective. The fundamental issue in this approach is how to create an evaluation function for each member of the collective that is both aligned with the global evaluation function and is sensitive to the fitness changes of the member, while relatively insensitive to the fitness changes of other members. We show how to construct evaluation functions in dynamic, noisy and communication-limited collective environments. On a rover coordination problem, a control policy evolved using aligned and member-sensitive evaluations outperforms global evaluation methods by up to 400%. More notably, in the presence of a larger number of rovers or rovers with noisy and communication limited sensors, the proposed method outperforms global evaluation by a higher percentage than in noise-free conditions with a small number of rovers.
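One common way to build evaluations that are aligned with the global objective yet sensitive mainly to a single member is the difference evaluation: the global score minus the global score recomputed with that member removed. The sketch below is a toy illustration of that idea under our own simplified "points of interest observed" objective, not the exact functions evolved in the paper:

```python
def global_eval(obs_by_member):
    """Toy global evaluation: number of distinct points of interest seen by the whole collective."""
    return len({p for obs in obs_by_member for p in obs})

def difference_eval(obs_by_member, i):
    """Member-sensitive evaluation for member i: global score minus the global score
    recomputed without member i's observations."""
    reduced = obs_by_member[:i] + obs_by_member[i + 1:]
    return global_eval(obs_by_member) - global_eval(reduced)

# three rovers observing points of interest A, B, C
obs = [["A", "B"], ["B"], ["C"]]
print(global_eval(obs))                               # 3
print([difference_eval(obs, i) for i in range(3)])    # [1, 0, 1]: rover 1 adds nothing unique
```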
Sheehy, Siobhan; Cohen, Georgia; Owen, Katharine R
2014-01-01
Treatment compliance and adherence are often a challenge in patients with type 1 diabetes, particularly for adolescent and young adult patients. With the availability of the internet and smart phone applications (apps) there is a hope that such technology could provide a means to encourage treatment adherence in this group of patients. This review focuses on whether telemedicine and smartphone technology in diabetes can influence self-management in young people with diabetes. A large number of smartphone apps are targeted at people with diabetes, but a limited number of well designed evaluation studies have been performed. As our review shows, the evidence base for efficacy of most of these applications is minimal and improvement in hard outcomes such as HbA1c and complication development is largely lacking.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
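Multiple importance sampling combines samples drawn from several strategies by weighting each sample according to all of the sampling densities. A minimal generic sketch using the standard balance heuristic follows (the integrand and the two sampling strategies are illustrative stand-ins, not the SSAO-specific sampler of the paper):

```python
import random, math

def balance_heuristic_estimate(f, pdfs, samplers, n_per_strategy=1000):
    """Estimate an integral by combining several sampling strategies with the
    balance heuristic: w_k(x) = p_k(x) / sum_j p_j(x)."""
    total = 0.0
    for sample in samplers:
        for _ in range(n_per_strategy):
            x = sample()
            denom = sum(p(x) for p in pdfs)
            if denom > 0.0:
                total += f(x) / denom        # w_k * f / p_k simplifies to f / sum_j p_j
    return total / n_per_strategy

# toy: integrate f(x) = 3x^2 on [0,1] (exact value 1) with a uniform and a linear strategy
f = lambda x: 3.0 * x * x
pdf_uniform = lambda x: 1.0
pdf_linear = lambda x: 2.0 * x
samplers = [lambda: random.random(), lambda: math.sqrt(random.random())]
print(balance_heuristic_estimate(f, [pdf_uniform, pdf_linear], samplers))
```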
Quality of life assessment in interventional radiology.
Monsky, Wayne L; Khorsand, Derek; Nolan, Timothy; Douglas, David; Khanna, Pavan
2014-03-01
The aim of this review was to describe quality of life (QoL) questionnaires relevant to interventional radiology. Interventional radiologists perform a large number of palliative procedures. The effect of these therapies on QoL is important. This is particularly true for cancer therapies where procedures with marginal survival benefits may result in tremendous QoL benefits. Image-guided minimally invasive procedures should be compared to invasive procedures, with respect to QoL, as part of comparative effectiveness assessment. A large number of questionnaires have been validated for measurement of overall and disease-specific quality of life. Use of applicable QoL assessments can aid in evaluating clinical outcomes and help to further substantiate the need for minimally invasive image-guided procedures. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco
Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach, or equivalently the precision for a given k, depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc^-1 or k ~ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; ...
2018-03-15
Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach, or equivalently the precision for a given k, depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc^-1 or k ~ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain the new improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
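A quick back-of-envelope check of the computational budget quoted above, using only the figures in the abstract:

```python
simulations = 3600
minutes_per_simulation = 15
cores_per_simulation = 216

core_hours = simulations * (minutes_per_simulation / 60) * cores_per_simulation
print(core_hours)   # 194,400 core-hours, i.e. roughly the quoted ~200k
```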
Combining large number of weak biomarkers based on AUC.
Yan, Li; Tian, Lili; Liu, Song
2015-12-20
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
Combining large number of weak biomarkers based on AUC
Yan, Li; Tian, Lili; Liu, Song
2018-01-01
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901
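For context on the combination problem described above, the empirical AUC of any linear combination of markers can be evaluated with the Mann-Whitney statistic; a minimal sketch follows (synthetic data and equal weights for illustration only; this is not the authors' pairwise algorithm):

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the AUC for a given score."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)

def combined_auc(weights, X_pos, X_neg):
    """AUC of the linear combination X @ weights, the quantity the combination
    methods discussed above try to maximize."""
    w = np.asarray(weights)
    return empirical_auc(X_pos @ w, X_neg @ w)

rng = np.random.default_rng(0)
X_neg = rng.normal(0.0, 1.0, size=(200, 5))      # 5 weak markers, controls
X_pos = rng.normal(0.3, 1.0, size=(200, 5))      # cases shifted slightly on each marker
print(combined_auc(np.ones(5), X_pos, X_neg))    # pooling weak markers raises the AUC
```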
On the scaling of small-scale jet noise to large scale
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Allen, Christopher S.
1992-01-01
An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
On the scaling of small-scale jet noise to large scale
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Allen, Christopher S.
1992-01-01
An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
NASA Technical Reports Server (NTRS)
Kempel, Robert W.; Mcneill, Walter E.; Gilyard, Glenn B.; Maine, Trindel A.
1988-01-01
The NASA Ames Research Center developed an oblique-wing research plane from NASA's digital fly-by-wire airplane. Oblique-wing airplanes show large cross-coupling in control and dynamic behavior which is not present on conventional symmetric airplanes and must be compensated for to obtain acceptable handling qualities. The large vertical motion simulator at NASA Ames-Moffett was used in the piloted evaluation of a proposed flight control system designed to provide decoupled handling qualities. Five discrete flight conditions were evaluated ranging from low altitude subsonic Mach numbers to moderate altitude supersonic Mach numbers. The flight control system was effective in generally decoupling the airplane. However, all participating pilots objected to the high levels of lateral acceleration encountered in pitch maneuvers. In addition, the pilots were more critical of left turns (in the direction of the trailing wingtip when skewed) than they were of right turns due to the tendency to be rolled into the left turns and out of the right turns. Asymmetric side force as a function of angle of attack was the primary cause of lateral acceleration in pitch. Along with the lateral acceleration in pitch, variation of rolling and yawing moments as functions of angle of attack caused the tendency to roll into left turns and out of right turns.
NASA Technical Reports Server (NTRS)
Larson, T. J.
1984-01-01
The measurement performance of a hemispherical flow-angularity probe and a fuselage-mounted pitot-static probe was evaluated at high flow angles as part of a test program on an F-14 airplane. These evaluations were performed using a calibrated pitot-static noseboom equipped with vanes for reference flow direction measurements, and another probe incorporating vanes but mounted on a pod under the fuselage nose. Data are presented for angles of attack up to 63 deg, angles of sideslip from -22 deg to 22 deg, and for Mach numbers from approximately 0.3 to 1.3. During maneuvering flight, the hemispherical flow-angularity probe exhibited flow angle errors that exceeded 2 deg. Pressure measurements with the pitot-static probe resulted in very inaccurate data above a Mach number of 0.87 and exhibited large sensitivities with flow angle.
Re-evaluating the northeastern Minnesota moose decline and the role of wolves
Mech, L. David; Fieberg, John
2014-01-01
We re-evaluated findings from Lenarz et al. (2009) that adult moose (Alces alces) survival in northeastern Minnesota was related to high January temperatures and that predation by wolves (Canis lupus) played a minor role. We found significant inverse relationships between annual wolf numbers in part of the moose range and various moose demographics from 2003 to 2013 that suggested a stronger role of wolves than heretofore believed. To re-evaluate the temperature findings, we conducted a simulation study, mimicking the approach taken by Lenarz et al. (2009), to explore the potential for concluding a significant relationship exists between temperature and survival, when no association exists. We found that the high R2s and low probabilities associated with the regression models in Lenarz et al. (2009) should be viewed cautiously in light of the large number of fitted models (m = 45) and few observations (n = 6 for each of 5 response variables).
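To illustrate why fitting many candidate models to only six observations can produce impressively high R² values by chance alone, a small simulation in the spirit of the critique above (purely illustrative, using simple single-predictor fits; these are not the authors' models or data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_models = 6, 45          # sample size and number of candidate models, as in the critique
best_r2 = []
for _ in range(1000):            # repeat the "fit 45 models to 6 points" exercise many times
    y = rng.normal(size=n_obs)
    r2 = []
    for _ in range(n_models):
        x = rng.normal(size=n_obs)           # a predictor completely unrelated to y
        r = np.corrcoef(x, y)[0, 1]
        r2.append(r ** 2)
    best_r2.append(max(r2))
print(np.mean(best_r2))          # the best of 45 unrelated fits typically has a large R²
```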
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2015-08-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
NASA Astrophysics Data System (ADS)
Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2016-04-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
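A toy sketch of the screening idea, flagging a parameter as informative only if perturbing it noticeably changes the model output, is given below (an illustrative one-at-a-time variant with an invented toy model and threshold; the paper's sequential screening method differs in detail):

```python
import numpy as np

def screen_parameters(model, x_default, bounds, threshold, n_draws=10):
    """Flag parameter i as informative if varying it alone (others at defaults)
    changes the model output by more than `threshold`."""
    informative = []
    base = model(x_default)
    for i, (lo, hi) in enumerate(bounds):
        effects = []
        for _ in range(n_draws):                     # ~10 model evaluations per parameter
            x = x_default.copy()
            x[i] = np.random.uniform(lo, hi)
            effects.append(abs(model(x) - base))
        if max(effects) > threshold:
            informative.append(i)
    return informative

# toy model: only parameters 0 and 2 actually influence the output
model = lambda x: 5 * x[0] + 0.001 * x[1] + 2 * x[2] ** 2
bounds = [(-1, 1)] * 3
print(screen_parameters(model, np.zeros(3), bounds, threshold=0.5))   # expect [0, 2]
```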
Evaluation of Flush-Mounted, S-Duct Inlets with Large Amounts of Boundary Layer Ingestion
NASA Technical Reports Server (NTRS)
Berrier, Bobby L.; Morehouse, Melissa B.
2003-01-01
A new high Reynolds number test capability for boundary layer ingesting inlets has been developed for the NASA Langley Research Center 0.3-Meter Transonic Cryogenic Tunnel. Using this new capability, an experimental investigation of four S-duct inlet configurations with large amounts of boundary layer ingestion (nominal boundary layer thickness of about 40% of inlet height) was conducted at realistic operating conditions (high subsonic Mach numbers and full-scale Reynolds numbers). The objectives of this investigation were to 1) provide a database for CFD tool validation on boundary layer ingesting inlets operating at realistic conditions and 2) provide a baseline inlet for future inlet flow-control studies. Tests were conducted at Mach numbers from 0.25 to 0.83, Reynolds numbers (based on duct exit diameter) from 5.1 million to a full-scale value of 13.9 million, and inlet mass-flow ratios from 0.39 to 1.58 depending on Mach number. Results of this investigation indicate that inlet pressure recovery generally decreased and inlet distortion generally increased with increasing Mach number. Except at low Mach numbers, increasing inlet mass-flow increased pressure recovery and increased distortion. Increasing the amount of boundary layer ingestion (by decreasing inlet throat height) or ingesting a boundary layer with a distorted (adverse) profile decreased pressure recovery and increased distortion. Finally, increasing Reynolds number had almost no effect on inlet distortion but increased inlet recovery by about one-half percent at a Mach number near cruise.
Ball, Robert; Horne, Dale; Izurieta, Hector; Sutherland, Andrea; Walderhaug, Mark; Hsu, Henry
2011-05-01
The public health community faces increasing demands for improving vaccine safety while simultaneously increasing the number of vaccines available to prevent infectious diseases. The passage of the US Food and Drug Administration (FDA) Amendment Act of 2007 formalized the concept of life-cycle management of the risks and benefits of vaccines, from early clinical development through many years of use in large numbers of people. Harnessing scientific and technologic advances is necessary to improve vaccine-safety evaluation. The Office of Biostatistics and Epidemiology in the Center for Biologics Evaluation and Research is working to improve the FDA's ability to monitor vaccine safety by improving statistical, epidemiologic, and risk-assessment methods, gaining access to new sources of data, and exploring the use of genomics data. In this article we describe the current approaches, new resources, and future directions that the FDA is taking to improve the evaluation of vaccine safety.
1976-12-01
considerably. A milk cow, for example, was previously evaluated chiefly according to its milk yield and the content of fat in its milk. Now to this have ... been added qualitative evaluations such as its suitability for mechanical milking, the albumin content in its milk, its resistance to mastitis, and ... head of cows and 30 to 50 sows. Now on large farms there are a thousand and more cows, and pig-breeding complexes number several hundreds of sows
2016-06-28
Likewise, we have developed a new general theory of relevance that quantifies how new data observations may or may not affect an observer's beliefs about how ... which suggests that relevance is not an inherent attribute but rather is dependent on the knowledge or beliefs of the subject evaluating the ... subjects. This allowed us to evaluate the accuracy of each person as the number of image pairs for which they selected the majority image. The average
Material Inspection Using THz and Thermal Wave
NASA Astrophysics Data System (ADS)
Zhang, Cunlin; Mu, Kaijun; Li, Yanhong; Zhang, X.-C.
2007-03-01
Terahertz (THz) and thermal wave imaging technologies are complementary inspection modalities for use in non-contact and non-destructive evaluation. Both are applied to evaluate damage in a variety of composite samples. We will also report the testing of a large number of insulation foam panels used in NASA's External Fuel Tank using pulsed and CW terahertz systems. The study of defects with the two techniques in selected materials, including metal plates, carbon fibers, glass fibers, carbon-silicon composites, etc., is also shown.
Geometrical optimization of sensors for eddy currents nondestructive testing and evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thollon, F.; Burais, N.
1995-05-01
Design of Non Destructive Testing (NDT) and Non Destructive Evaluation (NDE) sensors is possible by solving Maxwell's relations with FEM or BIM. But the large number of geometrical and electrical parameters of the sensor and tested material implies many results that do not necessarily yield a well-adapted sensor. The authors have used a genetic algorithm for automatic optimization. After testing this algorithm against an analytical solution of Maxwell's relations for cladding thickness measurement, the method has been implemented in a finite element package.
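A minimal sketch of the kind of genetic-algorithm loop referred to above (a generic real-coded GA; the two "geometry parameters" and the stand-in objective function are invented for illustration and are not the eddy-current sensor model):

```python
import random

def genetic_optimize(fitness, n_params, pop_size=30, generations=100,
                     mutation_rate=0.1, bounds=(0.0, 1.0)):
    """Minimal real-coded GA: truncation selection, uniform crossover, Gaussian mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]   # keep the best half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]              # uniform crossover
            child = [min(hi, max(lo, g + random.gauss(0, 0.05)))             # Gaussian mutation
                     if random.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# stand-in objective: peak fitness when the two "geometry parameters" hit 0.3 and 0.7
fitness = lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2)
print(genetic_optimize(fitness, n_params=2))
```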
Conflict Misleads Large Carnivore Management and Conservation: Brown Bears and Wolves in Spain.
Fernández-Gil, Alberto; Naves, Javier; Ordiz, Andrés; Quevedo, Mario; Revilla, Eloy; Delibes, Miguel
2016-01-01
Large carnivores inhabiting human-dominated landscapes often interact with people and their properties, leading to conflict scenarios that can mislead carnivore management and, ultimately, jeopardize conservation. In northwest Spain, brown bears Ursus arctos are strictly protected, whereas sympatric wolves Canis lupus are subject to lethal control. We explored ecological, economic and societal components of conflict scenarios involving large carnivores and damages to human properties. We analyzed the relation between complaints of depredations by bears and wolves on beehives and livestock, respectively, and bear and wolf abundance, livestock heads, number of culled wolves, amount of paid compensations, and media coverage. We also evaluated the efficiency of wolf culling to reduce depredations on livestock. Bear damages to beehives correlated positively to the number of female bears with cubs of the year. Complaints of wolf predation on livestock were unrelated to livestock numbers; instead, they correlated positively to the number of wild ungulates harvested during the previous season, the number of wolf packs, and to wolves culled during the previous season. Compensations for wolf complaints were fivefold higher than for bears, but media coverage of wolf damages was thirtyfold higher. Media coverage of wolf damages was unrelated to the actual costs of wolf damages, but the amount of news correlated positively to wolf culling. However, wolf culling was followed by an increase in compensated damages. Our results show that culling of the wolf population failed in its goal of reducing damages, and suggest that management decisions are at least partly mediated by press coverage. We suggest that our results provide insight to similar scenarios, where several species of large carnivores share the landscape with humans, and management may be reactive to perceived conflicts.
Conflict Misleads Large Carnivore Management and Conservation: Brown Bears and Wolves in Spain
Fernández-Gil, Alberto; Naves, Javier; Ordiz, Andrés; Quevedo, Mario; Revilla, Eloy; Delibes, Miguel
2016-01-01
Large carnivores inhabiting human-dominated landscapes often interact with people and their properties, leading to conflict scenarios that can mislead carnivore management and, ultimately, jeopardize conservation. In northwest Spain, brown bears Ursus arctos are strictly protected, whereas sympatric wolves Canis lupus are subject to lethal control. We explored ecological, economic and societal components of conflict scenarios involving large carnivores and damages to human properties. We analyzed the relation between complaints of depredations by bears and wolves on beehives and livestock, respectively, and bear and wolf abundance, livestock heads, number of culled wolves, amount of paid compensations, and media coverage. We also evaluated the efficiency of wolf culling to reduce depredations on livestock. Bear damages to beehives correlated positively to the number of female bears with cubs of the year. Complaints of wolf predation on livestock were unrelated to livestock numbers; instead, they correlated positively to the number of wild ungulates harvested during the previous season, the number of wolf packs, and to wolves culled during the previous season. Compensations for wolf complaints were fivefold higher than for bears, but media coverage of wolf damages was thirtyfold higher. Media coverage of wolf damages was unrelated to the actual costs of wolf damages, but the amount of news correlated positively to wolf culling. However, wolf culling was followed by an increase in compensated damages. Our results show that culling of the wolf population failed in its goal of reducing damages, and suggest that management decisions are at least partly mediated by press coverage. We suggest that our results provide insight to similar scenarios, where several species of large carnivores share the landscape with humans, and management may be reactive to perceived conflicts. PMID:26974962
Lattice Boltzmann simulation of nonequilibrium effects in oscillatory gas flow.
Tang, G H; Gu, X J; Barber, R W; Emerson, D R; Zhang, Y H
2008-08-01
Accurate evaluation of damping in laterally oscillating microstructures is challenging due to the complex flow behavior. In addition, device fabrication techniques and surface properties will have an important effect on the flow characteristics. Although kinetic approaches such as the direct simulation Monte Carlo (DSMC) method and directly solving the Boltzmann equation can address these challenges, they are beyond the reach of current computer technology for large scale simulation. As the continuum Navier-Stokes equations become invalid for nonequilibrium flows, we take advantage of the computationally efficient lattice Boltzmann method to investigate nonequilibrium oscillating flows. We have analyzed the effects of the Stokes number, Knudsen number, and tangential momentum accommodation coefficient for oscillating Couette flow and Stokes' second problem. Our results are in excellent agreement with DSMC data for Knudsen numbers up to Kn=O(1) and show good agreement for Knudsen numbers as large as 2.5. In addition to increasing the Stokes number, we demonstrate that increasing the Knudsen number or decreasing the accommodation coefficient can also expedite the breakdown of symmetry for oscillating Couette flow. This results in an earlier transition from quasisteady to unsteady flow. Our paper also highlights the deviation in velocity slip between Stokes' second problem and the confined Couette case.
Large scale tracking algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
The high Reynolds number flow through an axial-flow pump
NASA Astrophysics Data System (ADS)
Zierke, W. C.; Straka, W. A.; Taylor, P. D.
1993-11-01
The high Reynolds number pump (HIREP) facility at ARL Penn State has been used to perform a low-speed, large-scale experiment of the incompressible flow of water through a two-blade-row turbomachine. HIREP can involve blade chord Reynolds numbers as high as 6,000,000 and can accommodate a variety of instrumentation in both a stationary and a rotating frame of reference. The objectives of this experiment were as follows: to provide a database for comparison with three-dimensional, viscous (turbulent) flow computations; to evaluate the engineering models; and to improve our physical understanding of many of the phenomena involved in this complex flow field. The experimental results include a large quantity of data acquired throughout HIREP. A five-hole probe survey of the inlet flow 37.0 percent chord upstream of the inlet guide vane (IGV) leading edge is sufficient to give information for the inflow boundary conditions, while some static-pressure information is available to help establish an outflow boundary condition.
Improving Children’s Knowledge of Fraction Magnitudes
Fazio, Lisa K.; Kennedy, Casey A.; Siegler, Robert S.
2016-01-01
We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards’ suggestions for teaching fractions, would improve children’s fraction magnitude understanding. Fourth and fifth-graders were given brief instruction about unit fractions and played Catch the Monster with Fractions, a game in which they estimated fraction locations on a number line and received feedback on the accuracy of their estimates. The intervention lasted less than 15 minutes. In our initial study, children showed large gains from pretest to posttest in their fraction number line estimates, magnitude comparisons, and recall accuracy. In a more rigorous second study, the experimental group showed similarly large improvements, whereas a control group showed no improvement from practicing fraction number line estimates without feedback. The results provide evidence for the effectiveness of interventions emphasizing fraction magnitudes and indicate how psychological theories and research can be used to evaluate specific recommendations of the Common Core State Standards. PMID:27768756
Grazier, Kyle L; Eisenberg, Daniel; Jedele, Jenefer M; Smiley, Mary L
2016-04-01
This study evaluated utilization of mental health and substance use services among enrollees at a large employee health plan following changes to benefit limits after passage in 2008 of federal mental health parity legislation. This study used a pre-post design. Benefits and claims data for 43,855 enrollees in the health plan in 2009 and 2010 were analyzed for utilization and costs after removal of a 30-visit cap on the number of covered mental health visits. There was a large increase in the proportion of health plan enrollees with more than 30 outpatient visits after the cap's removal, an increase of 255% among subscribers and 176% among dependents (p<.001). The number of people near the 30-visit limit for substance use disorders was too few to observe an effect. Federal mental health parity legislation is likely to increase utilization of mental health services by individuals who had previously met their benefit limit.
Turbulent boundary-layer velocity profiles on a nonadiabatic flat plate at Mach number 6.5
NASA Technical Reports Server (NTRS)
Keener, E. R.; Hopkins, E. J.
1972-01-01
Velocity profiles were obtained from pitot-pressure and total-temperature measurements within a turbulent boundary layer on a large sharp-edged flat plate. Momentum-thickness Reynolds number ranged from 2590 to 8860 and wall-to-adiabatic-wall temperature ratios ranged from 0.3 to 0.5. Measurements were made both with and without boundary layer trips. Five methods are evaluated for correlating the measured velocity profiles with the incompressible law-of-the-wall and the velocity defect law. The mixing-length generalization of Van Driest gives the best correlation.
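For reference, the incompressible law of the wall and velocity-defect law against which such profiles are usually correlated take the standard textbook forms (nominal constants, not values fitted in this report):

u+ = (1/kappa) ln(y+) + B,  with u+ = u/u_tau, y+ = y u_tau / nu, kappa ~ 0.41, B ~ 5.0
(U_e - u)/u_tau = f(y/delta)  in the outer (velocity-defect) region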
Recent progress in 3-D imaging of sea freight containers
NASA Astrophysics Data System (ADS)
Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf
2015-03-01
The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is the high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in a foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution and depending on different number of projections.
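Iterative reconstruction methods of the kind discussed above repeatedly correct the image estimate against each measured projection. A minimal sketch of one classical scheme, the Kaczmarz/ART update, is shown below (a generic toy example on a 2x2 "image"; this is not the specific algorithm evaluated in the paper):

```python
import numpy as np

def art_reconstruct(A, b, n_iters=50, relaxation=0.5):
    """Algebraic reconstruction technique: cycle through the projection equations
    a_i . x = b_i and project the current image estimate onto each one."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for a_i, b_i in zip(A, b):
            norm_sq = a_i @ a_i
            if norm_sq > 0:
                x += relaxation * (b_i - a_i @ x) / norm_sq * a_i
    return x

# toy 2x2 "image" observed through 4 ray sums (two rows and two columns)
image = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],    # ray through the top row
              [0, 0, 1, 1],    # bottom row
              [1, 0, 1, 0],    # left column
              [0, 1, 0, 1]])   # right column
b = A @ image
print(art_reconstruct(A, b).round(2))   # approximately recovers [1, 2, 3, 4]
```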
da Silva, Isabel C M; Bremm, Bárbara; Teixeira, Jennifer L; Costa, Nathalia S; Barcellos, Júlio O J; Braccini, José; Cesconeto, Robson J; McManus, Concepta
2017-06-01
Brazilian pig production spans a large territory encompassing regions of different climatic and socio-economic realities. Production, physical, socio-economic, and environmental data were used to characterize pig production in the country. Multivariate analysis evaluated indices including number productivity, production levels, and income from pigs, together with the average pig farm area and socio-economic variables such as municipal human development index, technical guidance received from agricultural cooperatives and industrial companies, number of family farms, and offtake; and finally, environmental variables: latitude, longitude, annual temperature range, solar radiation index, as well as temperature and humidity index. The Southern region has the largest herd, number of pigs sold/sow, and offtake rate (p < 0.05), followed by the Midwest and Southeast. No significant correlations were found between production and productivity rates and the socio-economic and environmental variables across the regions of Brazil. Production indexes, productivity, and offtake rate discriminated the Northeast from the Midwest and the Northeast from the Southeast regions. The Northern region, with a large area, has few and far-between farms that rear pigs for subsistence. The Northeast region has large herds, but low productivity. The number of slaughtered pigs has been variable over the past three decades, with few states responsible for maintaining high production in Brazil. However, the activity can be effective in any region of the country with technology and technical assistance adapted to regional characteristics.
Dealing with Big Numbers: Representation and Understanding of Magnitudes outside of Human Experience
ERIC Educational Resources Information Center
Resnick, Ilyse; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Being able to estimate quantity is important in everyday life and for success in the STEM disciplines. However, people have difficulty reasoning about magnitudes outside of human perception (e.g., nanoseconds, geologic time). This study examines patterns of estimation errors across temporal and spatial magnitudes at large scales. We evaluated the…
Brief Introductory Psychology Textbooks: An Objective Analysis Update
ERIC Educational Resources Information Center
Griggs, Richard A.; Jackson, Sherri L.
2013-01-01
It has been 12 years since the last objective analysis of brief introductory psychology textbooks was published and 13 years since the textbook copyright period used in that study, 1997-2000. Given the importance of informed textbook evaluation and selection to the introductory course but the difficulty of this task because of the large number of…
The Subschools/Small Schools Movement--Taking Stock.
ERIC Educational Resources Information Center
Raywid, Mary Anne
Today, the division of large schools into subschools or subunits is often recommended as the answer to a number of problems in education. This paper examines the several forms of school-downsizing efforts and the somewhat diverse purposes for which they are being established. The data come from a review of literature and an evaluation of 22…
The Use and Validation of Qualitative Methods Used in Program Evaluation.
ERIC Educational Resources Information Center
Plucker, Frank E.
When conducting a two-year college program review, there are several advantages to supplementing the standard quantitative research approach with qualitative measures. Qualitative research does not depend on a large number of random samples; it uses a flexible design which can be refined as the research is executed, and it generates findings in a…
Jack E. Coster; Janet L. Searcy
1979-01-01
A large number of state and Federal experiment stations, universities and Federal, state and private resource management organizations have participated in the USDA Expanded Southern Pine Beetle Research and Applications Program (ESPBRAP) since its inception in 1975. The objectives of this accelerated effort have been to utilize available knowledge more fully...
Regulatory agencies are confronted with the daunting task of developing fish consumption advisories for a large number of lakes and rivers with limited resources. A feasible mechanism for developing region-wide fish advisories is to use a process-based mathematical model. One model of...
A design aid for determining width of filter strips
M.G. Dosskey; M.J. Helmers; D.E. Eisenhauer
2008-01-01
Watershed planners need a tool for determining width of filter strips that is accurate enough for developing cost-effective site designs and easy enough to use for making quick determinations on a large number and variety of sites. This study employed the process-based Vegetative Filter Strip Model to evaluate the relationship between filter strip width and trapping...
Status and progress in large-scale assessment of biological diversity in the United States
S. R. Shifley; C. H. Flather; W. B. Smith; K. H. Riitters; C. H. Sieg
2010-01-01
Conservation of biological diversity is one of seven criteria used to evaluate forest sustainability in the United States. The status of biological diversity is characterized by nine indicators that report area, protected status, and fragmentation of forest habitats; number and conservation status of forest-associated species; range and abundance of forest species to...
Advances in air pollution sensor technology have enabled the development of small and low cost systems to measure outdoor air pollution. The deployment of a large number of sensors across a small geographic area would have potential benefits to supplement traditional monitoring n...
Evaluating Comprehensive School Reform Models at Scale: Focus on Implementation
ERIC Educational Resources Information Center
Vernez, Georges; Karam, Rita; Mariano, Louis T.; DeMartini, Christine
2006-01-01
This study was designed to fill the "implementation measurement" gap. A methodology to quantitatively measure the level of Comprehensive School Reform (CSR) implementation that can be used across a variety of CSR models was developed, and then applied to measure actual implementation of four different CSR models in a large number of schools. The…
Thermostatic Radiator Valve Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dentz, Jordan; Ansanelli, Eric
2015-01-01
A large stock of multifamily buildings in the Northeast and Midwest is heated by steam distribution systems. Losses from these systems are typically high, and a significant number of apartments are overheated much of the time. Thermostatically controlled radiator valves (TRVs) are one potential strategy to combat this problem, but they have not been widely accepted by the residential retrofit market.
USDA-ARS?s Scientific Manuscript database
Field-based, high-throughput phenotyping (FB-HTP) methods are becoming more prevalent in plant genetics and breeding because they enable the evaluation of large numbers of genotypes under actual field conditions. Many systems for FB-HTP quantify and characterize the reflected radiation from the crop...
A Process for Reviewing and Evaluating Generated Test Items
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis
2016-01-01
Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs. But the current demand for items far exceeds the supply. Test items, as they are currently written, evoke a process that is both time-consuming and expensive because each item is written,…
USDA-ARS?s Scientific Manuscript database
The genus Bactrocera (Tephritidae) contains over 500 species, including many severe pests of fruits and vegetables. While native to tropical and sub-tropical areas of Africa, India, Southeast Asia and Australasia, a number of the pest species, largely members of the Bactrocera dorsalis complex, have...
ERIC Educational Resources Information Center
Jimison, Donna L.
2013-01-01
In a large community college in the Midwest, an online medical terminology course was experiencing success rates below that of college- and state-wide levels. This study evaluated the outcomes of intentional, increased numbers of e-mail communications between under-performing students and faculty for the purpose of improving student academic…
NASA Astrophysics Data System (ADS)
Scotter, Susan L.; Wood, Roger; McWeeny, David J.
A study to evaluate the potential of the Limulus amoebocyte lysate (LAL) test in conjunction with a Gram negative bacteria (GNB) plate count for detecting the irradiation of chicken is described. Preliminary studies demonstrated that chickens irradiated at an absorbed dose of 2.5 kGy could be differentiated from unirradiated birds by measuring levels of endotoxin and numbers of GNB on chicken skin. Irradiated birds were found to have endotoxin levels similar to those found in unirradiated birds but significantly lower numbers of GNB. In a limited study the test was found to be applicable to birds from different processors. The effect of temperature abuse on the microbiological profile, and thus the efficacy of the test, was also investigated. After temperature abuse, the irradiated birds were identifiable at worst up to 3 days after irradiation treatment at the 2.5 kGy level and at best some 13 days after irradiation. Temperature abuse at 15°C resulted in rapid recovery of surviving micro-organisms, which made differentiation of irradiated and unirradiated birds using this test unreliable. The microbiological quality of the bird prior to irradiation treatment also affected the test, as large numbers of GNB present on the bird prior to irradiation treatment resulted in larger numbers of survivors. In addition, monitoring the developing flora after irradiation treatment and during subsequent chilled storage also aided differentiation of irradiated and unirradiated birds. Large numbers of yeasts and Gram positive cocci were isolated from irradiated carcasses whereas Gram negative oxidative rods were the predominant spoilage flora on unirradiated birds.
Round, Jeff; Drake, Robyn; Kendall, Edward; Addicott, Rachael; Agelopoulos, Nicky; Jones, Louise
2015-01-01
Objectives We report the use of difference in differences (DiD) methodology to evaluate a complex, system-wide healthcare intervention. We use the worked example of evaluating the Marie Curie Delivering Choice Programme (DCP) for advanced illness in a large urban healthcare economy. Methods DiD was selected because a randomised controlled trial was not feasible. The method allows for before and after comparison of changes that occur in an intervention site with a matched control site. This enables analysts to estimate the effect of the intervention in the absence of a local control. Any policy, seasonal or other confounding effects over the test period are assumed to have occurred in a balanced way at both sites. Data were obtained from primary care trusts. Outcomes were place of death, inpatient admissions, length of stay and costs. Results Small changes were identified between pre- and post-DCP outputs in the intervention site. The proportion of home deaths and median cost increased slightly, while the number of admissions per patient and the average length of stay per admission decreased slightly. None of these changes was statistically significant. Conclusions Effect estimates were limited by small numbers accessing new services and by selection bias in the sample population and comparator site. In evaluating the effect of a complex healthcare intervention, the choice of analysis method and output measures is crucial. Alternatives to randomised controlled trials may be required for evaluating large-scale complex interventions, and the DiD approach is suitable, subject to careful selection of measured outputs and control population. PMID:24644163
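The DiD logic described in this study can be illustrated with a small numerical sketch. The figures below are invented for illustration only and are not the DCP data; the sketch simply shows how the before/after changes at the intervention and control sites combine into a single effect estimate.

```python
# Hypothetical mean outcomes (e.g., admissions per patient); not the DCP data.
pre_intervention, post_intervention = 2.10, 1.95   # intervention site, before/after
pre_control, post_control = 2.05, 2.02             # matched control site, before/after

# Difference in differences: the change at the intervention site minus the change
# at the control site; the control-site change absorbs secular and policy trends.
did_estimate = (post_intervention - pre_intervention) - (post_control - pre_control)
print(f"DiD estimate of the intervention effect: {did_estimate:+.3f}")
```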
Fu, Yao; Shao, Chunyi; Lu, Wenjuan; Li, Jin; Fan, Xianqun
2016-08-01
The aim of this study was to evaluate the long-term outcome when a free tarsomarginal graft is used to repair a large congenital coloboma in patients with a Tessier number 10 cleft. This was a retrospective, interventional case series. The medical records were reviewed for five children (six eyes) diagnosed as having Tessier number 10 cleft with large upper eyelid defects and symblepharon. These children were referred to the Department of Ophthalmology of Shanghai Ninth People's Hospital, between May 2007 and December 2012. Reconstructive techniques included repair of the upper eyelid defect with a free tarsomarginal graft taken from the lower eyelid, and reconstruction of the conjunctival fornix by using a conjunctival autograft after symblepharon lysis. All the children were followed up for more than 2 years. Postoperative upper eyelid contour, viability and function for corneal protection, and recurrence of symblepharon were assessed. A one-stage reconstruction procedure was used in all children. All reconstructed eyelids achieved a surgical goal of providing corneal protection and improved cosmesis, with marked improvement of exposure keratopathy and no associated lagophthalmos. Adequate reconstruction of the upper fornix was obtained, and there was no obvious recurrence of symblepharon. A free tarsomarginal graft is beneficial and seems to be an adequate method for reconstruction of large eyelid defects in children with a Tessier number 10 cleft. Symblepharon lysis with a conjunctival autograft for reconstruction of the ocular surface can be performed at the same time as eyelid repair as a one-stage procedure. Copyright © 2016. Published by Elsevier Ltd.
New similarity of triangular fuzzy number and its application.
Zhang, Xixiang; Ma, Weimin; Chen, Liping
2014-01-01
The similarity of triangular fuzzy numbers is an important metric for their application. There exist several approaches to measuring the similarity of triangular fuzzy numbers; however, some of them tend to yield large values. To make the similarity well distributed, a new method, SIAM (Shape's Indifferent Area and Midpoint), for measuring the similarity of triangular fuzzy numbers is put forward, which takes the shape's indifferent area and the midpoint of two triangular fuzzy numbers into consideration. Comparison with other similarity measurements shows the effectiveness of the proposed method. Then, it is applied to collaborative filtering recommendation to measure users' similarity. A collaborative filtering case is used to illustrate users' similarity based on the cloud model and on triangular fuzzy numbers; the result indicates that users' similarity based on triangular fuzzy numbers can obtain better discrimination. Finally, a simulated collaborative filtering recommendation system is developed which uses the cloud model and triangular fuzzy numbers to express users' comprehensive evaluation of items, and the result shows that the accuracy of collaborative filtering recommendation based on triangular fuzzy numbers is higher.
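The abstract does not give the SIAM formula, so the sketch below uses a simple, generic distance-based similarity between triangular fuzzy numbers (parameterized as (a, b, c), with parameters assumed to lie in [0, 1]) just to illustrate how such a metric is computed and compared; it is not the SIAM measure itself.

```python
def tfn_similarity(x, y):
    """Distance-based similarity of two triangular fuzzy numbers x=(a,b,c), y=(a,b,c).

    A generic 1 - mean absolute parameter distance, assuming all parameters lie
    in [0, 1]; a stand-in for the SIAM measure, not its actual formula.
    """
    return 1.0 - sum(abs(xi - yi) for xi, yi in zip(x, y)) / 3.0

# Two hypothetical user ratings expressed as triangular fuzzy numbers.
u1 = (0.2, 0.4, 0.6)
u2 = (0.3, 0.5, 0.7)
print(f"similarity(u1, u2) = {tfn_similarity(u1, u2):.3f}")  # 0.900
```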
Evaluation of Computational Method of High Reynolds Number Slurry Flow for Caverns Backfilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bettin, Giorgia
2015-05-01
The abandonment of salt caverns used for brining or product storage poses a significant environmental and economic risk. Risk mitigation can in part be addressed by the process of backfilling, which can improve the cavern geomechanical stability and reduce the risk of fluid loss to the environment. This study evaluates a currently available computational tool, Barracuda, to simulate such processes as slurry flow at high Reynolds number with high particle loading. Using the Barracuda software, a parametric sequence of simulations evaluated slurry flow at Reynolds numbers up to 15000 and loading up to 25%. Limitations come from the long time required to run these simulations, due in particular to the mesh size requirement at the jet nozzle. This study has found that slurry-jet width and centerline velocities are functions of Reynolds number and volume fraction. The solid phase was found to spread less than the water phase, with a spreading rate smaller than 1, dependent on the volume fraction. Particle size distribution does seem to have a large influence on the jet flow development. This study constitutes a first step toward understanding the behavior of highly loaded slurries and their ultimate application to cavern backfilling.
Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Gu, L.
2016-12-01
Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projection of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be evaluated efficiently a sufficiently large number of times in the optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for the CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in the CLM4.5 for the US-MOz site.
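As a much-simplified illustration of surrogate-based calibration, the sketch below uses a one-dimensional polynomial as a stand-in for the sparse-grid interpolant, with a cheap analytic misfit function playing the role of the expensive CLM run; it fits the surrogate from a handful of "model" evaluations and then optimizes the surrogate instead of the model. All function names and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Stand-in for an expensive simulator: mismatch between model output and data
# as a function of one parameter (the actual study calibrates five CLM parameters).
def expensive_model_misfit(theta):
    return (np.sin(3.0 * theta) + 0.5 * theta - 0.2) ** 2

# 1) A small design of "expensive" runs (sparse-grid points in the paper).
design = np.linspace(0.0, 2.0, 9)
misfits = np.array([expensive_model_misfit(t) for t in design])

# 2) Fit a cheap polynomial surrogate to those runs.
coeffs = np.polyfit(design, misfits, deg=6)
surrogate = lambda t: np.polyval(coeffs, t)

# 3) Optimize the surrogate, which is cheap to evaluate many times.
result = minimize_scalar(surrogate, bounds=(0.0, 2.0), method="bounded")
print(f"surrogate optimum at theta = {result.x:.3f}, "
      f"true misfit there = {expensive_model_misfit(result.x):.4f}")
```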
The power of a single trajectory
NASA Astrophysics Data System (ADS)
Schnellbächer, Nikolas D.; Schwarz, Ulrich S.
2018-03-01
Random walks are often evaluated in terms of their mean squared displacements, either for a large number of trajectories or for one very long trajectory. An alternative evaluation is based on the power spectral density, but here it is less clear which information can be extracted from a single trajectory. For continuous-time Brownian motion, Krapf et al now have mathematically proven that the one property that can be reliably extracted from a single trajectory is the frequency dependence of the ensemble-averaged power spectral density (Krapf et al 2018 New J. Phys. 20 023029). Their mathematical analysis also identifies the appropriate frequency window for this procedure and shows that the diffusion coefficient can be extracted by averaging over a small number of trajectories. The authors have verified their analytical results both by computer simulations and experiments.
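A minimal numerical sketch of the quantity under discussion: the periodogram (single-trajectory power spectral density) of simulated Brownian motion, averaged over a small number of trajectories, in line with the result that a small ensemble suffices to recover the diffusion coefficient. The simulation parameters are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, n_traj = 1.0, 1e-3, 4096, 8   # arbitrary illustration values

# Simulate a few 1-D Brownian trajectories: x_{k+1} = x_k + sqrt(2 D dt) * N(0, 1).
steps = np.sqrt(2.0 * D * dt) * rng.standard_normal((n_traj, n_steps))
trajectories = np.cumsum(steps, axis=1)

# Single-trajectory periodogram S(f) = |X(f)|^2 * dt / N for each trajectory,
# then an average over the small ensemble.
spectra = np.abs(np.fft.rfft(trajectories, axis=1)) ** 2 * dt / n_steps
freqs = np.fft.rfftfreq(n_steps, d=dt)
mean_psd = spectra.mean(axis=0)

# For Brownian motion the ensemble-averaged PSD decays roughly as 1/f^2 at low frequency.
print(freqs[1:5])
print(mean_psd[1:5])
```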
EFT of large scale structures in redshift space
NASA Astrophysics Data System (ADS)
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun
2018-03-01
We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc^-1 or k ~ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
Evaluating IPv6 Adoption in the Internet
NASA Astrophysics Data System (ADS)
Colitti, Lorenzo; Gunderson, Steinar H.; Kline, Erik; Refice, Tiziana
As IPv4 address space approaches exhaustion, large networks are deploying IPv6 or preparing for deployment. However, there is little data available about the quantity and quality of IPv6 connectivity. We describe a methodology to measure IPv6 adoption from the perspective of a Web site operator and to evaluate the impact that adding IPv6 to a Web site will have on its users. We apply our methodology to the Google Web site and present results collected over the last year. Our data show that IPv6 adoption, while growing significantly, is still low, varies considerably by country, and is heavily influenced by a small number of large deployments. We find that native IPv6 latency is comparable to IPv4 and provide statistics on IPv6 transition mechanisms used.
GW Calculations of Materials on the Intel Xeon-Phi Architecture
NASA Astrophysics Data System (ADS)
Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.
Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to utilize the Xeon-Phi architecture. We describe the optimization process and performance improvements achieved. We find that the GW method, like other higher level Many-Body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-waves, band-pairs and frequencies. Support provided by the SCIDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).
Goonesekere, Nalin Cw
2009-01-01
The large numbers of protein sequences generated by whole genome sequencing projects require rapid and accurate methods of annotation. The detection of homology through computational sequence analysis is a powerful tool in determining the complex evolutionary and functional relationships that exist between proteins. Homology search algorithms employ amino acid substitution matrices to detect similarity between protein sequences. The substitution matrices in common use today are constructed using sequences aligned without reference to protein structure. Here we present amino acid substitution matrices constructed from the alignment of a large number of protein domain structures from the structural classification of proteins (SCOP) database. We show that when incorporated into the homology search algorithms BLAST and PSI-BLAST, the structure-based substitution matrices enhance the efficacy of detecting remote homologs.
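To make the role of a substitution matrix concrete, here is a toy scoring sketch: a tiny symmetric matrix with hypothetical values (standing in for BLOSUM-style or the structure-based matrices described above) used to score an ungapped pairwise alignment.

```python
# Toy, symmetric substitution scores over a 4-letter alphabet (hypothetical values;
# real searches would use BLOSUM-style or structure-derived 20x20 matrices).
SUBS = {
    ("A", "A"): 4, ("A", "C"): 0, ("A", "D"): -2, ("A", "E"): -1,
    ("C", "C"): 9, ("C", "D"): -3, ("C", "E"): -4,
    ("D", "D"): 6, ("D", "E"): 2,
    ("E", "E"): 5,
}

def score(a, b):
    """Symmetric lookup of the substitution score for residues a and b."""
    return SUBS.get((a, b), SUBS.get((b, a)))

def ungapped_alignment_score(seq1, seq2):
    """Sum of substitution scores over an equal-length, ungapped alignment."""
    assert len(seq1) == len(seq2)
    return sum(score(a, b) for a, b in zip(seq1, seq2))

print(ungapped_alignment_score("ACDE", "ACEE"))  # 4 + 9 + 2 + 5 = 20
```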
Evaluation of a neuropsychological screen in an incarcerated population.
Ball, Tabitha D; Pastore, Richard E; Sollman, Miriam J; Burright, Richard G; Donovick, Peter J
2009-08-01
The Brief Neuropsychological Cognitive Examination (BNCE) is a screening device designed to rapidly assess neuropsychological functioning. The availability of an effective and efficient screen for neuropsychological and/or cognitive disorders is an important concern within various settings such as correctional facilities, where there are likely to be large numbers of individuals in need of evaluation. In the current study the utility of the BNCE in detecting cognitive impairments among a clinical sample of incarcerated individuals was evaluated by comparing performance on this instrument to performance on measures of general cognitive functioning. Results indicate that the BNCE demonstrates some utility in its ability to determine those in need of further evaluation of cognitive functioning.
Twelve-hour shift on ITU: a nursing evaluation.
Richardson, Annette; Dabner, Nichola; Curtis, Sarah
2003-01-01
This paper describes the introduction and subsequent evaluation of a 12-h shift system in a large ITU in the northeast of the UK. To date, only a small number of studies have evaluated nurses working the 12-h shifts in critical care areas. To evaluate the level of staff satisfaction, data were collected by means of a questionnaire involving 41 nurses, at 3 months following the introduction of the 12-h shifts. The responses from the evaluation advocated the continuation of 12-h shifts with alternative shift patterns for nurses who felt dissatisfied with the current system. Twelve-hour shifts can be seen as a flexible system for nurses working in intensive care and may assist with staff satisfaction and improving nurse recruitment and retention.
Pritychenko, B.; Birch, M.; Singh, B.; ...
2015-11-03
A complete B(E2)↑ evaluation and compilation for even-even nuclei has been presented. The present paper is a continuation of the nuclear data evaluations of P.H. Stelson and L. Grodzins, and of S. Raman et al., and was motivated by a large number of new measurements. It extends the list of evaluated nuclides from 328 to 452, and includes an extended list of nuclear reaction kinematics parameters and a comprehensive shell-model analysis. Evaluation policies for the analysis of experimental data are discussed and conclusions are given. Moreover, future plans for B(E2)↑ systematics and experimental technique analyses of even-even nuclei are outlined.
NASA Astrophysics Data System (ADS)
Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia
2015-03-01
The rapid evolution in medical imaging has led to an increased number of recurrent trials, primarily to ensure that the efficacy of new imaging techniques is known. The cost associated with time and resources in conducting such trials is usually high. The recruitment of participants, in a medium to large reader study, is often very challenging as the demanding number of cases discourages involvement with the trial. We aim to evaluate the efficacy of Digital Breast Tomosynthesis (DBT) in a recall assessment clinic in Australia in a prospective multi-reader-multi-case (MRMC) trial. Conducting such a study with the more commonly used fully crossed MRMC study design would require more cases and more cases read per reader, which was not viable in our setting. With an aim to perform a cost effective yet statistically efficient clinical trial, we evaluated alternative study designs, particularly the alternative split-plot MRMC study design, and compared and contrasted it with the more commonly used fully crossed MRMC study design. Our results suggest that 'split-plot', an alternative MRMC study design, could be very beneficial for medium to large clinical trials and the cost associated with conducting such trials can be greatly reduced without adversely affecting the variance of the study. We have also noted an inverse dependency between the number of required readers and cases to achieve a target variance. This suggests that split-plot could also be very beneficial for studies that focus on cases that are hard to procure or readers that are hard to recruit. We believe that our results may be relevant to other researchers seeking to design medium to large clinical trials.
Takei, Yuichiro; Katsuta, Hiroki; Takizawa, Kenichi; Ikegami, Tetsushi; Hamaguchi, Kiyoshi
2012-01-01
This paper presents an experimental evaluation of communication during human walking motion, using the medium access control (MAC) evaluation system for a prototype ultra-wideband (UWB) based wireless body area network for suitable MAC parameter settings for data transmission. Its physical layer and MAC specifications are based on the draft standard in IEEE802.15.6. This paper studies the effects of the number of retransmissions and the number of commands of GTS (guaranteed time slot) request packets in the CAP (contention access period) during human walking motion by varying the number of sensor nodes or the number of CFP (contention free period) slots in the superframe. The experiments were performed in an anechoic chamber. The number of packets received is decreased by packet loss caused by human walking motion in the case where 2 slots are set for CFP, regardless of the number of nodes, and this materially decreases the total number of packets received. The number of retransmissions and the GTS request commands increase according to increases in the number of nodes, largely reflecting the effects of the number of CFP slots in the case where 4 nodes are attached. In the cases where 2 or 3 nodes are attached and 4 slots are set for CFP, the packet transmission rate is more than 95%. In the case where 4 nodes are attached and 6 slots are set for CFP, the packet transmission rate is reduced to 88% at best.
Speckle interferometry with temporal phase evaluation for measuring large-object deformation.
Joenathan, C; Franze, B; Haible, P; Tiziani, H J
1998-05-01
We propose a new method for measuring large-object deformations by using temporal evolution of the speckles in speckle interferometry. The principle of the method is that by deforming the object continuously, one obtains fluctuations in the intensity of the speckle. A large number of frames of the object motion are collected to be analyzed later. The phase data for whole-object deformation are then retrieved by inverse Fourier transformation of a filtered spectrum obtained by Fourier transformation of the signal. With this method one is capable of measuring deformations of more than 100 μm, which is not possible using conventional electronic speckle pattern interferometry. We discuss the underlying principle of the method and the results of the experiments. Some nondestructive testing results are also presented.
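The temporal phase evaluation described above is essentially the Fourier-transform method applied pixel-wise along the time axis. The sketch below applies that idea to a synthetic single-pixel intensity signal with a linearly growing phase; all parameters are invented for illustration, and edge effects are simply excluded from the comparison.

```python
import numpy as np

# Synthetic single-pixel speckle intensity during a continuous deformation:
# I(t) = a + b*cos(phi(t) + phi0), with a linearly growing phase phi(t).
n, dt = 2048, 1.0
t = np.arange(n) * dt
true_phase = 2 * np.pi * 0.05 * t            # hypothetical deformation phase ramp
intensity = 1.0 + 0.8 * np.cos(true_phase + 0.3)

# 1) Fourier transform of the temporal signal.
spectrum = np.fft.fft(intensity)

# 2) Keep only the positive-frequency sideband (suppress DC and negative frequencies).
filtered = np.zeros_like(spectrum)
filtered[1:n // 2] = spectrum[1:n // 2]

# 3) Inverse transform gives a complex analytic signal whose unwrapped angle is the phase.
analytic = np.fft.ifft(filtered)
recovered = np.unwrap(np.angle(analytic))

# The recovered phase carries an arbitrary constant offset (phi0 plus edge effects);
# reference both curves at the midpoint before comparing over the interior.
recovered -= recovered[n // 2] - true_phase[n // 2]
interior = slice(100, -100)
rms_err = np.sqrt(np.mean((recovered[interior] - true_phase[interior]) ** 2))
print(f"RMS phase error over the interior: {rms_err:.3f} rad")
```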
NASA Technical Reports Server (NTRS)
Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.
1991-01-01
A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurement. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax is approximately equal to 450 GHz).
Lu, Bo; Lu, Haibin; Palta, Jatinder
2010-05-12
The objective of this study was to evaluate the effect of kilovoltage cone-beam computed tomography (CBCT) on registration accuracy and image quality with a reduced number of planar projections used in volumetric imaging reconstruction. The ultimate goal is to evaluate the possibility of reducing the patient dose while maintaining registration accuracy under different projection-number schemes for various clinical sites. An Elekta Synergy Linear accelerator with an onboard CBCT system was used in this study. The quality of the Elekta XVI cone-beam three-dimensional volumetric images reconstructed with a decreasing number of projections was quantitatively evaluated by a Catphan phantom. Subsequently, we tested the registration accuracy of imaging data sets on three rigid anthropomorphic phantoms and three real patient sites under the reduced projection-number (as low as 1/6th) reconstruction of CBCT data with different rectilinear shifts and rotations. CBCT scan results of the Catphan phantom indicated the CBCT images got noisier when the number of projections was reduced, but their spatial resolution and uniformity were hardly affected. The maximum registration errors under a small amount of transformation of the reference CT images were found to be within 0.7 mm translation and 0.3° rotation. However, when the projection number was lower than one-fourth of the full set with a large amount of transformation of the reference CT images, the registration could easily be trapped in local minima solutions for a nonrigid anatomy. We concluded that, by using a projection-number reduction strategy with conscientious care, the image-guided localization procedure could achieve a lower patient dose without losing registration accuracy for various clinical sites and situations. A faster scanning time is the main advantage compared to the mA decrease-based, dose-reduction method.
Saito, Takaya; Rehmsmeier, Marc
2015-01-01
Binary classifiers are routinely evaluated with performance measures such as sensitivity and specificity, and performance is frequently illustrated with Receiver Operating Characteristics (ROC) plots. Alternative measures such as positive predictive value (PPV) and the associated Precision/Recall (PRC) plots are used less frequently. Many bioinformatics studies develop and evaluate classifiers that are to be applied to strongly imbalanced datasets in which the number of negatives outweighs the number of positives significantly. While ROC plots are visually appealing and provide an overview of a classifier's performance across a wide range of specificities, one can ask whether ROC plots could be misleading when applied in imbalanced classification scenarios. We show here that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity. PRC plots, on the other hand, can provide the viewer with an accurate prediction of future classification performance due to the fact that they evaluate the fraction of true positives among positive predictions. Our findings have potential implications for the interpretation of a large number of studies that use ROC plots on imbalanced datasets.
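A small sketch of the comparison being made, using scikit-learn on a synthetically imbalanced dataset (roughly 1% positives; the class ratio and model choice are arbitrary): ROC AUC can look reassuring while average precision, the usual summary of the PRC, reveals how few of the positive predictions are actually correct.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Strongly imbalanced synthetic data: roughly 1% positives.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99, 0.01],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# ROC AUC is largely insensitive to the class imbalance; average precision
# (the PRC summary) reflects the precision achievable among positive predictions.
print(f"ROC AUC:           {roc_auc_score(y_te, scores):.3f}")
print(f"Average precision: {average_precision_score(y_te, scores):.3f}")
```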
The evaluation of interstitial Cajal cells distribution in non-tumoral colon disorders.
Becheanu, G; Manuc, M; Dumbravă, Mona; Herlea, V; Hortopan, Monica; Costache, Mariana
2008-01-01
Interstitial cells of Cajal (ICC) are pacemakers that generate electric waves recorded from the gut and are important for intestinal motility. The aim of the study was to evaluate the distribution of interstitial cells of Cajal in colon specimens from patients with idiopathic chronic pseudo-obstruction and other non-tumoral colon disorders as compared with samples from normal colon. The distribution pattern of ICC in the normal and pathological human colon was evaluated by immunohistochemistry using antibodies for CD117, CD34, and S-100. In two cases of chronic idiopathic intestinal pseudo-obstruction we found a diffuse or focal reduction in the number of Cajal cells, the loss of immunoreactivity for CD117 being correlated with loss of immunoreactivity for the CD34 marker. Our study revealed that the number of interstitial cells of Cajal also decreases in colonic diverticular disease and Crohn disease (p<0.05), whereas the number of enteric neurones appears to be normal. These findings might explain some of the large bowel motor abnormalities known to occur in these disorders. Interstitial Cajal cells may play an important role in pathogenesis, and staining for CD117 on transmural intestinal surgical biopsies could allow a more extensive diagnosis in the evaluation of chronic intestinal pseudo-obstruction.
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data. Sampling perhaps becomes the only feasible solution to the problems. How to draw samples that can represent the underlying OSNs has remained a formidable task because of a number of conceptual and methodological reasons. Especially, most of the empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods, including uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We try to mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method has been able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling by comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions of large-scale real OSN data.
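For readers unfamiliar with the baseline samplers being compared, here is a bare-bones classic random walk sampler over an adjacency-list graph (a toy graph, not an OSN); SARW itself adjusts the walk using induced-edge and external-edge information, which is not reproduced here.

```python
import random

# Toy undirected graph as an adjacency list (a stand-in for an OSN friendship graph).
graph = {
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "d", "e"],
    "d": ["b", "c", "e"], "e": ["c", "d"],
}

def random_walk_sample(graph, start, n_samples, seed=0):
    """Classic random walk: at each step move to a uniformly chosen neighbor.

    Note the well-known bias: nodes are visited roughly in proportion to their
    degree, which RW-based estimators must correct for.
    """
    rng = random.Random(seed)
    node, visited = start, []
    for _ in range(n_samples):
        node = rng.choice(graph[node])
        visited.append(node)
    return visited

sample = random_walk_sample(graph, start="a", n_samples=1000)
freq = {v: sample.count(v) / len(sample) for v in graph}
print(freq)  # visit frequencies roughly track node degrees
```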
NASA Astrophysics Data System (ADS)
Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram
2010-05-01
Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of different approaches for dealing effectively with such hotspots and alternative strategies for dealing with hotspots.
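The partitioning trade-off described above can be made concrete with a short sketch: hash partitioning spreads spatial tiles uniformly across nodes (robust to hotspots, poor spatial locality), while range partitioning keeps neighboring tiles together (good locality, prone to hotspots). The tile keys and node count below are arbitrary illustrations, not the GEON or OpenTopography layout.

```python
import hashlib

N_NODES = 4  # number of storage/compute nodes in the hypothetical cluster

def hash_partition(tile_key: str) -> int:
    """Spread tiles uniformly across nodes; destroys spatial locality."""
    digest = hashlib.md5(tile_key.encode()).hexdigest()
    return int(digest, 16) % N_NODES

def range_partition(tile_row: int, rows_per_node: int = 25) -> int:
    """Keep contiguous rows of tiles on the same node; preserves locality,
    but a frequently accessed region becomes a hotspot on a single node."""
    return min(tile_row // rows_per_node, N_NODES - 1)

tiles = [(f"tile_{r}_{c}", r) for r in range(100) for c in range(2)]
print("hash  ->", {n: sum(hash_partition(k) == n for k, _ in tiles) for n in range(N_NODES)})
print("range ->", {n: sum(range_partition(r) == n for _, r in tiles) for n in range(N_NODES)})
```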
Properties of targeted preamplification in DNA and cDNA quantification.
Andersson, Daniel; Akrap, Nina; Svec, David; Godfrey, Tony E; Kubista, Mikael; Landberg, Göran; Ståhlberg, Anders
2015-01-01
Quantification of small numbers of molecules often requires preamplification to generate enough copies for accurate downstream enumerations. Here, we studied experimental parameters in targeted preamplification and their effects on downstream quantitative real-time PCR (qPCR). To evaluate different strategies, we monitored the preamplification reaction in real-time using SYBR Green detection chemistry followed by melting curve analysis. Furthermore, individual targets were evaluated by qPCR. The preamplification reaction performed best when a large number of primer pairs was included in the primer pool. In addition, preamplification efficiency, reproducibility and specificity were found to depend on the number of template molecules present, primer concentration, annealing time and annealing temperature. The amount of nonspecific PCR products could also be reduced about 1000-fold using bovine serum albumin, glycerol and formamide in the preamplification. On the basis of our findings, we provide recommendations how to perform robust and highly accurate targeted preamplification in combination with qPCR or next-generation sequencing.
Evaluation of the eigenvalue method in the solution of transient heat conduction problems
NASA Astrophysics Data System (ADS)
Landry, D. W.
1985-01-01
The eigenvalue method is evaluated to determine the advantages and disadvantages of the method as compared to fully explicit, fully implicit, and Crank-Nicolson methods. Time comparisons and accuracy comparisons are made in an effort to rank the eigenvalue method in relation to the comparison schemes. The eigenvalue method is used to solve the parabolic heat equation in multidimensions with transient temperatures. Extensions into three dimensions are made to determine the method's feasibility in handling large geometry problems requiring great numbers of internal mesh points. The eigenvalue method proves to be slightly better in accuracy than the comparison routines because of an exact treatment, as opposed to a numerical approximation, of the time derivative in the heat equation. It has the potential of being a very powerful routine in solving long transient type problems. The method is not well suited to finely meshed grid arrays or large regions because of the time and memory requirements necessary for calculating large sets of eigenvalues and eigenvectors.
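The eigenvalue method evaluated here can be sketched compactly for the 1-D heat equation: discretize u_t = α u_xx into du/dt = A u, diagonalize A = V Λ Vᵀ once, and then the transient solution at any time is u(t) = V exp(Λ t) Vᵀ u(0), with no time stepping. The sketch below is a minimal 1-D illustration with arbitrary parameters, not the multidimensional code examined in the thesis.

```python
import numpy as np

# 1-D rod with fixed (zero) end temperatures; interior nodes only.
n, alpha, dx = 50, 1.0, 1.0 / 51

# Second-difference operator: du/dt = A u, with A = (alpha/dx^2) * tridiag(1, -2, 1).
A = (alpha / dx**2) * (np.diag(-2.0 * np.ones(n))
                       + np.diag(np.ones(n - 1), 1)
                       + np.diag(np.ones(n - 1), -1))

# One-time eigendecomposition (A is symmetric, so eigh applies).
eigvals, V = np.linalg.eigh(A)

def temperature(u0, t):
    """Exact-in-time solution of the semi-discrete system at time t."""
    return V @ (np.exp(eigvals * t) * (V.T @ u0))

u0 = np.sin(np.pi * np.arange(1, n + 1) * dx)   # initial temperature profile
for t in (0.0, 0.01, 0.1):
    print(f"t = {t:5.2f}, peak temperature = {temperature(u0, t).max():.4f}")
```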
Biometric Attendance and Big Data Analysis for Optimizing Work Processes.
Verma, Neetu; Xavier, Teenu; Agrawal, Deepak
2016-01-01
Although biometric attendance management is available, large healthcare organizations have difficulty in big data analysis for optimization of work processes. The aim of this project was to assess the implementation of a biometric attendance system and its utility following big data analysis. In this prospective study, the implementation of the biometric system was evaluated over a 3-month period at our institution. Software integration with other existing systems for data analysis was also evaluated. Implementation of the biometric system could be successfully done over a two-month period with enrollment of 10,000 employees into the system. However, generating reports and taking action for this large number of staff was a challenge. For this purpose, software was developed for capturing the duty roster of each employee, integrating it with the biometric system, and adding an SMS gateway. This helped in automating the process of sending SMSs to each employee who had not signed in. Standalone biometric systems have limited functionality in large organizations unless they are meshed with the employee duty roster.
Passalacqua, Thais Gaban; Dutra, Luiz Antonio; de Almeida, Letícia; Velásquez, Angela Maria Arenas; Torres, Fabio Aurelio Esteves; Yamasaki, Paulo Renato; dos Santos, Mariana Bastos; Regasini, Luis Octavio; Michels, Paul A M; Bolzani, Vanderlan da Silva; Graminha, Marcia A S
2015-08-15
Chalcones form a class of compounds that belong to the flavonoid family and are widely distributed in plants. Their simple structure and the ease of preparation make chalcones attractive scaffolds for the synthesis of a large number of derivatives enabling the evaluation of the effects of different functional groups on biological activities. In this Letter, we report the successful synthesis of a series of novel prenylated chalcones via Claisen-Schmidt condensation and the evaluation of their effect on the viability of the Trypanosomatidae parasites Leishmania amazonensis, Leishmania infantum and Trypanosoma cruzi. Copyright © 2015 Elsevier Ltd. All rights reserved.
Evaluation of Alkaline Cleaner Materials
NASA Technical Reports Server (NTRS)
Partz, Earl
1998-01-01
Alkaline cleaners used to process aluminum substrates have contained chromium as the corrosion inhibitor. Chromium is a hazardous substance whose use and control are described by environmental laws. Replacement materials that have the characteristics of chromated alkaline cleaners need to be found that address both the cleaning requirements and environmental impacts. This report will review environmentally friendly candidates evaluated as non-chromium alkaline cleaner replacements and methods used to compare those candidates one versus another. The report will also list characteristics used to select candidates based on their declared contents. It will also describe and evaluate methods used to discriminate among the large number of prospective candidates.
A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan
NASA Astrophysics Data System (ADS)
Bhongade, A. S.; Khodke, P. M.
2014-04-01
Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though scheduling of such problems is solved using heuristics, available solution approaches can provide solutions only for moderately sized problems due to the large computation time required. In this work, a scheduling approach is developed for such a flow-shop manufacturing system having machining workstations followed by assembly workstations. The initial schedule is generated using the Disjunctive method, and a genetic algorithm (GA) is applied further for generating schedules for large-sized problems. GA is found to give near-optimal solutions based on the deviation of makespan from the lower bound. The lower bound of the makespan of such problems is estimated, and the percent deviation of makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain the optimal makespan.
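A compact sketch of a permutation-coded genetic algorithm for the plain permutation flow shop, using order crossover, swap mutation, and tournament selection. The processing times are random placeholders, and the assembly-stage extension and Disjunctive-method seeding described in the paper are not reproduced here.

```python
import random

rng = random.Random(1)
N_JOBS, N_MACHINES = 12, 4
PROC = [[rng.randint(1, 20) for _ in range(N_MACHINES)] for _ in range(N_JOBS)]

def makespan(perm):
    """Completion time of the last job on the last machine for a job permutation."""
    done = [0] * N_MACHINES
    for job in perm:
        prev = 0
        for m in range(N_MACHINES):
            done[m] = max(done[m], prev) + PROC[job][m]
            prev = done[m]
    return done[-1]

def order_crossover(p1, p2):
    """Copy a slice of parent 1, then fill the rest in parent-2 order."""
    i, j = sorted(rng.sample(range(N_JOBS), 2))
    child = [None] * N_JOBS
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child]
    for k in range(N_JOBS):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(perm):
    a, b = rng.sample(range(N_JOBS), 2)
    perm[a], perm[b] = perm[b], perm[a]

def tournament(pop):
    return min(rng.sample(pop, 3), key=makespan)

pop = [rng.sample(range(N_JOBS), N_JOBS) for _ in range(40)]
for _ in range(200):
    nxt = [min(pop, key=makespan)]                 # elitism: keep the current best
    while len(nxt) < len(pop):
        child = order_crossover(tournament(pop), tournament(pop))
        if rng.random() < 0.2:
            mutate(child)
        nxt.append(child)
    pop = nxt

best = min(pop, key=makespan)
print("best makespan:", makespan(best), "permutation:", best)
```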
Handling qualities criteria for the space shuttle orbiter during the terminal phase of flight
NASA Technical Reports Server (NTRS)
Stapleford, R. L.; Klein, R. H.; Hob, R. H.
1972-01-01
It was found that large portions of the military handling qualities specification are directly applicable. However a number of additional and substitute criteria are recommended for areas not covered or inadequately covered in the military specification. Supporting pilot/vehicle analyses and simulation experiments were conducted and are described. Results are also presented of analytical and simulator evaluations of three specific interim Orbiter designs which provided a test of the proposed handling qualities criteria. The correlations between the analytical and experimental evaluations were generally excellent.
Analysis of rockbolt performance at the Waste Isolation Pilot Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrill, L.J.; Francke, C.T.; Saeb, S.
Rockbolt failures at the Waste Isolation Pilot Plant have been recorded since 1990 and are categorized in terms of mode of failure. The failures are evaluated in terms of physical location of installation within the mine, local excavation geometry and stratigraphy, proximity to other excavations or shafts, and excavation age. The database of failures has revealed discrete areas of the mine containing relatively large numbers of failures. The results of metallurgical analyses and standard rockbolt load testing have generally been in agreement with the in situ evaluations.
Tune Evaluation From Phased BPM Turn-By-Turn Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexahin, Y.; Gianfelice-Wendt, E.; Marsh, W.
2010-05-18
In fast ramping synchrotrons like the Fermilab Booster the conventional methods of betatron tune evaluation from the turn-by-turn data may not work due to rapid changes of the tunes (sometimes in the course of a few dozen turns) and a high level of noise. We propose a technique based on phasing of signals from a large number of BPMs which significantly increases the signal-to-noise ratio. Implementation of the method in the Fermilab Booster control system is described and some measurement results are presented.
Identification of stochastic interactions in nonlinear models of structural mechanics
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2017-07-01
In this paper, a polynomial approximation is presented by which the Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated. The input space is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin Hypercube Sampling simulation runs can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not otherwise be identified for the nonlinear FEM model.
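A small sketch of the sampling-based Sobol estimation being described, using Latin Hypercube designs from SciPy and the standard pick-freeze (Saltelli-style) estimator of first-order indices; the test function is a cheap analytic stand-in for the nonlinear FEM model (or its polynomial approximation).

```python
import numpy as np
from scipy.stats import qmc

def model(x):
    """Cheap stand-in for the (approximated) nonlinear FEM response."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

d, n = 3, 10000
sampler = qmc.LatinHypercube(d=d, seed=0)
A = sampler.random(n)          # two independent LHS designs on [0, 1]^d
B = sampler.random(n)

fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

# First-order Sobol index of input i via the pick-freeze estimator:
# S_i ~ mean( f(B) * (f(AB_i) - f(A)) ) / Var(Y), where AB_i is A with column i taken from B.
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]
    S_i = np.mean(fB * (model(AB) - fA)) / var_y
    print(f"S_{i} ~ {S_i:.3f}")
```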
ERIC Educational Resources Information Center
Peng, Ching-Huai
2008-01-01
After the 2008 Olympics is concluded and commentators and journalists internationally begin the process of evaluating Beijing's performance as the host city, one of the primary elements to be analyzed will be the quality of visitor service provided by more than 70,000 volunteers. Given the large number of Chinese students who have chosen a Western…
Using a Model of Analysts' Judgments to Augment an Item Calibration Process
ERIC Educational Resources Information Center
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling
2015-01-01
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
ERIC Educational Resources Information Center
Quinn, Diana
2010-01-01
Purpose: The purpose of this paper is to examine current approaches to teaching used in academic development services and consider the diversity of their learners (academic faculty). Faculty engagement with teaching issues and innovations remains a concern for the higher education sector. The academic population contains large numbers of…
ERIC Educational Resources Information Center
Niculovic, Milica; Zivkovic, Dragana; Manasijevic, Dragan; Strbac, Nada
2012-01-01
A large number of criteria for evaluating Internet addiction have been developed recently. The research of Internet addiction among students of the Technical faculty in Bor, University of Belgrade has been conducted and its results are presented in this paper. The study included 270 students using criteria of Young's Internet Addiction Test. In…
USDA-ARS?s Scientific Manuscript database
Baled silage has become a popular form of forage conservation; however, many practical management questions have not been investigated thoroughly. Our objectives were to evaluate the number of polyethylene wrapping layers and the presence (OB) or absence (SUN) of an O2-limiting barrier within the wr...
ERIC Educational Resources Information Center
Hurwitz, Michael; Mbekeani, Preeya P.; Nipson, Margaret M.; Page, Lindsay C.
2017-01-01
Subtle policy adjustments can induce relatively large "ripple effects." We evaluate a College Board initiative that increased the number of free SAT score reports available to low-income students and changed the time horizon for using these score reports. Using a difference-in-differences analytic strategy, we estimate that targeted…
1:1 Laptop Implications and District Policy Considerations
ERIC Educational Resources Information Center
Sauers, Nicholas J.
2012-01-01
Background. The state of Iowa has seen a drastic increase in the number of schools that provide one laptop for each student. These 1:1 schools have invested large amounts of time and money into becoming a 1:1 school. The current research on 1:1 schools is sparse, and policy makers are actively trying to evaluate those programs. Purpose. To…
ERIC Educational Resources Information Center
Fasihuddin, Heba; Skinner, Geoff; Athauda, Rukshan
2017-01-01
Open learning represents a new form of online learning where courses are provided freely online for large numbers of learners. MOOCs are examples of this form of learning. The authors see an opportunity for personalising open learning environments by adapting to learners' learning styles and providing adaptive support to meet individual learner…
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. Here this procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
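The en-bloc idea can be illustrated with plain linear algebra: K accepted single-row changes of a matrix A are together a rank-K update A + U Vᵀ, so the inverse can be refreshed in one blocked Woodbury step using matrix-matrix products instead of K separate Sherman-Morrison (rank-1) passes. This is a numerical sketch of the algebra only, not the authors' production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 200, 8                       # matrix size and number of delayed (accepted) moves

A = rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)

# K accepted single-particle moves: each replaces one row of A.
rows = rng.choice(n, size=K, replace=False)
new_rows = rng.standard_normal((K, n))

# Rank-K form of the accumulated update: A_new = A + U @ V.T,
# with U the selected unit columns and V the row differences.
U = np.zeros((n, K))
U[rows, np.arange(K)] = 1.0
V = (new_rows - A[rows, :]).T

# One blocked Woodbury update (matrix-matrix work, applied en bloc):
# (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}
AiU = A_inv @ U
small = np.linalg.inv(np.eye(K) + V.T @ AiU)
A_new_inv = A_inv - AiU @ small @ (V.T @ A_inv)

A_new = A.copy()
A_new[rows, :] = new_rows
print(np.allclose(A_new_inv, np.linalg.inv(A_new)))   # True: the blocked update is exact
```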
Physical exercise program for children with bronchial asthma.
Szentágothai, K; Gyene, I; Szócska, M; Osváth, P
1987-01-01
A long-term physical exercise program was established for a large number of children with bronchial asthma. Asthmatic children were first taught to swim on their backs to prevent breathing problems customary for beginners using other strokes. They concurrently participated in gymnasium exercises, and the program was later completed with outdoor running. Program effectiveness was evaluated by monitoring asthmatic symptoms, changes in medication, and changes in the activity and physical fitness of the children. Data collected from 121 children showed that during the first year in the program the number of days with asthmatic symptoms decreased in a large majority of the patients while medication was decreased. School absenteeism and hospitalization dropped markedly. Parental evaluation of the children indicated much improvement in 51.2%, improvement in 40.5%, unchanged condition in 7.4%, and deterioration of general health was only reported in one child (0.8%). The same extent of improvement continued during the second year. The Cooper test was applied for the first time to such an exercise program and indicated that the participating asthmatic children performed as well as a control group of nonasthmatic children, and the cardiovascular efficiency of the asthmatics was actually better.
Bakand, S; Winder, C; Khalil, C; Hayes, A
2005-12-01
Exposure to occupational and environmental contaminants is a major contributor to human health problems. Inhalation of gases, vapors, aerosols, and mixtures of these can cause a wide range of adverse health effects, ranging from simple irritation to systemic diseases. Despite significant achievements in the risk assessment of chemicals, the toxicological database, particularly for industrial chemicals, remains limited. Considering there are approximately 80,000 chemicals in commerce, and an extremely large number of chemical mixtures, in vivo testing of this large number is unachievable from both economical and practical perspectives. While in vitro methods are capable of rapidly providing toxicity information, regulatory agencies in general are still cautious about the replacement of whole-animal methods with new in vitro techniques. Although studying the toxic effects of inhaled chemicals is a complex subject, recent studies demonstrate that in vitro methods may have significant potential for assessing the toxicity of airborne contaminants. In this review, current toxicity test methods for risk evaluation of industrial chemicals and airborne contaminants are presented. To evaluate the potential applications of in vitro methods for studying respiratory toxicity, more recent models developed for toxicity testing of airborne contaminants are discussed.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo.
McDaniel, T; D'Azevedo, E F; Li, Y W; Wong, K; Kent, P R C
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on both saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
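The surrogate step can be pictured with a small sketch: train a Gaussian process on a handful of expensive model runs and then evaluate a large ensemble of parameter realizations at negligible cost. The forward model, kernel, and sample sizes below are stand-ins, not the GPIES configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sketch of the surrogate idea only (not the GPIES code): run the expensive
# model at a few base points, train a GP, then evaluate a large ensemble of
# parameter realizations at negligible cost.  `expensive_model` is a stand-in.

rng = np.random.default_rng(1)

def expensive_model(theta):                      # hypothetical forward model
    return np.sin(3.0 * theta[..., 0]) + 0.5 * theta[..., 1] ** 2

base_points = rng.uniform(-1, 1, size=(30, 2))   # the only true model runs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(base_points, expensive_model(base_points))

ensemble = rng.uniform(-1, 1, size=(100_000, 2)) # large ensemble of realizations
predicted = gp.predict(ensemble)                 # surrogate "measurements"
# `predicted` can now feed the ensemble-smoother update (parameter-measurement
# cross-covariances) without any further calls to the expensive model.
print(predicted.shape)
```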
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions on the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
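A minimal sketch of the multi-fidelity importance-sampling idea follows, with toy one-dimensional models standing in for the jet simulations: the cheap model locates the failure region and defines a biasing density, and the unbiased estimate reweights a modest number of high-fidelity evaluations. All models, thresholds, and sample sizes are assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Minimal sketch of multi-fidelity importance sampling with toy 1-D models;
# the cheap model defines the biasing density, and the unbiased estimate
# reweights a modest number of high-fidelity evaluations.

rng = np.random.default_rng(2)
threshold = -2.8                                   # "failure": output below this

def high_fidelity(x): return x + 0.05 * np.sin(5 * x)   # expensive (stand-in)
def low_fidelity(x):  return x                           # cheap approximation

nominal = stats.norm(0.0, 1.0)                     # uncertain inlet condition

# 1) Explore cheaply and build a biasing distribution around the failure region.
x_lf = nominal.rvs(200_000, random_state=rng)
lf_fail = x_lf[low_fidelity(x_lf) < threshold]
bias = stats.norm(lf_fail.mean(), 2.0 * lf_fail.std())

# 2) Unbiased importance-sampling estimate with few high-fidelity runs.
z = bias.rvs(2_000, random_state=rng)
weights = nominal.pdf(z) / bias.pdf(z)
p_fail = np.mean((high_fidelity(z) < threshold) * weights)

# Brute-force reference (feasible here only because the stand-in model is cheap).
p_ref = np.mean(high_fidelity(nominal.rvs(2_000_000, random_state=rng)) < threshold)
print(f"importance sampling: {p_fail:.2e}, brute force: {p_ref:.2e}")
```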
Lahat, G; Lubezky, N; Gerstenhaber, F; Nizri, E; Gysi, M; Rozenek, M; Goichman, Y; Nachmany, I; Nakache, R; Wolf, I; Klausner, J M
2016-09-29
We evaluated the prognostic significance and universal validity of the total number of evaluated lymph nodes (ELN), number of positive lymph nodes (PLN), lymph node ratio (LNR), and log odds of positive lymph nodes (LODDS) in a relatively large and homogenous cohort of surgically treated pancreatic ductal adenocarcinoma (PDAC) patients. Prospectively accrued data were retrospectively analyzed for 282 PDAC patients who had pancreaticoduodenectomy (PD) at our institution. Long-term survival was analyzed according to the ELN, PLN, LNR, and LODDS. Of these patients, 168 patients (59.5 %) had LN metastasis (N1). Mean ELN and PLN were 13.5 and 1.6, respectively. LN positivity correlated with a greater number of evaluated lymph nodes; positive lymph nodes were identified in 61.4 % of the patients with ELN ≥ 13 compared with 44.9 % of the patients with ELN < 13 (p = 0.014). Median overall survival (OS) and 5-year OS rate were higher in N0 than in N1 patients, 22.4 vs. 18.7 months and 35 vs. 11 %, respectively (p = 0.008). Mean LNR was 0.12; 91 patients (54.1 %) had LNR < 0.3. Among the N1 patients, median OS was comparable in those with LNR ≥ 0.3 vs. LNR < 0.3 (16.7 vs. 14.1 months, p = 0.950). Neither LODDS nor various ELN and PLN cutoff values provided more discriminative information within the group of N1 patients. Our data confirms that lymph node positivity strongly reflects PDAC biology and thus patient outcome. While a higher number of evaluated lymph nodes may provide a more accurate nodal staging, it does not have any prognostic value among N1 patients. Similarly, PLN, LNR, and LODDS had limited prognostic relevance.
Modeling the effect of transient populations on epidemics in Washington DC.
Parikh, Nidhi; Youssef, Mina; Swarup, Samarth; Eubank, Stephen
2013-11-06
Large numbers of transients visit big cities, where they come into contact with many people at crowded areas. However, epidemiological studies have not paid much attention to the role of this subpopulation in disease spread. We evaluate the effect of transients on epidemics by extending a synthetic population model for the Washington DC metro area to include leisure and business travelers. A synthetic population is obtained by combining multiple data sources to build a detailed minute-by-minute simulation of population interaction resulting in a contact network. We simulate an influenza-like illness over the contact network to evaluate the effects of transients on the number of infected residents. We find that there are significantly more infections when transients are considered. Since much population mixing happens at major tourism locations, we evaluate two targeted interventions: closing museums and promoting healthy behavior (such as the use of hand sanitizers, covering coughs, etc.) at museums. Surprisingly, closing museums has no beneficial effect. However, promoting healthy behavior at the museums can both reduce and delay the epidemic peak. We analytically derive the reproductive number and perform stability analysis using an ODE-based model.
Modeling the effect of transient populations on epidemics in Washington DC
NASA Astrophysics Data System (ADS)
Parikh, Nidhi; Youssef, Mina; Swarup, Samarth; Eubank, Stephen
2013-11-01
Large numbers of transients visit big cities, where they come into contact with many people at crowded areas. However, epidemiological studies have not paid much attention to the role of this subpopulation in disease spread. We evaluate the effect of transients on epidemics by extending a synthetic population model for the Washington DC metro area to include leisure and business travelers. A synthetic population is obtained by combining multiple data sources to build a detailed minute-by-minute simulation of population interaction resulting in a contact network. We simulate an influenza-like illness over the contact network to evaluate the effects of transients on the number of infected residents. We find that there are significantly more infections when transients are considered. Since much population mixing happens at major tourism locations, we evaluate two targeted interventions: closing museums and promoting healthy behavior (such as the use of hand sanitizers, covering coughs, etc.) at museums. Surprisingly, closing museums has no beneficial effect. However, promoting healthy behavior at the museums can both reduce and delay the epidemic peak. We analytically derive the reproductive number and perform stability analysis using an ODE-based model.
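For readers unfamiliar with the ODE-based analysis mentioned at the end of the abstract, the sketch below integrates a generic SIR system and reports its basic reproductive number; it is a textbook illustration with assumed rates, not the paper's synthetic-population or transient-aware model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Textbook SIR illustration; rates are assumed values, not calibrated to
# Washington DC, and transients are not modeled here.

beta, gamma = 0.45, 0.25           # transmission and recovery rates (per day)
print(f"basic reproductive number R0 = {beta / gamma:.2f}")

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0, 160), [0.999, 0.001, 0.0], dense_output=True)
t = np.linspace(0, 160, 161)
s, i, r = sol.sol(t)
print(f"peak infected fraction {i.max():.3f} on day {t[i.argmax()]:.0f}")
```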
Pucci, Stefano; D'Alò, Simona; De Pasquale, Tiziana; Illuminati, Ilenia; Makri, Elena; Incorvaia, Cristoforo
2015-01-01
In the few studies available, the risk of developing systemic reactions (SR) to hymenoptera stings in patients with previous large local reactions (LLRs) to stings ranges from 0 to 7 %. We evaluated both retrospectively and prospectively the risk of SRs in patients with LLRs to stings. An overall number of 477 patients, 396 with an SR as the first manifestation of allergy and 81 with a history of only LLRs after hymenoptera stings, were included in the study. All patients had clinical history and allergy testing (skin tests and/or specific IgE) indicative of allergy to venom of only one kind of Hymenoptera. Of the 81 patients with LLRs, 53 were followed up for 3 years by annual control visits, while the 396 patients with SR were evaluated retrospectively. Among the 396 patients with an SR, only 17 (4.2 %) had had a previous LLR as the debut of allergy, after a history of normal local reactions to Hymenoptera stings. All 81 patients with a history of only LLRs had previously had at least two LLRs, with an overall number of 238 stings and no SR. Among the 53 patients who were prospectively evaluated, we found that 31 (58.3 %) were restung by the same type of insect, with an overall number of 59 stings, presenting only LLRs and no SR. Our findings confirm that patients with repeated LLRs to stings had no risk of SR, while a single LLR does not exclude such risk. This has to be considered in the management of patients with LLRs.
Gatta, L; Scarpignato, C; Fiorini, G; Belsey, J; Saracino, I M; Ricci, C; Vaira, D
2018-05-01
The increasing prevalence of strains resistant to antimicrobial agents is a critical issue in the management of Helicobacter pylori (H. pylori) infection. The aims were: (1) to evaluate the prevalence of primary resistance to clarithromycin, metronidazole and levofloxacin; (2) to assess the effectiveness of sequential therapy on resistant strains; and (3) to identify the minimum number of subjects to enrol for evaluating the effectiveness of an eradication regimen in patients harbouring resistant strains. A total of 1682 consecutive treatment-naïve H. pylori-positive patients referred for upper GI endoscopy between 2010 and 2015 were studied, and resistance was assessed by E-test. Sequential therapy was offered, and its effectiveness evaluated and analysed. H. pylori primary resistance to the antimicrobials tested was high, and increased between 2010 and 2015. Eradication rates were (estimates and 95% CIs): 97.3% (95.6-98.4) in strains susceptible to clarithromycin and metronidazole; 96.1% (91.7-98.2) in strains resistant to metronidazole but susceptible to clarithromycin; 93.4% (88.2-96.4) in strains resistant to clarithromycin but susceptible to metronidazole; 83.1% (77.7-87.3) in strains resistant to clarithromycin and metronidazole. For any treatment with a 75%-85% eradication rate, some 98-144 patients with resistant strains need to be studied to get reliable information on effectiveness in these patients. H. pylori primary resistance is increasing and represents the most critical factor affecting effectiveness. Sequential therapy eradicated 83% of strains resistant to clarithromycin and metronidazole. Reliable estimates of the effectiveness of a given regimen in patients harbouring resistant strains can be obtained only by assessing a large number of strains. © 2018 John Wiley & Sons Ltd.
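The quoted 98-144 range is consistent with the standard normal-approximation sample-size formula for estimating a proportion. The sketch below reconstructs it under the assumption of a 95% confidence interval with roughly a ±7% half-width, which is not stated in the abstract.

```python
import math

# Rough reconstruction of the quoted 98-144 range using the standard
# normal-approximation sample size for estimating a proportion; the +/-7%
# half-width of a 95% CI is an assumption, not stated in the abstract.

def n_required(p, half_width, z=1.96):
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

for p in (0.75, 0.80, 0.85):
    print(p, n_required(p, 0.07))   # roughly 147, 126, 100
```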
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
Assessment of dynamic closure for premixed combustion large eddy simulation
NASA Astrophysics Data System (ADS)
Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan
2015-09-01
Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width or be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically by LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. Detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insights and reasoning are provided to explain the observed behaviour.
Biomass Burning Organic Aerosol as a Modulator of Droplet Number in the Southern Atlantic
NASA Astrophysics Data System (ADS)
Kacarab, M.; Howell, S. G.; Small Griswold, J. D.; Thornhill, K. L., II; Wood, R.; Redemann, J.; Nenes, A.
2017-12-01
Aerosols play a significant yet highly variable role in local and global air quality and climate. They act as cloud condensation nuclei (CCN) and both scatter and absorb radiation, lending a large source of uncertainty to climate predictions. Biomass burning organic aerosol (BBOA) can drastically elevate CCN concentrations, but the response in cloud droplet number may be suppressed or even reversed due to low supersaturations that develop from strong competition for water vapor. Constraining droplet response to BBOA is a key factor in understanding aerosol-cloud interactions. The southeastern Atlantic (SEA) cloud deck off the west coast of central Africa offers a prime opportunity to study these cloud-BBOA interactions for marine stratocumulus because, during winter in the southern hemisphere, the SEA cloud deck is overlain by a large, optically thick BBOA plume. The NASA ObseRvations of Aerosols above Clouds and their intEractionS (ORACLES) study focuses on increasing the understanding of how these BBOA affect the SEA cloud deck. Measurements of CCN concentration, aerosol size distribution and composition, updraft velocities, and cloud droplet number in and around the SEA cloud deck and associated BBOA plume were taken aboard the NASA P-3 aircraft during the first two years of the ORACLES campaign in September 2016 and August 2017. Here we evaluate the predicted and observed droplet number sensitivity to the aerosol fluctuations and quantify, using the data, the drivers of droplet number variability (vertical velocity or aerosol properties) as a function of biomass burning plume characteristics. Over the course of the campaign, different levels of BBOA influence in the marine boundary layer (MBL) were observed, allowing for comparison of cloud droplet number, hygroscopicity parameter (κ), and maximum in-cloud supersaturation over a range of "clean" and "dirty" conditions. Droplet number sensitivity to aerosol concentration, κ, and vertical updraft velocities is also evaluated. Generally, an increase in BBOA led to increased droplet number along with decreased κ and maximum in-cloud supersaturation (leading to an increase in competition for water vapor). This work seeks to contribute to an increased understanding of how CCN and aerosol properties affect the radiative and hydrological properties and impact of the cloud.
Priority-setting and hospital strategic planning: a qualitative case study.
Martin, Douglas; Shulman, Ken; Santiago-Sorrell, Patricia; Singer, Peter
2003-10-01
To describe and evaluate the priority-setting element of a hospital's strategic planning process. Qualitative case study and evaluation against the conditions of 'accountability for reasonableness' of a strategic planning process at a large urban university-affiliated hospital. The hospital's strategic planning process met the conditions of 'accountability for reasonableness' in large part. Specifically: the hospital based its decisions on reasons (both information and criteria) that the participants felt were relevant to the hospital; the number and type of participants were very extensive; the process, decisions and reasons were well communicated throughout the organization, using multiple communication vehicles; and the process included an ethical framework linked to an effort to evaluate and improve the process. However, there were opportunities to improve the process, particularly by giving participants more time to absorb the information relevant to priority-setting decisions, more time to take difficult decisions and some means to appeal or revise decisions. A case study linked to an evaluation using 'accountability for reasonableness' can serve to improve priority-setting in the context of hospital strategic planning.
Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.
2002-01-01
In this work we analyze the issue of the formation of condensation trails ("contrails") in the near-field of an aircraft wake. The basic configuration consists of an engine exhaust jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple micro-physics model for ice growth has been used to couple ice and vapor phases. Large eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.
Odd versus even: a scientific study of the ‘rules’ of plating
Michel, Charles; Spence, Charles
2016-01-01
We report on the results of a series of large-scale computer-based preference tests (conducted at The Science Museum in London and online) that evaluated the widely-held belief that food should be plated in odd rather than even numbers of elements in order to maximize the visual appeal of a dish. Participants were presented with pairs of plates of food showing odd versus even numbers of seared scallops (3 vs. 4; 1–6 in Experiment 7), arranged in a line, as a polygon or randomly, on either a round or square white plate. No consistent evidence for a preference for odd or even numbers of food items was found, thus questioning the oft-made assertion that an odd number of items on a plate looks better than an even number. The implications of these results are discussed. PMID:26839741
NASA Technical Reports Server (NTRS)
Scholl, R. E. (Editor)
1979-01-01
Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.
Harper, J G; Fuller, R; Sweeney, D; Waldmann, T
1998-04-01
This paper describes ergonomic issues raised during a project to provide a replacement real-time bus route control system to a large public transport company. Task and system analyses highlighted several deficiencies in the original system architecture, the human-machine interfaces and the general approach to system management. The eventual live prototype replaced the existing original system for a trial evaluation period of several weeks. During this period, a number of studies were conducted with the system users in order to measure any improvements that the new system, with its ergonomic features, produced over the old. Importantly, the results confirmed that (a) general responsiveness and service quality were improved, and (b) users were more comfortable with the new design. We conclude with a number of caveats that we believe will be useful to any group addressing technology impact in a large organisation.
Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks
Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco
2016-01-01
In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709
The relationship between gastrointestinal motility and gastric dilatation-volvulus in dogs.
Gazzola, Krista M; Nelson, Laura L
2014-09-01
Gastric dilatation-volvulus (GDV) is a devastating disease that most commonly affects large and giant-breed dogs. Though a number of risk factors have been associated with the development of GDV, the etiology of GDV remains unclear. Abnormal gastric motility patterns and delayed gastric emptying have been previously described in dogs following GDV. Work evaluating the effects of gastropexy procedures and changes to gastric motility after experimental GDV has not found the same changes as those found in dogs with naturally occurring GDV. Although the role of abnormal gastric motility in dogs with GDV will need to be clarified with additional research, such study is likely to be facilitated by improved access to and development of noninvasive measurement techniques for the evaluation of gastric emptying and other motility parameters. In particular, the availability of Food and Drug Administration-approved wireless motility devices for the evaluation of gastrointestinal motility is particularly promising in the study of GDV and other functional gastrointestinal diseases of large and giant-breed dogs. Published by Elsevier Inc.
Evaluation of fuel preparation systems for lean premixing-prevaporizing combustors
NASA Technical Reports Server (NTRS)
Dodds, W. J.; Ekstedt, E. E.
1985-01-01
A series of experiments was carried out in order to produce design data for a premixing prevaporizing fuel-air mixture preparation system for aircraft gas turbine engine combustors. The fuel-air mixture uniformity of four different system design concepts was evaluated over a range of conditions representing the cruise operation of a modern commercial turbofan engine. Operating conditions including pressure, temperature, fuel-to-air ratio, and velocity, exhibited no clear effect on mixture uniformity of systems using pressure-atomizing fuel nozzles and large-scale mixing devices. However, the performance of systems using atomizing fuel nozzles and large-scale mixing devices was found to be sensitive to operating conditions. Variations in system design variables were also evaluated and correlated. Mixing uniformity was found to improve with system length, pressure drop, and the number of fuel injection points per unit area. A premixing system capable of providing mixing uniformity to within 15 percent over a typical range of cruise operating conditions is demonstrated.
Recent progress in 3-D imaging of sea freight containers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchs, Theobald, E-mail: theobold.fuchs@iis.fraunhofer.de; Schön, Tobias; Sukowski, Frank
The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours. This is, of course, too slow to apply to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today’s 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is the high cost of numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution and depending on the number of projections.
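The iterative reconstruction methods referred to above are algebraic schemes of the ART/Kaczmarz family. The toy sketch below solves a deliberately underdetermined system of ray sums by Kaczmarz sweeps; the random system matrix is a stand-in for a real CT geometry, and a container-scale application would add physical modelling, regularization, and GPU acceleration.

```python
import numpy as np

# Toy sketch of ART/Kaczmarz-type iterations behind few-projection
# reconstruction.  The random binary matrix is a stand-in for ray sums over
# a real CT geometry.

rng = np.random.default_rng(3)
n_pixels, n_rays = 64, 40                       # deliberately underdetermined
x_true = rng.random(n_pixels)
A = (rng.random((n_rays, n_pixels)) < 0.2).astype(float)   # each ray hits ~20% of pixels
b = A @ x_true                                  # measured projections

x = np.zeros(n_pixels)
for sweep in range(50):                         # Kaczmarz sweeps over all rays
    for i in range(n_rays):
        a = A[i]
        if a @ a > 0:
            x += (b[i] - a @ x) / (a @ a) * a   # project onto the i-th hyperplane

print("projection residual:", np.linalg.norm(A @ x - b))
```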
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%
Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.
2016-01-01
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
Round, Jeff; Drake, Robyn; Kendall, Edward; Addicott, Rachael; Agelopoulos, Nicky; Jones, Louise
2015-03-01
We report the use of difference in differences (DiD) methodology to evaluate a complex, system-wide healthcare intervention. We use the worked example of evaluating the Marie Curie Delivering Choice Programme (DCP) for advanced illness in a large urban healthcare economy. DiD was selected because a randomised controlled trial was not feasible. The method allows for before and after comparison of changes that occur in an intervention site with a matched control site. This enables analysts to control for the effect of the intervention in the absence of a local control. Any policy, seasonal or other confounding effects over the test period are assumed to have occurred in a balanced way at both sites. Data were obtained from primary care trusts. Outcomes were place of death, inpatient admissions, length of stay and costs. Small changes were identified between pre- and post-DCP outputs in the intervention site. The proportion of home deaths and median cost increased slightly, while the number of admissions per patient and the average length of stay per admission decreased slightly. None of these changes was statistically significant. Effect estimates were limited by the small numbers accessing new services and by selection bias in the sample population and comparator site. In evaluating the effect of a complex healthcare intervention, the choice of analysis method and output measures is crucial. Alternatives to randomised controlled trials may be required for evaluating large-scale complex interventions, and the DiD approach is suitable, subject to careful selection of measured outputs and control population. Published by the BMJ Publishing Group Limited.
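The DiD estimate itself reduces to the interaction coefficient in a two-by-two regression. The sketch below runs such a regression on simulated data with statsmodels; the variable names, effect size, and data are invented for illustration and do not reproduce the DCP analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Schematic difference-in-differences regression on simulated data (the DCP
# evaluation used matched primary care trust data).  The coefficient on
# treated:post is the DiD estimate of the intervention effect.

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),            # intervention vs control site
    "post": rng.integers(0, 2, n),               # before vs after the programme
})
true_effect = -0.3                               # assumed true effect
df["admissions"] = (1.5 + 0.2 * df.treated + 0.1 * df.post
                    + true_effect * df.treated * df.post + rng.normal(0, 1, n))

fit = smf.ols("admissions ~ treated * post", data=df).fit()
print("DiD estimate:", fit.params["treated:post"])
```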
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%.
Turnbull, Jocelyn Christine; Keller, Elizabeth D; Norris, Margaret W; Wiltshire, Rachael M
2016-09-13
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions.
Simplified Deployment of Health Informatics Applications by Providing Docker Images.
Löbe, Matthias; Ganslandt, Thomas; Lotzmann, Lydia; Mate, Sebastian; Christoph, Jan; Baum, Benjamin; Sariyar, Murat; Wu, Jie; Stäubert, Sebastian
2016-01-01
Due to the specific needs of biomedical researchers, in-house development of software is widespread. A common problem is to maintain and enhance software after the funded project has ended. Even if many tools are made open source, only a few projects manage to attract a user base large enough to ensure sustainability. Reasons for this include complex installation and configuration of biomedical software as well as ambiguous terminology for the features provided, all of which make evaluation of software laborious. Docker is a para-virtualization technology based on Linux containers that eases deployment of applications and facilitates evaluation. We investigated a suite of software developments funded by a large umbrella organization for networked medical research within the last 10 years and created Docker containers for a number of applications to support utilization and dissemination.
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
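The core of the proposed compression is a nonlinear least-squares fit of a rational function to surface data. The sketch below fits f(r) = P(r)/(1 + Q(r)) to synthetic heights with SciPy's Levenberg-Marquardt solver; plain radial polynomials stand in for the Zernike terms, and the data are not videokeratoscopic.

```python
import numpy as np
from scipy.optimize import least_squares

# Schematic of the rational-function idea: fit f(r) = P(r) / (1 + Q(r)) to
# surface heights by nonlinear least squares (Levenberg-Marquardt).  Plain
# radial polynomials stand in for Zernike terms; the data are synthetic.

rng = np.random.default_rng(5)
r = np.linspace(0.0, 1.0, 200)
height = 0.8 * r**2 / (1 + 0.6 * r**2) + 0.001 * rng.standard_normal(r.size)

def model(c, r):
    p0, p1, p2, q1, q2 = c
    return (p0 + p1 * r**2 + p2 * r**4) / (1.0 + q1 * r**2 + q2 * r**4)

fit = least_squares(lambda c: model(c, r) - height, x0=np.zeros(5), method="lm")
print("rms surface error:", np.sqrt(np.mean(fit.fun ** 2)))
```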
Evaluation of a strain-sensitive transport model in LES of turbulent nonpremixed sooting flames
NASA Astrophysics Data System (ADS)
Lew, Jeffry K.; Yang, Suo; Mueller, Michael E.
2017-11-01
Direct Numerical Simulations (DNS) of turbulent nonpremixed jet flames have revealed that Polycyclic Aromatic Hydrocarbons (PAH) are confined to spatially intermittent regions of low scalar dissipation rate due to their slow formation chemistry. The length scales of these regions are on the order of the Kolmogorov scale or smaller, where molecular diffusion effects dominate over turbulent transport effects irrespective of the large-scale turbulent Reynolds number. A strain-sensitive transport model has been developed to identify such species whose slow chemistry, relative to local mixing rates, confines them to these small length scales. In a conventional nonpremixed ``flamelet'' approach, these species are then modeled with their molecular Lewis numbers, while remaining species are modeled with an effective unity Lewis number. A priori analysis indicates that this strain-sensitive transport model significantly affects PAH yield in nonpremixed flames with essentially no impact on temperature and major species. The model is applied with Large Eddy Simulation (LES) to a series of turbulent nonpremixed sooting jet flames and validated via comparisons with experimental measurements of soot volume fraction.
Landsat-4 MSS and Thematic Mapper data quality and information content analysis
NASA Technical Reports Server (NTRS)
Anuta, P. E.; Bartolucci, L. A.; Dean, M. E.; Lozano, D. F.; Malaret, E.; Mcgillem, C. D.; Valdes, J. A.; Valenzuela, C. R.
1984-01-01
Landsat-4 Thematic Mapper and Multispectral Scanner data were analyzed to obtain information on data quality and information content. Geometric evaluations were performed to test band-to-band registration accuracy. Thematic Mapper overall system resolution was evaluated using scene objects which demonstrated sharp high contrast edge responses. Radiometric evaluation included detector relative calibration, effects of resampling, and coherent noise effects. Information content evaluation was carried out using clustering, principal components, transformed divergence separability measure, and numerous supervised classifiers on data from Iowa and Illinois. A detailed spectral class analysis (multispectral classification) was carried out on data from the Des Moines, IA area to compare the information content of the MSS and TM for a large number of scene classes.
Results of the NFIRAOS RTC trade study
NASA Astrophysics Data System (ADS)
Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent L.; Gilles, Luc; Herriot, Glen; Kerley, Daniel A.; Ljusic, Zoran; McVeigh, Eric A.; Prior, Robert; Smith, Malcolm; Wang, Lianqi
2014-07-01
With two large deformable mirrors with a total of more than 7000 actuators that need to be driven from the measurements of six 60x60 LGS WFSs (total 1.23Mpixels) at 800Hz with a latency of less than one frame, NFIRAOS presents an interesting real-time computing challenge. This paper reports on a recent trade study to evaluate which current technology could meet this challenge, with the plan to select a baseline architecture by the beginning of NFIRAOS construction in 2014. We have evaluated a number of architectures, ranging from very specialized layouts with custom boards to more generic architectures made from commercial off-the-shelf units (CPUs with or without accelerator boards). For each architecture, we have found the most suitable algorithm, mapped it onto the hardware and evaluated the performance through benchmarking whenever possible. We have evaluated a large number of criteria, including cost, power consumption, reliability and flexibility, and proceeded with scoring each architecture based on these criteria. We have found that, with today's technology, the NFIRAOS requirements are well within reach of off-the-shelf commercial hardware running a parallel implementation of the straightforward matrix-vector multiply (MVM) algorithm for wave-front reconstruction. Even accelerators such as GPUs and Xeon Phis are no longer necessary. Indeed, we have found that the entire NFIRAOS RTC can be handled by seven 2U high-end PC-servers using 10GbE connectivity. Accelerators are only required for the off-line process of updating the control matrix every ~10s, as observing conditions change.
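A back-of-envelope check shows why a dense matrix-vector multiply (MVM) reconstructor is within reach of commodity hardware. The slope and actuator counts below are rough figures inferred from the abstract rather than the exact NFIRAOS sizing, and the timing loop only benchmarks a single-threaded NumPy matvec.

```python
import numpy as np, time

# Back-of-envelope feasibility check for a dense MVM reconstructor; slope
# and actuator counts are approximate figures inferred from the abstract.

n_slopes = 6 * 60 * 60 * 2            # six 60x60 LGS WFSs, x/y slopes (~43k)
n_act = 7000                          # total DM actuators (approximate)
frame_rate = 800.0                    # Hz

flops = 2.0 * n_act * n_slopes
print(f"sustained rate needed: {flops * frame_rate / 1e12:.2f} TFLOP/s")

rng = np.random.default_rng(0)
R = rng.random((n_act, n_slopes), dtype=np.float32)    # control matrix
s = rng.random(n_slopes, dtype=np.float32)
t0 = time.perf_counter()
for _ in range(20):
    a = R @ s                          # one reconstruction step
dt = (time.perf_counter() - t0) / 20
print(f"single-node MVM: {dt * 1e3:.2f} ms per frame (budget: 1.25 ms)")
```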
Evaluating the performance of free-formed surface parts using an analytic network process
NASA Astrophysics Data System (ADS)
Qian, Xueming; Ma, Yanqiao; Liang, Dezhi
2018-03-01
To design parts with a free-formed surface successfully, a critical issue is how to evaluate and select a favourable evaluation strategy before design begins. The evaluation of free-formed surface parts is a multiple criteria decision-making (MCDM) problem that requires the consideration of a large number of interdependent factors. The analytic network process (ANP) is a relatively new MCDM method that can systematically deal with all kinds of dependencies. In this paper, factors drawn from the product life-cycle that influence the design of free-formed surface parts are proposed. After the interdependence among these factors is analysed, a Hybrid ANP (HANP) structure for evaluating the part’s curved surface is constructed. Then, a HANP evaluation of an impeller is presented to illustrate the application of the proposed method.
Reducing random measurement error in assessing postural load on the back in epidemiologic surveys.
Burdorf, A
1995-02-01
The goal of this study was to design strategies to assess postural load on the back in occupational epidemiology by taking into account the reliability of measurement methods and the variability of exposure among the workers under study. Intermethod reliability studies were evaluated to estimate the systematic bias (accuracy) and random measurement error (precision) of various methods to assess postural load on the back. Intramethod reliability studies were reviewed to estimate random variability of back load over time. Intermethod surveys have shown that questionnaires have a moderate reliability for gross activities such as sitting, whereas duration of trunk flexion and rotation should be assessed by observation methods or inclinometers. Intramethod surveys indicate that exposure variability can markedly affect the reliability of estimates of back load if the estimates are based upon a single measurement over a certain time period. Equations have been presented to evaluate various study designs according to the reliability of the measurement method, the optimum allocation of the number of repeated measurements per subject, and the number of subjects in the study. Prior to a large epidemiologic study, an exposure-oriented survey should be conducted to evaluate the performance of measurement instruments and to estimate sources of variability for back load. The strategy for assessing back load can be optimized by balancing the number of workers under study and the number of repeated measurements per worker.
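The trade-off between repeated measurements per subject and reliability can be illustrated with the standard Spearman-Brown-type relation for the mean of k measurements. This is a generic formula, and the assumed single-measurement reliability below is illustrative rather than a value or equation taken from the paper.

```python
import math

# Generic Spearman-Brown-type relation for the reliability of the mean of k
# repeated measurements, R_k = k*r / (1 + (k-1)*r); the single-measurement
# reliability is an assumed illustrative value.

def reliability_of_mean(r_single, k):
    return k * r_single / (1 + (k - 1) * r_single)

def repeats_needed(r_single, r_target):
    return math.ceil(r_target * (1 - r_single) / (r_single * (1 - r_target)))

r = 0.4                               # assumed single-measurement reliability
for k in (1, 2, 4, 8):
    print(k, round(reliability_of_mean(r, k), 2))
print("repeats needed for R >= 0.8:", repeats_needed(r, 0.8))   # 6
```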
Mejía-Benítez, María A; Bonnefond, Amélie; Yengo, Loïc; Huyvaert, Marlène; Dechaume, Aurélie; Peralta-Romero, Jesús; Klünder-Klünder, Miguel; García Mena, Jaime; El-Sayed Moustafa, Julia S; Falchi, Mario; Cruz, Miguel; Froguel, Philippe
2015-02-01
Childhood obesity is a major public health problem in Mexico, affecting one in every three children. Genome-wide association studies identified genetic variants associated with childhood obesity, but a large missing heritability remains to be elucidated. We have recently shown a strong association between a highly polymorphic copy number variant encompassing the salivary amylase gene (AMY1, also known as AMY1A) and obesity in European and Asian adults. In the present study, we aimed to evaluate the association between AMY1 copy number and obesity in Mexican children. We evaluated the number of AMY1 copies in 597 Mexican children (293 obese children and 304 normal weight controls) through highly sensitive digital PCR. The effect of AMY1 copy number on obesity status was assessed using a logistic regression model adjusted for age and sex. We identified a marked effect of AMY1 copy number on reduced risk of obesity (OR per estimated copy 0.84, with the number of copies ranging from one to 16 in this population; p = 4.25 × 10⁻⁶). The global association between AMY1 copy number and reduced risk of obesity seemed to be mostly driven by the contribution of the highest AMY1 copy numbers. Strikingly, all children with >10 AMY1 copies were normal weight controls. Salivary amylase initiates the digestion of dietary starch, which is highly consumed in Mexico. Our current study suggests putative benefits of a high number of AMY1 copies (and related production of salivary amylase) on energy metabolism in Mexican children.
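The reported analysis is a logistic regression of obesity on AMY1 copy number adjusted for age and sex. The sketch below runs the same kind of model on simulated data, with the per-copy effect chosen to mimic the reported OR of about 0.84; all data and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Schematic of the reported analysis on simulated data: logistic regression
# of obesity on AMY1 copy number, adjusted for age and sex.  The per-copy
# effect is chosen to mimic the reported OR of ~0.84; all values are invented.

rng = np.random.default_rng(6)
n = 600
df = pd.DataFrame({
    "amy1": rng.integers(1, 17, n),               # 1 to 16 copies
    "age": rng.uniform(6, 12, n),
    "sex": rng.integers(0, 2, n),
})
logit_p = 0.6 - 0.17 * df.amy1                    # log(0.84) is about -0.17
df["obese"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("obese ~ amy1 + age + sex", data=df).fit(disp=0)
print("OR per AMY1 copy:", np.exp(fit.params["amy1"]))
```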
HammerCloud: A Stress Testing System for Distributed Analysis
NASA Astrophysics Data System (ADS)
van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo
2011-12-01
Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at a steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).
ERIC Educational Resources Information Center
Fuchs, Douglas; Hendricks, Emma; Walsh, Meagan E.; Fuchs, Lynn S.; Gilbert, Jennifer K.; Zhang Tracy, Wen; Patton, Samuel, III.; Davis, Nicole; Kim, Wooliya; Elleman, Amy M.; Peng, Peng
2018-01-01
We conducted a 14-week experimental study of 2 versions of a relatively comprehensive RC intervention that involved 50 classroom teachers, 15 tutors, and 120 children drawn in equal proportions from grades 3 and 5 in 13 schools in a large urban school district. Students were randomly assigned in equal numbers to the 2 tutoring conditions and a…
ERIC Educational Resources Information Center
Fuchs, Douglas; Hendricks, Emma; Walsh, Meagan E.; Fuchs, Lynn S.; Gilbert, Jennifer K.; Zhang Tracy, Wen; Patton, Samuel, III; Davis-Perkins, Nicole; Kim, Wooliya; Elleman, Amy M.; Peng, Peng
2018-01-01
We conducted a 14-week experimental study of 2 versions of a relatively comprehensive RC intervention that involved 50 classroom teachers, 15 tutors, and 116 children drawn in equal proportions from grades 3 and 5 in 13 schools in a large urban school district. Students were randomly assigned in equal numbers to the two tutoring conditions and a…
ERIC Educational Resources Information Center
Grimm, Anne; Mrosek, Thorsten; Martinsohn, Anna; Schulte, Andreas
2011-01-01
Although a large number of different organisations offer various forest education programmes within Germany, specific information (i.e., sectoral and programme content and provision at a state level) is lacking. This study used a survey of all 61 forest education organisations (43 respondents) in the state of North Rhine-Westphalia, Germany, to…
C.R. Lane; E. Hobden; L. Laurenson; V.C. Barton; K.J.D. Hughes; H. Swan; N. Boonham; A.J. Inman
2008-01-01
Plant health regulations to prevent the introduction and spread of Phytophthora ramorum and P. kernoviae require rapid, cost effective diagnostic methods for screening large numbers of plant samples at the time of inspection. Current on-site techniques require expensive equipment, considerable expertise and are not suited for plant...
Steven R. Lawson; Robert E. Manning
2001-01-01
Tradeoffs are an inherent part of many of the decisions faced by outdoor recreation managers. For example, decisions concerning the social carrying capacity of popular attraction sites involve tradeoffs between limiting visitor use to ensure a high quality experience and allowing high levels of visitor use to ensure that large numbers of visitors retain access to park...
ERIC Educational Resources Information Center
Fullen, Mark D.
2009-01-01
The numbers of workers in the residential construction industry are on the rise. Falls have continually been the largest contributor to residential construction worker deaths and injuries. These workers are largely self-employed or working for small companies. These individuals are difficult to reach through traditional methods. This research…
ERIC Educational Resources Information Center
Wheeler, Marc E.; Keller, Thomas E.; DuBois, David L.
2010-01-01
Between 2007 and 2009, reports were released on the results of three separate large-scale random assignment studies of the effectiveness of school-based mentoring programs for youth. The studies evaluated programs implemented by Big Brothers Big Sisters of America (BBBSA) affiliates (Herrera et al., 2007), Communities In Schools of San Antonio,…
Rural Outreach Chemistry for Kids (R.O.C.K.): The Program and Its Evaluation
ERIC Educational Resources Information Center
Lynch, Mark; Zovinka, Edward P.; Zhang, Lening; Hruska, Jenna L.; Lee, Angela
2005-01-01
The Rural Outreach Chemistry for Kids (R.O.C.K.) program was designed as a service-learning project for students at Saint Francis University to serve the local communities by organizing chemistry activities in high schools. It was initiated in 1995 and has involved a large number of Saint Francis University students and local high school students.…
ERIC Educational Resources Information Center
Guarino, Cassandra M.
2013-01-01
The push for accountability in public schooling has extended to the measurement of teacher performance, accelerated by federal efforts through Race to the Top. Currently, a large number of states and districts across the country are computing measures of teacher performance based on the standardized test scores of their students and using them in…
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Sakai, Yasuhiko; Nagata, Koji; Ito, Yasumasa
2016-04-01
Spatially developing planar jets with passive scalar transport are simulated for various Reynolds (Re = 2200, 7000, and 22 000) and Schmidt numbers (Sc = 1, 4, 16, 64, and 128) by the implicit large eddy simulation (ILES) using low-pass filtering as an implicit subgrid-scale model. The budgets of resolved turbulent kinetic energy k and scalar variance ⟨φ′²⟩ are explicitly evaluated from the ILES data except for the dissipation terms, which are obtained from the balance in the transport equations. The budgets of k and ⟨φ′²⟩ in the ILES agree well with the DNS and experiments for both high and low Re cases. The streamwise decay of the mean turbulent kinetic energy dissipation rate obeys the power law obtained by the scaling argument. The mechanical-to-scalar timescale ratio Cφ is evaluated in the self-similar region. For the high Re case, Cφ is close to the isotropic value (Cφ = 2) near the jet centerline. However, when Re is not large, Cφ is smaller than 2 and depends on the Schmidt number. The T/NT interface is also investigated by using the scalar isosurface. The velocity and scalar fields near the interface depend on the interface orientation for all Re. The velocity toward the interface is observed near the interface facing in the streamwise, cross-streamwise, and spanwise directions in the planar jet in the resolved velocity field.
Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Gibbons, Stephanie; Morris, G. John
2014-01-01
Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Practice (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to −60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze, the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze, viabilities of 93.4%±7.4%, viable cell numbers of 14.3±1.7 million nuclei/mL alginate, and protein secretion of 10.5±1.7 μg/mL/24 h were obtained, which compared well with control ELS (viability 98.1%±0.9%; viable cell numbers 18.3±1.0 million nuclei/mL alginate; protein secretion 18.7±1.8 μg/mL/24 h). Large volume GMP cryopreservation of ELS is possible with good functional recovery using the VIA Freeze and may also be applied to other regenerative medicine applications. PMID:24410575
Current fluctuations in periodically driven systems
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Chetrite, Raphael
2018-05-01
Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
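The Floquet-exponent prescription can be made concrete for a two-state process with time-periodic rates: the scaled cumulant generating function (SCGF) of the net 1→2 current is (1/T) log of the leading eigenvalue of the one-period propagator of the tilted generator. The rates and counting field below are illustrative, not the paper's heat-engine or molecular-pump parameters.

```python
import numpy as np
from scipy.linalg import expm

# Two-state illustration of the Floquet-exponent prescription for the SCGF
# of the net 1->2 current; rates are illustrative, not the paper's models.

T, n_steps = 1.0, 400
dt = T / n_steps
times = (np.arange(n_steps) + 0.5) * dt

def k12(t): return 2.0 + np.sin(2 * np.pi * t / T)        # rate 1 -> 2
def k21(t): return 1.0 + 0.5 * np.cos(2 * np.pi * t / T)  # rate 2 -> 1

def scgf(s):
    U = np.eye(2)
    for t in times:                       # one-period propagator of the tilted generator
        W = np.array([[-k12(t),             k21(t) * np.exp(-s)],
                      [ k12(t) * np.exp(s), -k21(t)            ]])
        U = expm(W * dt) @ U
    return np.log(np.max(np.linalg.eigvals(U).real)) / T  # maximal Floquet exponent

eps = 1e-3
print("SCGF at s = 0 (should be ~0):", scgf(0.0))
print("mean 1->2 current:", (scgf(eps) - scgf(-eps)) / (2 * eps))
```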
NASA Astrophysics Data System (ADS)
Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.
2016-12-01
As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS framework to discover key tradeoffs within the LSRB system.
Wang, Chang; Ren, Qiongqiong; Qin, Xin; Yu, Yi
2018-01-01
Diffeomorphic demons can guarantee smooth and reversible deformation and avoid unreasonable deformation. However, the number of iterations needs to be set manually, and this greatly influences the registration result. In order to solve this problem, we proposed adaptive diffeomorphic multiresolution demons in this paper. We used an optimized framework with nonrigid registration and a diffeomorphism strategy, designed a similarity energy function based on grey value, and stopped iterations adaptively. The method was tested on synthetic images and same-modality medical images. Large deformation was simulated by rotational distortion and extrusion transforms, medical image registration with large deformation was performed, quantitative analyses were conducted using the registration evaluation indexes, and the influence of different driving forces and parameters on the registration result was analyzed. The registration results of same-modality medical images were compared with those obtained using active demons, additive demons, and diffeomorphic demons. Quantitative analyses showed that the proposed method's normalized cross-correlation coefficient and structural similarity were the highest and mean square error was the lowest. Medical image registration with large deformation could be performed successfully; evaluation indexes remained stable with an increase in deformation strength. The proposed method is effective and robust, and it can be applied to nonrigid registration of same-modality medical images with large deformation.
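To make the adaptive stopping idea concrete, here is a minimal classical (Thirion-style) demons loop in 2D that monitors a grey-value similarity energy (mean squared error) and stops once its relative decrease stalls. It is a simplified sketch, not the authors' diffeomorphic multiresolution implementation; the smoothing width, tolerance, and toy images are assumptions.

```python
# Hedged sketch: classical demons registration with adaptive stopping based on
# the change of a grey-value similarity energy (MSE).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(image, u, v):
    """Warp `image` with the displacement field (u, v) given as row/col components."""
    rows, cols = np.indices(image.shape, dtype=float)
    return map_coordinates(image, [rows + u, cols + v], order=1, mode="nearest")

def demons_register(fixed, moving, max_iter=500, sigma=2.0, tol=1e-4):
    fixed, moving = fixed.astype(float), moving.astype(float)
    u = np.zeros_like(fixed)          # row-direction displacement
    v = np.zeros_like(fixed)          # column-direction displacement
    gy, gx = np.gradient(fixed)       # gradient of the fixed image
    prev_mse = np.inf
    for it in range(max_iter):
        warped = warp(moving, u, v)
        diff = warped - fixed
        denom = gx**2 + gy**2 + diff**2
        denom[denom == 0] = 1.0       # avoid division by zero
        u = gaussian_filter(u - diff * gy / denom, sigma)   # demons force + smoothing
        v = gaussian_filter(v - diff * gx / denom, sigma)
        mse = float(np.mean(diff**2))
        if prev_mse - mse < tol * prev_mse:   # adaptive stopping: quit when energy stalls
            break
        prev_mse = mse
    return u, v, it + 1

# toy usage: register a shifted Gaussian blob
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32)**2 + (xx - 32)**2) / 50.0)
moving = np.exp(-((yy - 36)**2 + (xx - 30)**2) / 50.0)
u, v, n_iter = demons_register(fixed, moving)
print(f"stopped adaptively after {n_iter} iterations")
```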
Yang, Jinliang; Jiang, Haiying; Yeh, Cheng-Ting; Yu, Jianming; Jeddeloh, Jeffrey A; Nettleton, Dan; Schnable, Patrick S
2015-11-01
Although approaches for performing genome-wide association studies (GWAS) are well developed, conventional GWAS requires high-density genotyping of large numbers of individuals from a diversity panel. Here we report a method for performing GWAS that does not require genotyping of large numbers of individuals. Instead XP-GWAS (extreme-phenotype GWAS) relies on genotyping pools of individuals from a diversity panel that have extreme phenotypes. This analysis measures allele frequencies in the extreme pools, enabling discovery of associations between genetic variants and traits of interest. This method was evaluated in maize (Zea mays) using the well-characterized kernel row number trait, which was selected to enable comparisons between the results of XP-GWAS and conventional GWAS. An exome-sequencing strategy was used to focus sequencing resources on genes and their flanking regions. A total of 0.94 million variants were identified and served as evaluation markers; comparisons among pools showed that 145 of these variants were statistically associated with the kernel row number phenotype. These trait-associated variants were significantly enriched in regions identified by conventional GWAS. XP-GWAS was able to resolve several linked QTL and detect trait-associated variants within a single gene under a QTL peak. XP-GWAS is expected to be particularly valuable for detecting genes or alleles responsible for quantitative variation in species for which extensive genotyping resources are not available, such as wild progenitors of crops, orphan crops, and other poorly characterized species such as those of ecological interest. © 2015 The Authors The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
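The core statistical step of this pooled, extreme-phenotype design can be sketched as a per-variant comparison of allele counts between the two pools. The counts, the choice of Fisher's exact test, and the FDR correction below are illustrative assumptions rather than the exact XP-GWAS pipeline, which also models pooled-sequencing noise.

```python
# Hedged sketch: testing whether a variant's allele frequency differs between
# the high- and low-phenotype pools of an extreme-phenotype pooled GWAS.
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# (ref, alt) read counts per variant in each extreme pool (placeholder values)
high_pool = np.array([[120, 80], [150, 50], [95, 105]])   # e.g. high kernel row number
low_pool  = np.array([[100, 100], [60, 140], [98, 102]])  # e.g. low kernel row number

pvals = []
for (h_ref, h_alt), (l_ref, l_alt) in zip(high_pool, low_pool):
    _, p = fisher_exact([[h_ref, h_alt], [l_ref, l_alt]])
    pvals.append(p)

# control the false discovery rate across all tested variants
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for i, (p, q, r) in enumerate(zip(pvals, qvals, reject)):
    print(f"variant {i}: p = {p:.3g}, q = {q:.3g}, associated = {r}")
```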
An index of biological integrity (IBI) for Pacific Northwest rivers
Mebane, C.A.; Maret, T.R.; Hughes, R.M.
2003-01-01
The index of biotic integrity (IBI) is a commonly used measure of relative aquatic ecosystem condition; however, its application to coldwater rivers over large geographic areas has been limited. A seven-step process was used to construct and test an IBI applicable to fish assemblages in coldwater rivers throughout the U.S. portion of the Pacific Northwest. First, fish data from the region were compiled from previous studies and candidate metrics were selected. Second, reference conditions were estimated from historical reports and minimally disturbed reference sites in the region. Third, data from the upper Snake River basin were used to test metrics and develop the initial index. Fourth, candidate metrics were evaluated for their redundancy, variability, precision, and ability to reflect a wide range of conditions while distinguishing reference sites from disturbed sites. Fifth, the selected metrics were standardized by being scored continuously from 0 to 1 and then weighted as necessary to produce an IBI ranging from 0 to 100. The resulting index included 10 metrics: number of native coldwater species, number of age-classes of sculpins Cottus spp., percentage of sensitive native individuals, percentage of coldwater individuals, percentage of tolerant individuals, number of alien species, percentage of common carp Cyprinus carpio individuals, number of selected salmonid age-classes, catch per unit effort of coldwater individuals, and percentage of individuals with selected anomalies. Sixth, the IBI responses were tested with additional data sets from throughout the Pacific Northwest. Last, scores from two minimally disturbed reference rivers were evaluated for longitudinal gradients along the river continuum. The IBI responded to environmental disturbances and was spatially and temporally stable at over 150 sites in the Pacific Northwest. The results support its use across a large geographic area to describe the relative biological condition of coolwater and coldwater rivers with low species richness.
Kolchinsky, A; Lourenço, A; Li, L; Rocha, L M
2013-01-01
Drug-drug interaction (DDI) is a major cause of morbidity and mortality. DDI research includes the study of different aspects of drug interactions, from in vitro pharmacology, which deals with drug interaction mechanisms, to pharmaco-epidemiology, which investigates the effects of DDI on drug efficacy and adverse drug reactions. Biomedical literature mining can aid both kinds of approaches by extracting relevant DDI signals from either the published literature or large clinical databases. However, though drug interaction is an ideal area for translational research, the inclusion of literature mining methodologies in DDI workflows is still very preliminary. One area that can benefit from literature mining is the automatic identification of a large number of potential DDIs, whose pharmacological mechanisms and clinical significance can then be studied via in vitro pharmacology and in populo pharmaco-epidemiology. We implemented a set of classifiers for identifying published articles relevant to experimental pharmacokinetic DDI evidence. These documents are important for identifying causal mechanisms behind putative drug-drug interactions, an important step in the extraction of large numbers of potential DDIs. We evaluate performance of several linear classifiers on PubMed abstracts, under different feature transformation and dimensionality reduction methods. In addition, we investigate the performance benefits of including various publicly-available named entity recognition features, as well as a set of internally-developed pharmacokinetic dictionaries. We found that several classifiers performed well in distinguishing relevant and irrelevant abstracts. We found that the combination of unigram and bigram textual features gave better performance than unigram features alone, and also that normalization transforms that adjusted for feature frequency and document length improved classification. For some classifiers, such as linear discriminant analysis (LDA), proper dimensionality reduction had a large impact on performance. Finally, the inclusion of NER features and dictionaries was found not to help classification.
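A minimal version of the kind of pipeline evaluated here (unigram plus bigram features with frequency and length normalization feeding a linear classifier) might look as follows; the toy abstracts and the specific estimator are assumptions, and the study's NER and pharmacokinetic-dictionary features are not reproduced.

```python
# Hedged sketch: screening PubMed abstracts for pharmacokinetic DDI relevance
# with unigram+bigram TF-IDF features and a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "midazolam clearance was reduced by coadministration of ketoconazole in volunteers",
    "we describe a new rodent model of chronic inflammatory pain",
]
labels = [1, 0]   # 1 = relevant pharmacokinetic DDI evidence, 0 = irrelevant

pipeline = make_pipeline(
    # sublinear TF and L2 normalization adjust for feature frequency and document length
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True, norm="l2"),
    LogisticRegression(max_iter=1000),
)
pipeline.fit(abstracts, labels)   # a real corpus would be cross-validated instead

new = ["grapefruit juice increased the AUC of simvastatin in healthy subjects"]
print("P(relevant) =", pipeline.predict_proba(new)[0, 1])
```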
Zago, Laure; Badets, Arnaud
2016-01-01
The goal of the present study was to test whether there is a relationship between manual preference and hand-digit mapping in 369 French adults with similar numbers of right- and left-handers. Manual laterality was evaluated with the finger tapping test to evaluate hand motor asymmetry, and the Edinburgh handedness inventory was used to assess manual preference strength (MPS) and direction. Participants were asked to spontaneously "count on their fingers from 1 to 10" without indications concerning the hand(s) to be used. The results indicated that both MPS and hand motor asymmetry affect the hand-starting preference for counting. Left-handers with a strong left-hand preference (sLH) or left-hand motor asymmetry largely started to count with their left hand (left-starter), while right-handers with a strong right-hand preference (sRH) or right-hand motor asymmetry largely started to count with their right hand (right-starter). Notably, individuals with weak MPS did not show a hand-starting preference. These findings demonstrated that manual laterality contributes to finger counting directionality. Lastly, the results showed a higher proportion of sLH left-starter individuals compared with sRH right-starters, indicating an asymmetric bias of MPS on hand-starting preference. We hypothesize that the higher proportion of sLH left-starters could be explained by the congruence between left-to-right hand-digit mapping and left-to-right mental number line representation that has been largely reported in the literature. Taken together, these results indicate that finger-counting habits integrate biological and cultural information. © The Author(s) 2015.
Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G
2014-07-02
Diallel crossing methods provide information regarding the performance of genitors between themselves and their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limiting. One option regarding the number of parents involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the method of linear mixed models to estimate, for variable resistance to foliar fungal diseases, components of general and specific combining ability in a circulant table with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in the classification of genitors regarding their combining abilities relative to the complete diallels. The number of crosses in which each genitor participates in the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.
Using enquiry in learning: from vision to reality in higher education.
Horne, Maria; Woodhead, Kath; Morgan, Liz; Smithies, Lynda; Megson, Denise; Lyte, Geraldine
2007-02-01
This paper reports on the contribution of six nurse educators to embed enquiry-led learning in a pre-registration nursing programme. Their focus was to evaluate student and facilitator perspectives of a hybrid model of problem-based learning, a form of enquiry-based learning and to focus on facilitators' perceptions of its longer-term utility with large student groups. Problem-based learning is an established learning strategy in healthcare internationally; however, insufficient evidence of its effectiveness with large groups of pre-registration students exists. Fourth Generation Evaluation was used, applying the Nominal Group Technique and Focus Group interviews, for data collection. In total, four groups representing different branches of pre-registration students (n = 121) and 15 facilitators participated. Students identified seven strengths and six areas for development related to problem-based learning. Equally, analysis of facilitators' discussions revealed several themes related to strengths and challenges. The consensus was that using enquiry aided the development of independent learning and encouraged deeper exploration of nursing and allied subject material. However, problems and frustrations were identified in relation to large numbers of groups, group dynamics, room and library resources and personal development. The implications of these findings for longer-term utility with large student groups are discussed.
Gao, Ting; Yao, Hui; Song, Jingyuan; Zhu, Yingjie; Liu, Chang; Chen, Shilin
2010-10-26
Five DNA regions, namely, rbcL, matK, ITS, ITS2, and psbA-trnH, have been recommended as primary DNA barcodes for plants. Studies evaluating these regions for species identification in the large plant taxon, which includes a large number of closely related species, have rarely been reported. The feasibility of using the five proposed DNA regions was tested for discriminating plant species within Asteraceae, the largest family of flowering plants. Among these markers, ITS2 was the most useful in terms of universality, sequence variation, and identification capability in the Asteraceae family. The species discriminating power of ITS2 was also explored in a large pool of 3,490 Asteraceae sequences that represent 2,315 species belonging to 494 different genera. The result shows that ITS2 correctly identified 76.4% and 97.4% of plant samples at the species and genus levels, respectively. In addition, ITS2 displayed a variable ability to discriminate related species within different genera. ITS2 is the best DNA barcode for the Asteraceae family. This approach significantly broadens the application of DNA barcoding to resolve classification problems in the family Asteraceae at the genera and species levels.
Development and calibration of a new gamma camera detector using large square Photomultiplier Tubes
NASA Astrophysics Data System (ADS)
Zeraatkar, N.; Sajedi, S.; Teimourian Fard, B.; Kaviani, S.; Akbarzadeh, A.; Farahani, M. H.; Sarkar, S.; Ay, M. R.
2017-09-01
Large area scintillation detectors, applied in gamma cameras as well as Single Photon Emission Computed Tomography (SPECT) systems, have a major role in in-vivo functional imaging. Most gamma detectors utilize a hexagonal arrangement of Photomultiplier Tubes (PMTs). In this work we applied large square-shaped PMTs with a row/column arrangement and positioning. The use of large square PMTs reduces dead zones in the detector surface. However, the conventional center of gravity method for positioning may not produce an acceptable result. Hence, the digital correlated signal enhancement (CSE) algorithm was optimized to obtain better linearity and spatial resolution in the developed detector. The performance of the developed detector was evaluated based on the NEMA-NU1-2007 standard. The acquired images using this method showed acceptable uniformity and linearity compared to three commercial gamma cameras. The intrinsic and extrinsic spatial resolutions with a low-energy high-resolution (LEHR) collimator at 10 cm from the surface of the detector were 3.7 mm and 7.5 mm, respectively. The energy resolution of the camera was measured to be 9.5%. The performance evaluation demonstrated that the developed detector maintains image quality with a reduced number of PMTs relative to the detection area.
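For context, the conventional center-of-gravity (Anger) positioning that the CSE algorithm improves upon can be sketched as a signal-weighted centroid over the PMT grid. The PMT pitch, array size, and threshold below are illustrative assumptions, and the CSE algorithm itself is not reproduced here.

```python
# Hedged sketch: centre-of-gravity (Anger) event positioning for a row/column
# arrangement of square PMTs.
import numpy as np

pitch = 76.0                                         # assumed PMT pitch in mm
xs, ys = np.meshgrid(np.arange(4) * pitch, np.arange(4) * pitch)
pmt_x, pmt_y = xs.ravel(), ys.ravel()                # centres of a 4 x 4 PMT array

def anger_position(signals, threshold=0.05):
    """Estimate the scintillation position as the signal-weighted centroid.
    Small signals are cut to limit the noise contribution from distant PMTs."""
    s = np.asarray(signals, dtype=float).copy()
    s[s < threshold * s.max()] = 0.0
    return np.sum(s * pmt_x) / np.sum(s), np.sum(s * pmt_y) / np.sum(s)

# example: an event depositing most light near the PMT centred at (76, 76)
signals = np.exp(-((pmt_x - 80)**2 + (pmt_y - 70)**2) / (2 * 60.0**2))
print(anger_position(signals))
```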
Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface
Metzen, Jan H.
2013-01-01
A major barrier for a broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. The necessity for this results from the fact that the relevant information for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individual optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on the evaluation of the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system that is designed to detect a P300 event-related potential and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimation of the reduced system's performance we transfer sensor constellations found on one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays. PMID:23844021
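A performance-based selection criterion of the kind proposed here can be sketched as a greedy search that grows a channel subset by cross-validated classification accuracy and compares the result against a random constellation of the same size. The synthetic epochs, the crude per-channel features, and the classifier below are placeholder assumptions, not the paper's P300 setup.

```python
# Hedged sketch: greedy forward sensor selection by cross-validated accuracy,
# compared against a random sensor constellation of the same size.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 32, 50
X = rng.normal(size=(n_epochs, n_channels, n_samples))   # placeholder EEG epochs
y = rng.integers(0, 2, n_epochs)                          # target vs non-target
X[y == 1, 5, :] += 0.5                                    # inject signal into channel 5

def subset_score(channels):
    feats = X[:, channels, :].mean(axis=2)                # crude per-channel feature
    return cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()

selected, remaining = [], list(range(n_channels))
for _ in range(8):                                        # select 8 sensors
    best = max(remaining, key=lambda ch: subset_score(selected + [ch]))
    selected.append(best)
    remaining.remove(best)

random_subset = list(rng.choice(n_channels, size=len(selected), replace=False))
print("greedy subset:", selected, "accuracy:", round(subset_score(selected), 3))
print("random subset accuracy:", round(subset_score(random_subset), 3))
```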
Evaluation Of Rotation Frequency Gas-Diesel Engines When Using Automatic Control System
NASA Astrophysics Data System (ADS)
Zhilenkov, A.; Efremov, A.
2017-01-01
A possibility of quality improvement of stabilization of rotation frequency of the gas-diesels used as prime mover of generator set in the multigenerator units working for abruptly variable load of large power is considered. An evaluation is made on condition of fuzzy controller use developed and described by the authors in a number of articles. An evaluation has shown that theoretically, the revolution range of gas-diesel engine may be reduced at 25-30 times at optimal settings of the controller in all the power range. The results of modeling showing a considerable quality improvement of transient processes in the investigated system at a sharp change of loading are presented in this article.
Głowacka, Katarzyna; Kromdijk, Johannes; Leonelli, Lauriebeth; Niyogi, Krishna K.; Clemente, Tom E.
2016-01-01
Stable transformation of plants is a powerful tool for hypothesis testing. A rapid and reliable evaluation method of the transgenic allele for copy number and homozygosity is vital in analysing these transformations. Here the suitability of Southern blot analysis, thermal asymmetric interlaced (TAIL‐)PCR, quantitative (q)PCR and digital droplet (dd)PCR to estimate T‐DNA copy number, locus complexity and homozygosity was compared in transgenic tobacco. Southern blot analysis and ddPCR on three generations of transgenic offspring with contrasting zygosity and copy number were entirely consistent, whereas TAIL‐PCR often underestimated copy number. qPCR deviated considerably from the Southern blot results and had lower precision and higher variability than ddPCR. Comparison of segregation analyses and ddPCR of T1 progeny from 26 T0 plants showed that at least 19% of the lines carried multiple T‐DNA insertions per locus, which can lead to unstable transgene expression. Segregation analyses failed to detect these multiple copies, presumably because of their close linkage. This shows the importance of routine T‐DNA copy number estimation. Based on our results, ddPCR is the most suitable method, because it is as reliable as Southern blot analysis yet much faster. A protocol for this application of ddPCR to large plant genomes is provided. PMID:26670088
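The copy-number arithmetic behind the ddPCR readout is straightforward and is sketched below: target and reference concentrations are recovered from the fraction of negative droplets by Poisson correction, and the T-DNA copies per genome follow from their ratio (times two for a diploid reference). The droplet counts and droplet volume are illustrative assumptions.

```python
# Hedged sketch: droplet digital PCR copy-number estimation from droplet counts.
import math

def ddpcr_concentration(positive, total, droplet_volume_ul=0.00085):
    """Copies per microliter from positive/total droplet counts (Poisson correction)."""
    neg_fraction = (total - positive) / total
    lam = -math.log(neg_fraction)          # mean copies per droplet
    return lam / droplet_volume_ul

target_conc = ddpcr_concentration(positive=9300, total=18000)   # transgene assay
ref_conc = ddpcr_concentration(positive=6200, total=18000)      # single-copy reference gene

copies_per_genome = 2 * target_conc / ref_conc                  # diploid reference
print(f"estimated T-DNA copies per genome: {copies_per_genome:.2f}")
```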
Evaluating markers for the early detection of cancer: overview of study designs and methods.
Baker, Stuart G; Kramer, Barnett S; McIntosh, Martin; Patterson, Blossom H; Shyr, Yu; Skates, Steven
2006-01-01
The field of cancer biomarker development has been evolving rapidly. New developments both in the biologic and statistical realms are providing increasing opportunities for evaluation of markers for both early detection and diagnosis of cancer. To review the major conceptual and methodological issues in cancer biomarker evaluation, with an emphasis on recent developments in statistical methods together with practical recommendations. We organized this review by type of study: preliminary performance, retrospective performance, prospective performance and cancer screening evaluation. For each type of study, we discuss methodologic issues, provide examples and discuss strengths and limitations. Preliminary performance studies are useful for quickly winnowing down the number of candidate markers; however their results may not apply to the ultimate target population, asymptomatic subjects. If stored specimens from cohort studies with clinical cancer endpoints are available, retrospective studies provide a quick and valid way to evaluate performance of the markers or changes in the markers prior to the onset of clinical symptoms. Prospective studies have a restricted role because they require large sample sizes, and, if the endpoint is cancer on biopsy, there may be bias due to overdiagnosis. Cancer screening studies require very large sample sizes and long follow-up, but are necessary for evaluating the marker as a trigger of early intervention.
NASA Astrophysics Data System (ADS)
Li, Zhengji; Teng, Qizhi; He, Xiaohai; Yue, Guihua; Wang, Zhengyong
2017-09-01
The parameter evaluation of reservoir rocks can help us to identify components and calculate the permeability and other parameters, and it plays an important role in the petroleum industry. Until now, computed tomography (CT) has remained an irreplaceable way to acquire the microstructure of reservoir rocks. During the evaluation and analysis, large samples and high-resolution images are required in order to obtain accurate results. Owing to the inherent limitations of CT, however, a large field of view results in low-resolution images, and high-resolution images entail a smaller field of view. Our method is a promising solution to these data collection limitations. In this study, a framework for sparse representation-based 3D volumetric super-resolution is proposed to enhance the resolution of 3D voxel images of reservoirs scanned with CT. A single reservoir structure and its downgraded model are divided into a large number of 3D cubes of voxel pairs, and these cube pairs are used to calculate two overcomplete dictionaries and the sparse-representation coefficients in order to estimate the high frequency component. Furthermore, to obtain better results, a new feature extraction method combining BM4D with a Laplacian filter is introduced. In addition, we conducted a visual evaluation of the method and used the PSNR and FSIM to evaluate it quantitatively.
NASA Astrophysics Data System (ADS)
Gramsch, E.; Le Nir, G.; Araya, M.; Rubio, M. A.; Moreno, F.; Oyola, P.
2013-02-01
In 2006 a large transformation was carried out on the public transportation system in Santiago de Chile. The original system (before 2006) had hundreds of bus owners with about 7000 diesel buses. The new system has only 13 firms with about 5900 buses which operate in different parts of the city with little overlap between them. In this work we evaluate the impact of the Transantiago system on the black carbon pollution along four roads directly affected by the modification to the transport system. Measurements were carried out during May-July of 2005 (before Transantiago) and June-July of 2007 (after Transantiago). We have used the Wilcoxon rank-sum test to evaluate black carbon concentrations in four streets in 2005 and 2007. The results show a statistically significant reduction between 2005 (before Transantiago) and 2007 (after Transantiago) in Alameda street, where the mean changed from 18.8 μg m-3 in 2005 to 11.9 μg m-3 in 2007. In this street there was a decrease in the number of buses as well as the number of private vehicles and an improvement in the technology of public transportation between those years. The other two streets (Usach and Departamental) did not change or experienced a small increase in the black carbon concentration in spite of having a lower flux of buses in 2007. Eliodoro Yañez Street, which did not have public transportation in 2005 or 2007, experienced a 15% increase in the black carbon concentration between those years. Analysis of the data indicates that the change is related to a decrease in the total number of vehicles or the number of other diesel vehicles in the street rather than a decrease in the number of buses only. These results indicate that, in order to decrease pollution near a street, it is not enough to reduce the number of buses or improve their quality; the total number of vehicles must be reduced.
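The before/after comparison reported here rests on a Wilcoxon rank-sum (Mann-Whitney) test applied to the black carbon time series of a street in the two years; a minimal sketch with placeholder concentrations follows.

```python
# Hedged sketch: Wilcoxon rank-sum test comparing black carbon concentrations
# at one street before and after Transantiago (values are placeholders).
import numpy as np
from scipy.stats import ranksums

bc_2005 = np.array([21.3, 17.8, 19.5, 16.2, 22.4, 18.9])   # ug m-3, before Transantiago
bc_2007 = np.array([12.1, 10.8, 13.5, 11.2, 12.9, 10.4])   # ug m-3, after Transantiago

stat, p = ranksums(bc_2005, bc_2007)
print(f"2005 mean = {bc_2005.mean():.1f}, 2007 mean = {bc_2007.mean():.1f}, p = {p:.4f}")
```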
Zepeda-Mendoza, Marie Lisandra; Bohmann, Kristine; Carmona Baez, Aldo; Gilbert, M Thomas P
2016-05-03
DNA metabarcoding is an approach for identifying multiple taxa in an environmental sample using specific genetic loci and taxa-specific primers. When combined with high-throughput sequencing it enables the taxonomic characterization of large numbers of samples in a relatively time- and cost-efficient manner. One recent laboratory development is the addition of 5'-nucleotide tags to both primers producing double-tagged amplicons and the use of multiple PCR replicates to filter erroneous sequences. However, there is currently no available toolkit for the straightforward analysis of datasets produced in this way. We present DAMe, a toolkit for the processing of datasets generated by double-tagged amplicons from multiple PCR replicates derived from an unlimited number of samples. Specifically, DAMe can be used to (i) sort amplicons by tag combination, (ii) evaluate PCR replicates dissimilarity, and (iii) filter sequences derived from sequencing/PCR errors, chimeras, and contamination. This is attained by calculating the following parameters: (i) sequence content similarity between the PCR replicates from each sample, (ii) reproducibility of each unique sequence across the PCR replicates, and (iii) copy number of the unique sequences in each PCR replicate. We showcase the insights that can be obtained using DAMe prior to taxonomic assignment, by applying it to two real datasets that vary in their complexity regarding number of samples, sequencing libraries, PCR replicates, and used tag combinations. Finally, we use a third mock dataset to demonstrate the impact and importance of filtering the sequences with DAMe. DAMe allows the user-friendly manipulation of amplicons derived from multiple samples with PCR replicates built in a single or multiple sequencing libraries. It allows the user to: (i) collapse amplicons into unique sequences and sort them by tag combination while retaining the sample identifier and copy number information, (ii) identify sequences carrying unused tag combinations, (iii) evaluate the comparability of PCR replicates of the same sample, and (iv) filter tagged amplicons from a number of PCR replicates using parameters of minimum length, copy number, and reproducibility across the PCR replicates. This enables an efficient analysis of complex datasets, and ultimately increases the ease of handling datasets from large-scale studies.
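The filtering logic described for DAMe can be sketched as keeping only unique sequences that reach a minimum copy number in a minimum number of a sample's PCR replicates. The data structures, thresholds, and example sequences below are illustrative assumptions, not DAMe's file formats or defaults.

```python
# Hedged sketch: filtering tagged amplicons by copy number and reproducibility
# across PCR replicates of one sample.
from collections import defaultdict

# replicate -> {unique sequence: copy number} for one tagged sample (toy data)
replicates = {
    "PCR1": {"ACGTACGTACGT": 120, "ACGTACGAACGT": 2, "TTTTGGGGCCCC": 40},
    "PCR2": {"ACGTACGTACGT": 98,  "TTTTGGGGCCCC": 35},
    "PCR3": {"ACGTACGTACGT": 110, "ACGTACGAACGT": 1, "TTTTGGGGCCCC": 52},
}

def filter_sequences(replicates, min_len=10, min_copies=5, min_replicates=2):
    support = defaultdict(int)     # in how many replicates each sequence passes
    totals = defaultdict(int)      # summed copy number across replicates
    for counts in replicates.values():
        for seq, n in counts.items():
            totals[seq] += n
            if len(seq) >= min_len and n >= min_copies:
                support[seq] += 1
    return {seq: totals[seq] for seq, reps in support.items() if reps >= min_replicates}

print(filter_sequences(replicates))   # low-copy, poorly reproduced sequences are discarded
```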
NASA Technical Reports Server (NTRS)
Seiff, Alvin; Wilkins, Max E.
1961-01-01
The aerodynamic characteristics of a hypersonic glider configuration, consisting of a slender ogive cylinder with three highly swept wings, spaced 120° apart, with the wing chord equal to the body length, were investigated experimentally at a Mach number of 6 and at Reynolds numbers from 6 to 16 million. The objectives were to evaluate the theoretical procedures which had been used to estimate the performance of the glider, and also to evaluate the characteristics of the glider itself. A principal question concerned the viscous drag at full-scale Reynolds number, there being a large difference between the total drags for laminar and turbulent boundary layers. It was found that the procedures which had been applied for estimating minimum drag, drag due to lift, lift curve slope, and center of pressure were generally accurate within 10 percent. An important exception was the non-linear contribution to the lift coefficient which had been represented by a Newtonian term. Experimentally, the lift curve was nearly linear within the angle-of-attack range up to 10 deg. This error affected the estimated lift-drag ratio. The minimum drag measurements indicated that substantial amounts of turbulent boundary layer were present on all models tested, over a range of surface roughness from 5 microinches maximum to 200 microinches maximum. In fact, the minimum drag coefficients were nearly independent of the surface smoothness and fell between the estimated values for turbulent and laminar boundary layers, but closer to the turbulent value. At the highest test Reynolds numbers and at large angles of attack, there was some indication that the skin friction of the rough models was being increased by the surface roughness. At full-scale Reynolds number, the maximum lift-drag ratio with a leading edge of practical diameter (from the standpoint of leading-edge heating) was 4.0. The configuration was statically and dynamically stable in pitch and yaw, and the center of pressure was less than 2-percent length ahead of the centroid of plan-form area.
Naska, Androniki; Valanou, Elisavet; Peppa, Eleni; Katsoulis, Michail; Barbouni, Anastasia; Trichopoulou, Antonia
2016-09-01
To evaluate how well respondents perceive digital images of food portions commonly consumed in Greece. The picture series was defined on the basis of usual dietary intakes assessed in earlier large-scale studies in Greece. The evaluation included 2218 pre-weighed actual portions shown to participants, who were subsequently asked to link each portion to a food picture. Mean differences between picture numbers selected and portions actually shown were compared using the Wilcoxon paired signed-rank test. The effect of personal characteristics on participants' selections was evaluated through unpaired t tests (sex and school years) or through Tukey-Kramer pairwise comparisons (age and food groups). Testing of participants' perception of digital food images used in the Greek national nutrition survey. Individuals (n 103, 61 % females) aged 12 years and over, selected on the basis of the target population of the Greek nutrition survey using convenience sampling. Individuals selected the correct or adjacent image in about 90 % of the assessments and tended to overestimate small and underestimate large quantities. Photographs of Greek traditional pies and meat-based pastry dishes led participants to perceive the amounts in the photos larger than they actually were. Adolescents were more prone to underestimating food quantities through the pictures. The digital food atlas appears generally suitable to be used for the estimation of average food intakes in large-scale dietary surveys in Greece. However, individuals who consistently consume only small or only large food portions may have biased perceptions in relation to others.
Multi-objective optimization of GENIE Earth system models.
Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J
2009-07-13
The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.
Further evaluation of a brief, intensive teacher-training model.
Lerman, Dorothea C; Tetreault, Allison; Hovanetz, Alyson; Strobel, Margaret; Garro, Joanie
2008-01-01
The purpose of this study was to further evaluate the outcomes of a model program that was designed to train current teachers of children with autism. Nine certified special education teachers participating in an intensive 5-day summer training program were taught a relatively large number of specific skills in two areas (preference assessment and direct teaching). The teachers met the mastery criteria for all of the skills during the summer training. Follow-up observations up to 6 months after training suggested that the skills generalized to their classrooms and were maintained for most teachers with brief feedback only.
Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations
NASA Technical Reports Server (NTRS)
Rubesin, M. W.
1975-01-01
A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.
Catchment area-based evaluation of the AMC-dependent SCS-CN-based rainfall-runoff models
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Jain, M. K.; Pandey, R. P.; Singh, V. P.
2005-09-01
Using a large set of rainfall-runoff data from 234 watersheds in the USA, a catchment area-based evaluation of the modified version of the Mishra and Singh (2002a) model was performed. The model is based on the Soil Conservation Service Curve Number (SCS-CN) methodology and incorporates the antecedent moisture in computation of direct surface runoff. Comparison with the existing SCS-CN method showed that the modified version performed better than did the existing one on the data of all seven area-based groups of watersheds ranging from 0.01 to 310.3 km2.
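For reference, the baseline SCS-CN direct runoff computation that the evaluated models build on is sketched below (SI units, initial abstraction Ia = 0.2 S); the AMC-dependent modification of Mishra and Singh is not reproduced here.

```python
# Hedged sketch: standard SCS-CN direct surface runoff for a storm event.
def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
    """Direct surface runoff Q (mm) for storm rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = lambda_ia * s                # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

for cn in (60, 75, 90):
    print(f"CN = {cn}: Q = {scs_cn_runoff(80.0, cn):.1f} mm for an 80 mm storm")
```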
Retention Indices for Frequently Reported Compounds of Plant Essential Oils
NASA Astrophysics Data System (ADS)
Babushok, V. I.; Linstrom, P. J.; Zenkevich, I. G.
2011-12-01
Gas chromatographic retention indices were evaluated for 505 frequently reported plant essential oil components using a large retention index database. Retention data are presented for three types of commonly used stationary phases: dimethyl silicone (nonpolar), dimethyl silicone with 5% phenyl groups (slightly polar), and polyethylene glycol (polar) stationary phases. The evaluations are based on the treatment of multiple measurements with the number of data records ranging from about 5 to 800 per compound. Data analysis was limited to temperature programmed conditions. The data reported include the average and median values of retention index with standard deviations and confidence intervals.
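Under temperature-programmed conditions, retention indices of this kind are normally computed with the linear (van den Dool and Kratz) formula, interpolating a compound's retention time between the bracketing n-alkanes; the sketch below uses illustrative retention times.

```python
# Hedged sketch: linear (van den Dool-Kratz) retention index for
# temperature-programmed GC, from an n-alkane ladder.
def linear_retention_index(t_x, alkane_times):
    """alkane_times: {carbon number: retention time (min)} for the n-alkane ladder."""
    carbons = sorted(alkane_times)
    for n, n1 in zip(carbons, carbons[1:]):
        t_n, t_n1 = alkane_times[n], alkane_times[n1]
        if t_n <= t_x <= t_n1:
            return 100.0 * (n + (n1 - n) * (t_x - t_n) / (t_n1 - t_n))
    raise ValueError("retention time outside the alkane ladder")

alkanes = {9: 8.12, 10: 10.45, 11: 12.80}                     # C9-C11 n-alkanes (min)
print(f"RI = {linear_retention_index(11.60, alkanes):.0f}")   # peak eluting between C10 and C11
```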
The management of abdominal wall hernias – in search of consensus
Bury, Kamil; Śmietański, Maciej
2015-01-01
Introduction Laparoscopic repair is becoming an increasingly popular alternative in the treatment of abdominal wall hernias. In spite of numerous studies evaluating this technique, indications for laparoscopic surgery have not been established. Similarly, implant selection and fixation techniques have not been unified and are the subject of scientific discussion. Aim To assess whether there is a consensus on the management of the most common ventral abdominal wall hernias among recognised experts. Material and methods Fourteen specialists representing the boards of European surgical societies were surveyed to determine their choice of surgical technique for nine typical primary ventral and incisional hernias. The access method, type of operation, mesh prosthesis and fixation method were evaluated. In addition to the laparoscopic procedures, the number of tackers and their arrangement were assessed. Results In none of the cases presented was a consensus of experts obtained. Laparoscopic and open techniques were used equally often. Especially in the group of large hernias, decisions on repair methods were characterised by high variability. The technique of laparoscopic mesh fixation was a subject of great variability in terms of both method selection and the numbers of tackers and sutures used. Conclusions Recognised experts have not reached a consensus on the management of abdominal wall hernias. Our survey results indicate the need for further research and the inclusion of large cohorts of patients in the dedicated registries to evaluate the results of different surgical methods, which would help in the development of treatment algorithms for surgical education in the future. PMID:25960793
Hospital preparedness for Ebola virus disease: a training course in the Philippines
Carlos, Celia; Capistrano, Rowena; Tobora, Charissa Fay; delos Reyes, Mari Rose; Lupisan, Socorro; Corpuz, Aura; Aumentado, Charito; Suy, Lyndon Lee; Hall, Julie; Donald, Julian; Counahan, Megan; Curless, Melanie S; Rhymer, Wendy; Gavin, Melanie; Lynch, Chelsea; Black, Meredith A; Anduyon, Albert D; Buttner, Petra
2015-01-01
Objective To develop, teach and evaluate a training workshop that could rapidly prepare large numbers of health professionals working in hospitals in the Philippines to detect and safely manage Ebola virus disease (EVD). The strategy was to train teams (each usually with five members) of key health professionals from public, private and local government hospitals across the Philippines who could then guide Ebola preparedness in their hospitals. Methods The workshop was developed collaboratively by the Philippine Department of Health and the country office of the World Health Organization. It was evaluated using a pre- and post-workshop test and two evaluation forms. χ2 tests and linear regression analyses were conducted comparing pre- and post-workshop test results. Results A three-day workshop was developed and used to train 364 doctors, nurses and medical technologists from 78 hospitals across the Philippines in three initial batches. Knowledge about EVD increased significantly (P < 0.009) although knowledge on transmission remained suboptimal. Confidence in managing EVD increased significantly (P = 0.018) with 96% of participants feeling more prepared to safely manage EVD cases. Discussion: The three-day workshop to prepare hospital staff for EVD was effective at increasing the level of knowledge about EVD and the level of confidence in managing EVD safely. This workshop could be adapted for use as baseline training in EVD in other developing countries to prepare large numbers of hospital staff to rapidly detect, isolate and safely manage EVD cases. PMID:25960920
Imaging vascular implants with optical coherence tomography
NASA Astrophysics Data System (ADS)
Barton, Jennifer K.; Dal Ponte, Donny B.; Williams, Stuart K.; Ford, Bridget K.; Descour, Michael R.
2000-04-01
Vascular stents and grafts have many proven and promising clinical applications, but also a large number of complications. A focus of current research is the development of biocompatible implants. Evaluation of these devices generally requires a large number of animals due to the need for explantation and histological evaluation of the implant at several time intervals. It would be desirable to use instead a high resolution, in situ assessment method. An in vitro study was performed to determine if OCT could image cell proliferation and thrombus formation on vascular stents and grafts. First, images were taken of explanted stents. The implants were located in peripheral vessels of a porcine model of atherosclerosis. The images clearly show the vessel response to initial damage, the materials of the implant, extent of intimal cell hyperproliferation, and small platelet aggregates. Next, a tissue engineered graft, which had been sodded with smooth muscle cells and incubated in a bioreactor, was evaluated. Cross-section images showed the pores of the polymer material and the layer of smooth muscle cells beginning to invade the graft material. For comparison, in vitro 20 MHz IVUS images of the same grafts were obtained. A catheter was designed for intravascular imaging. The 2.3 mm diameter catheter contains a fiber with GRIN lens and right angle prism, a monorail guidewire, and a novel positioning wire that can be protruded to push the catheter against the vessel wall, potentially eliminating the need for saline flush. Preliminary in vitro results with this catheter are encouraging.
Experimental and Computational Evaluation of Flush-Mounted, S-Duct Inlets
NASA Technical Reports Server (NTRS)
Berrier, Bobby L.; Allan, Brian G.
2004-01-01
A new high Reynolds number test capability for boundary layer ingesting inlets has been developed for the NASA Langley Research Center 0.3-Meter Transonic Cryogenic Tunnel. Using this new capability, an experimental investigation of four S-duct inlet configurations was conducted. A computational study of one of the inlets was also conducted using a Navier-Stokes solver. The objectives of this investigation were to: 1) develop a new high Reynolds number inlet test capability for flush-mounted inlets; 2) provide a database for CFD tool validation; 3) evaluate the performance of S-duct inlets with large amounts of boundary layer ingestion; and 4) provide a baseline inlet for future inlet flow-control studies. Tests were conducted at Mach numbers from 0.25 to 0.83, Reynolds numbers (based on duct exit diameter) from 5.1 million to a full-scale value of 13.9 million, and inlet mass-flow ratios from 0.39 to 1.58 depending on Mach number. Results of the experimental study indicate that inlet pressure recovery generally decreased and inlet distortion generally increased with increasing Mach number. Except at low Mach numbers, increasing inlet mass-flow increased pressure recovery and increased distortion. Increasing the amount of boundary layer ingestion or ingesting a boundary layer with a distorted profile decreased pressure recovery and increased distortion. Finally, increasing Reynolds number had almost no effect on inlet distortion but increased inlet recovery by about one-half percent at a Mach number near cruise. The computational results captured the inlet pressure recovery and distortion trends with Mach number and inlet mass-flow well: the reversal of the pressure recovery trend with increasing inlet mass-flow at low and high Mach numbers was predicted by CFD. However, CFD results were generally more pessimistic (larger losses) than measured experimentally.
NASA Technical Reports Server (NTRS)
Teague, E. C.; Vorburger, T. V.; Scire, F. E.; Baker, S. M.; Jensen, S. W.; Gloss, B. B.; Trahan, C.
1982-01-01
Current work by the National Bureau of Standards at the NASA National Transonic Facility (NTF) to evaluate the performance of stylus instruments for determining the topography of models under investigation is described, along with instrumentation for characterization of the surface microtopography. Potential areas of surface effects are reviewed, and the need for finer surfaced models for the NTF high Reynolds number flows is stressed. Current stylus instruments have radii as large as 25 microns, and three models with 4-6, 8-10, and 12-15 micro-in. rms surface finishes were fabricated for tests with a stylus with a tip radius of 1 micron and a 50 mg force. Work involving three-dimensional stylus profilometry is discussed in terms of stylus displacement being converted to digital signals, and the design of a light scattering instrument capable of measuring the surface finish on curved objects is presented.
Effects of bovine necrotic vulvovaginitis on productivity in a dairy herd in Israel.
Blum, S; Mazuz, M; Brenner, J; Friedgut, O; Koren, O; Goshen, T; Elad, D
2008-05-01
Bovine necrotic vulvovaginitis (BNVV) is characterized by the development of a necrotic vulvovaginal lesion, almost exclusively in post-parturient first-lactation cows, associated with Porphyromonas levii. The scope of this survey was to evaluate the impact of BNVV on herd productivity as a means to rationally evaluate the resources that should be allocated in dealing with the syndrome. During an outbreak of BNVV in a dairy herd, following the introduction of a large number of cows from another farm, the impact of the animals' origin (local or transferred) and BNVV (positive or negative) upon involuntary culling rate, milk yield and days between pregnancies were assessed. The results indicated that the number of days between pregnancies was significantly higher in first-lactation cows with BNVV but was not influenced by the other independent variables. None of the other variables included in this survey had any effect on the involuntary culling rate and milk yield.
Biochemical markers in the assessment of bone disease
NASA Technical Reports Server (NTRS)
Bikle, D. D.
1997-01-01
As the mean age of our population increases, increasing attention has been paid to the diseases associated with aging, including diseases of the skeleton such as osteoporosis. Effective means of treating and possibly preventing such skeletal disorders are emerging, making their early recognition an important goal for the primary care physician. Although bone density measurements and skeletal imaging studies remain of primary diagnostic importance in this regard, a large number of assays for biochemical markers of bone formation and resorption are being developed that promise to complement the densitometry measurements and imaging studies, providing an assessment of the rates of bone turnover and an earlier evaluation of the effects of therapy. In this review, emphasizing the recent literature, the major biochemical markers currently in use or under active investigation are described, and their application in a number of diseases of the skeleton including osteoporosis is evaluated.
Wood, Paul L
2014-01-01
Metabolomics research has the potential to provide biomarkers for the detection of disease, for subtyping complex disease populations, for monitoring disease progression and therapy, and for defining new molecular targets for therapeutic intervention. These potentials are far from being realized because of a number of technical, conceptual, financial, and bioinformatics issues. Mass spectrometry provides analytical platforms that address the technical barriers to success in metabolomics research; however, the limited commercial availability of analytical and stable isotope standards has created a bottleneck for the absolute quantitation of a number of metabolites. Conceptual and financial factors contribute to the generation of statistically under-powered clinical studies, whereas bioinformatics issues result in the publication of a large number of unidentified metabolites. The path forward in this field involves targeted metabolomics analyses of large control and patient populations to define both the normal range of a defined metabolite and the potential heterogeneity (eg, bimodal) in complex patient populations. This approach requires that metabolomics research groups, in addition to developing a number of analytical platforms, build sufficient chemistry resources to supply the analytical standards required for absolute metabolite quantitation. Examples of metabolomics evaluations of sulfur amino-acid metabolism in psychiatry, neurology, and neuro-oncology and of lipidomics in neurology will be reviewed. PMID:23842599
Galeotti, Angela; Garret Bernardin, Annelyse; D'Antò, Vincenzo; Ferrazzano, Gianmaria Fabrizio; Gentile, Tina; Viarani, Valeria; Cassabgi, Giorgio; Cantile, Tiziana
2016-01-01
Aim. To evaluate the effectiveness and the tolerability of the nitrous oxide sedation for dental treatment on a large pediatric sample constituting precooperative, fearful, and disabled patients. Methods. 472 noncooperating patients (aged 4 to 17) were treated under conscious sedation. The following data were calculated: average age; gender distribution; success/failure; adverse effects; number of treatments; kind of dental procedure undertaken; number of dental procedures for each working session; number of working sessions for each patient; differences between males and females and between healthy and disabled patients in relation to success; success in relation to age; and level of cooperation using Venham score. Results. 688 conscious sedations were carried out. The success rate was 86.3%. Adverse effects occurred in 2.5%. 1317 dental procedures were performed. In relation to the success, there was a statistically significant difference between healthy and disabled patients. Sex and age were not significant factors for the success. Venham score was higher at the first contact with the dentist than during the treatment. Conclusions. Inhalation conscious sedation represented an effective and safe method to obtain cooperation, even in very young patients, and it could reduce the number of pediatric patients referred to hospitals for general anesthesia. PMID:27747238
Dual-Level Method for Estimating Multistructural Partition Functions with Torsional Anharmonicity.
Bao, Junwei Lucas; Xing, Lili; Truhlar, Donald G
2017-06-13
For molecules with multiple torsions, an accurate evaluation of the molecular partition function requires consideration of multiple structures and their torsional-potential anharmonicity. We previously developed a method called MS-T for this problem, and it requires an exhaustive conformational search with frequency calculations for all the distinguishable conformers; this can become expensive for molecules with a large number of torsions (and hence a large number of structures) if it is carried out with high-level methods. In the present work, we propose a cost-effective method to approximate the MS-T partition function when there are a large number of structures, and we test it on a transition state that has eight torsions. This new method is a dual-level method that combines an exhaustive conformer search carried out by a low-level electronic structure method (for instance, AM1, which is very inexpensive) and selected calculations with a higher-level electronic structure method (for example, density functional theory with a functional that is suitable for conformational analysis and thermochemistry). To provide a severe test of the new method, we consider a transition state structure that has 8 torsional degrees of freedom; this transition state structure is formed along one of the reaction pathways of the hydrogen abstraction reaction (at carbon-1) of ketohydroperoxide (KHP; its IUPAC name is 4-hydroperoxy-2-pentanone) by OH radical. We find that our proposed dual-level method is able to significantly reduce the computational cost for computing MS-T partition functions for this test case with a large number of torsions and with a large number of conformers because we carry out high-level calculations for only a fraction of the distinguishable conformers found by the low-level method. In the example studied here, the dual-level method with 40 high-level optimizations (1.8% of the number of optimizations in a coarse-grained full search and 0.13% of the number of optimizations in a fine-grained full search) reproduces the full calculation of the high-level partition function within a factor of 1.0 to 2.0 from 200 to 1000 K. The error in the dual-level method can be further reduced to factors of 0.6 to 1.1 over the whole temperature interval from 200 to 2400 K by optimizing 128 structures (5.9% of the number of optimizations in a coarse-grained full search and 0.41% of the number of optimizations in a fine-grained full search). These factor-of-two or better errors are small compared to errors up to a factor of 1.0 × 10³ if one neglects multistructural effects for the case under study.
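One plausible reading of the dual-level strategy, reduced to its simplest form, is sketched below: an exhaustive low-level conformer search, high-level refinement of only the lowest-energy fraction, a shift-based correction for the unrefined structures, and a Boltzmann sum over conformers. This omits the rovibrational and torsional-anharmonicity factors of the actual MS-T partition function, and the energies and correction scheme are illustrative assumptions, not the authors' method.

```python
# Hedged sketch: a dual-level conformational Boltzmann sum, with high-level
# energies for the lowest conformers and shift-corrected low-level energies
# for the rest (toy energies; not the MS-T formula itself).
import numpy as np

KB = 3.166811563e-6   # Boltzmann constant in hartree/K

def dual_level_conformational_Q(e_low, e_high_subset, T):
    """e_low: low-level relative energies (hartree) of all conformers.
    e_high_subset: high-level energies for the lowest len(e_high_subset) conformers."""
    e_low = np.sort(np.asarray(e_low, dtype=float))
    m = len(e_high_subset)
    shift = np.mean(np.asarray(e_high_subset) - e_low[:m])   # mean high-low correction
    energies = np.concatenate([np.asarray(e_high_subset), e_low[m:] + shift])
    energies -= energies.min()
    return np.sum(np.exp(-energies / (KB * T)))

e_low = np.sort(np.random.default_rng(1).uniform(0.0, 0.01, size=300))  # 300 conformers
e_high = e_low[:40] + 0.0005                                            # 40 refined at high level
print(f"Q_conf(298 K) ~ {dual_level_conformational_Q(e_low, e_high, 298.15):.1f}")
```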
Experimental observation of a large low-frequency band gap in a polymer waveguide
NASA Astrophysics Data System (ADS)
Miniaci, Marco; Mazzotti, Matteo; Radzieński, Maciej; Kherraz, Nesrine; Kudela, Pawel; Ostachowicz, Wieslaw; Morvan, Bruno; Bosia, Federico; Pugno, Nicola M.
2018-02-01
The quest for large, low-frequency band gaps is one of the principal objectives pursued in a number of engineering applications, ranging from noise absorption to vibration control, to seismic wave abatement. For this purpose, a plethora of complex architectures (including multi-phase materials) and multi-physics approaches have been proposed in the past, often involving difficulties in their practical realization. To address this issue, in this work we propose an easy-to-manufacture design able to open large, low-frequency complete Lamb band gaps exploiting a suitable arrangement of masses and stiffnesses produced by cavities in a monolithic material. The performance of the designed structure is evaluated by numerical simulations and confirmed by Scanning Laser Doppler Vibrometer (SLDV) measurements on an isotropic polyvinyl chloride plate in which a square ring region of cross-like cavities is fabricated. The full wave field reconstruction clearly confirms the ability of even a limited number of unit cell rows of the proposed design to efficiently attenuate Lamb waves. In addition, numerical simulations show that the structure allows the central frequency of the band gap (BG) to be shifted through geometrical modifications. The design may be of interest for applications in which large BGs at low frequencies are required.
An evolutionary algorithm for large traveling salesman problems.
Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan
2004-08-01
This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSPs). The strengths and limitations of numerous well-known genetic operators, along with local search methods for TSPs, are first analyzed in terms of their solution quality and their mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA, is proposed, which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selection are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the numbers of cities range from 318 to 13,509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000, and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can find tours robustly with a fixed small population and a limited family competition length in reasonable time, when used to solve large TSPs.
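The following sketch is a much-simplified stand-in for the kind of hybrid evolutionary algorithm described above: it uses order crossover and a small 2-opt pass in place of EAX and Lin-Kernighan, and a crude replace-the-worst rule in place of family competition and heterogeneous pairing selection. All parameter values are illustrative assumptions.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the remaining cities in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    j = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[j]
            j += 1
    return child

def two_opt(tour, dist, passes=1):
    """Very small 2-opt local search (stand-in for Lin-Kernighan)."""
    n = len(tour)
    for _ in range(passes):
        for i in range(1, n - 2):
            for j in range(i + 1, n - 1):
                before = dist[tour[i - 1]][tour[i]] + dist[tour[j]][tour[j + 1]]
                after = dist[tour[i - 1]][tour[j]] + dist[tour[i]][tour[j + 1]]
                if after < before:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
    return tour

def evolve(dist, pop_size=20, generations=100):
    n = len(dist)
    pop = [two_opt(random.sample(range(n), n), dist) for _ in range(pop_size)]
    for _ in range(generations):
        parents = random.sample(pop, 2)        # crude stand-in for heterogeneous pairing
        child = two_opt(order_crossover(*parents), dist)
        worst = max(range(pop_size), key=lambda k: tour_length(pop[k], dist))
        if tour_length(child, dist) < tour_length(pop[worst], dist):
            pop[worst] = child                 # replace-worst keeps selection pressure mild
    return min(pop, key=lambda t: tour_length(t, dist))
```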
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There are two fundamental approaches to multicast routing, namely minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree connects the sources and receivers using a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We have also evaluated multicast performance in small and large wireless mesh networks. For small networks we found that, when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For the large network we have seen that the MSTs provide the minimum total edge cost and minimum number of transmissions. We have also found one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
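As a toy illustration of the SPT-versus-MCT trade-off discussed in the abstract above, the sketch below builds a shortest-path multicast tree and an approximate Steiner tree on a small random mesh with NetworkX and compares their total edge costs. The graph, the node numbering, and the unit link costs are illustrative assumptions, not the paper's simulation setup.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

def spt_edges(G, source, receivers, weight="weight"):
    """Union of shortest paths from the multicast source to every receiver."""
    edges = set()
    for r in receivers:
        path = nx.shortest_path(G, source, r, weight=weight)
        edges.update(zip(path[:-1], path[1:]))
    return edges

def total_cost(G, edges, weight="weight"):
    return sum(G[u][v][weight] for u, v in edges)

# Toy connected mesh with unit link costs (~ one transmission per link)
G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)
for u, v in G.edges:
    G[u][v]["weight"] = 1.0

source, receivers = 0, [5, 12, 17, 23, 29]
spt = spt_edges(G, source, receivers)
steiner = steiner_tree(G, [source] + receivers, weight="weight").edges
print("SPT cost:", total_cost(G, spt), " Steiner-tree cost:", total_cost(G, set(steiner)))
```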
Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
Determinant Computation on the GPU using the Condensation Method
NASA Astrophysics Data System (ADS)
Anisul Haque, Sardar; Moreno Maza, Marc
2012-02-01
We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has a large potential for improving those packages in terms of running time and numerical stability.
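For readers unfamiliar with condensation, the sketch below shows a plain serial Chiò-style condensation in Python; it is a generic illustration, not the Salem-Said formulation or the GPU kernel. Each step shrinks an n x n matrix to (n-1) x (n-1) using 2 x 2 minors against the top-left pivot, and the accumulated pivot powers are divided out at the end.

```python
def det_condensation(matrix):
    """Determinant via Chio-style condensation (illustrative serial version).

    det(A) = det(B) / a11**(n-2), where B is the (n-1)x(n-1) matrix of 2x2 minors
    taken against the pivot a11. Rows are swapped (flipping the sign) if the pivot is zero.
    """
    A = [list(map(float, row)) for row in matrix]
    sign = 1.0
    scale = 1.0  # accumulated pivot powers to divide out at the end
    while len(A) > 1:
        n = len(A)
        if A[0][0] == 0.0:
            for r in range(1, n):
                if A[r][0] != 0.0:
                    A[0], A[r] = A[r], A[0]
                    sign = -sign
                    break
            else:
                return 0.0  # whole first column is zero
        piv = A[0][0]
        A = [[piv * A[i][j] - A[i][0] * A[0][j] for j in range(1, n)]
             for i in range(1, n)]
        scale *= piv ** (n - 2)
    return sign * A[0][0] / scale

print(det_condensation([[2, 1, 3], [0, 4, 1], [5, 2, 0]]))  # -59.0
```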
Convergent radial dispersion: A note on evaluation of the Laplace transform solution
Moench, Allen F.
1991-01-01
A numerical inversion algorithm for Laplace transforms that is capable of handling rapid changes in the computed function is applied to the Laplace transform solution to the problem of convergent radial dispersion in a homogeneous aquifer. Prior attempts by the author to invert this solution were unsuccessful for highly advective systems where the Peclet number was relatively large. The algorithm used in this note allows for rapid and accurate inversion of the solution for all Peclet numbers of practical interest, and beyond. Dimensionless breakthrough curves are illustrated for tracer input in the form of a step function, a Dirac impulse, or a rectangular input.
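The abstract does not name the inversion algorithm it uses, so the sketch below uses the classical Gaver-Stehfest scheme purely to illustrate what numerical Laplace-transform inversion looks like; Stehfest itself is known to struggle with sharp fronts, so this is a generic example rather than the algorithm evaluated in the note.

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain function F(s).

    N must be even; larger N improves accuracy until round-off dominates.
    """
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            Vk += (j ** half * math.factorial(2 * j)
                   / (math.factorial(half - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        Vk *= (-1) ** (k + half)
        total += Vk * F(k * ln2 / t)
    return ln2 / t * total

# Sanity check on F(s) = 1/(s + 1), whose inverse transform is exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```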
Large χ(3) of squarylium dye J aggregates measured using the Z-scan technique
NASA Astrophysics Data System (ADS)
Tatsuura, Satoshi; Wada, Osamu; Tian, Minquan; Furuki, Makoto; Sato, Yasuhiro; Iwasa, Izumi; Pu, Lyong Sun; Kawashima, Hitoshi
2001-10-01
Third-order nonlinear optical coefficients χ(3) were measured for the J aggregates of two types of squarylium dye derivatives at resonant and near-resonant wavelengths by using the Z-scan technique. The maximum χ(3) value evaluated at one-photon resonance was 2.9×10⁻⁶ e.s.u., which was greater than that of phthalocyanines by 4 orders of magnitude. χ(3) for one squarylium derivative was approximately two times as large as that of the other. This can be attributed to the difference of the number of molecules contributing to a coherent state in each J aggregate.
Coleman, Craig I; Schlesselman, Lauren S; Lao, Eang; White, C Michael
2007-06-15
To evaluate the quantity and quality of published literature conducted by pharmacy practice faculty members in US colleges and schools of pharmacy for the years 2001-2003. The Web of Science bibliographic database was used to identify publication citations for the years 2001-2003, which were then evaluated in a number of different ways. Faculty members were identified using American Association of Colleges of Pharmacy rosters for the 2000-2001, 2001-2002, and 2002-2003 academic years. Two thousand three hundred seventy-four pharmacy practice faculty members generated 1,896 publications in Web of Science searchable journals. A small number of faculty members (2.1%) were responsible for a large proportion of publications (30.6%), and only 4.9% of faculty members published 2 or more publications in these journals per year. The average impact factor for the top 200 publications was 7.6. Pharmacy practice faculty members contributed substantially to the biomedical literature and their work has had an important impact. A substantial portion of this work has come from a small subset of faculty members.
2007-08-01
[Report-form and table residue; recoverable content: enumerated coupon test results for vaporous hydrogen peroxide (mVHP) decontamination of materials including aluminum, Viton, silicone, polyimide (Kapton), and CARC-coated metal, with B. anthracis and G. stearothermophilus as test organisms; the coupons represented aircraft, vehicle, protective, and sensitive equipment encompassing a variety of material properties, compositions, and porosities.]
2011-08-31
[Report-form residue; recoverable content: "Guess Again (and Again and Again): Measuring Password Strength by Simulating Password-Cracking Algorithms" (2011). Motivated by large leaks of hashed passwords (Booz Allen Hamilton, HBGary, Gawker, Sony PlayStation, etc.) and the availability of botnets, the work evaluates the strength of different password-composition policies and investigates the effectiveness of entropy as a measure of password strength.]
ERIC Educational Resources Information Center
Cameron, Lisa A.
This paper uses regression and matching techniques to evaluate Indonesia's Social Safety Net Scholarships Programme. The scholarships program was developed to try to prevent large numbers of children from dropping out of school as a result of the Asian financial crisis. The expectation was that many families would find it difficult to keep their…
ERIC Educational Resources Information Center
Szadokierski, Isadora; Burns, Matthew K.
2008-01-01
Drill procedures have been used to increase the retention of various types of information, but little is known about the causal mechanisms of these techniques. The current study compared the effect of two key features of drill procedures, a large number of opportunities to respond (OTR) and a drill ratio that maintains a high percentage of known…
ERIC Educational Resources Information Center
Gelkopf, Marc; Berger, Rony
2009-01-01
Background: Since September 2000 Israeli children have been exposed to a large number of terrorist attacks. A universal, school-based intervention for dealing with the threat of terrorism as well as with terror-related symptoms, ERASE-Stress (ES), was evaluated in a male religious middle school in southern Israel. The program was administered by…
The Impulse of Class Tutoring Activities Evaluated in the Light of Foreign Language Teaching Methods
ERIC Educational Resources Information Center
Erdogu Yilmaz, Sule
2017-01-01
Teaching Turkish as a foreign language (TTFL) has recently gained much importance in modern life. For various reasons, a large number of people and students with dissimilar backgrounds come from other countries to start a new life, primarily in Istanbul and/or many other cities in Turkey. Many of them need to ensure their arrival and long term…
ERIC Educational Resources Information Center
Boatman, Angela
2012-01-01
Large numbers of students who attend college each year are required to enroll in remedial programs aimed at enhancing their weak reading, writing, and/or mathematical skills and helping to prepare them for success in college-level courses. Recently, a host of new course innovations have surfaced that are intended to move students through…
On-Line Data Reconstruction in Redundant Disk Arrays.
1994-05-01
[Dissertation front matter and table residue; recoverable content: examples of systems requiring on-line reconstruction include file servers that support a large number of clients with differing work schedules and automated teller networks in banking systems; Table 2.2 lists default array parameters (FIFO head scheduling, user data laid out sequentially in the address space of the array, synchronized disk spindles); Section 2.3.3 describes the default workload used for the dissertation's performance evaluations.]
Operational Evaluation of the Rapid Viability PCR Method for ...
Journal Article. This research work has a significant impact on the use of the RV-PCR (rapid viability PCR) method to analyze post-decontamination environmental samples during an anthrax event. The method has shown 98% agreement with the traditional culture-based method. With such success, this method, upon validation, will significantly increase laboratory throughput/capacity to analyze a large number of anthrax event samples in a relatively short time.
Lessons Learned from Comprehensive Energy and Water Evaluations at U.S. Army Campus Installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodward, James C.; Dahowski, Robert T.
The U.S. Department of Energy’s Pacific Northwest National Laboratory has engaged in a multi-year collaboration with the U.S. Army to support comprehensive energy and water evaluations at large Army campus installations. Initiated to help the Army achieve compliance with facility evaluation requirements dictated by the Energy Independence and Security Act of 2007, this effort has resulted in the evaluation of 530 buildings at 14 installations across the U.S., and the identification of annual savings potential of over 212 billion Btu of energy and 29 million gallons of water. This paper highlights the nature of the evaluation process applied at these Army installations and discusses a number of key findings that can be considered for ongoing and future evaluations at Army and other federal agency facilities, particularly those buildings within campus settings.
Payne-Sturges, Devon; Kemp, Debra
2008-01-01
Background: Executive Order (EO) 13045, Protection of Children From Environmental Health Risks and Safety Risks, directs each federal agency to ensure that its policies, programs, activities, and standards address disproportionate environmental health and safety risks to children. Objectives: We reviewed regulatory actions published by U.S. Environmental Protection Agency (EPA) in the Federal Register from April 1998 through December 2006 to evaluate applicability of EO 13045 to U.S. EPA actions and consideration of children’s health issues in U.S. EPA rulemakings. Discussion: Although virtually all actions discussed EO 13045, fewer than two regulations per year, on average, were subject to the EO requirement to evaluate children’s environmental health risks. Nonetheless, U.S. EPA considered children’s environmental health in all actions addressing health or safety risks that may disproportionately affect children. Conclusion: The EO does not apply to a broad enough set of regulatory actions to ensure protection of children’s health and safety risks, largely because of the small number of rules that are economically significant. However, given the large number of regulations that consider children’s health issues despite not being subject to the EO, other statutory requirements and agency policies reach a larger set of regulations to ensure protection of children’s environmental health. PMID:19079726
Satellite Relay Telemetry of Seismic Data in Earthquake Prediction and Control
NASA Technical Reports Server (NTRS)
Jackson, W. H.; Eaton, J. P.
1971-01-01
The Satellite Telemetry Earthquake Monitoring Program was started to evaluate the applicability of satellite relay telemetry in the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research in 3 regions along the San Andreas fault in central California; and the experience of installing and operating the clusters and in reducing and analyzing the seismic data from them was to provide the raw materials for evaluation in the satellite relay telemetry project. The principal advantages of the satellite relay system over commercial telephone or microwave systems were: (1) it could be made less prone to massive failure during a major earthquake; (2) it could be extended readily into undeveloped regions; and (3) it could provide flexible, uniform communications over large sections of major global tectonic zones. Fundamental characteristics of a communications system to cope with the large volume of raw data collected by a short-period seismograph network are discussed.
Honeck, Patrick; Michel, Maurice Stephan; Trojan, Lutz; Alken, Peter
2009-02-01
Despite the large number of surgical techniques for continent cutaneous diversion described in the literature, the creation of a reliable, continent, and easily catheterizable continence mechanism remains a complex surgical procedure. The aim of this study was to evaluate a new method for a catheterizable continence mechanism using stapled pig intestine. Small and large pig intestines were used for construction. A 3 or 6 cm double-row stapling system was used. Three variations using small and large intestine segments were constructed. A 3 or 6 cm long stapler line was placed alongside a 12 Fr catheter positioned at the antimesenteric side, creating a partially two-luminal segment. Construction time for the tube was measured. The created tube was then embedded into the pouch. Pressure evaluation of the continence mechanism was performed for each variation. Intermittent external manual compression was used to simulate sudden pressure exposure. All variations were 100% continent under filling volumes of up to 700 ml and pressure levels of 58 +/- 6 cm H2O for large intestine, and 266 ml and 87 +/- 18 cm H2O for small intestine, respectively. With further filling above the mentioned capacity, suture insufficiency occurred but no tube insufficiency. Construction time for all variations was less than 12 min. The described technique is an easy and fast method to construct a continence mechanism using small or large intestine. Our experiments have shown a sufficient continence mechanism in an ex vivo model. Further investigations in an in vivo model are needed to confirm these results.
Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers
NASA Technical Reports Server (NTRS)
Balasubramaniam, R.; Subramanian, R. S.
1996-01-01
The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.
Implementation and evaluation of a community-based interprofessional learning activity.
Luebbers, Ellen L; Dolansky, Mary A; Vehovec, Anton; Petty, Gayle
2017-01-01
Implementation of large-scale, meaningful interprofessional learning activities for pre-licensure students has significant barriers and requires novel approaches to ensure success. To accomplish this goal, faculty at Case Western Reserve University, Ohio, USA, used the Ottawa Model of Research Use (OMRU) framework to create, improve, and sustain a community-based interprofessional learning activity for large numbers of medical students (N = 177) and nursing students (N = 154). The model guided the process and included identification of context-specific barriers and facilitators, continual monitoring and improvement using data, and evaluation of student learning outcomes as well as programme outcomes. First year Case Western Reserve University medical students and undergraduate nursing students participated in team-structured prevention screening clinics in the Cleveland Metropolitan Public School District. Identification of barriers and facilitators assisted with overcoming logistic and scheduling issues, large class size, differing ages and skill levels of students and creating sustainability. Continual monitoring led to three distinct phases of improvement and resulted in the creation of an authentic team structure, role clarification, and relevance for students. Evaluation of student learning included both qualitative and quantitative methods, resulting in statistically significant findings and qualitative themes of learner outcomes. The OMRU implementation model provided a useful framework for successful implementation resulting in a sustainable interprofessional learning activity.
An individual-based model for population viability analysis of humpback chub in Grand Canyon
Pine, William; Healy, Brian; Smith, Emily Omana; Trammell, Melissa; Speas, Dave; Valdez, Rich; Yard, Mike; Walters, Carl; Ahrens, Rob; Vanhaverbeke, Randy; Stone, Dennis; Wilson, Wade
2013-01-01
We developed an individual-based population viability analysis model (females only) for evaluating risk to populations from catastrophic events or conservation and research actions. This model tracks attributes (size, weight, viability, etc.) for individual fish through time and then compiles this information to assess the extinction risk of the population across large numbers of simulation trials. Using a case history for the Little Colorado River population of Humpback Chub Gila cypha in Grand Canyon, Arizona, we assessed extinction risk and resiliency to a catastrophic event for this population and then assessed a series of conservation actions related to removing specific numbers of Humpback Chub at different sizes for conservation purposes, such as translocating individuals to establish other spawning populations or hatchery refuge development. Our results suggested that the Little Colorado River population is generally resilient to a single catastrophic event and also to removals of larvae and juveniles for conservation purposes, including translocations to establish new populations. Our results also suggested that translocation success is dependent on similar survival rates in receiving and donor streams and low emigration rates from recipient streams. In addition, translocating either large numbers of larvae or small numbers of large juveniles has generally an equal likelihood of successful population establishment at similar extinction risk levels to the Little Colorado River donor population. Our model created a transparent platform to consider extinction risk to populations from catastrophe or conservation actions and should prove useful to managers assessing these risks for endangered species such as Humpback Chub.
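A minimal, female-only individual-based sketch of the kind of Monte Carlo extinction-risk calculation described above is given below. The survival, recruitment, removal, and catastrophe parameters are placeholder assumptions, not the Little Colorado River Humpback Chub estimates.

```python
import random

def simulate_population(n0=500, years=50, survival=0.8, recruits_per_female=0.4,
                        removal_per_year=0, catastrophe_year=None, catastrophe_survival=0.3):
    """One trajectory of a female-only population; returns the final abundance."""
    n = n0
    for year in range(years):
        # Annual survival of each individual (binomial thinning)
        n = sum(1 for _ in range(n) if random.random() < survival)
        # Recruitment of new females (kept deliberately simple)
        n += sum(1 for _ in range(n) if random.random() < recruits_per_female)
        # Management removals (e.g., translocation of larvae or juveniles)
        n = max(0, n - removal_per_year)
        # Single catastrophic event in a chosen year
        if catastrophe_year is not None and year == catastrophe_year:
            n = sum(1 for _ in range(n) if random.random() < catastrophe_survival)
        if n == 0:
            break
    return n

def extinction_risk(trials=1000, **scenario):
    """Fraction of Monte Carlo trials ending at zero abundance."""
    return sum(simulate_population(**scenario) == 0 for _ in range(trials)) / trials

print("baseline:", extinction_risk())
print("with catastrophe:", extinction_risk(catastrophe_year=10))
print("catastrophe + removals:", extinction_risk(catastrophe_year=10, removal_per_year=20))
```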
Lambertini, Elisabetta; Spencer, Susan K.; Bertz, Phillip D.; Loge, Frank J.; Kieke, Burney A.; Borchardt, Mark A.
2008-01-01
Available filtration methods to concentrate waterborne viruses are either too costly for studies requiring large numbers of samples, limited to small sample volumes, or not very portable for routine field applications. Sodocalcic glass wool filtration is a cost-effective and easy-to-use method to retain viruses, but its efficiency and reliability are not adequately understood. This study evaluated glass wool filter performance to concentrate the four viruses on the U.S. Environmental Protection Agency contaminant candidate list, i.e., coxsackievirus, echovirus, norovirus, and adenovirus, as well as poliovirus. Total virus numbers recovered were measured by quantitative reverse transcription-PCR (qRT-PCR); infectious polioviruses were quantified by integrated cell culture (ICC)-qRT-PCR. Recovery efficiencies averaged 70% for poliovirus, 14% for coxsackievirus B5, 19% for echovirus 18, 21% for adenovirus 41, and 29% for norovirus. Virus strain and water matrix affected recovery, with significant interaction between the two variables. Optimal recovery was obtained at pH 6.5. No evidence was found that water volume, filtration rate, and number of viruses seeded influenced recovery. The method was successful in detecting indigenous viruses in municipal wells in Wisconsin. Long-term continuous filtration retained viruses sufficiently for their detection for up to 16 days after seeding for qRT-PCR and up to 30 days for ICC-qRT-PCR. Glass wool filtration is suitable for large-volume samples (1,000 liters) collected at high filtration rates (4 liters min−1), and its low cost makes it advantageous for studies requiring large numbers of samples. PMID:18359827
Characterizing the Discussion of Antibiotics in the Twittersphere: What is the Bigger Picture?
Kendra, Rachel Lynn; Karki, Suman; Eickholt, Jesse Lee; Gandy, Lisa
2015-06-19
User content posted through Twitter has been used for biosurveillance, to characterize public perception of health-related topics, and as a means of distributing information to the general public. Most of the existing work surrounding Twitter and health care has shown Twitter to be an effective medium for these problems but more could be done to provide finer and more efficient access to all pertinent data. Given the diversity of user-generated content, small samples or summary presentations of the data arguably omit a large part of the virtual discussion taking place in the Twittersphere. Still, managing, processing, and querying large amounts of Twitter data is not a trivial task. This work describes tools and techniques capable of handling larger sets of Twitter data and demonstrates their use with the issue of antibiotics. This work has two principal objectives: (1) to provide an open-source means to efficiently explore all collected tweets and query health-related topics on Twitter, specifically, questions such as what users are saying and how messages are spread, and (2) to characterize the larger discourse taking place on Twitter with respect to antibiotics. Open-source software suites Hadoop, Flume, and Hive were used to collect and query a large number of Twitter posts. To classify tweets by topic, a deep network classifier was trained using a limited number of manually classified tweets. The particular machine learning approach used also allowed the use of a large number of unclassified tweets to increase performance. Query-based analysis of the collected tweets revealed that a large number of users contributed to the online discussion and that a frequent topic mentioned was resistance. A number of prominent events related to antibiotics led to a number of spikes in activity but these were short in duration. The category-based classifier developed was able to correctly classify 70% of manually labeled tweets (using a 10-fold cross-validation procedure and 9 classes). The classifier also performed well when evaluated on a per-category basis. Using existing tools such as Hive, Flume, Hadoop, and machine learning techniques, it is possible to construct tools and workflows to collect and query large amounts of Twitter data to characterize the larger discussion taking place on Twitter with respect to a particular health-related topic. Furthermore, using newer machine learning techniques and a limited number of manually labeled tweets, an entire body of collected tweets can be classified to indicate what topics are driving the virtual, online discussion. The resulting classifier can also be used to efficiently explore collected tweets by category and search for messages of interest or exemplary content.
Adaptive variational mode decomposition method for signal processing based on mode characteristic
NASA Astrophysics Data System (ADS)
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons with VMD, EMD, and EWT have also been conducted to evaluate its performance. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without modulation, even when the signal frequencies are relatively close.
An Efficient Conflict Detection Algorithm for Packet Filters
NASA Astrophysics Data System (ADS)
Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung
Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm can achieve better performance than existing conflict detection algorithms both in time and space, particularly for databases with large numbers of conflicts.
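The proposed tuple-space algorithm is not reproduced here; as a minimal illustration of what a filter conflict is, the sketch below performs a naive O(n^2) pairwise check on two-field prefix filters and reports pairs that overlap while neither filter contains the other. The 8-bit field width and the example filters are assumptions chosen for readability.

```python
from itertools import combinations

W = 8  # bits per header field (toy size)

def prefix_range(value, plen, width=W):
    """Numeric range [lo, hi] covered by a prefix of length plen."""
    span = 1 << (width - plen)
    lo = ((value >> (width - plen)) << (width - plen)) if plen else 0
    return lo, lo + span - 1

def fields_overlap(f1, f2):
    lo1, hi1 = prefix_range(*f1)
    lo2, hi2 = prefix_range(*f2)
    return not (hi1 < lo2 or hi2 < lo1)

def contains(outer, inner):
    lo1, hi1 = prefix_range(*outer)
    lo2, hi2 = prefix_range(*inner)
    return lo1 <= lo2 and hi2 <= hi1

def find_conflicts(filters):
    """Naive check: filters overlap on every field, yet neither contains the other."""
    conflicts = []
    for (i, a), (j, b) in combinations(enumerate(filters), 2):
        overlap = all(fields_overlap(fa, fb) for fa, fb in zip(a, b))
        a_in_b = all(contains(fb, fa) for fa, fb in zip(a, b))
        b_in_a = all(contains(fa, fb) for fa, fb in zip(a, b))
        if overlap and not a_in_b and not b_in_a:
            conflicts.append((i, j))
    return conflicts

# Each filter: ((src_value, src_prefix_len), (dst_value, dst_prefix_len))
filters = [((0b10100000, 3), (0b00000000, 0)),
           ((0b00000000, 0), (0b11000000, 2)),
           ((0b10100000, 4), (0b11000000, 4))]
print(find_conflicts(filters))  # [(0, 1)]: they overlap but neither contains the other
```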
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo-heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
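As an illustration of the extrapolation step only (not of the cloning/population-dynamics simulation itself), the sketch below fits estimator values against 1/N and 1/T and reads off the intercept as the infinite-size, infinite-time limit. The assumed leading-order 1/N and 1/T scaling form and the synthetic data are simplifying assumptions.

```python
import numpy as np

def extrapolate_estimator(sizes, times, estimates):
    """Fit psi_est(N, T) ~ psi_inf + a/N + b/T and return (psi_inf, a, b)."""
    y = np.asarray(estimates, float)
    X = np.column_stack([np.ones_like(y),
                         1.0 / np.asarray(sizes, float),
                         1.0 / np.asarray(times, float)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic example with a known limit of -0.25 and 1/N, 1/T corrections plus noise
rng = np.random.default_rng(0)
N = np.array([100, 200, 400, 800, 100, 200, 400, 800])
T = np.array([50, 50, 50, 50, 200, 200, 200, 200])
est = -0.25 + 3.0 / N + 1.5 / T + rng.normal(0, 1e-3, N.size)
print(extrapolate_estimator(N, T, est))  # intercept should be close to -0.25
```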
The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life
NASA Astrophysics Data System (ADS)
Tinungki, Georgina Maria
2018-03-01
The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to predict something. The larger the population considered, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or the claims of participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The larger the number of insurance participants considered, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers. Here the law of large numbers applies. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss will be closer to the actual loss. The use of the law of large numbers allows the number of losses to be predicted better.
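A small simulation makes the abstract's point concrete: as the number of policies grows, the realized average loss per policy approaches the expected loss implied by the claim probability. The claim probability and sum assured below are illustrative numbers only.

```python
import random

def average_claim_per_policy(n_policies, claim_prob=0.01, sum_assured=100_000_000, seed=1):
    """Simulate one year of claims and return the actual average loss per policy."""
    rng = random.Random(seed)
    claims = sum(1 for _ in range(n_policies) if rng.random() < claim_prob)
    return claims * sum_assured / n_policies

expected = 0.01 * 100_000_000  # pure premium implied by the assumed claim probability
for n in (100, 1_000, 10_000, 100_000, 1_000_000):
    actual = average_claim_per_policy(n)
    print(f"{n:>9} policies: actual {actual:>12,.0f}  vs expected {expected:,.0f}")
```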
Conformity of commercial oral single solid unit dose packages in hospital pharmacy practice.
Thibault, Maxime; Prot-Labarthe, Sonia; Bussières, Jean-François; Lebel, Denis
2008-06-01
There are limited published data on the labelling of single unit dose packages in hospitals. The study was conducted in three large hospitals (two adult and one paediatric) in the metropolitan Montreal area, Quebec, Canada. The objective is to evaluate the labelling of commercial oral single solid unit dose packages available in Canadian urban hospital pharmacy practice. The study endpoint was the labelling conformity of each unit dose package for each criterion and overall for each manufacturer. Complete labelling of unit dose packages should include the following information: (1) brand name, (2) international non-proprietary name or generic name, (3) dosage, (4) pharmaceutical form, (5) manufacturer's name, (6) expiry date, (7) batch number and (8) drug identification number. We also evaluated the ease with which a single unit dose package is detached from a multiple unit dose package for quick, easy and safe use by pharmacy staff. Conformity levels were compared between brand-name and generic packages. A total of 124 different unit dose packages were evaluated. The level of conformity of each criterion varied between 19 and 50%. Only 43% of unit dose packages provided an easy-to-detach system for single doses. Among the 14 manufacturers with three or more unit dose packages evaluated, eight (57%) had a conformity level less than 50%. This study describes the conformity of commercial oral single solid unit dose packages in hospital pharmacy practice in Quebec. A large proportion of unit dose packages do not conform to a set of nine criteria set out in the guidelines of the American Society of Health-System Pharmacists and the Canadian Society of Hospital Pharmacists.
Xiao, Li; Wei, Hui; Himmel, Michael E.; Jameel, Hasan; Kelley, Stephen S.
2014-01-01
The use of lignocellulosic biomass as a feedstock for renewable energy production is currently being developed globally. Biomass is a complex mixture of cellulose, hemicelluloses, lignins, extractives, and proteins, as well as inorganic salts. Cell wall compositional analysis for biomass characterization is laborious and time-consuming. In order to characterize biomass quickly and efficiently, several high-throughput technologies have been successfully developed. Among them, near infrared spectroscopy (NIR) and pyrolysis-molecular beam mass spectrometry (Py-mbms) are complementary tools capable of evaluating a large number of raw or modified biomass samples in a short period of time. NIR shows vibrations associated with specific chemical structures whereas Py-mbms depicts the full range of fragments from the decomposition of biomass. Both NIR vibrations and Py-mbms peaks are assigned to possible chemical functional groups and molecular structures. They provide complementary chemical insight into biomaterials. However, it is challenging to interpret the informative results because of the large number of overlapping bands or decomposition fragments contained in the spectra. In order to improve the efficiency of data analysis, multivariate analysis tools have been adapted to define the significant correlations among data variables, so that the large number of bands/peaks can be replaced by a small number of reconstructed variables representing the original variation. Reconstructed data variables are used for sample comparison (principal component analysis) and for building regression models (partial least squares regression) between biomass chemical structures and properties of interest. In this review, the important biomass chemical structures measured by NIR and Py-mbms are summarized. The advantages and disadvantages of conventional data analysis methods and multivariate data analysis methods are introduced, compared, and evaluated. This review aims to serve as a guide for choosing the most effective data analysis methods for NIR and Py-mbms characterization of biomass. PMID:25147552
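A minimal sketch of the multivariate workflow described above, PCA for sample comparison and PLS regression for property prediction, is shown below using scikit-learn on synthetic stand-in spectra; the latent "lignin content" and the spectral construction are assumptions made so the example runs without real NIR or Py-mbms data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for spectra: 60 samples x 500 wavelength/mass channels,
# with a latent "lignin content" driving part of the signal
n_samples, n_channels = 60, 500
lignin = rng.uniform(15, 35, n_samples)              # property of interest (%)
basis = rng.normal(size=(3, n_channels))             # background spectral components
spectra = (lignin[:, None] * basis[0] * 0.02
           + rng.normal(size=(n_samples, 1)) * basis[1]
           + 0.1 * rng.normal(size=(n_samples, n_channels)))

# PCA: compress hundreds of overlapping bands/peaks into a few scores for sample comparison
scores = PCA(n_components=3).fit_transform(spectra)
print("PCA scores shape:", scores.shape)

# PLS regression: predict the property of interest from the full spectrum
X_tr, X_te, y_tr, y_te = train_test_split(spectra, lignin, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
print("PLS R^2 on held-out samples:", round(pls.score(X_te, y_te), 3))
```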
González-Recio, O; Jiménez-Montero, J A; Alenda, R
2013-01-01
In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy and bias. This modification may be used to speed up the calculation of genome-assisted evaluation in large data sets such as those obtained from consortia. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
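The sketch below is a simplified rendering of the random-boosting idea: each round restricts the weak learner to a random subset of SNPs, fits a one-marker regression stump to the current residuals, and adds a shrunken prediction to the committee. It is an illustration of the concept, not the authors' implementation, and the simulated genotypes and effect sizes are assumptions.

```python
import numpy as np

def random_boosting(X, y, n_rounds=200, markers_per_round=50, shrinkage=0.1, seed=0):
    """Boosting on residuals, restricting each round to a random subset of SNPs.

    X : (n_animals, n_snps) genotype matrix; y : deregressed proofs.
    Returns a list of shrunken weak learners (snp_index, intercept, slope).
    """
    rng = np.random.default_rng(seed)
    residual = y.astype(float).copy()
    model = []
    for _ in range(n_rounds):
        subset = rng.choice(X.shape[1], size=markers_per_round, replace=False)
        best, best_fit, best_sse = None, None, np.inf
        for j in subset:
            x = X[:, j].astype(float)
            slope, intercept = np.polyfit(x, residual, 1)   # one-SNP regression stump
            sse = np.sum((residual - (intercept + slope * x)) ** 2)
            if sse < best_sse:
                best, best_fit, best_sse = j, (intercept, slope), sse
        intercept, slope = best_fit
        residual -= shrinkage * (intercept + slope * X[:, best].astype(float))
        model.append((best, shrinkage * intercept, shrinkage * slope))
    return model

def predict(model, X):
    pred = np.zeros(X.shape[0])
    for j, b0, b1 in model:
        pred += b0 + b1 * X[:, j]
    return pred

# Tiny simulated example: 300 animals x 1,000 SNPs with two true marker effects
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(300, 1000))
y = X[:, 10] * 0.8 - X[:, 500] * 0.5 + rng.normal(0, 1.0, 300)
model = random_boosting(X, y)
print("corr(obs, pred):", np.corrcoef(y, predict(model, X))[0, 1].round(2))
```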
Rogal, Shari S; Yakovchenko, Vera; Waltz, Thomas J; Powell, Byron J; Kirchner, JoAnn E; Proctor, Enola K; Gonzalez, Rachel; Park, Angela; Ross, David; Morgan, Timothy R; Chartier, Maggie; Chinman, Matthew J
2017-05-11
Hepatitis C virus (HCV) is a common and highly morbid illness. New medications that have much higher cure rates have become the new evidence-based practice in the field. Understanding the implementation of these new medications nationally provides an opportunity to advance the understanding of the role of implementation strategies in clinical outcomes on a large scale. The Expert Recommendations for Implementing Change (ERIC) study defined discrete implementation strategies and clustered these strategies into groups. The present evaluation assessed the use of these strategies and clusters in the context of HCV treatment across the US Department of Veterans Affairs (VA), Veterans Health Administration, the largest provider of HCV care nationally. A 73-item survey was developed and sent to all VA sites treating HCV via electronic survey, to assess whether or not a site used each ERIC-defined implementation strategy related to employing the new HCV medication in 2014. VA national data regarding the number of Veterans starting on the new HCV medications at each site were collected. The associations between treatment starts and number and type of implementation strategies were assessed. A total of 80 (62%) sites responded. Respondents endorsed an average of 25 ± 14 strategies. The number of treatment starts was positively correlated with the total number of strategies endorsed (r = 0.43, p < 0.001). Quartile of treatment starts was significantly associated with the number of strategies endorsed (p < 0.01), with the top quartile endorsing a median of 33 strategies, compared to 15 strategies in the lowest quartile. There were significant differences in the types of strategies endorsed by sites in the highest and lowest quartiles of treatment starts. Four of the 10 top strategies for sites in the top quartile had significant correlations with treatment starts compared to only 1 of the 10 top strategies in the bottom quartile sites. Overall, only 3 of the top 15 most frequently used strategies were associated with treatment. These results suggest that sites that used a greater number of implementation strategies were able to deliver more evidence-based treatment in HCV. The current assessment also demonstrates the feasibility of electronic self-reporting to evaluate ERIC strategies on a large scale. These results provide initial evidence for the clinical relevance of the ERIC strategies in a real-world implementation setting on a large scale. This is an initial step in identifying which strategies are associated with the uptake of evidence-based practices in nationwide healthcare systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Dagle, Jeffery E.
2008-07-31
The infrastructure of phasor measurements has evolved over the last two decades from isolated measurement units to networked measurement systems with footprints beyond individual utility companies. This is, to a great extent, a bottom-up, self-evolving process, except for some local systems built by design. Given that the number of phasor measurement units (PMUs) in the system is small (currently 70 each in the western and eastern interconnections), the current phasor network architecture works just fine. However, the architecture will become a bottleneck when a large number of PMUs are installed (e.g., >1000 to 10000). The need for phasor architecture design has yet to be addressed. This paper reviews the current phasor networks and investigates future architectures, as related to the efforts undertaken by the North America SynchroPhasor Initiative (NASPI). It then presents staged system tests to evaluate the performance of phasor networks, which is a common practice in the Western Electricity Coordinating Council (WECC) system. This is followed by field measurement evaluation and the implications of phasor quality issues for phasor applications.
Wang, Junsheng; Fan, Zhiqiang; Zhao, Yile; Song, Younan; Chu, Hui; Song, Wendong; Song, Yongxin; Pan, Xinxiang; Sun, Yeqing; Li, Dongqing
2016-03-17
Space radiation brings uneven damages to cells. The detection of the distribution of cell damage plays a very important role in radiation medicine and the related research. In this paper, a new hand-held microfluidic flow cytometer was developed to evaluate the degree of radiation damage of cells. The device we propose overcomes the shortcomings (e.g., large volume and high cost) of commercial flow cytometers and can evaluate the radiation damage of cells accurately and quickly with potential for onsite applications. The distribution of radiation-damaged cells is analyzed by a simultaneous detection of immunofluorescence intensity of γ-H2AX and resistance pulse sensor (RPS) signal. The γ-H2AX fluorescence intensity provides information of the degree of radiation damage in cells. The ratio of the number of cells with γ-H2AX fluorescence signals to the total numbers of cells detected by RPS indicates the percentage of the cells that are damaged by radiation. The comparison experiment between the developed hand-held microfluidic flow cytometer and a commercial confocal microscope indicates a consistent and comparable detection performance.
The use of citation indicators to identify and support high-quality research in Poland.
Pilc, Andrzej
2008-01-01
In large, mostly English-speaking countries, where the "critical mass" of scientists working in different subfields of science is achieved, the peer review system may be sufficient to assess the quality of scientific research. However, in smaller countries, outside the Anglo-American circle, it is important to introduce different systems to identify research of high quality. In Poland, a parametric system for assessing the quality of research has been introduced. It was largely based on the impact factor of scientific journals. While the use of this indicator to assess research quality is highly questionable, the implementation of the system in the Polish reality is even worse. Therefore, it is important to change and improve the system currently used by the Ministry of Science and Higher Education to both evaluate and, more importantly, finance science in Poland. Here, a system based on three factors, i.e. the impact factor, the institutional h-index, and the institutional number of citations, is proposed. The scientific quality of institutions in Division VI: Medical Sciences of the Polish Academy of Sciences was evaluated and the results were compared with the existing system. Moreover, a method to identify high-quality researchers and institutions at the national level based on the quantity of highly cited papers is shown. Additionally, an attempt to identify the highest quality Polish research on an international level is proposed. This is based on the number of individual citations, the individual h-index, the number of publications, and the priority of the discovery.
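For concreteness, the h-index used as one of the three proposed factors can be computed from an institution's citation counts as follows; the citation counts in the example are made up.

```python
def h_index(citations):
    """Largest h such that at least h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Example: an institution's publications and their citation counts
cites = [120, 45, 33, 20, 8, 8, 5, 3, 1, 0]
print("total citations:", sum(cites), " h-index:", h_index(cites))  # h-index: 6
```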
NASA Astrophysics Data System (ADS)
Lal, Mohan; Mishra, S. K.; Pandey, Ashish; Pandey, R. P.; Meena, P. K.; Chaudhary, Anubhav; Jha, Ranjit Kumar; Shreevastava, Ajit Kumar; Kumar, Yogendra
2017-01-01
The Soil Conservation Service curve number (SCS-CN) method, also known as the Natural Resources Conservation Service curve number (NRCS-CN) method, is popular for computing the volume of direct surface runoff for a given rainfall event. The performance of the SCS-CN method, based on large rainfall (P) and runoff (Q) datasets of United States watersheds, is evaluated using a large dataset of natural storm events from 27 agricultural plots in India. On the whole, the CN estimates from the National Engineering Handbook (chapter 4) tables do not match those derived from the observed P and Q datasets. As a result, the runoff prediction using the former CNs was poor for the data of 22 (out of 24) plots. However, the match was a little better for higher CN values, consistent with the general notion that the existing SCS-CN method performs better for high rainfall-runoff (high CN) events. Infiltration capacity (fc) was the main explanatory variable for runoff (or CN) production in the study plots, as it exhibited the expected inverse relationship between CN and fc. The plot-data optimization yielded initial abstraction coefficient (λ) values from 0 to 0.659 for the ordered dataset and 0 to 0.208 for the natural dataset (with 0 as the most frequent value). Mean and median λ values were, respectively, 0.030 and 0 for the natural rainfall-runoff dataset and 0.108 and 0 for the ordered rainfall-runoff dataset. Runoff estimation was very sensitive to λ and improved consistently as λ changed from 0.2 to 0.03.
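For readers unfamiliar with the method, the textbook form of the SCS-CN relations (not the plot-specific calibrated variants studied here) can be sketched as follows, with the initial abstraction coefficient λ exposed so the 0.2 versus 0.03 comparison above can be reproduced on toy numbers.

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct surface runoff Q (mm) from event rainfall P (mm) via the standard
    SCS-CN relations: S = 25400/CN - 254, Ia = lam * S, and
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = lam * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 60 mm storm on a plot with CN = 75, comparing lambda = 0.2 and 0.03.
print(scs_cn_runoff(60.0, 75, lam=0.2), scs_cn_runoff(60.0, 75, lam=0.03))
```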
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.
2004-01-01
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
Hope, Thomas A; Afshar-Oromieh, Ali; Eiber, Matthias; Emmett, Louise; Fendler, Wolfgang P; Lawhn-Heath, Courtney; Rowe, Steven P
2018-06-27
The purpose of this article is to describe the large number of radiotracers being evaluated for prostate-specific membrane antigen (PSMA) PET, which is becoming a central tool in the staging of prostate cancer. PSMA PET is a highly promising modality for the staging of prostate cancer because of its higher detection rate compared with that of conventional imaging. Both PET/CT and PET/MRI offer benefits with PSMA radiotracers, and PSMA PET findings frequently lead to changes in management. It is imperative that subsequent treatment changes be evaluated to show improved outcomes. PSMA PET also has potential applications, including patient selection for PSMA-based radioligand therapy and evaluation of treatment response.
Bredfeldt, Christine E; Butani, Amy; Padmanabhan, Sandhyasree; Hitz, Paul; Pardee, Roy
2013-03-22
Multi-site health sciences research is becoming more common, as it enables investigation of rare outcomes and diseases and new healthcare innovations. Multi-site research usually involves the transfer of large amounts of research data between collaborators, which increases the potential for accidental disclosures of protected health information (PHI). Standard protocols for preventing release of PHI are extremely vulnerable to human error, particularly when the shared data sets are large. To address this problem, we developed an automated program (SAS macro) to identify possible PHI in research data before it is transferred between research sites. The macro reviews all data in a designated directory to identify suspicious variable names and data patterns. The macro looks for variables that may contain personal identifiers such as medical record numbers and social security numbers. In addition, the macro identifies dates and numbers that may identify people who belong to small groups, who may be identifiable even in the absence of traditional identifiers. Evaluation of the macro on 100 sample research data sets indicated a recall of 0.98 and precision of 0.81. When implemented consistently, the macro has the potential to streamline the PHI review process and significantly reduce accidental PHI disclosures.
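The published tool is a SAS macro; the sketch below is an illustrative Python analogue of the same idea (flag suspicious variable names, then value patterns such as social security numbers), with a deliberately tiny rule set and invented column names, not the macro's actual logic.

```python
import re
import pandas as pd

# Variable-name fragments and value patterns that commonly indicate identifiers;
# both are illustrative and far smaller than the macro's actual rule set.
SUSPICIOUS_NAMES = re.compile(r"(mrn|med_?rec|ssn|social|dob|birth|phone|zip)", re.I)
SSN_PATTERN = r"^\d{3}-?\d{2}-?\d{4}$"

def flag_possible_phi(df):
    """Return column names that look like they may contain PHI."""
    flagged = []
    for col in df.columns:
        if SUSPICIOUS_NAMES.search(col):
            flagged.append(col)
            continue
        values = df[col].astype(str)
        if values.str.match(SSN_PATTERN).mean() > 0.5:  # mostly SSN-like values
            flagged.append(col)
    return flagged

demo = pd.DataFrame({"patient_mrn": [12345, 67890],
                     "age": [54, 61],
                     "contact": ["123-45-6789", "987-65-4321"]})
print(flag_possible_phi(demo))  # -> ['patient_mrn', 'contact']
```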
The Effect of Viewing Eccentricity on Enumeration
Palomares, Melanie; Smith, Paul R.; Pitts, Carole Holley; Carter, Breana M.
2011-01-01
Visual acuity and contrast sensitivity progressively diminish with increasing viewing eccentricity. Here we evaluated how visual enumeration is affected by visual eccentricity, and whether subitizing capacity, the accurate enumeration of a small number (∼3) of items, decreases with more eccentric viewing. Participants enumerated gratings whose (1) stimulus size was constant across eccentricity, and (2) whose stimulus size scaled by a cortical magnification factor across eccentricity. While we found that enumeration accuracy and precision decreased with increasing eccentricity, cortical magnification scaling of size neutralized the deleterious effects of increasing eccentricity. We found that size scaling did not affect subitizing capacities, which were nearly constant across all eccentricities. We also found that size scaling modulated the variation coefficients, a normalized metric of enumeration precision, defined as the standard deviation divided by the mean response. Our results show that the inaccuracy and imprecision associated with increasing viewing eccentricity is due to limitations in spatial resolution. Moreover, our results also support the notion that the precise number system is restricted to small numerosities (represented by the subitizing limit), while the approximate number system extends across both small and large numerosities (indexed by variation coefficients) at large eccentricities. PMID:21695212
Arabidopsis research requires a critical re-evaluation of genetic tools.
Nikonorova, Natalia; Yue, Kun; Beeckman, Tom; De Smet, Ive
2018-06-27
An increasing number of reports question conclusions based on loss-of-function lines that have unexpected genetic backgrounds. In this opinion paper, we urge researchers to meticulously (re)investigate phenotypes retrieved from various genetic backgrounds and be critical regarding some previously drawn conclusions. As an example, we provide new evidence that acr4-2 mutant phenotypes with respect to columella stem cells are due to the lack of ACR4 and not - at least not as a major contributor - to a mutation in QRT1. In addition, we take the opportunity to alert the scientific community about the qrt1-2 background of a large number of Syngenta Arabidopsis Insertion Library (SAIL) T-DNA lines, a feature that is not commonly recognized by Arabidopsis researchers. This qrt1-2 background might have an important impact on the interpretation of the results obtained using these research tools, now and in the past. In conclusion, as a community, we should continuously assess and - if necessary - correct our conclusions based on the large number of (genetic) tools our work is built on. In addition, the positive or negative results of this self-criticism should be made available to the scientific community.
Szadokierski, Isadora; Burns, Matthew K
2008-10-01
Drill procedures have been used to increase the retention of various types of information, but little is known about the causal mechanisms of these techniques. The current study compared the effect of two key features of drill procedures, a large number of opportunities to respond (OTR) and a drill ratio that maintains a high percentage of known to unknown items (90% known). Using a factorial design, 27 4th graders were taught the pronunciation and meaning of Esperanto words using four versions of incremental rehearsal that varied on two factors, percentage of known words (high - 90% vs. moderate - 50%) and the number of OTR (high vs. low). A within-subject ANOVA revealed a significant main effect for OTR and non-significant effects for drill ratio and the interaction between the two variables. Moreover, it was found that increasing OTR from low to high yielded a large effect size (d=2.46), but increasing the percentage of known material from moderate (50%) to high (90%) yielded a small effect (d=0.16). These results suggest that a high number of OTR may be a key feature of flashcard drill techniques in promoting learning and retention.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
Applications of species accumulation curves in large-scale biological data analysis.
Deng, Chao; Daley, Timothy; Smith, Andrew D
2015-09-01
The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k -mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
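The classical Good-Toulmin series underlying the estimator can be written down in a few lines; the sketch below implements the unsmoothed series, which is known to become unstable when extrapolating far beyond the original sample size, which is exactly why the rational function approximations described above are needed.

```python
from collections import Counter

def good_toulmin(counts, t):
    """Classical Good-Toulmin prediction of the number of NEW species observed
    if sampling effort is increased by a further fraction t of the original
    sample (t <= 1 for stable behaviour; the raw series diverges for t > 1).
    `counts` holds the number of times each observed species was seen."""
    freq_of_freqs = Counter(counts)   # n_j: how many species were seen exactly j times
    return sum(((-1) ** (j + 1)) * (t ** j) * n_j
               for j, n_j in freq_of_freqs.items())

# Example: 6 species seen once, 3 seen twice, 1 seen five times;
# predicted new species if we sample half as much again (t = 0.5).
counts = [1] * 6 + [2] * 3 + [5]
print(good_toulmin(counts, 0.5))
```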
Genetic progress estimation strategy for upright common bean plants using recurrent selection.
Pereira, L A; Abreu, A F B; Júnior, I C Vieira; Pires, L P M; Ramalho, M A P
2017-03-22
Common bean producers in Brazil tend to grow plants as upright as possible. Because the control of this trait involves a large number of genes, recurrent selection (RS) is the best approach for successful plant improvement. Because plant architecture (PA) is evaluated using scores and usually has high heritability, RS for PA is performed through visual selection in generation S0. The aim of the present study was to evaluate selection progress and investigate whether this progress varies with the number of selected progenies or the generation evaluated. In addition, the effect of RS for the upright (PA) trait on progeny grain yield (GY) was investigated. Data of progenies S0:3 and S0:4 of the fifth, eighth, and twelfth cycles were used. A combined analysis of variance was performed using the adjusted means of the 47 best progenies from each generation and cycle, using two control cultivars as reference. A joint analysis of the two generations used during the evaluation of progenies for the different cycles was also performed. The genetic progress (GP) was estimated by fitting a linear regression equation to the relationship between the adjusted mean of each cycle and the number of cycles. We found that RS was efficient and the estimated GP of the evaluated progenies was 4.5%. Based on the GY heritability estimates, in more advanced generations selection for GY can be successfully performed on progenies. Thus, the selection already done for PA in F2 could be associated with the most productive progenies.
Pollen-limited reproduction in blue oak: Implications for wind pollination in fragmented populations
Knapp, E.E.; Goedde, M.A.; Rice, K.J.
2001-01-01
Human activities are fragmenting forests and woodlands worldwide, but the impact of reduced tree population densities on pollen transfer in wind-pollinated trees is poorly understood. In a 4-year study, we evaluated relationships among stand density, pollen availability, and seed production in a thinned and fragmented population of blue oak (Quercus douglasii). Geographic coordinates were established and flowering interval determined for 100 contiguous trees. The number of neighboring trees within 60 m that released pollen during each tree's flowering period was calculated and relationships with acorn production explored using multiple regression. We evaluated the effects of female flower production, average temperature, and relative humidity during the pollination period, and number of pollen-producing neighbors on individual trees' acorn production. All factors except temperature were significant in at least one of the years of our study, but the combination of factors influencing acorn production varied among years. In 1996, a year of large acorn crop size, acorn production was significantly positively associated with number of neighboring pollen producers and density of female flowers. In 1997, 1998, and 1999, many trees produced few or no acorns, and significant associations between number of pollen-producing neighbors and acorn production were only apparent among moderately to highly reproductive trees. Acorn production by these reproductive trees in 1997 was significantly positively associated with number of neighboring pollen producers and significantly negatively associated with average relative humidity during the pollination period. In 1998, no analysis was possible, because too few trees produced a moderate to large acorn crop. Only density of female flowers was significantly associated with acorn production of moderately to highly reproductive trees in 1999. The effect of spatial scale was also investigated by conducting analyses with pollen producers counted in radii ranging from 30 m to 80 m. The association between number of pollen-producing neighbors and acorn production was strongest when neighborhood sizes of 60 m or larger were considered. Our results suggest that fragmentation and thinning of blue oak woodlands may reduce pollen availability and limit reproduction in this wind-pollinated species.
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration.
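The general recipe, independent of the UWBCS specifics, can be sketched as follows: evaluate a small seed set with the expensive simulator, fit a surrogate classifier, and spend the remaining budget on the combinations the surrogate ranks as most likely to match the calibration targets. The simulator stub, pool size, and scikit-learn classifier below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical pool of candidate parameter combinations (rows) for a simulation model.
pool = rng.uniform(size=(20000, 6))

def run_simulation_and_score(params):
    """Placeholder for an expensive simulation run; returns True if the simulated
    outcomes closely match the calibration targets (stand-in criterion only)."""
    return bool(np.sum(params) < 1.5)

# 1) Evaluate a small random seed set with the expensive simulator.
seed_idx = rng.choice(len(pool), size=500, replace=False)
X = pool[seed_idx]
y = np.array([run_simulation_and_score(p) for p in X])

# 2) Fit a surrogate and rank the remaining combinations by predicted acceptance.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
remaining = np.setdiff1d(np.arange(len(pool)), seed_idx)
scores = surrogate.predict_proba(pool[remaining])[:, 1]

# 3) Spend the remaining simulation budget on the most promising candidates.
next_batch = remaining[np.argsort(scores)[::-1][:500]]
```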
Human Papillomavirus (HPV) Genotyping: Automation and Application in Routine Laboratory Testing
Torres, M; Fraile, L; Echevarria, JM; Hernandez Novoa, B; Ortiz, M
2012-01-01
A large number of assays designed for genotyping human papillomaviruses (HPV) have been developed in recent years. They perform within a wide range of analytical sensitivity and specificity values for the different viral types, and are used for diagnosis, epidemiological studies, evaluation of vaccines, and implementation and monitoring of vaccination programs. Methods for specific genotyping of HPV-16 and HPV-18 are also useful for the prevention of cervical cancer in screening programs. Some commercial tests are, in addition, fully or partially automated. Automation of HPV genotyping presents advantages such as the simplicity of the testing procedure for the operator, the ability to process a large number of samples in a short time, and the reduction of human errors from manual operations, allowing better quality assurance and a reduction of cost. The present review collects information about the current HPV genotyping tests, with special attention to practical aspects influencing their use in clinical laboratories. PMID:23248734
Design of optical mirror structures
NASA Technical Reports Server (NTRS)
Soosaar, K.
1971-01-01
The structural requirements for large optical telescope mirrors were studied, with particular emphasis placed on the three-meter Large Space Telescope primary mirror. Analysis approaches through finite element methods were evaluated with the testing and verification of a number of element types suitable for particular mirror loadings and configurations. The environmental conditions that a mirror will experience were defined, and a candidate list of suitable mirror materials with their properties was compiled. The relation of the mirror mechanical behavior to the optical performance is discussed, and a number of suitable design criteria are proposed and implemented. A systematic method to obtain the best structure for the three-meter diffraction-limited system is outlined. Finite element programs, using the STRUDL 2 analysis system, were written for specific mirror structures encompassing all types of active and passive mirror designs. Parametric studies on support locations, effects of shear deformation, diameter to thickness ratios, lightweight and sandwich mirror configurations, and thin shell active mirror needs were performed.
Applicability of a Conservative Margin Approach for Assessing NDE Flaw Detectability
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2007-01-01
Nondestructive Evaluation (NDE) procedures are required to detect flaws in structures with a high detection percentage and high confidence. Conventional Probability of Detection (POD) methods are statistical in nature and require detection data from a relatively large number of flaw specimens. In many circumstances, due to the high cost and long lead time, it is impractical to build the large set of flaw specimens that is required by the conventional POD methodology. Therefore, in such situations it is desirable to have a flaw detectability estimation approach that allows for a reduced number of flaw specimens but provides a high degree of confidence in establishing the flaw detectability size. This paper presents an alternative approach called the conservative margin approach (CMA). To investigate the applicability of the CMA approach, flaw detectability sizes determined by the CMA and POD approaches have been compared on actual datasets. The results of these comparisons are presented and the applicability of the CMA approach is discussed.
Hanemaaijer, Nicolien M; Sikkema-Raddatz, Birgit; van der Vries, Gerben; Dijkhuizen, Trijnie; Hordijk, Roel; van Essen, Anthonie J; Veenstra-Knol, Hermine E; Kerstjens-Frederikse, Wilhelmina S; Herkert, Johanna C; Gerkes, Erica H; Leegte, Lamberta K; Kok, Klaas; Sinke, Richard J; van Ravenswaaij-Arts, Conny M A
2012-01-01
The correct interpretation of copy number gains in patients with developmental delay and multiple congenital anomalies is hampered by the large number of copy number variations (CNVs) encountered in healthy individuals. The variable phenotype associated with copy number gains makes interpretation even more difficult. Literature shows that inheritance, size and presence in healthy individuals are commonly used to decide whether a certain copy number gain is pathogenic, but no general consensus has been established. We aimed to develop guidelines for interpreting gains detected by array analysis using array CGH data of 300 patients analysed with the 105K Agilent oligo array in a diagnostic setting. We evaluated the guidelines in a second, independent, cohort of 300 patients. In the first 300 patients 797 gains of four or more adjacent oligonucleotides were observed. Of these, 45.4% were de novo and 54.6% were familial. In total, 94.8% of all de novo gains and 87.1% of all familial gains were concluded to be benign CNVs. Clinically relevant gains ranged from 288 to 7912 kb in size, and were significantly larger than benign gains and gains of unknown clinical relevance (P<0.001). Our study showed that a threshold of 200 kb is acceptable in a clinical setting, whereas heritability does not exclude a pathogenic nature of a gain. Evaluation of the guidelines in the second cohort of 300 patients revealed that the interpretation guidelines were clear, easy to follow and efficient. PMID:21934709
USSR and Eastern Europe Scientific Abstracts, Geophysics, Astronomy and Space, Number 392.
1977-03-15
... evaluation of the parameters of the observed field. It is proposed, for models formed from a set of elements as described, that the problem of ... the differential energy spectra for protons during the time of large flares on the sun. [303] IMPROVEMENT OF AES ORBITAL ELEMENTS. Moscow/Leningrad, ULUSHSHENIYE ORBITAL'NYKH ELEMENTOV ISZ (Improvement in the Orbital Elements of an Artificial Earth Satellite), Leningrad Forestry Academy.
Analysis of a Distributed Pulse Power System Using a Circuit Analysis Code
1979-06-01
The dose rate was then integrated to give a number that could be compared with measurements made using thermal luminescent dosimeters (TLDs). ... A sophisticated computer code (SCEPTRE), used to analyze electronic circuits, was used to evaluate the performance of a large flash X-ray machine. This device was ...
1983-06-01
... large species lists into single numerical expressions. Species diversity is usually defined as a function of the number of species (i.e., species richness) ... (1958, Lloyd and Ghelardi 1964, Pielou 1969). The primary motivation for calculating species diversity indices based on richness or abundance is ... diversity was an intrinsic property of ecological processes and an important factor in defining ecosystem structure and function (McArthur 1955 ...).
Increased numbers of Demodex in contact lens wearers.
Jalbert, Isabelle; Rejab, Shazana
2015-06-01
The aim of this study was to determine if Demodex infestation is more frequent in contact lens wearers than in nonwearers. Secondary aims were to evaluate the effects of Demodex on the ocular surface (symptoms and signs) and to evaluate the ability of confocal laser scanning microscopy to detect and quantify the Demodex infestation compared with the conventional light microscopic technique. Forty Asian female participants (20 nonwearers, 20 lens wearers) with a mean (± SD) age of 27 (± 9) years were recruited. Ocular comfort scores (Ocular Surface Disease Index, Ocular Comfort Index, and Dry Eye Questionnaire), vital staining (corneal, conjunctival, and lid wiper), tear osmolarity, tear breakup time, and meibomian gland evaluation were evaluated. Demodex was detected using in vivo confocal microscopy and conventional light microscopy. The number of Demodex was higher in lens wearers than in nonwearers (7.6 [± 5.8] vs. 5.0 [± 3.1]; p = 0.02). Demodex was observed in a large majority (90%) of lens wearers and in 65% of nonwearers using confocal microscopy (p = 0.06). The detection rate was lower in both groups using conventional light microscopy (p = 0.003) where Demodex could only be confirmed in 70% and 60% of lens wearers and nonwearers, respectively. The number of Demodex tended to increase with age (ρ = 0.28, p = 0.08), but Demodex did not appear to affect ocular comfort or any clinical signs (p > 0.05). Contact lens wearers harbor Demodex as frequently as nonwearers and in higher numbers, which is best detected using in vivo confocal microscopy. The significance of these findings is uncertain because no associations were found with any symptoms and signs of dry eye disease.
Evaluation of catch-and-release regulations on Brook Trout in Pennsylvania streams
Jason Detar,; Kristine, David; Wagner, Tyler; Greene, Tom
2014-01-01
In 2004, the Pennsylvania Fish and Boat Commission implemented catch-and-release (CR) regulations on headwater stream systems to determine if eliminating angler harvest would result in an increase in the number of adult (≥100 mm) or large (≥175 mm) Brook Trout Salvelinus fontinalis. Under the CR regulations, angling was permitted on a year-round basis, no Brook Trout could be harvested at any time, and there were no tackle restrictions. A before-after-control-impact design was used to evaluate the experimental regulations. Brook Trout populations were monitored in 16 treatment (CR regulations) and 7 control streams (statewide regulations) using backpack electrofishing gear periodically for up to 15 years (from 1990 to 2003 or 2004) before the implementation of the CR regulations and over a 7-8-year period (from 2004 or 2005 to 2011) after implementation. We used Poisson mixed models to evaluate whether electrofishing catch per effort (CPE; catch/100 m²) of adult (≥100 mm) or large (≥175 mm) Brook Trout increased in treatment streams as a result of implementing CR regulations. Brook Trout CPE varied among sites and among years, and there was no significant effect (increase or decrease) of CR regulations on the CPE of adult or large Brook Trout. Results of our evaluation suggest that CR regulations were not effective at improving the CPE of adult or large Brook Trout in Pennsylvania streams. Low angler use, high voluntary catch and release, and slow growth rates in infertile headwater streams are likely the primary reasons for the lack of response.
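One common way to parameterize such a before-after-control-impact (BACI) Poisson mixed model is sketched below; this is a generic specification for illustration, not necessarily the exact model the authors fitted.

```latex
\log\big(\mathrm{E}[\mathrm{CPE}_{ij}]\big)
  = \beta_0 + \beta_1\,\mathrm{Period}_j + \beta_2\,\mathrm{Treatment}_i
  + \beta_3\,(\mathrm{Period}_j \times \mathrm{Treatment}_i) + u_i + v_j,
\qquad u_i \sim N(0,\sigma^2_{\mathrm{site}}),\; v_j \sim N(0,\sigma^2_{\mathrm{year}})
```

Here Period indicates before versus after 2004, Treatment indicates a CR versus control stream, u_i and v_j are site and year random effects, and the interaction coefficient β3 carries the BACI effect; a β3 indistinguishable from zero corresponds to the "no significant effect" finding above.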
[Decision modeling for economic evaluation of health technologies].
de Soárez, Patrícia Coelho; Soares, Marta Oliveira; Novaes, Hillegonda Maria Dutilh
2014-10-01
Most economic evaluations that participate in decision-making processes for incorporation and financing of technologies of health systems use decision models to assess the costs and benefits of the compared strategies. Despite the large number of economic evaluations conducted in Brazil, there is a pressing need to conduct an in-depth methodological study of the types of decision models and their applicability in our setting. The objective of this literature review is to contribute to the knowledge and use of decision models in the national context of economic evaluations of health technologies. This article presents general definitions about models and concerns with their use; it describes the main models: decision trees, Markov chains, micro-simulation, simulation of discrete and dynamic events; it discusses the elements involved in the choice of model; and exemplifies the models addressed in national economic evaluation studies of diagnostic and therapeutic preventive technologies and health programs.
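As a minimal illustration of one of the model types listed above, a three-state Markov cohort model (Well, Sick, Dead) can be run in a few lines; all transition probabilities, costs, and utilities below are invented for illustration only.

```python
import numpy as np

# Minimal three-state Markov cohort model (Well, Sick, Dead).
# Per-cycle transition probabilities are illustrative, not from any real evaluation.
P = np.array([[0.85, 0.10, 0.05],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
state_costs = np.array([100.0, 2500.0, 0.0])   # cost per cycle in each state
state_qalys = np.array([0.95, 0.60, 0.0])      # utility per cycle in each state

cohort = np.array([1.0, 0.0, 0.0])             # everyone starts in "Well"
total_cost = total_qaly = 0.0
for cycle in range(1, 21):                     # 20 annual cycles, 3% discount rate
    cohort = cohort @ P
    discount = 1.03 ** -cycle
    total_cost += discount * cohort @ state_costs
    total_qaly += discount * cohort @ state_qalys

print(f"Discounted cost: {total_cost:.0f}, discounted QALYs: {total_qaly:.2f}")
```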
NASA Technical Reports Server (NTRS)
Hyder, Imran; Schaefer, Joseph; Justusson, Brian; Wanthal, Steve; Leone, Frank; Rose, Cheryl
2017-01-01
Reducing the timeline for development and certification of composite structures has been a long-standing objective of the aerospace industry. This timeline can be further exacerbated when attempting to integrate new fiber-reinforced composite materials due to the large amount of testing required at every level of design. Computational progressive damage and failure analysis (PDFA) attempts to mitigate this effect; however, new PDFA methods have been slow to be adopted in industry since material model evaluation techniques have not been fully defined. This study presents an efficient evaluation framework which uses a piecewise verification and validation (V&V) approach for PDFA methods. Specifically, the framework is applied to evaluate PDFA research codes within the context of intralaminar damage. Methods are incrementally taken through various V&V exercises specifically tailored to study PDFA intralaminar damage modeling capability. Finally, methods are evaluated against a defined set of success criteria to highlight successes and limitations.
Smith, Morgan; Warland, Jane; Smith, Colleen
2012-03-01
Online role-play has the potential to actively engage students in authentic learning experiences and help develop their clinical reasoning skills. However, evaluation of student learning for this kind of simulation focuses mainly on the content and outcome of learning, rather than on the process of learning through student engagement. This article reports on the use of a student engagement framework to evaluate an online role-play offered as part of a course in Bachelor of Nursing and Bachelor of Midwifery programs. Instruments that measure student engagement to date have targeted large numbers of students at program and institutional levels, rather than at the level of a specific learning activity. Although the framework produced some useful findings for evaluation purposes, further refinement of the questions is required to be certain that deep learning results from the engagement that occurs with course-level learning initiatives.
Genomic Selection in Dairy Cattle: The USDA Experience.
Wiggans, George R; Cole, John B; Hubbard, Suzanne M; Sonstegard, Tad S
2017-02-08
Genomic selection has revolutionized dairy cattle breeding. Since 2000, assays have been developed to genotype large numbers of single-nucleotide polymorphisms (SNPs) at relatively low cost. The first commercial SNP genotyping chip was released with a set of 54,001 SNPs in December 2007. Over 15,000 genotypes were used to determine which SNPs should be used in genomic evaluation of US dairy cattle. Official USDA genomic evaluations were first released in January 2009 for Holsteins and Jerseys, in August 2009 for Brown Swiss, in April 2013 for Ayrshires, and in April 2016 for Guernseys. Producers have accepted genomic evaluations as accurate indications of a bull's eventual daughter-based evaluation. The integration of DNA marker technology and genomics into the traditional evaluation system has doubled the rate of genetic progress for traits of economic importance, decreased generation interval, increased selection accuracy, reduced previous costs of progeny testing, and allowed identification of recessive lethals.
TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.
2011-04-10
Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.
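The small/large grouping and monthly averaging described above can be expressed compactly; the pandas sketch below uses invented daily records and assumed column names, not the Rome or Learmonth data formats.

```python
import pandas as pd

SMALL = {"A", "B", "C", "H", "J"}   # small sunspot-group Zurich classes
LARGE = {"D", "E", "F", "G"}        # large sunspot-group Zurich classes

# Hypothetical daily records: one row per observed sunspot group.
df = pd.DataFrame({
    "date": pd.to_datetime(["1989-03-01", "1989-03-01", "1989-03-02", "1989-04-10"]),
    "zurich_class": ["C", "E", "F", "B"],
})

df["size_group"] = df["zurich_class"].map(lambda c: "large" if c in LARGE else "small")

# Daily counts of small/large groups, then the monthly mean of those daily counts.
daily = df.groupby(["date", "size_group"]).size().unstack(fill_value=0)
monthly_mean = daily.resample("MS").mean()
print(monthly_mean)
```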
2010-01-01
Background Five DNA regions, namely, rbcL, matK, ITS, ITS2, and psbA-trnH, have been recommended as primary DNA barcodes for plants. Studies evaluating these regions for species identification in the large plant taxon, which includes a large number of closely related species, have rarely been reported. Results The feasibility of using the five proposed DNA regions was tested for discriminating plant species within Asteraceae, the largest family of flowering plants. Among these markers, ITS2 was the most useful in terms of universality, sequence variation, and identification capability in the Asteraceae family. The species discriminating power of ITS2 was also explored in a large pool of 3,490 Asteraceae sequences that represent 2,315 species belonging to 494 different genera. The result shows that ITS2 correctly identified 76.4% and 97.4% of plant samples at the species and genus levels, respectively. In addition, ITS2 displayed a variable ability to discriminate related species within different genera. Conclusions ITS2 is the best DNA barcode for the Asteraceae family. This approach significantly broadens the application of DNA barcoding to resolve classification problems in the family Asteraceae at the genera and species levels. PMID:20977734
Very-large-area CCD image sensors: concept and cost-effective research
NASA Astrophysics Data System (ADS)
Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.
2009-01-01
A new-generation full-frame 36 × 48 mm², 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications is developed by means of the so-called building block concept. The 48Mp devices are formed by stitching 1k × 1k building blocks with 6.0 µm pixel pitch in 6 × 8 (h × v) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is relatively small in order to obtain data from many devices. Evaluation of the basic parameters such as the image pixel and on-chip amplifier provides us with statistical data using a limited number of wafers, whereas the large-area devices are evaluated for aspects typical of large-sensor operation and performance, such as the charge transport efficiency. Combined with the usability of multi-layer reticles, the sensor development is cost effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke⁻ and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS). Hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets the wide-angle photography demands.
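As a quick consistency check (not taken from the paper itself), the quoted 73 dB follows from the usual definition of dynamic range as the ratio of full-well capacity to read noise:

```latex
\mathrm{DR} = 20\log_{10}\!\left(\frac{58\,000\ e^-}{12\ e^-}\right) \approx 73.7\ \mathrm{dB}
```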
Satellite relay telemetry of seismic data in earthquake prediction and control
Jackson, Wayne H.; Eaton, Jerry P.
1971-01-01
The Satellite Telemetry Earthquake Monitoring Program was started in FY 1968 to evaluate the applicability of satellite relay telemetry in the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research (NCER) in 3 regions along the San Andreas fault in central California; and the experience of installing and operating the clusters and in reducing and analyzing the seismic data from them was to provide the raw materials for evaluation in the satellite relay telemetry project.
Diploid male dynamics under different numbers of sexual alleles and male dispersal abilities.
Faria, Luiz R R; Soares, Elaine Della Giustina; Carmo, Eduardo do; Oliveira, Paulo Murilo Castro de
2016-09-01
Insects in the order Hymenoptera (bees, wasps and ants) present a haplodiploid system of sexual determination in which fertilized eggs become females and unfertilized eggs males. Under a single-locus complementary sex-determination (sl-CSD) system, the sex of a specimen depends on the alleles at a single locus: when diploid, an individual will be a female if heterozygous and a male if homozygous. Significant diploid male (DM) production may drive a population to an extinction scenario called the "diploid male vortex". We aimed at studying the dynamics of populations of an sl-CSD organism under several combinations of two parameters: male flight abilities and number of sexual alleles. In these simulations, we evaluated the frequency of DM and a genetic diversity measure over 10,000 generations. The number of sexual alleles varied from 10 to 100 and, at each generation, a male offspring might fly to another random site within a varying radius R. Two main results emerge from our simulations: (i) the number of DM depends more on the male flight radius than on the number of alleles; (ii) in large geographic regions, the effect of the male flight radius on allelic diversity turns out to be much less pronounced than in small regions. In other words, small regions where inbreeding normally appears recover genetic diversity due to large flight radii. These results may be particularly relevant when considering the population dynamics of species with increasingly limited dispersal ability (e.g., forest-dependent species of euglossine bees in fragmented landscapes).
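A much-simplified, non-spatial sketch of sl-CSD dynamics is given below: it tracks only the diploid-male fraction under random mating and therefore omits the flight-radius and spatial-structure ingredients that are central to the actual study; all parameter values are illustrative.

```python
import random

def simulate_slcsd(n_alleles=20, n_females=500, generations=200, seed=1):
    """Track the fraction of diploid offspring that are diploid males under
    single-locus CSD with random mating (non-spatial simplification)."""
    rng = random.Random(seed)
    alleles = list(range(n_alleles))
    # Each female carries two csd alleles; haploid males carry one.
    females = [(rng.choice(alleles), rng.choice(alleles)) for _ in range(n_females)]
    males = [rng.choice(alleles) for _ in range(n_females)]

    dm_fraction = []
    for _ in range(generations):
        next_females, next_males, diploid_males = [], [], 0
        while len(next_females) < n_females:
            mother = rng.choice(females)
            father_allele = rng.choice(males)      # random mating, no spatial structure
            egg_allele = rng.choice(mother)
            if egg_allele == father_allele:
                diploid_males += 1                 # homozygous diploid -> diploid male
            else:
                next_females.append((egg_allele, father_allele))
            next_males.append(rng.choice(mother))  # unfertilized eggs -> haploid males
        females, males = next_females, next_males[:n_females]
        dm_fraction.append(diploid_males / (diploid_males + n_females))
    return dm_fraction

print(simulate_slcsd()[-1])   # long-run diploid-male fraction for 20 alleles
```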
Jacques, Nathalie; Vimond, Nadege; Conforti, Rosa; Griscelli, Franck; Lecluse, Yann; Laplanche, Agnes; Malka, David; Vielh, Philippe; Farace, Françoise
2008-09-15
Circulating endothelial cells (CEC) are currently proposed as a potential biomarker for measuring the impact of anti-angiogenic treatments in cancer. However, the lack of consensus on the appropriate method of CEC measurement has led to conflicting data in cancer patients. A validated assay adapted for evaluating the clinical utility of CEC in large cohorts of patients undergoing anti-angiogenic treatments is needed. We developed a four-color flow cytometric assay to measure CEC as CD31+, CD146+, CD45−, 7-amino-actinomycin-D (7AAD)− events in whole blood. The distinctive features of the assay are: (1) staining of 1 ml whole blood, (2) use of a whole blood IgPE control to measure accurately background noise, (3) accumulation of a large number of events (almost 5 × 10⁶) to ensure statistical analysis, and (4) use of 10 µm fluorescent microbeads to evaluate the event size. Assay reproducibility was determined in duplicate aliquots of samples drawn from 20 metastatic cancer patients. Assay linearity was tested by spiking whole blood with low numbers of HUVEC. Five-color flow cytometric experiments with CD144 were performed to confirm the endothelial origin of the cells. CEC were measured in 20 healthy individuals and 125 patients with metastatic cancer. Reproducibility was good between duplicate aliquots (r² = 0.948, mean difference between duplicates of 0.86 CEC/ml). Detected HUVEC correlated with spiked HUVEC (r² = 0.916, mean recovery of 100.3%). Co-staining of CD31, CD146 and CD144 confirmed the endothelial nature of cells identified as CEC. Median CEC levels were 6.5/ml (range, 0-15) in healthy individuals and 15.0/ml (range, 0-179) in patients with metastatic carcinoma (p<0.001). The assay proposed here allows reproducible and sensitive measurement of CEC by flow cytometry and could help evaluate CEC as biomarkers of anti-angiogenic therapies in large cohorts of patients.
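The gating definition (CD31+, CD146+, CD45−, 7AAD− events) translates directly into boolean logic; the sketch below applies it to a toy per-event table with invented intensities and a single illustrative threshold, not the authors' instrument settings or compensation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_events = 500_000   # the real assay accumulates on the order of 5 x 10^6 events per ml

# Toy per-event fluorescence intensities (arbitrary units); real data come from FCS files.
events = pd.DataFrame({
    "CD31":  rng.lognormal(1.0, 1.0, n_events),
    "CD146": rng.lognormal(0.5, 1.0, n_events),
    "CD45":  rng.lognormal(2.0, 1.0, n_events),
    "7AAD":  rng.lognormal(0.5, 1.0, n_events),
})

threshold = 10.0   # illustrative positivity cut-off applied to every channel
cec_gate = (
    (events["CD31"] > threshold) &
    (events["CD146"] > threshold) &
    (events["CD45"] <= threshold) &    # CD45-negative excludes leukocytes
    (events["7AAD"] <= threshold)      # 7AAD-negative keeps viable cells
)
print(int(cec_gate.sum()), "events fall inside the CEC gate")
```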
A high-throughput core sampling device for the evaluation of maize stalk composition
2012-01-01
Background A major challenge in the identification and development of superior feedstocks for the production of second generation biofuels is the rapid assessment of biomass composition in a large number of samples. Currently, highly accurate and precise robotic analysis systems are available for the evaluation of biomass composition, on a large number of samples, with a variety of pretreatments. However, the lack of an inexpensive and high-throughput process for large scale sampling of biomass resources is still an important limiting factor. Our goal was to develop a simple mechanical maize stalk core sampling device that can be utilized to collect uniform samples of a dimension compatible with robotic processing and analysis, while allowing the collection of hundreds to thousands of samples per day. Results We have developed a core sampling device (CSD) to collect maize stalk samples compatible with robotic processing and analysis. The CSD facilitates the collection of thousands of uniform tissue cores consistent with high-throughput analysis required for breeding, genetics, and production studies. With a single CSD operated by one person with minimal training, more than 1,000 biomass samples were obtained in an eight-hour period. One of the main advantages of using cores is the high level of homogeneity of the samples obtained and the minimal opportunity for sample contamination. In addition, the samples obtained with the CSD can be placed directly into a bath of ice, dry ice, or liquid nitrogen maintaining the composition of the biomass sample for relatively long periods of time. Conclusions The CSD has been demonstrated to successfully produce homogeneous stalk core samples in a repeatable manner with a throughput substantially superior to the currently available sampling methods. Given the variety of maize developmental stages and the diversity of stalk diameter evaluated, it is expected that the CSD will have utility for other bioenergy crops as well. PMID:22548834
Large Scale Application of Vibration Sensors for Fan Monitoring at Commercial Layer Hen Houses
Chen, Yan; Ni, Ji-Qin; Diehl, Claude A.; Heber, Albert J.; Bogan, Bill W.; Chai, Li-Long
2010-01-01
Continuously monitoring the operation of each individual fan can significantly improve the measurement quality of aerial pollutant emissions from animal buildings that have a large number of fans. To monitor the fan operation by detecting the fan vibration is a relatively new technique. A low-cost electronic vibration sensor was developed and commercialized. However, its large scale application has not yet been evaluated. This paper presents long-term performance results of this vibration sensor at two large commercial layer houses. Vibration sensors were installed on 164 fans of 130 cm diameter to continuously monitor the fan on/off status for two years. The performance of the vibration sensors was compared with fan rotational speed (FRS) sensors. The vibration sensors exhibited quick response and high sensitivity to fan operations and therefore satisfied the general requirements of air quality research. The study proved that detecting fan vibration was an effective method to monitor the on/off status of a large number of single-speed fans. The vibration sensor itself was $2 more expensive than a magnetic proximity FRS sensor but the overall cost including installation and data acquisition hardware was $77 less expensive than the FRS sensor. A total of nine vibration sensors failed during the study and the failure rate was related to the batches of product. A few sensors also exhibited unsteady sensitivity. As a new product, the quality of the sensor should be improved to make it more reliable and acceptable. PMID:22163544
Simulation studies using multibody dynamics code DART
NASA Technical Reports Server (NTRS)
Keat, James E.
1989-01-01
DART is a multibody dynamics code developed by Photon Research Associates for the Air Force Astronautics Laboratory (AFAL). The code is intended primarily to simulate the dynamics of large space structures, particularly during the deployment phase of their missions. DART integrates nonlinear equations of motion numerically. The number of bodies in the system being simulated is arbitrary. The bodies' interconnection joints can have an arbitrary number of degrees of freedom between 0 and 6. Motions across the joints can be large. Provision for simulating on-board control systems is provided. Conservation of energy and momentum, when applicable, are used to evaluate DART's performance. After a brief description of DART, studies made to test the program prior to its delivery to AFAL are described. The first is a large-angle reorientation of a flexible spacecraft consisting of a rigid central hub and four flexible booms. Reorientation was accomplished by a single-cycle sine wave shape torque input. In the second study, an appendage mounted on a spacecraft was slewed through a large angle. Four closed-loop control systems provided control of this appendage and of the spacecraft's attitude. The third study simulated the deployment of the rim of a bicycle wheel configuration large space structure. This system contained 18 bodies. An interesting and unexpected feature of the dynamics was a pulsing phenomenon experienced by the stays, whose payout was used to control the deployment. A short description of the current status of DART is given.
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
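For orientation, a minimal implementation of the classic DE/rand/1/bin scheme on a toy objective is sketched below; the actual application wraps a Navier-Stokes solver and parallel function evaluation, which this sketch does not attempt to reproduce.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, f=0.8, cr=0.9, generations=200, seed=0):
    """Classic DE/rand/1/bin minimizer; `bounds` is a list of (low, high) per dimension."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    cost = np.array([obj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + f * (b - c), bounds[:, 0], bounds[:, 1])
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True        # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            trial_cost = obj(trial)
            if trial_cost <= cost[i]:              # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

# Toy usage: minimize a shifted sphere function in 5 dimensions.
x_best, f_best = differential_evolution(lambda x: np.sum((x - 0.3) ** 2),
                                        bounds=[(-1.0, 1.0)] * 5)
print(x_best, f_best)
```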
Test and evaluation procedures for Sandia's Teraflops Operating System (TOS) on Janus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel Wayne
This report describes the test and evaluation methods by which the Teraflops Operating System, or TOS, that resides on Sandia's massively-parallel computer Janus is verified for production release. Also discussed are methods used to build TOS before testing and evaluating, miscellaneous utility scripts, a sample test plan, and a proposed post-test method for quickly examining the large number of test results. The purpose of the report is threefold: (1) to provide a guide to T&E procedures, (2) to aid and guide others who will run T&E procedures on the new ASCI Red Storm machine, and (3) to document some of the history of evaluation and testing of TOS. This report is not intended to serve as an exhaustive manual for testers to conduct T&E procedures.
Systematic modelling and design evaluation of unperturbed tumour dynamics in xenografts.
Parra Guillen, Zinnia P Patricia; Mangas Sanjuan, Victor; Garcia-Cremades, Maria; Troconiz, Inaki F; Mo, Gary; Pitou, Celine; Iversen, Philip W; Wallin, Johan E
2018-04-24
Xenograft mice are largely used to evaluate the efficacy of oncological drugs during preclinical phases of drug discovery and development. Mathematical models provide a useful tool to quantitatively characterise tumour growth dynamics and also optimise upcoming experiments. To the best of our knowledge, this is the first report where unperturbed growth of a large set of tumour cell lines (n=28) has been systematically analysed using the model proposed by Simeoni in the context of non-linear mixed effects (NLME) modelling. Exponential growth was identified as the governing mechanism in the majority of the cell lines, with constant rate values ranging from 0.0204 to 0.203 day⁻¹. No common patterns could be observed across tumour types, highlighting the importance of combining information from different cell lines when evaluating drug activity. Overall, typical model parameters were precisely estimated using designs where tumour size measurements were taken every two days. Moreover, reducing the number of measurements to twice per week, or even once per week for cell lines with low growth rates, showed little impact on parameter precision. However, in order to accurately characterise parameter variability (i.e. relative standard errors below 50%), a sample size of at least 50 mice is needed. This work illustrates the feasibility of systematically applying NLME models to characterise tumour growth in drug discovery and development, and constitutes a valuable source of data to optimise experimental designs by providing an a priori sampling window and minimising the number of samples required.
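For reference, the exponential phase of a Simeoni-type unperturbed growth model takes the familiar form below (a generic statement of exponential growth, not a restatement of the authors' full model):

```latex
V(t) = V_0\, e^{\lambda_0 t}, \qquad t_{\mathrm{doubling}} = \frac{\ln 2}{\lambda_0}
```

so the reported rates of 0.0204 to 0.203 day⁻¹ correspond to tumour volume doubling times of roughly 34 down to 3.4 days.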
Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E
2006-03-01
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain deltaG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for deltaG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on values of the optimization criteria.
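The optimum allocations in such studies are obtained by Monte Carlo simulation. The sketch below shows the flavor of that kind of simulation for the simpler one-stage case: the variance components, budget, and selection intensity are invented, and the testcross trial structure of the actual work is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_stage_allocation(budget=1000, n_lines=200, var_g=1.0, var_e=4.0,
                         k=5, top_frac=0.05, n_sim=2000):
    """Simulate one-stage selection: n_lines tested at budget // n_lines locations.
    Returns (mean selection gain of the k selected lines, P(k) of hitting the top 5%)."""
    n_loc = max(budget // n_lines, 1)
    gain, hits = 0.0, 0
    for _ in range(n_sim):
        g = rng.normal(0.0, np.sqrt(var_g), n_lines)                 # true genetic values
        means = g + rng.normal(0.0, np.sqrt(var_e / n_loc), n_lines)  # phenotypic means
        sel = np.argsort(means)[-k:]                                  # best k lines by phenotype
        gain += g[sel].mean()
        threshold = np.quantile(g, 1.0 - top_frac)
        hits += np.any(g[sel] >= threshold)
    return gain / n_sim, hits / n_sim

for n_lines in (50, 100, 200, 500):
    dG, Pk = one_stage_allocation(n_lines=n_lines)
    print(f"{n_lines:4d} lines: deltaG = {dG:.2f}, P(k, top 5%) = {Pk:.2f}")
```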
Oblique-wing research airplane motion simulation with decoupling control laws
NASA Technical Reports Server (NTRS)
Kempel, Robert W.; Mc Neill, Walter E.; Maine, Trindel A.
1988-01-01
A large piloted vertical motion simulator was used to assess the performance of a preliminary decoupling control law for an early version of the F-8 oblique wing research demonstrator airplane. Evaluations were performed for five discrete flight conditions, ranging from low-altitude subsonic Mach numbers to moderate-altitude supersonic Mach numbers. Asymmetric sideforce as a function of angle of attack was found to be the primary cause of both the lateral acceleration noted in pitch and the tendency to roll into left turns and out of right turns. The flight control system was shown to be effective in generally decoupling the airplane and reducing the lateral acceleration in pitch maneuvers.
NASA Technical Reports Server (NTRS)
Nagaraja, K. S.; Kraft, R. H.
1999-01-01
The HSCT Flight Controls Group has developed longitudinal control laws, utilizing PTC aeroelastic flexible models to minimize aeroservoelastic interaction effects, for a number of flight conditions. The control law design process resulted in a higher-order controller and utilized a large number of sensors distributed along the body to minimize the flexibility effects. Processes were developed to implement these higher-order control laws for performing the dynamic gust loads and flutter analyses. The processes and their validation were documented in Reference 2 for a selected flight condition. The analytical results for additional flight conditions are presented in this document for further validation.
NASA Astrophysics Data System (ADS)
Totzeck, Michael
The intention of this chapter is to provide a fast and comprehensive overview of the principles of interferometry and the various types of interferometer, including interferogram evaluation and applications. Due to the age and the importance of the subject, you can find a number of monographs [16.1,2,3,4] and book chapters [16.5] in the literature. The number of original papers on optical interferometry is far too large to even attempt complete coverage in this chapter. Whenever possible, review papers are cited. Original papers are cited according to their aptness as starting points into the subject. This, however, reflects my personal judgment. Even if you do not share my opinion, you should find the references therein useful.
Onsite aerosol measurements for various engineered nanomaterials at industrial manufacturing plants
NASA Astrophysics Data System (ADS)
Ogura, I.; Sakurai, H.; Gamo, M.
2011-07-01
Evaluation of the health impact of, and control over exposure to, airborne engineered nanomaterials (ENMs) requires information on, inter alia, the magnitude of environmental release during various industrial processes, as well as the size distribution and morphology of the airborne ENM particles. In this study, we performed onsite aerosol measurements for various ENMs at industrial manufacturing plants. The industrial processes investigated were the collection of SiC from synthesis reactors, the synthesis and bagging of LiFePO4, and the bagging of ZnO. Real-time aerosol monitoring using condensation particle counters, optical particle counters, and an electrical low-pressure impactor revealed frequent increases in the number concentrations of submicron- and micron-sized aerosol particles, but few increases in the number concentrations of nanoparticles. SEM observations revealed a large number of submicron- and micron-sized agglomerated ENM particles.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2008-01-01
This paper describes an approach which aims at bridging the gap between the traditional Reynolds-averaged Navier-Stokes (RANS) approach and the traditional large eddy simulation (LES) approach. It has the characteristics of the very large eddy simulation (VLES) and we call this approach the partially-resolved numerical simulation (PRNS). Systematic simulations using the National Combustion Code (NCC) have been carried out for fully developed turbulent pipe flows at different Reynolds numbers to evaluate the PRNS approach. Also presented are the sample results of two demonstration cases: nonreacting flow in a single injector flame tube and reacting flow in a Lean Direct Injection (LDI) hydrogen combustor.
Large-eddy simulation of a backward facing step flow using a least-squares spectral element method
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Mittal, Rajat
1996-01-01
We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high-order Legendre spectral element spatial discretization and a least-squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. A Smagorinsky model with Van Driest near-wall damping is used for sub-grid-scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method to numerical parameters before it is applied to complex engineering problems.
Urrestarazu, Jorge; Royo, José B.; Santesteban, Luis G.; Miranda, Carlos
2015-01-01
Fingerprinting information can be used to elucidate in a robust manner the genetic structure of germplasm collections, allowing a more rational and fine assessment of genetic resources. Bayesian model-based approaches are nowadays generally preferred to infer genetic structure, but it is still largely unresolved how marker sets should be built in order to obtain a robust inference. The objective was to evaluate, in Pyrus germplasm collections, the influence of the SSR marker set size on the genetic structure inferred, also evaluating the influence of the criterion used to select those markers. Inferences were performed considering an increasing number of SSR markers that ranged from just two up to 25, incorporated one at a time into the analysis. The influence of the number of SSR markers used was evaluated by comparing the number of populations and the strength of the signal detected, as well as the similarity of the genotype assignments to populations between analyses. In order to test whether those results were influenced by the criterion used to select the SSRs, several selection scenarios based on the discrimination power or the fixation index values of the SSRs were tested. Our results indicate that population structure could be inferred accurately once a certain threshold number of SSRs was reached, which depended on the underlying structure within the genotypes, whereas the method used to select the markers included in each set appeared not to be very relevant. The minimum number of SSRs required to provide robust structure inferences and adequate measurements of the differentiation, even when low differentiation levels exist within populations, proved similar to that of the complete list of recommended markers for fingerprinting. When an SSR set size similar to the minimum marker set recommended for fingerprinting is used, only major divisions or moderate (F_ST > 0.05) differentiation of the germplasm is detected. PMID:26382618
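As a rough illustration of how inferred structure stabilizes as markers are added one at a time, the sketch below simulates genotypes from two populations and tracks assignment agreement. It uses k-means purely as a crude stand-in for the Bayesian model-based inference employed in the study, and all simulation settings are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_per_pop, n_loci = 60, 25

# Simulate two populations with different allele frequencies at each SSR-like locus
freqs = rng.uniform(0.1, 0.9, (2, n_loci))
geno = np.vstack([rng.binomial(2, freqs[p], (n_per_pop, n_loci)) for p in (0, 1)])
truth = np.repeat([0, 1], n_per_pop)

# Add markers one at a time and track how stable the inferred grouping is
prev = None
for m in range(2, n_loci + 1):
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(geno[:, :m])
    ari_truth = adjusted_rand_score(truth, labels)
    ari_prev = adjusted_rand_score(prev, labels) if prev is not None else float("nan")
    prev = labels
    print(f"{m:2d} markers: ARI vs truth = {ari_truth:.2f}, vs previous set = {ari_prev:.2f}")
```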
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for studying them, by simulating a large number of copies of the system that are subjected to a selection rule favoring the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings to propose a numerical approach that allows the infinite-time and infinite-size limit of these estimators to be extracted.
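For orientation, a toy version of the cloning (population-dynamics) estimator is sketched below for a trivial two-state chain whose scaled cumulant generating function is known exactly. The tilting convention, observable, and parameter values are illustrative only; the finite-population bias visible in such runs is precisely the kind of effect the paper analyzes.

```python
import numpy as np

def cloning_scgf(s, n_clones=1000, n_steps=2000, p_flip=0.3, seed=0):
    """Discrete-time cloning estimate of the SCGF psi(s) for the time-averaged
    number of flips K of a two-state chain, with tilting exp(-s*K)."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n_clones)
    log_psi = 0.0
    for _ in range(n_steps):
        flips = rng.random(n_clones) < p_flip            # which clones jump this step
        state = np.where(flips, 1 - state, state)
        weights = np.exp(-s * flips)                     # bias on the activity
        log_psi += np.log(weights.mean())                # running SCGF estimate
        # clone/prune: resample the population in proportion to the weights
        idx = rng.choice(n_clones, n_clones, p=weights / weights.sum())
        state = state[idx]
    return log_psi / n_steps

p = 0.3
for s in (-0.5, 0.0, 0.5):
    est = cloning_scgf(s, p_flip=p)
    exact = np.log(1 - p + p * np.exp(-s))               # flips are i.i.d., so psi is known
    print(f"s = {s:+.1f}: cloning = {est:.4f}, exact = {exact:.4f}")
```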
Endo, Satoshi; Fridlind, Ann M.; Lin, Wuyin; ...
2015-06-19
A 60-hour case study of continental boundary layer cumulus clouds is examined using two large-eddy simulation (LES) models. The case is based on observations obtained during the RACORO Campaign (Routine Atmospheric Radiation Measurement [ARM] Aerial Facility [AAF] Clouds with Low Optical Water Depths [CLOWD] Optical Radiative Observations) at the ARM Climate Research Facility's Southern Great Plains site. The LES models are driven by continuous large-scale and surface forcings, and are constrained by multi-modal and temporally varying aerosol number size distribution profiles derived from aircraft observations. We compare simulated cloud macrophysical and microphysical properties with ground-based remote sensing and aircraft observations. The LES simulations capture the observed transitions of the evolving cumulus-topped boundary layers during the three daytime periods, and generally reproduce variations of droplet number concentration with liquid water content (LWC), corresponding to the gradient between the cloud centers and cloud edges at given heights. The observed LWC values fall within the range of simulated values; the observed droplet number concentrations are commonly higher than simulated, but differences remain on par with potential estimation errors in the aircraft measurements. Sensitivity studies examine the influences of bin microphysics versus bulk microphysics, aerosol advection, supersaturation treatment, and aerosol hygroscopicity. Simulated macrophysical cloud properties are found to be insensitive in this non-precipitating case, but microphysical properties are especially sensitive to bulk microphysics supersaturation treatment and aerosol hygroscopicity.
Large density expansion of a hydrodynamic theory for self-propelled particles
NASA Astrophysics Data System (ADS)
Ihle, T.
2015-07-01
Recently, an Enskog-type kinetic theory for Vicsek-type models for self-propelled particles has been proposed [T. Ihle, Phys. Rev. E 83, 030901 (2011)]. This theory is based on an exact equation for a Markov chain in phase space and is not limited to small density. Previously, the hydrodynamic equations were derived from this theory and its transport coefficients were given in terms of infinite series. Here, I show that the transport coefficients take a simple form in the large density limit. This allows me to analytically evaluate the well-known density instability of the polarly ordered phase near the flocking threshold at moderate and large densities. The growth rate of a longitudinal perturbation is calculated and several scaling regimes, including three different power laws, are identified. It is shown that at large densities, the restabilization of the ordered phase at smaller noise is analytically accessible within the range of validity of the hydrodynamic theory. Analytical predictions for the width of the unstable band, the maximum growth rate, and for the wave number below which the instability occurs are given. In particular, the system size below which spatial perturbations of the homogeneous ordered state are stable is predicted to scale with √M, where M is the average number of collision partners. The typical time scale until the instability becomes visible is calculated and is proportional to M.
Factors contributing to airborne particle dispersal in the operating room.
Noguchi, Chieko; Koseki, Hironobu; Horiuchi, Hidehiko; Yonekura, Akihiko; Tomita, Masato; Higuchi, Takashi; Sunagawa, Shinya; Osaki, Makoto
2017-07-06
Surgical-site infections due to intraoperative contamination are chiefly ascribable to airborne particles carrying microorganisms. The purpose of this study is to identify the actions that increase the number of airborne particles in the operating room. Two surgeons and two surgical nurses performed three patterns of physical movements to mimic intraoperative actions, such as preparing the instrument table, gowning and donning/doffing gloves, and preparing for total knee arthroplasty. The generation and behavior of airborne particles were filmed using a fine particle visualization system, and the number of airborne particles in 2.83 m³ of air was counted using a laser particle counter. Each action was repeated five times, and the particle measurements were evaluated through one-way analysis of variance followed by Tukey-Kramer and Bonferroni-Dunn multiple comparison tests for post hoc analysis. Statistical significance was defined as a P value ≤ .01. A large number of airborne particles were observed while unfolding the surgical gown, removing gloves, and putting the arms through the sleeves of the gown. Although numerous airborne particles were observed while applying the stockinet and putting on large drapes in preparation for total knee arthroplasty, fewer particles (0.3-2.0 μm in size) were detected at the level of the operating table under laminar airflow compared to actions performed in a non-ventilated preoperative room (P < .01). The results of this study suggest that surgical staff should avoid unnecessary actions that produce a large number of airborne particles near a sterile area and that laminar airflow has the potential to reduce the incidence of bacterial contamination.
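A minimal sketch of the statistical comparison described above, using synthetic particle counts in place of the measured data (the group names and values below are invented), could be:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)

# Hypothetical particle counts (per 2.83 m^3) for three intraoperative actions,
# five repetitions each -- stand-ins for the measured data
counts = {
    "unfold_gown":   rng.normal(5000, 600, 5),
    "remove_gloves": rng.normal(4200, 500, 5),
    "apply_drapes":  rng.normal(2500, 400, 5),
}

# One-way ANOVA across the three actions
f_stat, p_val = stats.f_oneway(*counts.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Post hoc pairwise comparison (alpha = 0.01, matching the study's significance level)
values = np.concatenate(list(counts.values()))
groups = np.repeat(list(counts.keys()), 5)
print(pairwise_tukeyhsd(values, groups, alpha=0.01))
```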
Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.
Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena
2017-10-17
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines.
Mazzone, Peter J.; Naidich, David P.; Bach, Peter B.
2013-01-01
Background: Lung cancer is by far the major cause of cancer deaths, largely because, in the majority of patients, it is at an advanced stage at the time it is discovered, when curative treatment is no longer feasible. This article examines the data regarding the ability of screening to decrease the number of lung cancer deaths. Methods: A systematic review was conducted of controlled studies that address the effectiveness of methods of screening for lung cancer. Results: Several large randomized controlled trials (RCTs), including a recent one, have demonstrated that screening for lung cancer using a chest radiograph does not reduce the number of deaths from lung cancer. One large RCT involving low-dose CT (LDCT) screening demonstrated a significant reduction in lung cancer deaths, with few harms to individuals at elevated risk when done in the context of a structured program of selection, screening, evaluation, and management of the relatively high number of benign abnormalities. Whether other RCTs involving LDCT screening are consistent is unclear because data are limited or not yet mature. Conclusions: Screening is a complex interplay of selection (a population with sufficient risk and few serious comorbidities), the value of the screening test, the interval between screening tests, the availability of effective treatment, the risk of complications or harms as a result of screening, and the degree to which the screened individuals comply with screening and treatment recommendations. Screening with LDCT of appropriate individuals in the context of a structured process is associated with a significant reduction in the number of lung cancer deaths in the screened population. Given the complex interplay of factors inherent in screening, many questions remain on how to effectively implement screening on a broader scale. PMID:23649455
Basu, Sumita; Plawsky, Joel L; Wayner, Peter C
2004-11-01
In preparation for a microgravity flight experiment on the International Space Station, a constrained vapor bubble fin heat exchanger (CVB) was operated both in a vacuum chamber and in air on Earth to evaluate the effect of the absence of external natural convection. The long-term objective is a general study of a high heat flux, low capillary pressure system with small viscous effects due to the relatively large 3 x 3 x 40 mm dimensions. The current CVB can be viewed as a large-scale version of a micro heat pipe with a large Bond number in the Earth environment but a small Bond number in microgravity. The walls of the CVB are quartz, to allow for image analysis of naturally occurring interference fringes that give the pressure field for liquid flow. The research is synergistic in that the study requires a microgravity environment to obtain a low Bond number and the space program needs thermal control systems, like the CVB, with a large characteristic dimension. In the absence of natural convection, operation of the CVB may be dominated by external radiative losses from its quartz surface. Therefore, an understanding of radiation from the quartz cell is required. All radiative exchange with the surroundings occurs from the outer surface of the CVB when the temperature range renders the quartz walls of the CVB optically thick (lambda > 4 microns). However, for electromagnetic radiation where lambda < 2 microns, the walls are transparent. Experimental results obtained for a cell charged with pentane are compared with those obtained for a dry cell. A numerical model was developed that successfully simulated the behavior and performance of the device observed experimentally.
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on the resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
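The abstract proposes a simplified model-reduction alternative to full statistical emulation; purely for orientation, a minimal Gaussian-process emulator of a toy simulator is sketched below. The simulator, parameter ranges, and kernel choices are hypothetical and are not those of the NAME application.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(7)

def simulator(theta):
    """Stand-in for an expensive forward run of a transport model."""
    return np.sin(3 * theta[0]) + 0.5 * theta[1] ** 2

# A modest number of training runs over a 2-D parameter space
theta_train = rng.uniform(-1, 1, (30, 2))
y_train = np.array([simulator(t) for t in theta_train])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3, 0.3]),
                              normalize_y=True).fit(theta_train, y_train)

# The emulator then replaces the simulator inside, e.g., a Bayesian inversion loop
theta_new = np.array([[0.2, -0.4]])
mean, std = gp.predict(theta_new, return_std=True)
print(f"emulated output: {mean[0]:.3f} +/- {std[0]:.3f} "
      f"(true: {simulator(theta_new[0]):.3f})")
```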
Horizontal Axis Wind Turbine Experiments at Full-Scale Reynolds Numbers
NASA Astrophysics Data System (ADS)
Miller, Mark; Kiefer, Janik; Nealon, Tara; Westergaard, Carsten; Hultmark, Marcus
2017-11-01
Achieving high Reynolds numbers on a wind turbine model remains a major challenge for experimentalists. Since Reynolds number effects need to be captured accurately, matching this parameter is of great importance. The challenge stems from the large scale ratio between model and full size, typically on the order of 1:100. Traditional wind tunnels are limited due to finite tunnel size, with velocity as the only free parameter available for increasing the Reynolds number. Unfortunately, increasing the velocity 100 times is untenable because it violates Mach number matching with the full scale and results in infeasible rotation rates. Present work in Princeton University's high-pressure wind tunnel makes it possible to evaluate the Reynolds number sensitivity with regard to wind turbine aerodynamics. This facility, which uses compressed air as the working fluid, allows for adjustment of the Reynolds number, via the fluid density, independent of the Tip Speed Ratio (TSR) and Mach number. Power and thrust coefficients will be shown as a function of Reynolds number and TSR for a model wind turbine. The Reynolds number range investigated exceeds 10 × 10⁶ based on diameter and free-stream conditions, or 3 × 10⁶ based on the tip chord, matching those of the full scale. National Science Foundation and Andlinger Center for Energy and the Environment.
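The underlying arithmetic is simply that Re = ρVD/μ, so raising the working-fluid density at fixed velocity and model size raises the Reynolds number without touching the Mach number or tip speed ratio. The numbers in the sketch below are illustrative only, not the facility's operating points.

```python
def reynolds_number(rho, velocity, length, mu):
    """Re = rho * V * L / mu."""
    return rho * velocity * length / mu

# Illustrative values only: raising the tunnel pressure raises the air density
# roughly in proportion (ideal-gas behaviour) at fixed velocity and model size,
# so Re grows while Mach number and tip speed ratio are unchanged.
mu_air = 1.8e-5      # Pa*s, approximately independent of pressure
diameter = 0.6       # m, hypothetical model rotor diameter
velocity = 10.0      # m/s, hypothetical free-stream speed

for pressure_atm in (1, 50, 100, 200):
    rho = 1.2 * pressure_atm   # kg/m^3, scaled from ambient density
    re = reynolds_number(rho, velocity, diameter, mu_air)
    print(f"{pressure_atm:3d} atm: Re = {re:.2e}")
```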
A comprehensive catalogue and classification of human thermal climate indices
NASA Astrophysics Data System (ADS)
de Freitas, C. R.; Grigorieva, E. A.
2015-01-01
The very large number of human thermal climate indices that have been proposed over the past 100 years or so is a manifestation of the perceived importance within the scientific community of the thermal environment and the desire to quantify it. Schemes used differ in approach according to the number of variables taken into account, the rationale employed, the relative sophistication of the underlying body-atmosphere heat exchange theory and the particular design for application. They also vary considerably in type and quality, as well as in several other aspects. Reviews appear in the literature, but they cover a limited number of indices. A project that produces a comprehensive documentation, classification and overall evaluation of the full range of existing human thermal climate indices has never been attempted. This paper deals with documentation and classification. A subsequent report will focus on evaluation. Here a comprehensive register of 162 thermal indices is assembled and a sorting scheme devised that groups them according to eight primary classification classes. It is the first stage in a project to organise and evaluate the full range of all human thermal climate indices. The work, when completed, will make it easier for users to reflect on the merits of all available thermal indices. It will be simpler to locate and compare indices and decide which is most appropriate for a particular application or investigation.
Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio
2017-01-01
The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801
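An idealized toy simulation of the discovery process is sketched below. It is not the authors' model: it ignores the three advertising channels, packet collisions, and the chipset-specific scanning gaps that the paper shows are decisive, and the parameter values are arbitrary choices within the ranges the standard allows.

```python
import numpy as np

rng = np.random.default_rng(5)

def discovery_latencies(n_devices=50, adv_interval=0.1, scan_interval=0.1,
                        scan_window=0.05, sim_time=30.0):
    """Idealized BLE discovery: an advertiser is 'found' the first time one of its
    advertising events falls inside a scan window (channels, collisions, and the
    non-ideal scanning gaps reported in the paper are ignored). Times in seconds."""
    latencies = []
    for _ in range(n_devices):
        t = rng.uniform(0, adv_interval)                # first advertising event
        found = None
        while t < sim_time:
            if (t % scan_interval) < scan_window:       # event overlaps a scan window
                found = t
                break
            t += adv_interval + rng.uniform(0, 0.01)    # advDelay in [0, 10 ms]
        latencies.append(found if found is not None else np.inf)
    return np.array(latencies)

lat = discovery_latencies()
print(f"discovered: {np.isfinite(lat).mean():.0%}, "
      f"mean latency: {lat[np.isfinite(lat)].mean() * 1e3:.0f} ms")
```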
Single Wall Carbon Nanotube Alignment Mechanisms for Non-Destructive Evaluation
NASA Technical Reports Server (NTRS)
Hong, Seunghun
2002-01-01
As proposed in our original proposal, we developed a new method to assemble millions of single wall carbon nanotube (SWCNT)-based circuit components as fast as conventional microfabrication processes. This method is based on a surface-template assembly strategy. The new method solves one of the major bottlenecks in carbon nanotube based electrical applications and, potentially, may allow us to mass-produce a large number of SWCNT-based integrated devices of critical interest to NASA.
Rectenna session: Micro aspects
NASA Technical Reports Server (NTRS)
Gutmann, R. J.
1980-01-01
Two micro aspects of rectenna design are discussed: evaluation of the degradation in net rectenna RF to DC conversion efficiency due to power density variations across the rectenna (power combining analysis) and design of Yagi-Uda receiving elements to reduce rectenna cost by decreasing the number of conversion circuits (directional receiving elements). The first of these involves resolving a fundamental question of efficiency potential with a rectenna, while the second involves a design modification with a large potential cost saving.
USSR and Eastern Europe Scientific Abstracts, Engineering and Equipment. Number 25.
1976-10-29
is necessary to consider the problem of diffraction at a cylindrical cavity. Some methods of solving this problem become very unwieldy when ... applied to such a cavity of large wave dimensions, even with the aid of a digital computer. In the simpler Watson method, the series representing the ... potential of cylindrical waves is transformed to an integral in the complex plane and evaluated as the sum of residues. A difficulty in this method
Mechanisms of Protective Immunogenicity of Microbial Vaccines of Military Medical Significance.
1981-01-30
serodiagnosis of Q fever. We have had the opportunity to test a large number of positive samples from recent laboratory-associated infections by both ... note a remarkable stability of this assay in the uninfected individuals. The infected individuals show a clear change in FIAX activity after Q fever ... 1979. Brody, J.P., J.H. Binkley, and S.A. Harding. Evaluation and comparison of two assays for detection of immunity to rubella infection
Jennifer L. R. Jensen; Karen S. Humes; Andrew T. Hudak; Lee A. Vierling; Eric Delmelle
2011-01-01
This study presents an alternative assessment of the MODIS LAI product for a 58,000 ha evergreen needleleaf forest located in the western Rocky Mountain range in northern Idaho by using lidar data to model (R2=0.86, RMSE=0.76) and map LAI at higher resolution across a large number of MODIS pixels in their entirety. Moderate resolution (30 m) lidar-based LAI estimates...
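The general workflow (regress field-measured LAI on lidar canopy metrics, then map the model across the landscape) can be sketched as follows with synthetic data; the metrics, coefficients, and sample size are invented, and only the R²/RMSE form of the reported fit is mirrored.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(11)

# Hypothetical plot-level data: two lidar canopy metrics vs field-measured LAI
n_plots = 80
lidar_metrics = rng.uniform(0, 1, (n_plots, 2))   # e.g., canopy cover, height percentile
lai = 4.0 * lidar_metrics[:, 0] + 1.5 * lidar_metrics[:, 1] + rng.normal(0, 0.7, n_plots)

model = LinearRegression().fit(lidar_metrics, lai)
pred = model.predict(lidar_metrics)
rmse = float(np.sqrt(np.mean((lai - pred) ** 2)))
print(f"R^2 = {r2_score(lai, pred):.2f}, RMSE = {rmse:.2f}")

# The fitted model would then be applied to wall-to-wall 30 m lidar metric rasters
# to map LAI across each MODIS pixel footprint for the product comparison.
```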
A model for prioritizing landfills for remediation and closure: A case study in Serbia.
Ubavin, Dejan; Agarski, Boris; Maodus, Nikola; Stanisavljevic, Nemanja; Budak, Igor
2018-01-01
The existence of large numbers of landfills that do not fulfill sanitary prerequisites presents a serious hazard for the environment in lower income countries. One of the main hazards is landfill leachate, which contains various pollutants and presents a threat to groundwater. Groundwater pollution from landfills depends on various mutually interconnected factors such as the waste type and amount, the amount of precipitation, the landfill location characteristics, and operational measures. Considering these factors, lower income countries face a selection problem where landfills urgently requiring remediation and closure must be identified from among a large number of sites. The present paper proposes a model for prioritizing landfills for closure and remediation based on multicriteria decision making, in which the hazards of landfill groundwater pollution are evaluated. The parameters for the prioritization of landfills are the amount of waste disposed, the amount of precipitation, the vulnerability index, and the rate of increase of the amount of waste in the landfill. Verification was performed using a case study in Serbia in which all municipal landfills were considered and 128 landfills were selected for prioritization. The results of the evaluation of Serbian landfills, prioritizing sites for closure and remediation, are presented for the first time. Critical landfills are identified, and prioritization ranks for the selected landfills are provided. Integr Environ Assess Manag 2018;14:105-119. © 2017 SETAC.
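A minimal weighted-sum sketch of this kind of multicriteria prioritization is given below; the criteria, weights, normalization, and landfill values are invented for illustration and do not reproduce the study's actual scoring model.

```python
import numpy as np

# Hypothetical criteria per landfill: waste amount (t), annual precipitation (mm),
# aquifer vulnerability index (0-10), waste growth rate (t/yr). All values invented.
landfills = ["A", "B", "C", "D"]
X = np.array([
    [120_000, 650, 7.5, 4_000],
    [ 40_000, 800, 4.0, 1_500],
    [200_000, 550, 8.5, 6_000],
    [ 15_000, 900, 2.0,   500],
], dtype=float)
weights = np.array([0.35, 0.20, 0.30, 0.15])   # all criteria treated as "more = worse"

X_norm = X / X.max(axis=0)                     # simple linear normalization per criterion
scores = X_norm @ weights                      # weighted-sum hazard score

# Rank landfills from highest to lowest priority for closure and remediation
for name, s in sorted(zip(landfills, scores), key=lambda t: -t[1]):
    print(f"landfill {name}: priority score = {s:.3f}")
```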