Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sample size calculation for a proof of concept study.
Yin, Yin
2002-05-01
Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making a Type I error to be small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC and the process of sample size calculation. The results are presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
Sample Size in Qualitative Interview Studies: Guided by Information Power.
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit
2015-11-27
Sample sizes must be determined in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.
Capital Budgeting Decisions with Post-Audit Information
1990-06-08
estimates that were used during project selection. In similar fashion, this research introduces the equivalent sample size concept that permits the... equivalent sample size is extended to include the user's prior beliefs. 4. For a management tool, the concepts for Cash Flow Control Charts are... Accounting Research, vol. 7, no. 2, Autumn 1969, pp. 215-244. [9] Gaynor, Edwin W., "Use of Control Charts in Cost Control", National Association of Cost
Passive vs. Parachute System Architecture for Robotic Sample Return Vehicles
NASA Technical Reports Server (NTRS)
Maddock, Robert W.; Henning, Allen B.; Samareh, Jamshid A.
2016-01-01
The Multi-Mission Earth Entry Vehicle (MMEEV) is a flexible vehicle concept based on the Mars Sample Return (MSR) EEV design which can be used in the preliminary sample return mission study phase to parametrically investigate any trade space of interest to determine the best entry vehicle design approach for that particular mission concept. In addition to the trade space dimensions often considered (e.g. entry conditions, payload size and mass, vehicle size, etc.), the MMEEV trade space considers whether it might be more beneficial for the vehicle to utilize a parachute system during descent/landing or to be fully passive (i.e. not use a parachute). In order to evaluate this trade space dimension, a simplified parachute system model has been developed based on inputs such as vehicle size/mass, payload size/mass and landing requirements. This model works in conjunction with analytical approximations of a mission trade space dataset provided by the MMEEV System Analysis for Planetary EDL (M-SAPE) tool to help quantify the differences between an active (with parachute) and a passive (no parachute) vehicle concept.
An opportunity cost approach to sample size calculation in cost-effectiveness analysis.
Gafni, A; Walter, S D; Birch, S; Sendi, P
2008-01-01
The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements are a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
Sample size considerations when groups are the appropriate unit of analyses
Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith
2007-01-01
This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
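As a hedged illustration of the idea described above (not the article's SAS example; the target half-width, distribution, and parameter values below are mine), a small Monte Carlo in R estimates the "power" of a t-based confidence interval, i.e., the probability of obtaining a narrow width given that the interval covers the true mean:

```r
# Monte Carlo estimate of CI "power": P(half-width <= target | CI covers mu).
# All parameter values are illustrative assumptions.
ci_power <- function(n, mu = 0, sigma = 1, target_halfwidth = 0.5,
                     conf = 0.95, reps = 10000) {
  res <- replicate(reps, {
    x  <- rnorm(n, mu, sigma)
    hw <- qt(1 - (1 - conf) / 2, df = n - 1) * sd(x) / sqrt(n)
    c(narrow = hw <= target_halfwidth,          # interval narrow enough?
      covers = abs(mean(x) - mu) <= hw)         # interval contains mu?
  })
  sum(res["narrow", ] & res["covers", ]) / sum(res["covers", ])
}

# Increase n until the desired CI power is reached, e.g.:
sapply(c(10, 20, 30, 40, 60), ci_power)
```

Running the last line shows how CI power climbs toward 1 as the sample size grows, which is the basis for choosing n to achieve a specified interval width.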
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
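As a hedged illustration of how these components enter a simple a priori calculation (the numbers are mine, not the article's), base R's power.t.test maps effect magnitude, variance, alpha, and power onto the required sample size for a two-group comparison of means:

```r
# A priori sample size for a two-sample t-test; all values are illustrative.
power.t.test(delta = 0.5,        # magnitude of the effect (mean difference)
             sd = 1.0,           # variance component (standard deviation)
             sig.level = 0.05,   # alpha for the chosen outcome/design
             power = 0.80,       # desired statistical power
             type = "two.sample")$n   # required n per group (about 64 here)
```

Changing any one of these inputs (the isomorphic decisions the article describes) propagates directly into the required n.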
NASA Astrophysics Data System (ADS)
Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian
2016-04-01
Many modern approaches of radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain" with partly even different definitions. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
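A hedged sketch of the mixing model underlying end-member modelling analysis (notation mine, not quoted from the paper): each measured grain-size distribution is treated as a non-negative, unit-sum mixture of a small number of end-member distributions,

\[
X_{ij} \approx \sum_{k=1}^{q} M_{ik}\,E_{kj}, \qquad M_{ik} \ge 0, \quad \sum_{k=1}^{q} M_{ik} = 1,
\]

where \(X_{ij}\) is the proportion of grain-size class \(j\) in sample \(i\), the rows \(E_{k\cdot}\) are the end-member (transport-process-related) distributions, and \(M_{ik}\) is the contribution of end member \(k\) to sample \(i\). Unmixing the measured distributions in this way is what allows preparation intervals to be adjusted toward the "desired" end member.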
Revisiting sample size: are big trials the answer?
Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J
2012-07-18
The superiority of the evidence generated in randomized controlled trials over observational data does not rest on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, the probability that the trial will detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
Concept Study For A Near-term Mars Surface Sample Return Mission
NASA Astrophysics Data System (ADS)
Smith, M. F.; Thatcher, J.; Sallaberger, C.; Reedman, T.; Pillinger, C. T.; Sims, M. R.
The return of samples from the surface of Mars is a challenging problem. Present mission planning is for complex missions to return large, focused samples sometime in the next decade. There is, however, much scientific merit in returning a small sample of Martian regolith before the end of this decade at a fraction of the cost of the more ambitious missions. This paper sets out the key elements of this concept that builds on the work of the Beagle 2 project and space robotics work in Canada. The paper will expand the science case for returning a regolith sample that is only in the range of 50-250 g but would nevertheless include plenty of interesting material, as the regolith comprises soil grains from a wide variety of locations, i.e. nearby rocks, sedimentary formations and materials moved by fluids, winds and impacts. It is possible that a fine core sample could also be extracted and returned. The mission concept is to send a lander sized at around 130 kg on the 2007 or 2009 opportunity, immediately collect the sample from the surface, launch it to Mars orbit, collect it by the lander parent craft and make an immediate Earth return. Return to Earth orbit is envisaged rather than direct Earth re-entry. The lander concept is essentially a twice-size Beagle 2 carrying the sample collection and return capsule loading equipment plus the ascent vehicle. The return capsule is envisaged as no more than 1 kg. An overall description of the mission along with methods for sample acquisition, orbital rendezvous and capsule return will be outlined and the overall systems budgets presented. To demonstrate the near term feasibility of the mission, the use of existing Canadian and European technologies will be highlighted.
Children's Concepts of the Shape and Size of the Earth, Sun and Moon
NASA Astrophysics Data System (ADS)
Bryce, T. G. K.; Blown, E. J.
2013-02-01
Children's understandings of the shape and relative sizes of the Earth, Sun and Moon have been extensively researched and in a variety of ways. Much is known about the confusions which arise as young people try to grasp ideas about the world and our neighbouring celestial bodies. Despite this, there remain uncertainties about the conceptual models which young people use and how they theorise in the process of acquiring more scientific conceptions. In this article, the relevant published research is reviewed critically and in depth in order to frame a series of investigations using semi-structured interviews carried out with 248 participants aged 3-18 years from China and New Zealand. Analysis of qualitative and quantitative data concerning the reasoning of these subjects (involving cognitive categorisations and their rank ordering) confirmed that (a) concepts of Earth shape and size are embedded in a 'super-concept' or 'Earth notion' embracing ideas of physical shape, 'ground' and 'sky', habitation of and identity with Earth; (b) conceptual development is similar in cultures where teachers hold a scientific world view and (c) children's concepts of shape and size of the Earth, Sun and Moon can be usefully explored within an ethnological approach using multi-media interviews combined with observational astronomy. For these young people, concepts of the shape and size of the Moon and Sun were closely correlated with their Earth notion concepts and there were few differences between the cultures despite their contrasts. Analysis of the statistical data used Kolmogorov-Smirnov two-sample tests, with hypotheses confirmed at the K-S alpha level of 0.05; r_s: p < 0.01.
Atomistic origin of size effects in fatigue behavior of metallic glasses
NASA Astrophysics Data System (ADS)
Sha, Zhendong; Wong, Wei Hin; Pei, Qingxiang; Branicio, Paulo Sergio; Liu, Zishun; Wang, Tiejun; Guo, Tianfu; Gao, Huajian
2017-07-01
While many experiments and simulations on metallic glasses (MGs) have focused on their tensile ductility under monotonic loading, the fatigue mechanisms of MGs under cyclic loading still remain largely elusive. Here we perform molecular dynamics (MD) and finite element simulations of tension-compression fatigue tests in MGs to elucidate their fatigue mechanisms with focus on the sample size effect. Shear band (SB) thickening is found to be the inherent fatigue mechanism for nanoscale MGs. The difference in fatigue mechanisms between macroscopic and nanoscale MGs originates from whether the SB forms partially or fully through the cross-section of the specimen. Furthermore, a qualitative investigation of the sample size effect suggests that small sample size increases the fatigue life while large sample size promotes cyclic softening and necking. Our observations on the size-dependent fatigue behavior can be rationalized by the Gurson model and the concept of surface tension of the nanovoids. The present study sheds light on the fatigue mechanisms of MGs and can be useful in interpreting previous experimental results.
NASA Astrophysics Data System (ADS)
Yalçınkaya, Eylem; Taştan-Kırık, Özgecan; Boz, Yezdan; Yıldıran, Demet
2012-07-01
Background: Case-based learning (CBL) is simply teaching the concept to the students based on cases. CBL involves a case, which is a scenario based on daily life, and study questions related to the case, which allow students to discuss their ideas. Chemical kinetics is one of the most difficult concepts for students in chemistry. Students generally have low levels of conceptual understanding and many alternative conceptions regarding it. Purpose: This study aimed to explore the effect of CBL on dealing with students' alternative conceptions about chemical kinetics. Sample: The sample consists of 53 high school students from one public high school in Turkey. Design and methods: A nonequivalent pre-test and post-test control group design was used. The Reaction Rate Concept Test and semi-structured interviews were used for data collection. A convenience sampling technique was followed. For data analysis, the independent samples t-test and ANOVA were performed. Results: Both concept test and interview results showed that students instructed with cases had better understanding of core concepts of chemical kinetics and had fewer alternative conceptions related to the subject matter compared to the control group students, despite the fact that it was impossible to challenge all the alternative conceptions in the experimental group. Conclusions: CBL is an effective teaching method for challenging students' alternative conceptions in the context of chemical kinetics. Since using cases in small groups and whole-class discussions has been found to be an effective way to cope with alternative conceptions, it can be applied to other subjects and grade levels in high schools with a larger sample size. Furthermore, the effects of this method on academic achievement, motivation and critical thinking skills are other variables that can be investigated in future studies in the subject area of chemistry.
A Kepler Mission, A Search for Habitable Planets: Concept, Capabilities and Strengths
NASA Technical Reports Server (NTRS)
Koch, David; Borucki, William; Lissauer, Jack; Dunham, Edward; Jenkins, Jon; DeVincenzi, D. (Technical Monitor)
1998-01-01
The detection of extrasolar terrestrial planets orbiting main-sequence stars is of great interest and importance. Current ground-based methods are only capable of detecting objects about the size or mass of Jupiter or larger. The technological challenges of direct imaging of Earth-size planets from space are expected to be resolved over the next twenty years. Space-based photometry of planetary transits is currently the only viable method for detection of terrestrial planets (30-600 times less massive than Jupiter). The method searches the extended solar neighborhood, providing a statistically large sample and the detailed characteristics of each individual case. A robust concept has been developed and proposed as a Discovery-class mission. The concept, its capabilities and strengths are presented.
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
[A Review on the Use of Effect Size in Nursing Research].
Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae
2015-10-01
The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of the effect size are explained. Some formulae for calculating the effect size are described with several examples in nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3, the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
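As a hedged, generic illustration of the kind of calculation discussed here (simulated data and base R only; this is not taken from the article or from G*Power), Cohen's d can be computed from two groups and then fed into a sample size calculation:

```r
# Cohen's d from two samples (pooled SD), then the n per group needed to
# detect that d with 80% power at alpha = 0.05. Data are simulated.
cohens_d <- function(x, y) {
  sp <- sqrt(((length(x) - 1) * var(x) + (length(y) - 1) * var(y)) /
               (length(x) + length(y) - 2))   # pooled standard deviation
  (mean(x) - mean(y)) / sp
}

set.seed(1)
grp1 <- rnorm(30, mean = 1.5)   # illustrative group 1, sd = 1
grp2 <- rnorm(30, mean = 1.0)   # illustrative group 2, sd = 1
d <- cohens_d(grp1, grp2)

# With a standardized effect, set sd = 1 so delta is interpreted as d.
power.t.test(delta = abs(d), sd = 1, power = 0.80, sig.level = 0.05)$n
```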
ERIC Educational Resources Information Center
Yaki, Akawo Angwal; Babagana, Mohammed
2016-01-01
The paper examined the effects of a Technological Instructional Package (TIP) on secondary school students' performance in biology. The study adopted a pre-test, post-test experimental control group design. The sample size of the study was 80 students from Minna metropolis, Niger state, Nigeria; the samples were randomly assigned into treatment…
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should be also developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with, approximately, 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change on phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds", for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
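For readers unfamiliar with M-strategy-style selection mentioned above, here is a hedged, simplified sketch (my own toy construction, not the paper's algorithm or software): a greedy loop that, at each step, adds the accession contributing the most new marker alleles until the target core size is reached.

```r
# Greedy allele-richness selection (toy illustration of an M-strategy idea).
greedy_core <- function(geno, core_size) {
  # geno: list of accessions, each a character vector of observed alleles
  chosen  <- integer(0)
  covered <- character(0)
  for (step in seq_len(core_size)) {
    gain <- sapply(seq_along(geno), function(i)
      if (i %in% chosen) -1L else length(setdiff(geno[[i]], covered)))
    best    <- which.max(gain)             # accession adding most new alleles
    chosen  <- c(chosen, best)
    covered <- union(covered, geno[[best]])
  }
  list(accessions = chosen, alleles_retained = length(covered))
}

# Toy genotypes, alleles coded as "locus.allele" strings:
geno <- list(c("A.1", "B.2"),
             c("A.1", "B.3", "C.1"),
             c("A.2", "C.1"),
             c("B.2", "C.2"))
greedy_core(geno, core_size = 2)
```

In the thematic approach described in the abstract, such a selection step would be preceded by screening accessions for the trait of interest (e.g., drought tolerance) rather than applied to the whole collection.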
Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes the methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
An Integrated Tool for System Analysis of Sample Return Vehicles
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.
2012-01-01
The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis tools. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.
A strategy for characterized aerosol-sampling transport efficiency.
NASA Astrophysics Data System (ADS)
Schwarz, J. P.
2017-12-01
A fundamental concern when sampling aerosol in the laboratory or in situ, on the ground or (especially) from aircraft, is characterizing transport losses due to particles contacting the walls of tubing used for transport. Depending on the size range of the aerosol, different mechanisms dominate these losses: diffusion for the ultra-fine, and inertial and gravitational settling losses for the coarse mode. In the coarse mode, losses become intractable very quickly with increasing particle size above 5 µm diameter. Here we present these issues, along with a concept approach to reducing aerosol losses via strategic dilution with porous tubing, including results of laboratory testing of a prototype. We infer the potential value of this approach to atmospheric aerosol sampling.
Multi-Mission System Analysis for Planetary Entry (M-SAPE) Version 1
NASA Technical Reports Server (NTRS)
Samareh, Jamshid; Glaab, Louis; Winski, Richard G.; Maddock, Robert W.; Emmett, Anjie L.; Munk, Michelle M.; Agrawal, Parul; Sepka, Steve; Aliaga, Jose; Zarchi, Kerry;
2014-01-01
This report describes an integrated system for Multi-mission System Analysis for Planetary Entry (M-SAPE). The system in its current form is capable of performing system analysis and design for an Earth entry vehicle suitable for sample return missions. The system includes geometry, mass sizing, impact analysis, structural analysis, flight mechanics, TPS, and a web portal for user access. The report includes details of M-SAPE modules and provides sample results. The current M-SAPE vehicle design concept is based on the Mars sample return (MSR) Earth entry vehicle design, which is driven by minimizing risk associated with sample containment (no parachute and passive aerodynamic stability). Because M-SAPE exploits a common design concept, any sample return mission, particularly MSR, will benefit from significant risk and development cost reductions. The design provides a platform by which technologies and design elements can be evaluated rapidly prior to any costly investment commitment.
Measuring restriction sizes using diffusion weighted magnetic resonance imaging: a review.
Martin, Melanie
2013-01-01
This article reviews a new concept in magnetic resonance as applied to cellular and biological systems. Diffusion weighted magnetic resonance imaging can be used to infer information about restriction sizes of samples being measured. The measurements rely on the apparent diffusion coefficient changing with diffusion times as measurements move from restricted to free diffusion regimes. Pulsed gradient spin echo (PGSE) measurements are limited in the ability to shorten diffusion times and thus are limited in restriction sizes which can be probed. Oscillating gradient spin echo (OGSE) measurements could provide shorter diffusion times so smaller restriction sizes could be probed.
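As a hedged aside using the standard free-diffusion relation (not a formula quoted from this review), the length scale probed at a diffusion time \(t_d\) is roughly the root-mean-square displacement

\[
\ell \approx \sqrt{2 D t_d},
\]

so for water-like diffusivity \(D \approx 2\times10^{-9}\,\mathrm{m^2/s}\), a diffusion time of 1 ms corresponds to \(\ell \approx 2\,\mu\mathrm{m}\). This is why the shorter effective diffusion times accessible with OGSE translate into smaller restriction sizes that can be probed.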
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as represented by classification accuracy and the F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document
NASA Technical Reports Server (NTRS)
Taylor, B. N.; Loscutoff, A. V.
1972-01-01
Computer simulations have been prepared for the concepts of CO2 concentration which have the potential for maintaining a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.
Polytomous Rasch Models in Counseling Assessment
ERIC Educational Resources Information Center
Willse, John T.
2017-01-01
This article provides a brief introduction to the Rasch model. Motivation for using Rasch analyses is provided. Important Rasch model concepts and key aspects of result interpretation are introduced, with major points reinforced using a simulation demonstration. Concrete guidelines are provided regarding sample size and the evaluation of items.
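A hedged refresher in standard notation (not reproduced from the article): the dichotomous Rasch model gives the probability that a person with ability \(\theta\) succeeds on an item with difficulty \(b\) as

\[
P(X = 1 \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}},
\]

and polytomous extensions such as the partial credit model add category threshold parameters to the same logistic form, which is what makes them suitable for rating-scale instruments common in counseling assessment.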
On-Chip, Amplification-Free Quantification of Nucleic Acid for Point-of-Care Diagnosis
NASA Astrophysics Data System (ADS)
Yen, Tony Minghung
This dissertation demonstrates three physical device concepts to overcome limitations in point-of-care quantification of nucleic acids. Enabling sensitive, high-throughput nucleic acid quantification on a chip, outside of the hospital and centralized laboratory setting, is crucial for improving pathogen detection and cancer diagnosis and prognosis. Among existing platforms, microarrays have the advantages of being amplification free, low instrument cost, and high throughput, but are generally less sensitive compared to sequencing and PCR assays. To bridge this performance gap, this dissertation presents theoretical and experimental progress to develop a platform nucleic acid quantification technology that is drastically more sensitive than current microarrays while compatible with microarray architecture. The first device concept explores on-chip nucleic acid enrichment by natural evaporation of a nucleic acid solution droplet. Using a micro-patterned super-hydrophobic black silicon array device, evaporative enrichment is coupled with a nano-liter droplet self-assembly workflow to produce 50 aM concentration sensitivity, 6 orders of dynamic range, and rapid hybridization time at under 5 minutes. The second device concept focuses on improving target copy number sensitivity, instead of concentration sensitivity. A comprehensive microarray physical model taking into account molecular transport, electrostatic intermolecular interactions, and reaction kinetics is considered to guide device optimization. Device pattern size and target copy number are optimized based on model prediction to achieve maximal hybridization efficiency. At a 100-µm pattern size, a quantum leap in detection limit of 570 copies is achieved using the black silicon array device with a self-assembled pico-liter droplet workflow. Despite its merits, evaporative enrichment on the black silicon device suffers from the coffee-ring effect at the 100-µm pattern size and is thus not compatible with clinical patient samples. The third device concept utilizes an integrated optomechanical laser system and a Cytop microarray device to reverse the coffee-ring effect during evaporative enrichment at the 100-µm pattern size. This method, named "laser-induced differential evaporation", is expected to enable a 570-copy detection limit for clinical samples in the near future. While the work is ongoing as of the writing of this dissertation, a clear research plan is in place to implement this method on the microarray platform toward clinical sample testing for disease applications and future commercialization.
Longitudinal Model Predicting Self-Concept in Pediatric Chronic Illness.
Emerson, Natacha D; Morrell, Holly E R; Neece, Cameron; Tapanes, Daniel; Distelberg, Brian
2018-04-16
Although self-concept has been identified as salient to the psychosocial adjustment of adolescents dealing with a chronic illness (CI), little research has focused on its predictors. Given that depression and parent-child attachment have been linked to self-concept in the population at large, the goal of this study was to evaluate these relationships longitudinally in a sample of adolescents with CI. Using participant data from the Mastering Each New Direction (MEND) program, a 3-month psychosocial, family-based intensive outpatient program for adolescents with CI, we employed multilevel modeling to test longitudinal changes in self-concept, as predicted by depressive symptoms and parent-child attachment, in a sample of 50 youths (M age = 14.56, SD age = 1.82) participating in MEND. Both "time spent in the program" and decreases in depressive symptoms were associated with increases in self-concept over time. Higher baseline levels of avoidant attachment to both mother and father were also associated with greater initial levels of self-concept. Targeting depressive symptoms and supporting adaptive changes in attachment may be key to promoting a healthy self-concept in pediatric CI populations. The association between avoidant attachment and higher baseline self-concept scores may reflect differences in participants' autonomy, self-confidence, or depression. Limitations of the study include variability in the amount of time spent in the program, attrition in final time point measures, and the inability to fully examine and model all potential covariates due to a small sample size (e.g., power). © 2018 Family Process Institute.
Clinical decision making and the expected value of information.
Willan, Andrew R
2007-01-01
The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report the concept of the expected value of information is used to determine whether the information provided by the HOPE study is sufficient for decision making in the US and Canada. Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study, these concepts are applied for various assumptions regarding the fixed and variable cost of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.
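A hedged restatement of the decision rule described in this abstract (notation mine, not the author's): a further trial with n patients per arm is warranted only if the expected value of sample information exceeds its cost for some n,

\[
\max_{n}\;\bigl[\operatorname{EVSI}(n) - C(n)\bigr] > 0, \qquad C(n) = C_f + 2n\,C_v,
\]

where \(C_f\) and \(C_v\) are the assumed fixed and per-patient (variable) costs of the future trial. If no sample size satisfies this inequality, the current information is sufficient for decision making.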
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
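For context, a hedged sketch of the optimum (Neyman) allocation that such stratum variance estimates feed into (stratum sizes and standard deviations below are illustrative, not from the study): the sample size assigned to stratum h is proportional to the product of the stratum size and the stratum standard deviation.

```r
# Neyman (optimum) allocation: n_h proportional to N_h * S_h.
neyman_alloc <- function(N_h, S_h, n_total) {
  w <- N_h * S_h
  round(n_total * w / sum(w))
}

# Three hypothetical strata: numbers of sampling units and wheat-area SDs.
neyman_alloc(N_h = c(1200, 800, 500), S_h = c(14, 9, 22), n_total = 100)
```

Because the allocation depends directly on the stratum standard deviations S_h, poor initial estimates of the stratum variances translate into an inefficient allocation, which is the motivation for the methodology described above.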
Reduction in bearing size due to superconductors in magnetic bearings
NASA Technical Reports Server (NTRS)
Rao, Dantam K.; Lewis, Paul; Dill, James F.
1991-01-01
A design concept that reduces the size of magnetic bearings is assessed. The small size will enable magnetic bearings to fit into the limited available bearing volume of cryogenic machinery. The design concept, called SUPERC, uses high-Tc superconductors or high-purity aluminum conductors in the windings instead of copper. The relatively high current density of these conductors reduces the slot radial thickness for windings, which reduces the size of the bearings. MTI developed a sizing program, also called SUPERC, that translates the high current density of these conductors into smaller bearings. This program was used to size a superconducting bearing to carry a 500 lb load. The sizes of magnetic bearings needed by the various design concepts are as follows: SUPERC design concept, 3.75 in.; magnet-bias design concept, 5.25 in.; all-electromagnet design concept, 7.0 in. These results indicate that the SUPERC design concept can significantly reduce the size of the bearing. This reduction, in turn, yields a lighter bearing. Since the superconductors have inherently near-zero resistance, they are also expected to considerably reduce the power needed for operation.
Particulate Removal Using a CO2 Composite Spray Cleaning System
NASA Technical Reports Server (NTRS)
Chen, Nicole; Lin, Ying; Jackson, David; Chung, Shirley
2016-01-01
The Planetary Protection surface cleanliness requirements for potential Mars Sample Return hardware that would come in contact with Martian samples may be stricter than previous missions. The Jet Propulsion Laboratory has developed a new technology that will enable us to remove sub-micron size particles from critical hardware surfaces. A hand-held CO2 composite cleaning system was tested to verify its cleaning capabilities. This convenient, portable device can be used in cleanrooms for cleaning after rework or during spacecraft integration and assembly. It is environmentally safe and easy to use. This cleaning concept has the potential to be further developed into a robotic cleaning device on a Mars Lander to be used to clean sample acquisition or sample handling devices in situ. Contaminants of known sizes and concentrations, such as fluorescent microspheres and spores were deposited on common spacecraft material surfaces. The cleaning efficiency results will be presented and discussed.
Using an R Shiny to Enhance the Learning Experience of Confidence Intervals
ERIC Educational Resources Information Center
Williams, Immanuel James; Williams, Kelley Kim
2018-01-01
Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…
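A hedged minimal sketch of such an app (generic simulated data, not the article's NBA dataset): repeatedly draw samples of a user-chosen size and plot which 95% confidence intervals miss the true mean, so students can see coverage and width change with n.

```r
# Minimal Shiny sketch for teaching confidence intervals (illustrative only).
library(shiny)

ui <- fluidPage(
  sliderInput("n", "Sample size", min = 5, max = 200, value = 30),
  plotOutput("ci_plot")
)

server <- function(input, output) {
  output$ci_plot <- renderPlot({
    mu <- 0; reps <- 50
    # 50 replicate samples of size n; one 95% t-interval per replicate
    ci   <- t(replicate(reps, t.test(rnorm(input$n, mu))$conf.int))
    miss <- ci[, 1] > mu | ci[, 2] < mu          # intervals that miss mu
    plot(NULL, xlim = range(ci), ylim = c(1, reps),
         xlab = "95% confidence interval", ylab = "Replicate")
    segments(ci[, 1], 1:reps, ci[, 2], 1:reps,
             col = ifelse(miss, "red", "grey40"))
    abline(v = mu, lty = 2)                      # true mean
  })
}

shinyApp(ui, server)
```

Moving the slider makes the intervals visibly shrink while roughly 5% continue to miss the true mean, which is exactly the combination of concepts (confidence level, standard error, sample size) the article targets.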
Physical perceptions and self-concept in athletes with muscle dysmorphia symptoms.
González-Martí, Irene; Fernández Bustos, Juan Gregorio; Hernández-Martínez, Andrea; Contreras Jordán, Onofre Ricardo
2014-01-01
Individuals affected by Muscle Dysmorphia (MD; a body image disorder based on the underestimation of muscle size) practice weightlifting in order to alleviate their muscular dissatisfaction. Although physical activity is associated with increased physical self-perception, we hypothesized that this association would not hold fully in people with MD. The study sample consisted of 734 weightlifters and bodybuilders, 562 men and 172 women, who completed the Escala de Satisfacción Muscular and the Physical Self-Concept Questionnaire, and from whom measures of body fat and Fat-Free Mass Index (FFMI) were obtained. The results showed that people suffering from MD symptoms, overall, have poorer physical self-concept perceptions (F = 18.46 - 34.77, p < .01).
Yetzer, Elizabeth A; Schandler, Steven; Root, Tammy L; Turnbaugh, Kathleen
2003-01-01
Spinal cord injury (SCI) requires considerable psychological adjustment to physical limitations and complications. One particularly severe complication of SCI is foot skin breakdown, which can result in lower limb amputation. Relative to SCI adjustment, amputation may produce one of two psychological outcomes: (a) the fragile self-concept of a person with SCI may be reduced further by limb amputation, or (b) amputation of a diseased, nonfunctional limb may be associated with restored health and improved self-concept. To better understand the effects of amputation, 26 males with SCI, 11 of whom had a lower limb amputation, were administered the Tennessee Self-Concept Scale (TSCS) and the Personal Body Attractiveness Scale (PBAS). The study revealed that persons with SCI and amputation had higher Physical and Total self-concept scores on the TSCS, showing a slightly more positive self-concept. On the PBAS, although there were no significant differences in the scores for the legs, ankles, or feet, the persons with SCI and amputation had a higher score on the Satisfaction subscale, indicating slightly greater satisfaction with their thigh in their body image. Implications for future study include replication with larger sample sizes, inclusion of women in the sample, and a longitudinal study. Several nursing interventions are identified.
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining stratum variances needed in achieving an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical crop statistics is developed for obtaining initial estimates of tratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily, with the use of a conservative value for the field size and the crop statistics from the small political subdivision level, when the estimated stratum variances were compared to those obtained using the LANDSAT data.
Sleeth, Darrah K; Balthaser, Susan A; Collingwood, Scott; Larson, Rodney R
2016-03-07
Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET₁) and the posterior nasal and oral passages (ET₂). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm-44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device.
The Probability of Obtaining Two Statistically Different Test Scores as a Test Index
ERIC Educational Resources Information Center
Muller, Jorg M.
2006-01-01
A new test index is defined as the probability of obtaining two randomly selected test scores (PDTS) as statistically different. After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…
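A hedged Monte Carlo sketch of the idea behind PDTS (my own construction from the definition above; the article's simulation design may differ): two randomly drawn observed scores are counted as statistically different when their gap exceeds the critical value for a difference of two scores with a given standard error of measurement.

```r
# PDTS-style simulation: probability that two randomly selected observed
# scores differ "significantly", as a function of test reliability.
pdts <- function(reliability, sd_obs = 10, reps = 1e5, alpha = 0.05) {
  sem     <- sd_obs * sqrt(1 - reliability)        # standard error of measurement
  sd_true <- sd_obs * sqrt(reliability)            # true-score SD
  crit    <- qnorm(1 - alpha / 2) * sqrt(2) * sem  # critical difference of two scores
  x1 <- rnorm(reps, rnorm(reps, 0, sd_true), sem)  # observed score, examinee 1
  x2 <- rnorm(reps, rnorm(reps, 0, sd_true), sem)  # observed score, examinee 2
  mean(abs(x1 - x2) > crit)
}

sapply(c(0.60, 0.80, 0.90), pdts)
```

Under this toy normal model, higher reliability shrinks the standard error of measurement and so raises the probability that two randomly selected scores are judged statistically different.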
ERIC Educational Resources Information Center
Stack, Sue; Watson, Jane
2013-01-01
There is considerable research on the difficulties students have in conceptualising individual concepts of probability and statistics (see for example, Bryant & Nunes, 2012; Jones, 2005). The unit of work developed for the action research project described in this article is specifically designed to address some of these in order to help…
NASA Technical Reports Server (NTRS)
Peters, Gregory; Brown, Kyle; Fuerstenau, Stephen
2009-01-01
The rollerjaw rock crusher melds the concepts of jaw crushing and roll crushing long employed in the mining and rock-crushing industries. Rollerjaw rock crushers have been proposed for inclusion in geological exploration missions on Mars, where they would be used to pulverize rock samples into powders in the tens of micrometer particle size range required for analysis by scientific instruments.
tscvh R Package: Computational of the two samples test on microarray-sequencing data
NASA Astrophysics Data System (ADS)
Fajriyah, Rohmatul; Rosadi, Dedi
2017-12-01
We present a new R package, tscvh (two samples cross-variance homogeneity), as we call it. This package implements the cross-variance statistical test proposed and introduced by Fajriyah ([3] and [4]), based on the cross-variance concept. The test can be used as an alternative test for a significant difference between two means when the sample size is small, a situation that commonly arises in bioinformatics research. Based on its statistical distribution, the p-value can also be provided. The package is built under the assumption of homogeneity of variance between samples.
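For orientation only, a hedged baseline (these are base R calls, not the tscvh API, which is not documented here): the standard small-sample comparison that such a test is positioned against is a two-sample t-test on a handful of replicates per condition.

```r
# Baseline comparison only; simulated expression values, three replicates each.
set.seed(42)
g1 <- rnorm(3, mean = 8.1, sd = 0.4)   # condition 1
g2 <- rnorm(3, mean = 9.0, sd = 0.4)   # condition 2
t.test(g1, g2)$p.value                 # Welch two-sample t-test p-value
```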
Method for determining damping properties of materials using a suspended mechanical oscillator
NASA Astrophysics Data System (ADS)
Biscans, S.; Gras, S.; Evans, M.; Fritschel, P.; Pezerat, C.; Picart, P.
2018-06-01
We present a new approach for characterizing the loss factor of materials, using a suspended mechanical oscillator. Compared to more standard techniques, this method offers freedom in terms of the size and shape of the tested samples. Using a finite element model and the vibration measurements, the loss factor is deduced from the oscillator's ring-down. In this way the loss factor can be estimated independently for shear and compression deformation of the sample over a range of frequencies. As a proof of concept, we present measurements for EPO-TEK 353ND epoxy samples.
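For orientation, a hedged statement of the standard ring-down relation (notation mine, not a formula quoted from the paper): for a mode at resonance frequency \(f_0\) whose amplitude decays as \(e^{-t/\tau}\), the loss factor is

\[
\eta = \frac{1}{Q} = \frac{1}{\pi f_0 \tau},
\]

so measuring the decay time of each mode of the suspended oscillator yields a frequency-resolved loss factor, which the finite element model then apportions between shear and compression losses in the sample, as described above.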
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N.; Anastasi, Robert F.; Madaras, Eric I.
2004-01-01
The Survivable, Affordable, Reparable Airframe Program (SARAP) will develop/produce new structural design concepts with lower structural weight, reduced manufacturing complexity and development time, increased readiness, and improved threat protection. These new structural concepts will require advanced field capable inspection technologies to help meet the SARAP structural objectives. In the area of repair, damage assessment using nondestructive inspection (NDI) is critical to identify repair location and size. The purpose of this work is to conduct an assessment of new and emerging NDI methods that can potentially satisfy the SARAP program goals.
Estimation of the vortex length scale and intensity from two-dimensional samples
NASA Technical Reports Server (NTRS)
Reuss, D. L.; Cheng, W. P.
1992-01-01
A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and is found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.
Lau, P W C; Lee, A; Ransdell, L; Yu, C W; Sung, R Y T
2004-02-01
To investigate whether the discrepancy between actual and ideal body size rating is related to Chinese children's global self-esteem and global physical self-concept. A cross-sectional study of school children who completed questionnaires related to global self-esteem, global physical self-concept, and actual vs ideal body size. A total of 386 Chinese children (44% girls and 56% boys) aged 7-13 y from a primary school in Hong Kong, China. Global self-esteem and physical self-concept were measured using the physical self-descriptive questionnaire. Actual vs ideal body size discrepancy was established using the silhouette matching task. No significant relationship was found between global self-esteem and actual-ideal body size discrepancy of children. Global physical self-concept had a moderate negative correlation (r=-0.12) with the body size discrepancy score and the discrepancy score explained very limited variance (R(2)=0.015; F(1, 296)=4.51; P<0.05) in global physical self-concept. Three body size discrepancy groups (none, positive, and negative) were examined to see if there were any significant differences in global self-esteem, global physical self-concept, and specific dimensions of physical self-concept. A significant overall difference was found between groups for global physical self-concept (F=3.73, P<0.05) and the physical self-concept subscales of physical activity (F=3.25, P<0.05), body fat (F=61.26, P<0.001), and strength (F=5.26, P<0.01). Boys scored significantly higher than girls on global physical self-concept-especially in the sport competence, strength, and endurance subscales. This study revealed that the actual-ideal body size discrepancy rating of Chinese children was not predictive of global physical self-concept and global self-esteem. These findings are contrary to those reported in Western children, which may mean that culture plays a role in the formation of body attitude.
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
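As a schematic of the bootstrap step described above (not Bromaghin's sample-size calibration algorithm itself), the sketch below constructs a pseudo-predator signature by resampling prey signatures with a chosen bootstrap sample size and mixing them according to an assumed diet; all names and numbers are illustrative.

```python
import numpy as np

def pseudo_predator_signature(prey_sigs, diet, n_boot, rng):
    """Build one pseudo-predator fatty acid signature.

    prey_sigs : dict mapping prey type -> (n_animals, n_fatty_acids) array
    diet      : dict mapping prey type -> diet proportion (sums to 1)
    n_boot    : bootstrap sample size drawn from each prey type
    """
    signature = 0.0
    for prey, proportion in diet.items():
        sigs = prey_sigs[prey]
        idx = rng.integers(0, len(sigs), size=n_boot)      # resample animals
        signature = signature + proportion * sigs[idx].mean(axis=0)
    return signature / signature.sum()                     # renormalize to proportions

rng = np.random.default_rng(1)
# Hypothetical prey libraries: 10 fatty acids, 40 seals and 25 whales sampled
prey_sigs = {"seal": rng.dirichlet(np.ones(10), size=40),
             "whale": rng.dirichlet(np.ones(10), size=25)}
diet = {"seal": 0.7, "whale": 0.3}
print(pseudo_predator_signature(prey_sigs, diet, n_boot=15, rng=rng))
```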
Elhanan, Gai; Ochs, Christopher; Mejino, Jose L V; Liu, Hao; Mungall, Christopher J; Perl, Yehoshua
2017-06-01
To examine whether disjoint partial-area taxonomy, a semantically-based evaluation methodology that has been successfully tested in SNOMED CT, will perform with similar effectiveness on Uberon, an anatomical ontology that belongs to a structurally similar family of ontologies as SNOMED CT. A disjoint partial-area taxonomy was generated for Uberon. One hundred randomly selected test concepts that overlap between partial-areas were matched to a same size control sample of non-overlapping concepts. The samples were blindly inspected for non-critical issues and presumptive errors first by a general domain expert whose results were then confirmed or rejected by a highly experienced anatomical ontology domain expert. Reported issues were subsequently reviewed by Uberon's curators. Overlapping concepts in Uberon's disjoint partial-area taxonomy exhibited a significantly higher rate of all issues. Clear-cut presumptive errors trended similarly but did not reach statistical significance. A sub-analysis of overlapping concepts with three or more relationship types indicated a much higher rate of issues. Overlapping concepts from Uberon's disjoint abstraction network are quite likely (up to 28.9%) to exhibit issues. The results suggest that the methodology can transfer well between same family ontologies. Although Uberon exhibited relatively few overlapping concepts, the methodology can be combined with other semantic indicators to expand the process to other concepts within the ontology that will generate high yields of discovered issues. Copyright © 2017 Elsevier B.V. All rights reserved.
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples (from 1 to 15) needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined by a Coulter Counter. These results were then fit to a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefit of an increasing number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed for both sampling techniques as the number of samples increased from 1 to 15, with the acceptance ratio of both techniques increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
Validating two questions in the Force Concept Inventory with subquestions
NASA Astrophysics Data System (ADS)
Yasuda, Jun-ichiro; Taniguchi, Masa-aki
2013-06-01
In this study, we evaluate the structural validity of Q.16 and Q.7 in the Force Concept Inventory (FCI). We address whether respondents who answer Q.16 and Q.7 correctly actually have an understanding of the concepts of physics tested in the questions. To examine respondents' levels of understanding, we use subquestions that test them on concepts believed to be required to answer the actual FCI questions. Our sample comprises 111 respondents; we derive false-positive ratios for prelearners and postlearners and then statistically test the difference between them. We find a difference at the 0.05 significance level for both Q.16 and Q.7, implying that it is possible for postlearners to answer both questions without an understanding of the concepts of physics tested in the questions; therefore, the structures of Q.16 and Q.7 are invalid. In this study, we only evaluate the validity of these two FCI questions; we do not assess the validity of previous studies that have compared total FCI scores.
Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy
2011-02-02
In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.
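For orientation, the approximate number of events needed in a two-arm survival comparison follows Schoenfeld's formula, d = (z_{1-alpha/2} + z_{1-beta})^2 / (p(1-p) (ln HR)^2) with allocation fraction p. The sketch below evaluates it for the 30% hazard reduction (HR = 0.7) and 80% power assumed in the trial; this is only a back-of-the-envelope check, not the trial's actual sample size calculation.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.80, allocation=0.5):
    """Approximate number of deaths needed for a two-arm log-rank test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (allocation * (1 - allocation) * math.log(hr) ** 2)

# HR = 0.7 (30% hazard reduction), two-sided alpha = 0.05, 80% power
print(round(schoenfeld_events(0.7)))   # roughly 247 deaths required
```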
Monitoring Earth's Shortwave Reflectance: GEO Instrument Concept
NASA Technical Reports Server (NTRS)
Brageot, Emily; Mercury, Michael; Green, Robert; Mouroulis, Pantazis; Gerwe, David
2015-01-01
In this paper we present a GEO instrument concept dedicated to monitoring the Earth's global spectral reflectance with a high revisit rate. Based on our measurement goals, the ideal instrument needs to be highly sensitive (SNR greater than 100) and to achieve global coverage with spectral sampling (less than or equal to 10 nm) and spatial sampling (less than or equal to 1 km) over a large bandwidth (380-2510 nm) with a revisit time (greater than or equal to 3x/day) sufficient to fully measure the spectral-radiometric-spatial evolution of clouds and confounding factors during daytime. After a brief study of existing instruments and their capabilities, we choose to use a GEO constellation of up to 6 satellites as a platform for this instrument concept in order to achieve the revisit time requirement with a single launch. We derive the main parameters of the instrument and show the above requirements can be fulfilled while retaining an instrument architecture as compact as possible by controlling the telescope aperture size and using a passively cooled detector.
Vogel, J.R.; Brown, G.O.
2003-01-01
Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Relative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point value semivariogram is determined. This is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
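As a minimal illustration of the basic quantity the study fits models to, the sketch below computes an empirical semivariogram for a regularly spaced 1-D transect of voxel values, gamma(h) = mean of (1/2)(z(x+h) - z(x))^2; it does not reproduce the authors' model fitting, REV analysis, or scale disparity factor, and the data are synthetic.

```python
import numpy as np

def empirical_semivariogram(values, max_lag):
    """Empirical semivariogram of a regularly spaced 1-D transect."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

# Synthetic spatially correlated transect (illustrative only)
rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(size=200)) * 0.1 + rng.normal(size=200) * 0.05
print(empirical_semivariogram(z, max_lag=10))
```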
Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong
2017-09-01
Bucking the trend of big data, in microdevice engineering small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought biosignal analysis new challenges. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from the traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify the biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reactions and accelerating the assay process using ACEK.
Statistical issues in quality control of proteomic analyses: good experimental design and planning.
Cairns, David A
2011-03-01
Quality control is becoming increasingly important in proteomic investigations as experiments become more multivariate and quantitative. Quality control applies to all stages of an investigation and statistics can play a key role. In this review, the role of statistical ideas in the design and planning of an investigation is described. This involves the design of unbiased experiments using key concepts from statistical experimental design, the understanding of the biological and analytical variation in a system using variance components analysis and the determination of a required sample size to perform a statistically powerful investigation. These concepts are described through simple examples and an example data set from a 2-D DIGE pilot experiment. Each of these concepts can prove useful in producing better and more reproducible data. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
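One of the planning calculations mentioned above, determining a required sample size, can be illustrated with the standard normal-approximation formula for a two-sample comparison, n per group = 2 (z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2. The numbers below are purely illustrative and are not taken from the DIGE pilot data.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sigma / delta) ** 2)

# Detect a 1.5-fold spot intensity change on the log2 scale (delta ~ 0.585)
# with an assumed between-sample SD of 0.6 log2 units
print(n_per_group(delta=0.585, sigma=0.6))   # ~17 samples per group
```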
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
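To make a few of these diagnostic-test concepts concrete, the short sketch below computes sensitivity, specificity, and likelihood ratios from a hypothetical 2x2 table of test results; the counts are invented for illustration.

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens,
            "specificity": spec,
            "LR+": sens / (1 - spec),
            "LR-": (1 - sens) / spec}

# Hypothetical counts: 90 true positives, 10 false negatives,
# 15 false positives, 185 true negatives
print(diagnostic_summary(tp=90, fp=15, fn=10, tn=185))
```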
ERIC Educational Resources Information Center
Eymur, Guluzar; Çetin, Pinar; Geban, Ömer
2013-01-01
The purpose of this study was to analyze and compare the alternative conceptions of high school students and preservice teachers on the concept of atomic size. The Atomic Size Diagnostic Instrument was developed; it is composed of eight, two-tier multiple-choice items. The results of the study showed that as a whole 56.2% of preservice teachers…
[H2O ortho-para spin conversion in aqueous solutions as a quantum factor of Konovalov paradox].
Pershin, S M
2014-01-01
Recently academician Konovalov and co-workers observed an increase in electroconductivity and biological activity simultaneously with diffusion slowing (or nanoobject diameter increasing) and extremes of other parameters (ζ-potential, surface tension, pH, optical activity) in low-concentration aqueous solutions. This phenomenon completely disappeared when samples were shielded against external electromagnetic fields by a Faraday cage. Conventional theories of water and aqueous solutions could not explain the "Konovalov paradox" observed in numerous experiments (a representative sampling of about 60 samples and 7 parameters). A new approach was suggested to describe the physics of water and explain the "Konovalov paradox". The proposed concept takes into account the quantum differences of ortho-para spin isomers of H2O in bulk water (rotational spin selectivity upon hydration and spontaneous formation of ice-like structures, quantum beats and spin conversion induced in the presence of resonant electromagnetic radiation). A size-dependent self-assembly of amorphous complexes of more than 275 H2O molecules, leading to the ice Ih structure observed in previous experiments, supports this concept.
Microwave Nondestructive Evaluation of Dielectric Materials with a Metamaterial Lens
NASA Technical Reports Server (NTRS)
Shreiber, Daniel; Gupta, Mool; Cravey, Robin L.
2008-01-01
A novel microwave Nondestructive Evaluation (NDE) sensor was developed in an attempt to increase the sensitivity of the microwave NDE method for detection of defects small relative to a wavelength. The sensor was designed on the basis of a negative index material (NIM) lens. Characterization of the lens was performed to determine its resonant frequency, index of refraction, focus spot size, and optimal focusing length (for proper sample location). A sub-wavelength spot size (3 dB) of 0.48 lambda was obtained. The proof of concept for the sensor was achieved when a fiberglass sample with a 3 mm diameter through hole (perpendicular to the propagation direction of the wave) was tested. The hole was successfully detected with an 8.2 cm wavelength electromagnetic wave. This method is able to detect a defect that is 0.037 lambda. This method has certain advantages over other far field and near field microwave NDE methods currently in use.
NASA Astrophysics Data System (ADS)
Schulte, Wolfgang; Hofer, Stefan; Hofmann, Peter; Thiele, Hans; von Heise-Rotenburg, Ralf; Toporski, Jan; Rettberg, Petra
2007-06-01
For more than a decade Kayser-Threde, a medium-sized enterprise of the German space industry, has been involved in astrobiology research in partnership with a variety of scientific institutes from all over Europe. Previous projects include exobiology research platforms in low Earth orbit on retrievable carriers and onboard the Space Station. More recently, exobiology payloads for in situ experimentation on Mars have been studied by Kayser-Threde under ESA contracts, specifically the ExoMars Pasteur Payload. These studies included work on a sample preparation and distribution systems for Martian rock/regolith samples, instrument concepts such as Raman spectroscopy and a Life Marker Chip, advanced microscope systems as well as robotic tools for astrobiology missions. The status of the funded technical studies and major results are presented. The reported industrial work was funded by ESA and the German Aerospace Center (DLR).
Teachers' Concepts of Spatial Scale: An international comparison
NASA Astrophysics Data System (ADS)
Jones, M. Gail; Paechter, Manuela; Yen, Chiung-Fen; Gardner, Grant; Taylor, Amy; Tretter, Thomas
2013-09-01
Metric scale is an important concept taught as part of science curricula across different countries. This study explored metric and relative (body-length) scale concepts of inservice (N = 92) and preservice (N = 134) teachers from Austria, and Taiwan, and their concepts were compared with those of teachers from the USA. Participants completed three assessments: the Scale Anchoring Objects (SAO), Scale of Objects Questionnaire (SOQ), and a subsample of participants were interviewed with the Learning Scale Interview. A Rasch analysis was conducted with the SAO and SOQ and results showed that the Rasch model held for these assessments, indicating that there is an underlying common dimension to understanding scale. Further analyses showed that accuracy of knowledge of scale measured by the SAO and SOQ was not related to professional experience. There were significant differences in teachers' accuracy of scale concepts by nationality. This was true for both metric and body-length SAO assessments. Post hoc comparisons showed that the Austrian and Taiwanese participants were significantly more accurate than the US sample on the SAO and SOQ. The Austrian participants scored significantly higher than the US and the Taiwanese participants. The results of the interviews showed that the Taiwanese experienced teacher participants were more likely to report learning size and scale through in-school experiences than the Austrian or the US participants. US teachers reported learning size and scale most often through participating in hobbies and sports, Taiwanese teachers reported learning scale through sports and reading, and Austrian teachers most often noted that they learned about scale through travel.
A Comparative Study of Hawaii Middle School Science Student Academic Achievement
NASA Astrophysics Data System (ADS)
Askew Cain, Peggy
The problem was that middle-grade students with specific learning disabilities (SWDs) in reading comprehension perform less well than their peers on standardized assessments. The purpose of this quantitative comparative study was to examine the effect of electronic concept maps on the reading comprehension of Grade 8 students with SWDs in a science class at a Hawaii middle school on the island of Oahu. The target population consisted of Grade 8 science students for school year 2015-2016. The sampling method was purposeful sampling, with a final sample size of 338 Grade 8 science students. De-identified archival records of Grade 8 Hawaii standardized science test scores were analyzed using a one-way analysis of variance (ANOVA) in SPSS. The finding for hypothesis 1 indicated a significant difference in student achievement between SWDs and SWODs as measured by Hawaii State Assessment (HSA) science scores (p < 0.05); for hypothesis 2, there was a significant difference by instructional modality between SWDs who used concept maps and those who did not, as measured by the HSA in science (p < 0.05). The implications of the findings were that (a) SWDs performed less well in science achievement than their peers and, consequently, (b) SWODs appeared to retain greater degrees of science knowledge and answered more questions correctly than SWDs as a result of reading comprehension. Recommendations for practice were directed at educational leadership: (a) teachers should practice using concept maps with SWDs as a specific reading strategy to support reading comprehension in science classes, (b) concept map construction should involve a strong focus on vocabulary building and concept building, because constructing concept maps sometimes requires frontloading of vocabulary, and (c) leaders should model for teachers how concept maps are created and explain their educational purpose as a tool for learning. Recommendations for future research were to conduct (a) a quantitative comparative study between groups for academic achievement on subtest mean scores of SWDs and SWODs in physical science, earth science, and space science, and (b) a quantitative correlational study to examine relationships and predictive values for academic achievement of SWDs and concept map integration on standardized science assessments.
A basic introduction to statistics for the orthopaedic surgeon.
Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef
2012-02-01
Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.
Sizing and Lifecycle Cost Analysis of an Ares V Composite Interstage
NASA Technical Reports Server (NTRS)
Mann, Troy; Smeltzer, Stan; Grenoble, Ray; Mason, Brian; Rosario, Sev; Fairbairn, Bob
2012-01-01
The Interstage Element of the Ares V launch vehicle was sized using a commercially available structural sizing software tool. Two different concepts were considered, a metallic design and a composite design. Both concepts were sized using similar levels of analysis fidelity and included the influence of design details on each concept. Additionally, the impact of the different manufacturing techniques and failure mechanisms for composite and metallic construction were considered. Significant details were included in analysis models of each concept, including penetrations for human access, joint connections, as well as secondary loading effects. The designs and results of the analysis were used to determine lifecycle cost estimates for the two Interstage designs. Lifecycle cost estimates were based on industry provided cost data for similar launch vehicle components. The results indicated that significant mass as well as cost savings are attainable for the chosen composite concept as compared with a metallic option.
A Bayesian paradigm for decision-making in proof-of-concept trials.
Pulkstenis, Erik; Patra, Kaushik; Zhang, Jianliang
2017-01-01
Decision-making is central to every phase of drug development, and especially at the proof of concept stage where risk and evidence must be weighed carefully, often in the presence of significant uncertainty. The decision to proceed or not to large expensive Phase 3 trials has significant implications to both patients and sponsors alike. Recent experience has shown that Phase 3 failure rates remain high. We present a flexible Bayesian quantitative decision-making paradigm that evaluates evidence relative to achieving a multilevel target product profile. A framework for operating characteristics is provided that allows the drug developer to design a proof-of-concept trial in light of its ability to support decision-making rather than merely achieve statistical significance. Operating characteristics are shown to be superior to traditional p-value-based methods. In addition, discussion related to sample size considerations, application to interim futility analysis and incorporation of prior historical information is evaluated.
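A minimal sketch of the kind of quantitative Go/No-Go evaluation described here (not the authors' multilevel target product profile framework): with a binary endpoint and a Beta prior, one can compute the posterior probability that the response rate exceeds a target value and declare "Go" when it passes a pre-specified threshold. All rates, priors, and thresholds below are assumptions.

```python
from scipy.stats import beta

def go_decision(responders, n, target_rate, prior=(1, 1), threshold=0.8):
    """Posterior probability that the true response rate exceeds target_rate,
    under a Beta(prior)-Binomial model, and the resulting Go/No-Go call."""
    a = prior[0] + responders
    b = prior[1] + n - responders
    prob_exceeds = 1 - beta.cdf(target_rate, a, b)
    return prob_exceeds, prob_exceeds >= threshold

# Hypothetical PoC trial: 14 responders out of 30, target response rate 30%
print(go_decision(responders=14, n=30, target_rate=0.30))
```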
Bayesian Phase II optimization for time-to-event data based on historical information.
Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard
2017-01-01
After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
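A toy version of the group-wise conjugate update mentioned above (not the meta-analytic-predictive construction itself): with exponentially distributed event times, a Gamma prior on each group's hazard is conjugate, and the posterior for the hazard ratio can be examined by Monte Carlo. The priors, event counts, and exposure times below are invented.

```python
import numpy as np

def posterior_hazard_ratio(events_t, exposure_t, events_c, exposure_c,
                           prior_t=(0.001, 0.001), prior_c=(2.0, 20.0),
                           n_draws=100_000, seed=0):
    """Monte Carlo posterior of the hazard ratio (treatment / control)
    under exponential event times with Gamma(shape, rate) priors."""
    rng = np.random.default_rng(seed)
    # Conjugate update: shape += events, rate += total exposure time
    lam_t = rng.gamma(prior_t[0] + events_t, 1.0 / (prior_t[1] + exposure_t), n_draws)
    lam_c = rng.gamma(prior_c[0] + events_c, 1.0 / (prior_c[1] + exposure_c), n_draws)
    hr = lam_t / lam_c
    return hr.mean(), np.mean(hr < 0.8)   # posterior mean and P(HR < 0.8)

# Hypothetical Phase II data: 12 events / 200 patient-months on treatment,
# 18 events / 190 patient-months on control; informative prior on the control hazard
print(posterior_hazard_ratio(12, 200.0, 18, 190.0))
```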
MSFC Advanced Concepts Office and the Iterative Launch Vehicle Concept Method
NASA Technical Reports Server (NTRS)
Creech, Dennis
2011-01-01
This slide presentation reviews the work of the Advanced Concepts Office (ACO) at Marshall Space Flight Center (MSFC) with particular emphasis on the method used to model launch vehicles using INTegrated ROcket Sizing (INTROS), a modeling system that assists in establishing the launch concept design and stage sizing, and facilitates the integration of exterior analytic efforts, vehicle architecture studies, technology and system trades, and parameter sensitivities.
The Kepler Mission: Search for Habitable Planets
NASA Technical Reports Server (NTRS)
Borucki, William; Likins, B.; DeVincenzi, Donald L. (Technical Monitor)
1998-01-01
Detecting extrasolar terrestrial planets orbiting main-sequence stars is of great interest and importance. Current ground-based methods are only capable of detecting objects about the size or mass of Jupiter or larger. The difficulties encountered with direct imaging of Earth-size planets from space are expected to be resolved in the next twenty years. Space-based photometry of planetary transits is currently the only viable method for detection of terrestrial planets (30-600 times less massive than Jupiter). This method searches the extended solar neighborhood, providing a statistically large sample and the detailed characteristics of each individual case. A robust concept has been developed and proposed as a Discovery-class mission. Its capabilities and strengths are presented.
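The photometric requirement behind transit detection follows from a simple ratio: the fractional dip in stellar flux is approximately (R_planet / R_star)^2. The sketch below evaluates it for an Earth-size and a Jupiter-size planet transiting a Sun-like star; this is a standard back-of-the-envelope calculation, not anything specific to the Kepler design.

```python
R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0
R_JUPITER_KM = 69_911.0

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    """Fractional drop in stellar flux during a central transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-size:   {transit_depth(R_EARTH_KM) * 1e6:.0f} ppm")    # ~84 ppm
print(f"Jupiter-size: {transit_depth(R_JUPITER_KM) * 1e6:.0f} ppm")  # ~1%
```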
Factors affecting the size of ovulatory follicles and conception rate in high-yielding dairy cows.
Mokhtari, A; Kafi, M; Zamiri, M J; Akbari, R
2016-03-01
Two studies were designed to determine (1) the effects of Heatsynch and Ovsynch protocols versus spontaneous ovulation and (2) the effects of calving problems, clinical uterine infections, and clinical mastitis on the size of the ovulatory follicle, conception rate, and embryonic/fetal (E/F) death in high-yielding dairy cows. In study 1, cows without the history of calving problems, clinical uterine infections, and clinical mastitis were randomly allocated to either an Ovsynch (n = 45) or Heatsynch (n = 39) ovulation synchronization protocol or spontaneous ovulation (n = 43) groups. Blood samples were collected on the day of artificial insemination (AI) to measure progesterone (P4), estradiol-17β, and insulin-like growth factor 1 (IGF-1) and 7 days later to measure P4. Study 2 consisted of cows (n = 351) with or without the history of calving problems, clinical uterine infections, and clinical mastitis which were artificially inseminated after a 55-day voluntary waiting period. Transrectal ultrasonography was performed at the time of AI to measure the ovulatory follicle size and on Days 30 and 68 after AI to diagnose pregnancy in both studies. In study 1, the mean (±standard error of the mean) diameter of the ovulatory follicle was greater (P = 0.0005) and E/F mortality was lower (P = 0.007) for the spontaneous ovulation group compared with Ovsynch and Heatsynch groups. Serum concentration of P4 on Day 7 after AI was correlated with the size of the ovulatory follicle (P = 0.007). Conception rate at Days 30 and 68 was not significantly different between the three experimental groups in study 1. Cows with serum IGF-1 concentrations greater than 55 ng/mL at AI had significantly higher Day 68 conception rate (50% vs. 24%) and lower E/F death (16.6% vs. 40%) compared to cows with serum IGF-1 concentrations lower than 56 ng/mL at AI. The conception rate on Days 30 and 68 for follicles of 10 to 14 mm in diameter (34% and 21.8%) was significantly lower than follicles of 14.1 to 19 mm in diameter (60% and 50%), respectively (P < 0.05). In study 2, the ovulatory follicle in cows with clinical uterine infections was smaller than that in cows without clinical uterine infections (16.4 vs. 17.1 mm; P = 0.04). In conclusion, the size of the ovulatory follicle is affected by ovulation synchronizing protocols and postpartum clinical uterine infections. In addition, cows with higher serum IGF-1 concentrations on the day of AI had higher Day 68 conception rate and lower E/F death. Copyright © 2016 Elsevier Inc. All rights reserved.
Round-Trip Solar Electric Propulsion Missions for Mars Sample Return
NASA Technical Reports Server (NTRS)
Bailey, Zachary J.; Sturm, Erick J.; Kowalkowski, Theresa D.; Lock, Robert E.; Woolley, Ryan C.; Nicholas, Austin K.
2014-01-01
Mars Sample Return (MSR) missions could benefit from the high specific impulse of Solar Electric Propulsion (SEP) to achieve lower launch masses than with chemical propulsion. SEP presents formulation challenges due to the coupled nature of launch vehicle performance, propulsion system, power system, and mission timeline. This paper describes a SEP orbiter-sizing tool, which models spacecraft mass & timeline in conjunction with low thrust round-trip Earth-Mars trajectories, and presents selected concept designs. A variety of system designs are possible for SEP MSR orbiters, with large dry mass allocations, similar round-trip durations to chemical orbiters, and reduced design variability between opportunities.
NASA Synthetic Vision EGE Flight Test
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J.; Kramer, Lynda J.; Comstock, J. Raymond; Bailey, Randall E.; Hughes, Monica F.; Parrish, Russell V.
2002-01-01
NASA Langley Research Center conducted flight tests at the Eagle County, Colorado airport to evaluate synthetic vision concepts. Three display concepts (size 'A' head-down, size 'X' head-down, and head-up displays) and two texture concepts (photo, generic) were assessed for situation awareness and flight technical error / performance while making approaches to Runway 25 and Runway 07 and simulated engine-out Cottonwood 2 and KREMM departures. The results of the study confirm the retrofit capability of the HUD and Size 'A' SVS concepts to significantly improve situation awareness and performance over current EFIS glass and non-glass instruments for difficult approaches in terrain-challenged environments.
MIXI: Mobile Intelligent X-Ray Inspection System
NASA Astrophysics Data System (ADS)
Arodzero, Anatoli; Boucher, Salime; Kutsaev, Sergey V.; Ziskin, Vitaliy
2017-07-01
A novel, low-dose Mobile Intelligent X-ray Inspection (MIXI) concept is being developed at RadiaBeam Technologies. The MIXI concept relies on a linac-based, adaptive, ramped energy source of short X-ray packets of pulses, a new type of fast X-ray detector, rapid processing of detector signals for intelligent control of the linac, and advanced radiography image processing. The key parameters for this system include: better than 3 mm line pair resolution; penetration greater than 320 mm of steel equivalent; scan speed with 100% image sampling rate of up to 15 km/h; and material discrimination over a range of thicknesses up to 200 mm of steel equivalent. Its minimal radiation dose, size and weight allow MIXI to be placed on a lightweight truck chassis.
Flow Cytometry Sorting to Separate Viable Giant Viruses from Amoeba Co-culture Supernatants.
Khalil, Jacques Y B; Langlois, Thierry; Andreani, Julien; Sorraing, Jean-Marc; Raoult, Didier; Camoin, Laurence; La Scola, Bernard
2016-01-01
Flow cytometry has contributed to virology but has faced many drawbacks concerning detection limits, due to the small size of viral particles. Nonetheless, giant viruses changed many concepts in the world of viruses, as a result of their size and hence opened up the possibility of using flow cytometry to study them. Recently, we developed a high throughput isolation of viruses using flow cytometry and protozoa co-culture. Consequently, isolating a viral mixture in the same sample became more common. Nevertheless, when one virus multiplies faster than others in the mixture, it is impossible to obtain a pure culture of the minority population. Here, we describe a robust sorting system, which can separate viable giant virus mixtures from supernatants. We tested three flow cytometry sorters by sorting artificial mixtures. Purity control was assessed by electron microscopy and molecular biology. As proof of concept, we applied the sorting system to a co-culture supernatant taken from a sample containing a viral mixture that we couldn't separate using end point dilution. In addition to isolating the quick-growing Mimivirus , we sorted and re-cultured a new, slow-growing virus, which we named "Cedratvirus." The sorting assay presented in this paper is a powerful and versatile tool for separating viral populations from amoeba co-cultures and adding value to the new field of flow virometry.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
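As a minimal illustration of the sampling idea behind PLHS (plain, non-progressive Latin hypercube sampling only; the progressive refinement and the variogram-based sensitivity metrics of VARS are not reproduced here), the sketch below draws a Latin hypercube sample over the unit hypercube.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=0):
    """Basic Latin hypercube sample on [0, 1]^n_dims.

    Each dimension is split into n_samples equal strata; one point is drawn
    per stratum and the strata are randomly permuted across dimensions.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_samples, n_dims))
    sample = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        perm = rng.permutation(n_samples)
        sample[:, d] = (perm + u[:, d]) / n_samples
    return sample

# 8 model runs over 3 uncertain parameters
print(latin_hypercube(8, 3))
```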
Modular space station phase B extension preliminary system design. Volume 7: Ancillary studies
NASA Technical Reports Server (NTRS)
Jones, A. L.
1972-01-01
Sortie mission analysis and reduced payloads size impact studies are presented. In the sortie mission analysis, a modular space station oriented experiment program to be flown by the space shuttle during the period prior to space station IOC is discussed. Experiments are grouped into experiment packages. Mission payloads are derived by grouping experiment packages and by adding support subsystems and structure. The operational and subsystems analyses of these payloads are described. Requirements, concepts, and shuttle interfaces are integrated. The sortie module/station module commonality and a sortie laboratory concept are described. In the payloads size analysis, the effect on the modular space station concept of reduced diameter and reduced length of the shuttle cargo bay is discussed. Design concepts are presented for reduced sizes of 12 by 60 ft, 14 by 40 ft, and 12 by 40 ft. Comparisons of these concepts with the modular station (14 by 60 ft) are made to show the impact of payload size changes.
Gauging an Alien World's Size (Artist Concept)
2014-07-23
Using data from NASA Kepler and Spitzer Space Telescopes, scientists have made the most precise measurement ever of the size of a world outside our solar system, as illustrated in this artist conception.
Interpretation of correlations in clinical research.
Hung, Man; Bounsanga, Jerry; Voss, Maren Wright
2017-11-01
Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
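The sample-size point can be made concrete with the usual t-statistic for a correlation coefficient, t = r sqrt((n-2)/(1-r^2)): the same modest correlation that is non-significant in a small sample becomes "highly significant" in a large one without becoming any more clinically meaningful. The sketch below (illustrative numbers only) computes the two-sided p-value for r = 0.12 at two sample sizes.

```python
from math import sqrt
from scipy.stats import t as t_dist

def correlation_p_value(r, n):
    """Two-sided p-value for Pearson r under the null of zero correlation."""
    t_stat = r * sqrt((n - 2) / (1 - r ** 2))
    return 2 * t_dist.sf(abs(t_stat), n - 2)

print(correlation_p_value(0.12, 50))     # ~0.41: not significant
print(correlation_p_value(0.12, 5000))   # tiny p-value, same small effect size
```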
"Optimal" Size and Schooling: A Relative Concept.
ERIC Educational Resources Information Center
Swanson, Austin D.
Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…
ERIC Educational Resources Information Center
Magana, Alejandra; Newby, Timothy; Brophy, Sean
2012-01-01
Education in nanotechnology presents major challenges in science literacy. One of these challenges relates to conveying size and scale-related concepts. Because of the potential difficulties in conveying concepts and ideas that are not visible to the naked eye, multimedia for learning could be an appropriate vehicle to deliver curricular materials…
Kato, Tsukasa
2016-04-30
Psychological inflexibility is a core concept in Acceptance and Commitment Therapy. The primary aim of this study was to examine psychological inflexibility and depressive symptoms among Asian English speakers. A total of 900 adults in India, the Philippines, and Singapore completed some measures related to psychological inflexibility and depressive symptoms through a Web-based survey. Multiple regression analyses revealed that higher psychological inflexibility was significantly associated with higher levels of depressive symptoms in all the samples, after controlling for the effects of gender, marital status, and interpersonal stress. In addition, the effect sizes of the changes in the R(2) values when only psychological flexibility scores were entered in the regression model were large for all the samples. Moreover, overall, the beta-weight of the psychological flexibility scores obtained by the Philippine sample was the lowest of all three samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ueji, R.; Tsuchida, N.; Harada, K.; Takaki, K.; Fujii, H.
2015-08-01
The grain size effect on deformation twinning in a high-manganese austenitic steel, so-called TWIP (twinning-induced plasticity) steel, was studied in order to understand how to control deformation twinning. The 31wt%Mn-3%Al-3%Si steel was cold rolled and annealed at various temperatures to obtain fully recrystallized structures with different mean grain sizes. These annealed sheets were examined by room-temperature tensile tests at a strain rate of 10^-4/s. The coarse-grained sample (grain size: 49.6 μm) showed many deformation twins, and the deformation twinning was preferentially found in grains in which the tensile axis is nearly parallel to [111]. On the other hand, the sample with finer grains (1.8 μm) had few grains with twinning even after the tensile deformation. The electron backscatter diffraction (EBSD) measurements clarified the relationship between the anisotropy of deformation twinning and that of inhomogeneous plastic deformation. Based on the EBSD analysis, the mechanism of the suppression of deformation twinning by grain refinement was discussed in terms of the slip pattern competition between the slip system governed by a grain boundary and that activated by the macroscopic load.
Assessing learning in small sized physics courses
NASA Astrophysics Data System (ADS)
Ene, Emanuela; Ackerson, Bruce J.
2018-01-01
We describe the construction, validation, and testing of a concept inventory for an Introduction to Physics of Semiconductors course offered by the department of physics to undergraduate engineering students. By design, this inventory addresses both content knowledge and the ability to interpret content via different cognitive processes outlined in Bloom's revised taxonomy. The primary challenge comes from the low number of test takers. We describe the Rasch modeling analysis for this concept inventory, and the results of the calibration on a small sample size, with the intention of providing a useful blueprint to other instructors. Our study involved 101 students from Oklahoma State University and fourteen faculty teaching or doing research in the field of semiconductors at seven universities. The items were written in four-option multiple-choice format. It was possible to calibrate a 30-item unidimensional scale precisely enough to characterize the student population enrolled each semester and, therefore, to allow the tailoring of the learning activities of each class. We show that this scale can be employed as an item bank from which instructors could extract short testlets and where we can add new items fitting the existing calibration.
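For readers unfamiliar with the model used for the calibration, the Rasch (one-parameter logistic) model gives the probability of a correct response as a function of person ability theta and item difficulty b; the sketch below simply evaluates that probability for illustrative values and makes no attempt to reproduce the estimation on the 101-student sample.

```python
import math

def rasch_probability(theta, difficulty):
    """P(correct) under the Rasch model: exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# An average student (theta = 0) on an easy, a medium, and a hard item
for b in (-1.0, 0.0, 1.5):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_probability(0.0, b):.2f}")
```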
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
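A stripped-down version of the comparison described above, with assumed numbers and ignoring the paper's risk adjustment: under a fixed phase II patient budget, smaller POC trials allow more hypotheses to be tested, and a benefit-cost ratio (true positives carried into phase III divided by total phase II plus phase III patients) can be tabulated per design. All parameter values below are hypothetical.

```python
def benefit_cost_ratio(n_poc, power, alpha, prior_active,
                       budget=3000, n_phase3=600):
    """Benefit-cost ratio for a portfolio of POC trials under a patient budget.

    All inputs are illustrative assumptions, not the paper's parameter values.
    """
    n_trials = budget // n_poc                      # how many POC trials fit
    true_hits = n_trials * prior_active * power     # active drugs advanced
    false_hits = n_trials * (1 - prior_active) * alpha
    total_patients = n_trials * n_poc + (true_hits + false_hits) * n_phase3
    return true_hits / total_patients

# Small trials with a high Go bar vs. larger, conventionally powered trials
print(benefit_cost_ratio(n_poc=50,  power=0.80, alpha=0.05, prior_active=0.2))
print(benefit_cost_ratio(n_poc=150, power=0.90, alpha=0.10, prior_active=0.2))
```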
In Situ Solid Particle Generator
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Vijayakumar, R.
2013-01-01
Particle seeding is a key diagnostic component of filter testing and flow imaging techniques. Typical particle generators rely on pressurized air or gas sources to propel the particles into the flow field. Other techniques involve liquid droplet atomizers. These conventional techniques have drawbacks that include challenging access to the flow field, flow and pressure disturbances to the investigated flow, and they are prohibitive in high-temperature, non-standard, extreme, and closed-system flow conditions and environments. In this concept, the particles are supplied directly within a flow environment. A particle sample cartridge containing the particles is positioned somewhere inside the flow field. The particles are ejected into the flow by mechanical brush/wiper feeding and sieving that takes place within the cartridge chamber. Some aspects of this concept are based on established material handling techniques, but they have not been used previously in the current configuration, in combination with flow seeding concepts, and in the current operational mode. Unlike other particle generation methods, this concept has control over the particle size range ejected, breaks up agglomerates, and is gravity-independent. This makes this device useful for testing in microgravity environments.
Joy and happiness: a simultaneous and evolutionary concept analysis.
Cottrell, Laura
2016-07-01
To report a simultaneous and evolutionary analysis of the concepts of joy and long-term happiness. Joy and happiness are underrepresented in the nursing literature, though negative concepts are well represented. When mentioned in the literature, neither joy nor happiness is adequately defined, explained, or clearly understood. To promote further investigation of these concepts in nursing and to explore their relationship with health and healing, conceptual clarity is an essential first step. Concept analysis. The following databases were searched, without time restrictions, for articles in English: Academic Search Complete, Anthropology Plus; ATLA Religious Database with ATLASerials; Cumulative Index of Nursing and Allied Health Literature (CINAHL); Education Research Complete; Humanities International Complete; Psych EXTRA; and SocINDEX with Full Text. The final sample size consists of 61 articles and one book, published between 1978-2014. An adapted combination of Rodgers' Evolutionary Model and Haase et al.'s Simultaneous Concept Analysis (SCA) method. Though both are positive concepts, joy and happiness have significant differences. Attributes of joy describe a spontaneous, sudden and transient concept associated with connection, awareness, and freedom. Attributes of happiness describe a pursued, long-lasting, stable mental state associated with virtue and self-control. Further exploration of joy and happiness is necessary to ascertain their relationship with health and their value to nursing practice and theory development. Nurses are encouraged to consider the value of positive concepts to all areas of nursing. © 2016 John Wiley & Sons Ltd.
Further statistics in dentistry. Part 4: Clinical trials 2.
Petrie, A; Bulman, J S; Osborn, J F
2002-11-23
The principles which underlie a well-designed clinical trial were introduced in a previous paper. The trial should be controlled (to ensure that the appropriate comparisons are made), randomised (to avoid allocation bias) and, preferably, blinded (to obviate assessment bias). However, taken in isolation, these concepts will not necessarily ensure that meaningful conclusions can be drawn from the study. It is essential that the sample size is large enough to enable the effects of interest to be estimated precisely, and to detect any real treatment differences.
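The sample-size requirement mentioned in the last sentence is usually met with the standard two-sample formula n per arm = 2(z_{1-α/2} + z_{1-β})² σ² / δ². The sketch below shows the calculation; the example numbers (a 1.0 mm difference in probing depth with SD 2.0 mm) are illustrative and not taken from the paper.

```python
from scipy.stats import norm

def two_arm_sample_size(delta, sigma, alpha=0.05, power=0.90):
    """Patients per arm to detect a mean difference delta at two-sided alpha,
    assuming a common standard deviation sigma (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# e.g. detecting a 1.0 mm difference in probing depth with SD 2.0 mm: ~84 per arm
print(round(two_arm_sample_size(delta=1.0, sigma=2.0)))
```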
2016-09-01
par. 4) Based on a RED projected size of 22.16 m, a sample calculation for the unadjusted single shot probability of kill for HELLFIRE missiles is...framework based on intelligent objects (SIMIO) environment to model a fast attack craft/fast inshore attack craft anti-surface warfare expanded kill chain...concept of operation efficiency. Based on the operational environment, low cost and less capable unmanned aircraft provide an alternative to the
Isolating magnetic moments from individual grains within a magnetic assemblage
NASA Astrophysics Data System (ADS)
Béguin, A.; Fabian, K.; Jansen, C.; Lascu, I.; Harrison, R.; Barnhoorn, A.; de Groot, L. V.
2017-12-01
Methods to derive paleodirections or paleointensities from rocks currently rely on measurements of bulk samples (typically 10 cc). The process of recording and storing magnetizations as a function of temperature, however, differs for grains of various sizes and chemical compositions. Most rocks, by their mere nature, consist of assemblages of grains varying in size, shape, and chemistry. Unraveling the behavior of individual grains is a holy grail in fundamental rock magnetism. Recently, we showed that it is possible to obtain plausible magnetic moments for individual grains in a synthetic sample by a micromagnetic tomography (MMT) technique. We use a least-squares inversion to obtain these magnetic moments based on the physical locations and dimensions of the grains obtained from a MicroCT scanner and a magnetic flux density map of the surface of the sample. The sample used for this proof of concept, however, was optimized for success: it had a low dispersion of the grains, and the grains were large enough so that they were easily detected by the MicroCT scanner. Natural lavas are much more complex than the synthetic sample analyzed so far: the dispersion of the magnetic markers is one order of magnitude higher, the grains differ more in composition and size, and many small (submicron) magnetic markers may be present that go undetected by the MicroCT scanner. Here we present the first results derived from a natural volcanic sample from the 1907 flow on Hawaii. To analyze the magnetic flux at the surface of the sample at room temperature, we used the Magnetic Tunneling Junction (MTJ) technique. We were able to successfully obtain MicroCT and MTJ scans from the sample and isolate plausible magnetic moments for individual grains in the top 70 µm of the sample. We discuss the potential of the MMT technique applied to natural samples and compare the MTJ and SSM methods in terms of work flow and quality of the results.
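The least-squares inversion at the heart of micromagnetic tomography can be sketched as follows: because the surface field is linear in the unknown grain moments, a design matrix built from a forward model can be inverted with ordinary least squares. The sketch uses a simple point-dipole approximation rather than the full grain geometry used in MMT, and the grain positions, moments, observation grid, and noise level are all invented for illustration.

```python
import numpy as np

MU0_4PI = 1e-7   # mu_0 / (4*pi) in SI units

def bz_design_row(obs_point, grain_pos):
    """Vertical field Bz at obs_point per unit dipole moment (mx, my, mz) at grain_pos."""
    r = obs_point - grain_pos
    d = np.linalg.norm(r)
    rx, ry, rz = r / d
    return (MU0_4PI / d**3) * np.array([3 * rz * rx, 3 * rz * ry, 3 * rz**2 - 1])

rng = np.random.default_rng(1)
grains = rng.uniform([0, 0, -50e-6], [1e-3, 1e-3, -5e-6], size=(20, 3))   # positions (m)
true_m = rng.normal(0.0, 1e-12, size=(20, 3))                             # moments (A m^2)

# Observation grid 1 micron above the sample surface
xs = np.linspace(0, 1e-3, 40)
obs = np.array([[x, y, 1e-6] for x in xs for y in xs])

# Design matrix: one block of three columns per grain, one row per map pixel
A = np.hstack([np.vstack([bz_design_row(o, g) for o in obs]) for g in grains])
bz_map = A @ true_m.ravel() + rng.normal(0.0, 1e-9, size=len(obs))        # add sensor noise

m_hat = np.linalg.lstsq(A, bz_map, rcond=None)[0].reshape(-1, 3)
print("worst-case moment error:", np.abs(m_hat - true_m).max())
```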
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra
2015-01-01
The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...
2015-08-19
We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.
Securing quality of camera-based biomedical optics
NASA Astrophysics Data System (ADS)
Guse, Frank; Kasper, Axel; Zinter, Bob
2009-02-01
As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management marks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.
In-Situ Resource Utilization Experiment for the Asteroid Redirect Crewed Mission
NASA Astrophysics Data System (ADS)
Elliott, J.; Fries, M.; Love, S.; Sellar, R. G.; Voecks, G.; Wilson, D.
2015-10-01
The Asteroid Redirect Crewed Mission (ARCM) represents a unique opportunity to perform in-situ testing of concepts that could lead to full-scale exploitation of asteroids for their valuable resources [1]. This paper describes a concept for an astronaut-operated "suitcase" experiment that would demonstrate asteroid volatile extraction using a solar-heated oven and integral cold trap in a configuration scalable to full-size asteroids. Conversion of liberated water into H2 and O2 products would also be demonstrated through an integral processing and storage unit. The plan also includes development of a local prospecting system consisting of a suit-mounted multi-spectral imager to aid the crew in choosing optimal samples, both for In-Situ Resource Utilization (ISRU) and for potential return to Earth.
Inverted Outflow Ground Testing of Cryogenic Propellant Liquid Acquisition Devices
NASA Technical Reports Server (NTRS)
Chato, David J.; Hartwig, Jason W.; Rame, Enrique; McQuillen, John B.
2014-01-01
NASA is currently developing propulsion system concepts for human exploration. These propulsion concepts will require the vapor-free acquisition and delivery of the cryogenic propellants stored in the propulsion tanks to the exploration vehicle's engines during periods of microgravity. Propellant management devices (PMDs), such as screen channel capillary liquid acquisition devices (LADs), vanes and sponges have been used for earth-storable propellants in the Space Shuttle Orbiter and other spacecraft propulsion systems, but only very limited propellant management capability currently exists for cryogenic propellants. NASA is developing PMD technology as a part of its cryogenic fluid management (CFM) project. System concept studies have looked at the key factors that dictate the size and shape of PMD devices and established screen channel LADs as an important component of PMD design. Modeling validated by normal gravity experiments is examining the behavior of the flow in the LAD channel assemblies (as opposed to only prior testing of screen samples) at flow rates representative of actual engine service (similar in size to current launch vehicle upper stage engines). Recent testing of rectangular LAD channels has included inverted outflow in liquid oxygen and liquid hydrogen. This paper reports the results of the liquid oxygen testing, compares and contrasts them with the recently published hydrogen results, and identifies the sensitivity of these results to flow rate and tank internal pressure.
Efficient Design and Analysis of Lightweight Reinforced Core Sandwich and PRSEUS Structures
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Yarrington, Phillip W.; Lucking, Ryan C.; Collier, Craig S.; Ainsworth, James J.; Toubia, Elias A.
2012-01-01
Design, analysis, and sizing methods for two novel structural panel concepts have been developed and incorporated into the HyperSizer Structural Sizing Software. Reinforced Core Sandwich (RCS) panels consist of a foam core with reinforcing composite webs connecting composite facesheets. Boeing's Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) panels use a pultruded unidirectional composite rod to provide axial stiffness along with integrated transverse frames and stitching. Both of these structural concepts are oven-cured and have shown great promise for applications in lightweight structures, but have suffered from the lack of efficient sizing capabilities similar to those that exist for honeycomb sandwich, foam sandwich, hat-stiffened, and other, more traditional concepts. Now, with accurate design methods for RCS and PRSEUS panels available in HyperSizer, these concepts can be traded and used in designs as is done with the more traditional structural concepts. The methods developed to enable sizing of RCS and PRSEUS panels are outlined, as are results showing the validity and utility of the methods. Applications include several large NASA heavy-lift launch vehicle structures.
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2(8) programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
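The effective sample size of importance weights, which the conclusions above weigh against running time, is the standard quantity ESS = (Σw)² / Σw². A minimal sketch follows (in Python; the framework itself is written in Java, and this is not its code):

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = (sum w)^2 / sum(w^2), computed stably from log importance weights."""
    lw = np.asarray(log_weights, dtype=float)
    w = np.exp(lw - lw.max())      # rescaling the weights leaves the ESS unchanged
    return w.sum() ** 2 / np.sum(w * w)

# e.g. 10,000 draws whose log-weights are roughly normal (illustrative only)
rng = np.random.default_rng(0)
print(effective_sample_size(rng.normal(0.0, 2.0, size=10_000)))
```

As the abstract notes, comparing proposals on ESS alone can reorder them once running time is also accounted for, since a cheaper proposal may afford many more draws per second.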
Shemesh, Noam; Ozarslan, Evren; Basser, Peter J; Cohen, Yoram
2010-01-21
NMR observable nuclei undergoing restricted diffusion within confining pores are important reporters for microstructural features of porous media including, inter-alia, biological tissues, emulsions and rocks. Diffusion NMR, and especially the single-pulsed field gradient (s-PFG) methodology, is one of the most important noninvasive tools for studying such opaque samples, enabling extraction of important microstructural information from diffusion-diffraction phenomena. However, when the pores are not monodisperse and are characterized by a size distribution, the diffusion-diffraction patterns disappear from the signal decay, and the relevant microstructural information is mostly lost. A recent theoretical study predicted that the diffusion-diffraction patterns in double-PFG (d-PFG) experiments have unique characteristics, such as zero-crossings, that make them more robust with respect to size distributions. In this study, we theoretically compared the signal decay arising from diffusion in isolated cylindrical pores characterized by lognormal size distributions in both s-PFG and d-PFG methodologies using a recently presented general framework for treating diffusion in NMR experiments. We showed the gradual loss of diffusion-diffraction patterns in broadening size distributions in s-PFG and the robustness of the zero-crossings in d-PFG even for very large standard deviations of the size distribution. We then performed s-PFG and d-PFG experiments on well-controlled size distribution phantoms in which the ground-truth is well-known a priori. We showed that the microstructural information, as manifested in the diffusion-diffraction patterns, is lost in the s-PFG experiments, whereas in d-PFG experiments the zero-crossings of the signal persist from which relevant microstructural information can be extracted. This study provides a proof of concept that d-PFG may be useful in obtaining important microstructural features in samples characterized by size distributions.
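A rough numerical illustration of the s-PFG half of the argument above: in the narrow-gradient-pulse, long-diffusion-time limit, the signal from cylinders of radius a (gradient perpendicular to the axis) follows the diffraction form |2 J1(2πqa)/(2πqa)|², and averaging it over a lognormal radius distribution fills in the diffraction minima, which is the loss of microstructural information described in the abstract. The mean radius, the distribution widths, and the spin-count weighting are assumptions for illustration; the d-PFG zero-crossing calculation is more involved and is not reproduced here.

```python
import numpy as np
from scipy.special import j1

def e_cylinder(q, a):
    """Narrow-pulse, long-diffusion-time s-PFG signal for a cylinder of radius a,
    gradient applied perpendicular to the cylinder axis (q > 0)."""
    x = 2 * np.pi * q * a
    return (2 * j1(x) / x) ** 2

def e_polydisperse(q, radii, weights):
    """Signal averaged over a radius distribution, weighted by spin count."""
    signals = np.array([e_cylinder(q, a) for a in radii])
    return (weights[:, None] * signals).sum(axis=0) / weights.sum()

q = np.linspace(1e3, 3e5, 400)              # gradient wave vector q (1/m)
mean_radius = 5e-6                          # 5 micron mean radius (assumed)
rng = np.random.default_rng(0)
for sigma in (0.05, 0.2, 0.5):              # width of the lognormal size distribution
    radii = rng.lognormal(np.log(mean_radius), sigma, size=2000)
    weights = radii ** 2                    # spins per cylinder ~ cross-section (assumed)
    signal = e_polydisperse(q, radii, weights)
    # Diffraction minima are deep for narrow distributions and wash out as sigma grows
    print(f"sigma = {sigma}: deepest dip = {signal.min():.2e}")
```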
Systems evaluation of thermal bus concepts
NASA Technical Reports Server (NTRS)
Stalmach, D. D.
1982-01-01
Thermal bus concepts, to provide a centralized thermal utility for large, multihundred-kilowatt space platforms, were studied and the results are summarized. Concepts were generated, defined, and screened for inclusion in system-level thermal bus trades. Parametric trade studies were conducted in order to define the operational envelope, performance, and physical characteristics of each. Two concepts were selected as offering the most promise for thermal bus development. All four concepts involved two-phase flow in order to meet the required isothermal nature of the thermal bus. Two of the concepts employ a mechanical means to circulate the working fluid, a liquid pump in one case and a vapor compressor in another. Another concept utilizes direct osmosis as the driving force of the thermal bus. The fourth concept was a high-capacity monogroove heat pipe. After preliminary sizing and screening, three of these concepts were selected to carry into the trade studies. The monogroove heat pipe concept was deemed unsuitable for further consideration because of its heat transport limitations. One additional concept utilizing capillary forces to drive the working fluid was added. Parametric system-level trade studies were performed. Sizing and weight calculations were performed for thermal bus sizes ranging from 5 to 350 kW and operating temperatures in the range of 4 to 120 °C. System-level considerations such as heat rejection and electrical power penalties and interface temperature losses were included in the weight calculations.
Planning assistance for the 30/20 GHz program, volume 2
NASA Technical Reports Server (NTRS)
Al-Kinani, G.; Frankfort, M.; Kaushal, D.; Markham, R.; Siperko, C.; Wall, M.
1981-01-01
In the baseline concept development, the communications payload on Flight 1 was specified to consist of on-board trunking and emergency communications systems (ECS). On Flight 2 the communications payload consisted of trunking and CPS on-board systems, the CPS capability replacing the Flight 1 ECS. No restriction was placed on the launch vehicle size. The constraints placed on the multiple-concept development effort were that the launch vehicle for Concept 1 was restricted to a SUSS-D and for Concept 2 to a SUSS-A. The design concept development was based on satisfying the baseline requirements set forth in the SOW for a single demonstration flight system. Key constraints on contractors were cost and launch vehicle size. Five major areas of new technology development were reviewed: (1) 30 GHz low noise receivers; (2) 20 GHz power amplifiers; (3) SS-TDMA switch; (4) baseband processor; (5) multibeam antennas.
Schollenberger, Martin; Radke, Wolfgang
2011-10-28
A gradient ranging from methanol to tetrahydrofuran (THF) was applied to a series of poly(methyl methacrylate) (PMMA) standards, using the recently developed concept of SEC-gradients. In contrast to conventional gradients, the samples eluted before the solvent, i.e. within the elution range typical for separations by SEC; however, the high molar mass PMMAs were retarded compared with experiments on the same column using pure THF as the eluent. The molar mass dependence on retention volume showed a complex behaviour, with nearly molar-mass-independent elution for high molar masses. This molar mass dependence was explained in terms of solubility and size exclusion effects. The solubility-based SEC-gradient was shown to be useful for separating PMMA and poly(n-butyl acrylate) (PnBuA) from a poly(t-butyl acrylate) (PtBuA) sample. These samples could be separated neither by SEC in THF, due to their very similar hydrodynamic volumes, nor by an SEC-gradient at adsorbing conditions, due to a too low selectivity. The example shows that SEC-gradients can be applied not only in adsorption/desorption mode, but also in precipitation/dissolution mode, without the risk of blocking capillaries or of breakthrough peaks. Thus, the new approach is a valuable alternative to conventional gradient chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.
Semantic-gap-oriented active learning for multilabel image annotation.
Tang, Jinhui; Zha, Zheng-Jun; Tao, Dacheng; Chua, Tat-Seng
2012-04-01
User interaction is an effective way to handle the semantic gap problem in image annotation. To minimize user effort in the interactions, many active learning methods have been proposed. These methods treat the semantic concepts individually or correlatively. However, they still neglect the key motivation of user feedback: to tackle the semantic gap. The size of the semantic gap of each concept is an important factor that affects the performance of user feedback. Users should pay more effort to the concepts with large semantic gaps, and vice versa. In this paper, we propose a semantic-gap-oriented active learning method, which incorporates the semantic gap measure into the information-minimization-based sample selection strategy. The basic learning model used in the active learning framework is an extended multilabel version of the sparse-graph-based semisupervised learning method that incorporates the semantic correlation. Extensive experiments conducted on two benchmark image data sets demonstrated the importance of bringing the semantic gap measure into the active learning process.
Activities in support of the wax-impregnated wallboard concept
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kedl, R.J.; Stovall, T.K.
1989-01-01
The concept of octadecane wax-impregnated wallboard for the passive solar application is a major thrust of the Oak Ridge National Laboratory (ORNL) Thermal Energy Storage (TES) program. Thus, ORNL has initiated a number of internal efforts in support of this concept. The results of these efforts are: the immersion process for filling wallboard with wax has been successfully scaled up from small samples to full-size sheets; analysis shows that the immersion process has the potential for achieving higher storage capacity than adding wax-filled pellets to wallboard during its manufacture; analysis indicates that 75 °F is close to an optimum phase change temperature for the non-passive solar application; and the thermal conductivity of wallboard without wax has been measured and will be measured for wax-impregnated wallboard. In addition, efforts are underway to confirm an analytical model that handles phase change wallboard for the passive solar application. 4 refs., 10 figs.
Exploiting virtual sediment deposits to explore conceptual foundations
NASA Astrophysics Data System (ADS)
Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian
2017-04-01
Geomorphic concepts and hypotheses are usually formulated based on empirical data from the field or the laboratory (deduction). After translation into models they can be applied to case study scenarios (induction). However, the other way around - expressing hypotheses explicitly as models and testing these against empirical data - is a rarely trodden path. There are several models tailored to investigate the boundary conditions and processes that generate, mobilise, route and eventually deposit sediment in a landscape. However, the last part, sediment deposition, is usually omitted. Essentially, there is no model that explicitly focuses on mapping out the characteristics of sedimentary deposits - the material that is used by many disciplines to reconstruct landscape evolution. This contribution introduces the R-package sandbox, a model framework that allows creating and analysing virtual sediment sections for exploratory, explanatory, forecasting and inverse research questions. The R-package sandbox is a probabilistic and rule-based model framework for a wide range of possible applications. The model framework is used here to discuss a set of conceptual questions revolving around geochemical and geochronological methods, such as: How do sample size and sample volume affect age uncertainty? What determines the robustness of sediment fingerprinting results? How does the prepared grain size of the material of interest affect the analysis outcomes? Most of the concepts used in geosciences are underpinned by a set of assumptions, whose robustness and boundary conditions need to be assessed quantitatively. The R-package sandbox is a universal and flexible tool to engage with this challenge.
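One of the conceptual questions listed above, how sample size affects age uncertainty, can be illustrated with a toy Monte Carlo; the sketch is in Python rather than R and does not use the sandbox package itself, and the deposit age and grain-to-grain scatter are invented numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_AGE, SCATTER = 12.0, 2.5   # deposit age (ka) and grain-to-grain scatter (assumed)

def age_estimate(n_grains):
    """Mean age obtained from n_grains grains picked at random from the virtual deposit."""
    return rng.normal(TRUE_AGE, SCATTER, size=n_grains).mean()

for n in (5, 20, 100, 500):
    estimates = [age_estimate(n) for _ in range(2000)]
    print(f"n = {n:3d} grains: {np.mean(estimates):5.2f} +/- {np.std(estimates):.2f} ka")
# The spread of the estimates shrinks roughly as SCATTER / sqrt(n).
```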
Space-time precipitation extremes for urban hydrology
NASA Astrophysics Data System (ADS)
Bardossy, A.; Pegram, G. G. S.
2017-12-01
Precipitation extremes are essential for hydrological design. In urban hydrology, intensity-duration-frequency (IDF) curves are estimated from observation records to design sewer systems. The conventional approaches seldom consider the areal extent of events; if they do, duration-dependent areal reduction factors (ARFs) are applied. In this contribution we investigate the influence of the size of the target urban area on the frequency of occurrence of extremes. We introduce two new concepts, (i) the maximum over an area and (ii) the sub-areal extremes, and discuss their properties. The space-time dependence of extremes strongly influences these statistics. The findings of this presentation show that the risk of urban flooding is routinely underestimated. We demonstrate this by sampling a long sequence of radar rainfall fields of 1 km resolution, rather than the usual limited information from gauge records at scattered point locations. The procedure is to generate 20 years of plausible 'radar' fields of 5-minute precipitation on a square frame of 128x128 one-kilometer pixels and sample them in a regimented way. We find that the traditional calculations underestimate the extremes, by up to 30% to 50% depending on size and duration, and we show how they can be revised sensibly. The methodology devised from simulated radar fields is checked against the records of a dense network of pluviometers covered by a radar in Baden-Württemberg, with a (regrettably) short 4-year record, as proof of concept.
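The "maximum over an area" concept can be illustrated with a toy simulation (not the authors' radar-based procedure): compare the extreme recorded at one fixed gauge pixel with the extreme of the block-averaged rainfall over a target-sized area anywhere in the domain. The synthetic fields, correlation length, and block size below are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(7)
N_EVENTS, SIZE, BLOCK = 1000, 128, 8    # events, 128x128 km grid, 8x8 km target area

point_max, area_max = [], []
for _ in range(N_EVENTS):
    # Synthetic spatially correlated "rain" field (lognormal, ~10 km correlation length)
    z = gaussian_filter(rng.normal(size=(SIZE, SIZE)), sigma=10)
    field = np.exp(2.0 * z / z.std())
    point_max.append(field[SIZE // 2, SIZE // 2])             # one fixed gauge pixel
    area_max.append(uniform_filter(field, size=BLOCK).max())  # worst-hit 8x8 km block

print("95th percentile at the fixed gauge :", round(np.quantile(point_max, 0.95), 2))
print("95th percentile over an 8x8 km area:", round(np.quantile(area_max, 0.95), 2))
```

In this toy setting the design value based on a single fixed gauge is noticeably lower than the value relevant to a whole target area, which is the direction of the underestimation the abstract describes.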
GEMINI SPACECRAFT - ARTIST CONCEPT
1964-01-01
S64-22331 (1964) --- Artist concept illustrating the relative sizes of the one-man Mercury spacecraft, the two-man Gemini spacecraft, and the three-man Apollo spacecraft. Also shows line drawing of launch vehicles to show their relative size in relation to each other. Photo credit: NASA
Structural Design and Sizing of a Metallic Cryotank Concept
NASA Technical Reports Server (NTRS)
Sleight, David W.; Martin, Robert A.; Johnson, Theodore F.
2013-01-01
This paper presents the structural design and sizing details of a 33-foot (10 m) metallic cryotank concept used as the reference design to compare with the composite cryotank concepts developed by industry as part of NASA's Composite Cryotank Technology Development (CCTD) Project. The structural design methodology and analysis results for the metallic cryotank concept are reported in the paper. The paper describes the details of the metallic cryotank sizing assumptions for the baseline and reference tank designs. In particular, the paper discusses the details of the cryotank weld land design and the analyses performed to obtain a reduced-weight metallic cryotank design using current materials and manufacturing techniques. The paper also discusses advanced manufacturing techniques to spin-form the cryotank domes and compares the potential mass savings to current friction stir-welded technology.
Preliminary Sizing of 120-Passenger Advanced Civil Rotorcraft Concepts
NASA Technical Reports Server (NTRS)
vanAken, Johannes M.; Sinsay, Jeffrey D.
2006-01-01
The results of a preliminary sizing study of advanced civil rotorcraft concepts that are capable of carrying 120 passengers over a range of 1,200 nautical miles are presented. The cruise altitude of these rotorcraft is 30,000 ft and the cruise velocity is 350 knots. The mission requires a hover capability, creating a runway independent solution, which might aid in reducing strain on the existing airport infrastructure. Concepts studied are a tiltrotor, a tandem rotor compound, and an advancing blade concept. The first objective of the study is to determine the relative merits of these designs in terms of mission gross weight, engine size, fuel weight, aircraft purchase price, and direct operating cost. The second objective is to identify the enabling technology for these advanced heavy lift civil rotorcraft.
Ogawa, Tatsuya; Omon, Kyohei; Yuda, Tomohisa; Ishigaki, Tomoya; Imai, Ryota; Ohmatsu, Satoko; Morioka, Shu
2016-01-01
Objective: To investigate the short-term effects of the life goal concept on subjective well-being and treatment engagement, and to determine the sample size required for a larger trial. Design: A quasi-randomized controlled trial that was not blinded. Setting: A subacute rehabilitation ward. Subjects: A total of 66 patients were randomized to a goal-setting intervention group with the life goal concept (Life Goal), a standard rehabilitation group with no goal-setting intervention (Control 1), or a goal-setting intervention group without the life goal concept (Control 2). Interventions: The goal-setting intervention in the Life Goal and Control 2 was Goal Attainment Scaling. The Life Goal patients were assessed in terms of their life goals, and the hierarchy of goals was explained. The intervention duration was four weeks. Main measures: Patients were assessed pre- and post-intervention. The outcome measures were the Hospital Anxiety and Depression Scale, 12-item General Health Questionnaire, Pittsburgh Rehabilitation Participation Scale, and Functional Independence Measure. Results: Of the 296 potential participants, 66 were enrolled; Life Goal (n = 22), Control 1 (n = 22) and Control 2 (n = 22). Anxiety was significantly lower in the Life Goal (4.1 ±3.0) than in Control 1 (6.7 ±3.4), but treatment engagement was significantly higher in the Life Goal (5.3 ±0.4) compared with both the Control 1 (4.8 ±0.6) and Control 2 (4.9 ±0.5). Conclusions: The life goal concept had a short-term effect on treatment engagement. A sample of 31 patients per group would be required for a fully powered clinical trial. PMID:27496700
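For context on the closing sample-size statement, a conventional two-sample power calculation of the kind that could yield roughly 31 patients per group is sketched below; the standardized effect size used (d of about 0.71) is our assumption, since the abstract does not state which outcome or attrition allowance the authors based their figure on.

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison at standardized effect d."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * ((z_a + z_b) / d) ** 2

# A standardized difference of about d = 0.71 gives roughly 31 patients per group
print(round(n_per_group(0.71)))   # -> 31
```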
Bounds on the sample complexity for private learning and private data release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasiviswanathan, Shiva; Beimel, Amos; Nissim, Kobbi
2009-01-01
Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on size of the instance space is essential for private data release.
Emergent literacy profiles of preschool-age children with specific language impairment.
Cabell, Sonia Q; Lomax, Richard G; Justice, Laura M; Breit-Smith, Allison; Skibbe, Lori E; McGinty, Anita S
2010-12-01
The primary aim of the present study was to explore the heterogeneity of emergent literacy skills among preschool-age children with specific language impairment (SLI) through examination of profiles of performance. Fifty-nine children with SLI were assessed on a battery of emergent literacy skills (i.e., alphabet knowledge, print concepts, emergent writing, rhyme awareness) and oral language skills (i.e., receptive/expressive vocabulary and grammar). Cluster analysis techniques identified three emergent literacy profiles: (1) Highest Emergent Literacy, Strength in Alphabet Knowledge; (2) Average Emergent Literacy, Strength in Print Concepts; and (3) Lowest Emergent Literacy across Skills. After taking into account the contribution of child age, receptive and expressive language skills made a small contribution to the prediction of profile membership. The present findings, which may be characterized as exploratory given the relatively modest sample size, suggest that preschool-age children with SLI display substantial individual differences with regard to their emergent literacy skills and that these differences cannot be fully determined by children's age or oral language performance. Replication of the present findings with a larger sample of children is needed.
Kepler-186f, the First Earth-size Planet in the Habitable Zone Artist Concept
2014-04-17
This artist concept depicts Kepler-186f, the first validated Earth-size planet to orbit a distant star in the habitable zone, a range of distance from a star where liquid water might pool on the planet surface.
Propulsion system assessment for very high UAV under ERAST
NASA Technical Reports Server (NTRS)
Bettner, James L.; Blandford, Craig S.; Rezy, Bernie J.
1995-01-01
A series of propulsion systems were configured to power a sensor platform to very high altitudes under the Experimental Research Advanced Sensor Technology (ERAST) program. The unmanned aircraft was required to carry a 100 kg instrument package to 90,000 ft altitude, collect samples and make scientific measurements for 4 hr, and then return to base. A performance screening evaluation of 11 propulsion systems for this high altitude mission was conducted. Engine configurations ranged from turboprop, spark ignition, two- and four-stroke diesel, rotary, and fuel cell concepts. Turbo and non-turbo-compounded, recuperated and nonrecuperated arrangements, along with regular JP and hydrogen fuels were interrogated. Each configuration was carried through a preliminary design where all turbomachinery, heat exchangers, and engine core concepts were sized and weighed for near-optimum design point performance. Mission analysis, which sized the aircraft for each of the propulsion systems investigated, was conducted. From the array of configurations investigated, the propulsion system for each of three different technology levels (i.e., state of the art, near term, and far term) that was best suited for this very high altitude mission was identified and recommended for further study.
NASA Technical Reports Server (NTRS)
Stapelfeldt, Karl R.; Brenner, Michael P.; Warfield, Keith R.; Dekens, Frank G.; Belikov, Ruslan; Brugarolas, Paul B.; Bryden, Geoffrey; Cahoy, Kerri L.; Chakrabarti, Supriya; Dubovitsky, Serge;
2014-01-01
"Exo-C" is NASA's first community study of a modest aperture space telescope designed for high contrast observations of exoplanetary systems. The mission will be capable of taking optical spectra of nearby exoplanets in reflected light, discover previously undetected planets, and imaging structure in a large sample of circumstellar disks. It will obtain unique science results on planets down to super-Earth sizes and serve as a technology pathfinder toward an eventual flagship-class mission to find and characterize habitable exoplanets. We present the mission/payload design and highlight steps to reduce mission cost/risk relative to previous mission concepts. At the study conclusion in 2015, NASA will evaluate it for potential development at the end of this decade. Keywords: Exoplanets, high contrast imaging, optical astronomy, space mission concepts
Connecting coherent structures and strange attractors
NASA Technical Reports Server (NTRS)
Keefe, Laurence R.
1990-01-01
A concept of turbulence derived from nonlinear dynamical systems theory suggests that turbulent solutions to the Navier-Stokes equations are restricted to strange attractors, and, by implication, that turbulent phenomenology must find some expression or source in the structure of these mathematical objects. Examples and discussions are presented to link coherent structures to some of the commonly known characteristics of strange attractors. Basic to this link is a geometric interpretation of conditional sampling techniques employed to educe coherent structures that offers an explanation for their appearance in measurements as well as their size.
Flight Test Evaluation of Synthetic Vision Concepts at a Terrain Challenged Airport
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prince, Lawrence J., III; Bailey, Randell E.; Arthur, Jarvis J., III; Parrish, Russell V.
2004-01-01
NASA's Synthetic Vision Systems (SVS) Project is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the tunnel guidance display concept used within the SVS concepts achieved required navigation performance (RNP) criteria.
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Jones, Leslie C.; Pope, Alan T.
2003-01-01
Spatial disorientation (SD) is a constant contributing factor to the rate of fatal aviation accidents. SD occurs as a result of perceptual errors that can be attributed in part to the inefficient presentation of synthetic orientation cues via the attitude indicator when external visual conditions are poor. Improvements in the design of the attitude indicator may help to eliminate instrumentation as a factor in the onset of SD. The goal of the present study was to explore several display concepts that may contribute to an improved attitude display. Specifically, the effectiveness of various display sizes, some that are used in current and some that are anticipated in future attitude displays that may incorporate Synthetic Vision Systems (SVS) concepts, was assessed. In addition, a concept known as an extended horizon line or Malcolm Horizon (MH) was applied and evaluated. Paired with the MH, the novel concept of a fixed reference line representing the central horizontal plane of the aircraft was also tested. Subjects' performance on an attitude control task and a secondary math workload task was measured across the various display sizes and conditions. The results, with regard to display size, confirmed the "bigger is better" concept, yielding better performance with the larger display sizes. A clear and significant improvement in attitude task performance was found with the addition of the extended horizon line. The extended or Malcolm Horizon seemed to equalize attitude performance across display sizes, even for a central or foveal display as small as three inches in width.
The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness.
Wong, Ho Ting; Chiang, Vico Chung Lim; Choi, Kup Sze; Loke, Alice Yuen
2016-10-17
The rapid development of technology has made enormous volumes of data available and achievable anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term "Big Data", which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding on what Big Data is, the aim in this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing.
Hartadi, Yeusy; Widmann, Daniel; Behm, R Jürgen
2015-02-01
The potential of metal oxide supported Au catalysts for the formation of methanol from CO2 and H2 under conditions favorable for decentralized and local conversion, which could be concepts for chemical energy storage, was investigated. Significant differences in the catalytic activity and selectivity of Au/Al2O3, Au/TiO2, Au/ZnO, and Au/ZrO2 catalysts for methanol formation under moderate reaction conditions at a pressure of 5 bar and temperatures between 220 and 240 °C demonstrate pronounced support effects. A high selectivity (>50 %) for methanol formation was obtained only for Au/ZnO. Furthermore, measurements on Au/ZnO samples with different Au particle sizes reveal distinct Au particle size effects: although the activity increases strongly with the decreasing particle size, the selectivity decreases. The consequences of these findings for the reaction mechanism and for the potential of Au/ZnO catalysts for chemical energy storage and a "green" methanol technology are discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
A Mars Sample Return Sample Handling System
NASA Technical Reports Server (NTRS)
Wilson, David; Stroker, Carol
2013-01-01
We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble-sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable to NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities and will require, for Earth-based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups, each with a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble-sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping them frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference-fitted, heat-activated memory alloy caps [5] if the heating does not affect the sample, or by crimping caps similar to bottle capping. We prefer that cap sealing surfaces be external to the cup rim to prevent sample dust inside the cups interfering with sealing, or contamination of the sample by Teflon seal elements (if adopted). Finally, the sample collection rover, or a Fetch rover, selects cups with the best choice of samples and loads them into a sample tray, before delivering it to the Earth Return Vehicle (ERV) in the MSR Dragon capsule as described in [1] (Fig 1). This ensures best use of the MSR payload mass allowance. A 3 meter long jointed robot arm is extended from the Dragon capsule's crew hatch, retrieves the sample tray and inserts it into the sample canister payload located on the ERV stage. The robot arm has the capacity to obtain grab samples in the event of a rover failure. The sample canister has a robot arm capture casting to enable capture by crewed or robot spacecraft when it returns to Earth orbit.
Matuszewski, Szymon; Frątczak-Łagiewska, Katarzyna
2018-02-05
Insects colonizing human or animal cadavers may be used to estimate the post-mortem interval (PMI), usually by aging larvae or pupae sampled at a crime scene. The accuracy of insect age estimates in a forensic context is reduced by large intraspecific variation in insect development time. Here we test the concept that insect size at emergence may be used to predict insect physiological age and accordingly to improve the accuracy of age estimates in forensic entomology. Using results of a laboratory study on the development of the forensically useful beetle Creophilus maxillosus (Linnaeus, 1758) (Staphylinidae), we demonstrate that its physiological age at emergence [i.e. the thermal summation value (K) needed for emergence] falls with an increase in beetle size. In the validation study it was found that K estimated based on the adult insect size was significantly closer to the true K than K from the general thermal summation model. Using beetle length at emergence as a predictor variable with male- or female-specific models regressing K against beetle length gave the most accurate predictions of age. These results demonstrate that the size of C. maxillosus at emergence improves the accuracy of age estimates in a forensic context.
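The thermal summation logic referred to above can be sketched as follows: development requires a fixed number of degree-days K above a base temperature, and the paper's finding is that K itself decreases with adult size. The base temperature, the linear K-versus-length coefficients, and the temperature record in the sketch are invented placeholders, not the published C. maxillosus models.

```python
import numpy as np

T_BASE = 10.0   # developmental base temperature in deg C (assumed, not from the paper)

def k_from_length(length_mm, a=900.0, b=-15.0):
    """Hypothetical size-specific thermal summation value needed for emergence:
    K = a + b * length, so larger adults imply a smaller K, as reported above."""
    return a + b * length_mm

# Invented daily mean temperatures at the scene and a 20 mm adult found at emergence
temps = np.array([14, 15, 17, 18, 20, 21, 19, 18, 17, 16] * 10, dtype=float)  # 100 days
k_needed = k_from_length(20.0)                                 # degree-days required

degree_days = np.cumsum(np.clip(temps - T_BASE, 0.0, None))    # accumulated K, day by day
days_to_emergence = int(np.searchsorted(degree_days, k_needed)) + 1
print(f"size-adjusted K = {k_needed:.0f} degree-days -> ~{days_to_emergence} days of development")
```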
NASA Technical Reports Server (NTRS)
Luzhanskiy, Edward; Choa, Fow-Sen; Merritt, Scott; Yu, Anthony; Krainak, Michael
2015-01-01
The low-complexity, low size, weight and power Mid-Wavelength Infra-Red optical communications transceiver concept is presented, realized, and tested in the laboratory environment. Resilience to atmospheric impairments is analyzed with simulated turbulence. Performance is compared to that of a typical telecom-based Short Wavelength Infra-Red transceiver.
Semantic size of abstract concepts: it gets emotional when you can't see it.
Yao, Bo; Vasiljevic, Milica; Weick, Mario; Sereno, Margaret E; O'Donnell, Patrick J; Sereno, Sara C
2013-01-01
Size is an important visuo-spatial characteristic of the physical world. In language processing, previous research has demonstrated a processing advantage for words denoting semantically "big" (e.g., jungle) versus "small" (e.g., needle) concrete objects. We investigated whether semantic size plays a role in the recognition of words expressing abstract concepts (e.g., truth). Semantically "big" and "small" concrete and abstract words were presented in a lexical decision task. Responses to "big" words, regardless of their concreteness, were faster than those to "small" words. Critically, we explored the relationship between semantic size and affective characteristics of words as well as their influence on lexical access. Although a word's semantic size was correlated with its emotional arousal, the temporal locus of arousal effects may depend on the level of concreteness. That is, arousal seemed to have an earlier (lexical) effect on abstract words, but a later (post-lexical) effect on concrete words. Our findings provide novel insights into the semantic representations of size in abstract concepts and highlight that affective attributes of words may not always index lexical access.
Model-based estimation of individual fitness
Link, W.A.; Cooch, E.G.; Cam, E.
2002-01-01
Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefer, Michael; Kumar, Ajay; Mohan Sankaran, R.
Microplasma-assisted gas-phase nucleation has emerged as an important new approach to produce high-purity, nanometer-sized, and narrowly dispersed particles. This study aims to integrate this technique with vacuum conditions to enable synthesis and deposition in an ultrahigh-vacuum-compatible environment. The ultimate goal is to combine nanoparticle synthesis with photoemission spectroscopy-based electronic structure analysis. Such measurements require in vacuo deposition to prevent surface contamination from sample transfer, which can be deleterious for nanoscale materials. A homebuilt microplasma reactor was integrated into an existing atomic layer deposition system attached to a surface science multi-chamber system equipped with photoemission spectroscopy. As proof-of-concept, we studied the decomposition of ferrocene vapor in the microplasma to synthesize iron oxide nanoparticles. The injection parameters were optimized to achieve complete precursor decomposition under vacuum conditions, and nanoparticles were successfully deposited. The stoichiometry of the deposited samples was characterized in situ using X-ray photoelectron spectroscopy, indicating that iron oxide was formed. Additional transmission electron microscopy characterization allowed the determination of the size, shape, and crystal lattice of the particles, confirming their structural properties.
Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S
2017-11-20
Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.
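For reference, the standard event-count calculation under the PH assumption that the authors compare against can be written down directly from the Schoenfeld approximation. The sketch below implements that generic formula in Python; the hazard ratio, power, allocation ratio and overall event probability are assumed values, and the proportional time method itself is not shown.

```python
import math
from scipy.stats import norm

def events_required(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld approximation: number of events needed for a two-sided
    logrank/Cox test of a given hazard ratio under proportional hazards."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

d = events_required(hazard_ratio=0.70)   # assumed treatment effect
n_total = d / 0.60                       # assumed overall event probability of 60%
print(round(d), "events,", round(n_total), "patients")
```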
Technology needs for high-speed rotorcraft
NASA Technical Reports Server (NTRS)
Rutherford, John; Orourke, Matthew; Martin, Christopher; Lovenguth, Marc; Mitchell, Clark
1991-01-01
A study to determine the technology development required for high-speed rotorcraft was conducted. The study began with an initial assessment of six concepts capable of flight at or above 450 knots with helicopter-like hover efficiency (disk loading less than 50 psf). These concepts were sized and evaluated based on measures of effectiveness and operational considerations. Additionally, an initial assessment of the impact of technology advances on the vehicles' attributes was made. From these initial concepts, a tilt wing and a rotor/wing concept were selected for further evaluation. More detailed conversion and technology trade studies were conducted on these two vehicles, each sized for a different mission.
MapX: 2D XRF for Planetary Exploration - Image Formation and Optic Characterization
NASA Astrophysics Data System (ADS)
Sarrazin, P.; Blake, D.; Gailhanou, M.; Marchis, F.; Chalumeau, C.; Webb, S.; Walter, P.; Schyns, E.; Thompson, K.; Bristow, T.
2018-04-01
Map-X is a planetary instrument concept for 2D X-Ray Fluorescence (XRF) spectroscopy. The instrument is placed directly on the surface of an object and held in a fixed position during the measurement. The formation of XRF images on the CCD detector relies on a multichannel optic configured for 1:1 imaging and can be analyzed through the point spread function (PSF) of the optic. The PSF can be directly measured using a micron-sized monochromatic X-ray source in place of the sample. Such PSF measurements were carried out at the Stanford Synchrotron and are compared with ray tracing simulations. It is shown that artifacts are introduced by the periodicity of the PSF at the channel scale and the proximity of the CCD pixel size and the optic channel size. A strategy of sub-channel random moves was used to cancel out these artifacts and provide a clean experimental PSF directly usable for XRF image deconvolution.
NASA Technical Reports Server (NTRS)
Li, C. H.; Busch, G.; Creter, C.
1976-01-01
The Metals Melting Skylab Experiment consisted of selectively melting, in sequence, three rotating discs made of aluminum alloy, stainless steel, and tantalum alloy. For comparison, three other discs of the same three materials were similarly melted or welded on the ground. The power source for the melting was an electron beam unit. Results are presented which support the concept that the major difference between ground-based and Skylab samples (i.e., large elongated grains in ground-based samples versus nearly equiaxed and equal-sized grains in Skylab samples) can be explained on the basis of constitutional supercooling, and not on the basis of surface phenomena. Microstructural observations on the weld samples are examined and explanations for some of these observations are presented. In particular, ripples and their implications for weld solidification were studied. Evidence of pronounced copper segregation in the Skylab Al weld samples, and in the tantalum samples studied, indicates a weld microhardness (and hence strength) that is uniformly higher than the ground-based results, which is in agreement with previous predictions. Photographs are shown of the microstructure of the various alloys.
Hybrid Wing Body Configuration System Studies
NASA Technical Reports Server (NTRS)
Nickol, Craig L.; McCullers, Arnie
2009-01-01
The objective of this study was to develop a hybrid wing body (HWB) sizing and analysis capability, apply that capability to estimate the fuel burn potential for an HWB concept, and identify associated technology requirements. An advanced tube-with-wings concept was also developed for comparison purposes. NASA's Flight Optimization System (FLOPS) conceptual aircraft sizing and synthesis software was modified to enable the sizing and analysis of HWB concepts. The noncircular pressurized centerbody of the HWB concept was modeled, and several options were created for defining the outboard wing sections. Weight and drag estimation routines were modified to accommodate the unique aspects of an HWB configuration. The resulting capability was then utilized to model a proprietary Boeing blended wing body (BWB) concept for comparison purposes. FLOPS predicted approximately 15 percent greater drag, mainly caused by differences in compressibility drag estimation, and approximately 5 percent greater takeoff gross weight, mainly caused by the additional fuel required, as compared with the Boeing data. Next, a 777-like reference vehicle was modeled in FLOPS and calibrated to published Boeing performance data; the same mission definition was used to size an HWB in FLOPS. Advanced airframe and propulsion technology assumptions were applied to the HWB to develop an estimate for potential fuel burn savings from such a concept. The same technology assumptions, where applicable, were then applied to an advanced tube-with-wings concept. The HWB concept had a 39 percent lower block fuel burn than the reference vehicle and a 12 percent lower block fuel burn than the advanced tube-with-wings configuration. However, this fuel burn advantage is partially derived from assuming the high-risk technology of embedded engines with boundary-layer-ingesting inlets. The HWB concept does have the potential for significantly reduced noise as a result of the shielding advantages that are inherent with an over-body engine installation.
NASA Technical Reports Server (NTRS)
Nickol, Craig L.; Haller, William J.
2016-01-01
NASA's Environmentally Responsible Aviation (ERA) project has matured technologies to enable simultaneous reductions in fuel burn, noise, and nitrogen oxide (NOx) emissions for future subsonic commercial transport aircraft. The fuel burn reduction target was a 50% reduction in block fuel burn (relative to a 2005 best-in-class baseline aircraft), utilizing technologies with an estimated Technology Readiness Level (TRL) of 4-6 by 2020. Progress towards this fuel burn reduction target was measured through the conceptual design and analysis of advanced subsonic commercial transport concepts spanning vehicle size classes from regional jet (98 passengers) to very large twin aisle size (400 passengers). Both conventional tube-and-wing (T+W) concepts and unconventional (over-wing-nacelle (OWN), hybrid wing body (HWB), mid-fuselage nacelle (MFN)) concepts were developed. A set of propulsion and airframe technologies were defined and integrated onto these advanced concepts which were then sized to meet the baseline mission requirements. Block fuel burn performance was then estimated, resulting in reductions relative to the 2005 best-in-class baseline performance ranging from 39% to 49%. The advanced single-aisle and large twin aisle T+W concepts had reductions of 43% and 41%, respectively, relative to the 737-800 and 777-200LR aircraft. The single-aisle OWN concept and the large twin aisle class HWB concept had reductions of 45% and 47%, respectively. In addition to their estimated fuel burn reduction performance, these unconventional concepts have the potential to provide significant noise reductions due, in part, to engine shielding provided by the airframe. Finally, all of the advanced concepts also have the potential for significant NOx emissions reductions due to the use of advanced combustor technology. Noise and NOx emissions reduction estimates were also generated for these concepts as part of the ERA project.
The NIMH Research Domain Criteria Initiative: Background, Issues, and Pragmatics.
Kozak, Michael J; Cuthbert, Bruce N
2016-03-01
This article describes the National Institute of Mental Health's Research Domain Criteria (RDoC) initiative. The description includes background, rationale, goals, and the way the initiative has been developed and organized. The central RDoC concepts are summarized and the current matrix of constructs that have been vetted by workshops of extramural scientists is depicted. A number of theoretical and methodological issues that can arise in connection with the nature of RDoC constructs are highlighted: subjectivism and heterophenomenology, desynchrony and theoretical neutrality among units of analysis, theoretical reductionism, endophenotypes, biomarkers, neural circuits, construct "grain size," and analytic challenges. The importance of linking RDoC constructs to psychiatric clinical problems is discussed. Some pragmatics of incorporating RDoC concepts into applications for NIMH research funding are considered, including sampling design. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
NASA Astrophysics Data System (ADS)
Wang, Bo; Ji, Jing; Li, Kang
2016-09-01
Currently, production of porous polymeric membranes for filtration is dominated by the phase-separation process. However, this method has reached its technological limit, and there has been no significant breakthrough over the last decade. Here we show, using polyvinylidene fluoride as a sample polymer, that a new concept of membrane manufacturing, combining oriented green-solvent crystallization and polymer migration, is able to produce high-performance membranes with pure water permeation flux substantially higher than that of membranes with similar pore size prepared by conventional phase-separation processes. The new manufacturing procedure is governed by fewer operating parameters and is, thus, easier to control with reproducible results. Apart from the high water permeation flux, the prepared membranes also show excellent stable flux after fouling and superior mechanical properties, including higher pressure tolerance and better abrasion resistance. These findings demonstrate the promise of a new concept for the green manufacturing of nanostructured polymeric membranes with high performance.
ERIC Educational Resources Information Center
Hall, Peter M.; Spencer-Hall, Dee Ann
A study of two small-to-middle-sized midwestern school districts, each observed for over a year, shows that the negotiated order concept can provide a useful framework for viewing schools' organizational functions. According to the negotiated order concept, organizational relationships require constant negotiations concerning values, goals, rules,…
USDA-ARS?s Scientific Manuscript database
The objectives of this work were to estimate genetic effects for age and size at estimated time of first conception, and temperament in straightbred and crossbred heifers (n = 554) produced from Romosinuano, Brahman, and Angus cattle, and to evaluate first parturition performance of heifers, includi...
Confirmatory factor analysis applied to the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Eaton, Philip; Willoughby, Shannon D.
2018-06-01
In 1995, Huffman and Heller used exploratory factor analysis to call into question the factors of the Force Concept Inventory (FCI). Since then, several papers have been published examining the factors of the FCI on larger sets of student responses, and understandable factors were extracted as a result. However, none of these proposed factor models have been verified, through the use of independent sets of data, to not be unique to their original sample. This paper seeks to confirm the factor models proposed by Scott et al. in 2012, and Hestenes et al. in 1992, as well as another expert model proposed within this study, through the use of confirmatory factor analysis (CFA) and a sample of 20,822 post-instruction student responses to the FCI. Upon application of CFA using the full sample, all three models were found to fit the data with acceptable global fit statistics. However, when CFA was performed using these models on smaller sample sizes, the models proposed by Scott et al. and Eaton and Willoughby were found to be far more stable than the model proposed by Hestenes et al. The goodness of fit of these models to the data suggests that the FCI can be scored on factors that are not unique to a single class. These scores could then be used to comment on how instruction methods affect the performance of students along a single factor, and more in-depth analyses of curriculum changes may be possible as a result.
Advanced Technology Display House. Volume 2: Energy system design concepts
NASA Technical Reports Server (NTRS)
Maund, D. H.
1981-01-01
The preliminary design concept for the energy systems in the Advanced Technology Display House is analyzed. Residential energy demand, energy conservation, and energy concepts are included. Photovoltaic arrays and REDOX (reduction oxidation) sizes are discussed.
Observations of Superwinds in Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Marlowe, A. T.; Heckman, T. M.; Wyse, R.; Schommer, R.
1993-12-01
Dwarf galaxies are important in developing our understanding of the formation and evolution of galaxies, and of the structure in the universe. The concept of supernova-driven mass outflows is a vital ingredient in theories of the structure and evolution of dwarf galaxies. We have begun a detailed multi-waveband search for outflows in starbursting dwarf galaxies, and have obtained Fabry-Perot images and Echelle spectra of 20 nearby actively star-forming dwarf galaxies. In about half the sample, the Fabry-Perot Hα images show loops and filaments with sizes of one to a few kpc. The Echelle spectra taken through the loops and filaments show kinematics consistent with expanding bubble-like structures. We describe these data, and present seven dwarfs in our sample that have the strongest evidence of outflows.
Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution
NASA Astrophysics Data System (ADS)
Samohyl, Robert Wayne
2017-10-01
This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, we suggest the use of the hypergeometric distribution to calculate the parameters of sampling plans, avoiding unnecessary approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise the same question arises with consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.
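The exact-versus-approximate comparison the authors describe can be illustrated with a few lines of code. The sketch below (in Python rather than the article's R) computes the probability of accepting a lot under a single-sampling plan with the hypergeometric distribution and with the binomial approximation; the lot size, sample size, acceptance number and quality levels are illustrative, not the paper's.

```python
from scipy.stats import hypergeom, binom

N, n, c = 500, 50, 2          # lot size, sample size, acceptance number (illustrative)

def accept_prob_exact(defect_rate):
    D = round(defect_rate * N)            # defective items in the lot
    return hypergeom.cdf(c, N, D, n)      # P(at most c defectives in the sample)

def accept_prob_binom(defect_rate):
    return binom.cdf(c, n, defect_rate)   # approximation: sampling with replacement

for p in (0.01, 0.04, 0.08):              # e.g., AQL-like and LTPD-like quality levels
    print(p, round(accept_prob_exact(p), 4), round(accept_prob_binom(p), 4))
```

For small lots relative to the sample, the two acceptance probabilities can differ noticeably, which is the paper's argument for using the hypergeometric form directly.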
Maraia Capsule Flight Testing and Results for Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Sostaric, Ronald R.; Strahan, Alan L.
2016-01-01
The Maraia concept is a modest-sized (150 lb., 30-inch diameter) capsule that has been proposed as an ISS-based, mostly autonomous Earth return capability to function either as an Entry, Descent, and Landing (EDL) technology test platform or as a small on-demand sample return vehicle. A flight test program has been completed, including high-altitude balloon testing of the proposed capsule shape, with the purpose of investigating aerodynamics and stability during the latter portion of the entry flight regime, along with demonstrating a potential recovery system. This paper includes the description, objectives, and results of the test program.
ERIC Educational Resources Information Center
Cicchetti, Domenic V.; Koenig, Kathy; Klin, Ami; Volkmar, Fred R.; Paul, Rhea; Sparrow, Sara
2011-01-01
The objectives of this report are: (a) to trace the theoretical roots of the concept of clinical significance, which derives from Bayesian thinking, Marginal Utility/Diminishing Returns in economics, and the "just noticeable difference" in psychophysics. These concepts were then translated into: Effect Size (ES), strength of agreement, clinical…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kiwoo; Natsui, Takuya; Hirai, Shunsuke
2011-06-01
One of the advantages of applying an X-band linear accelerator (linac) is the compact size of the whole system. This suggests the possibility of on-site systems such as a customs inspection system at an airport. As the X-ray source, we have developed an X-band linac and achieved a maximum X-ray energy of 950 keV using a low-power magnetron (250 kW) with a 2 µs pulse length. The whole linac system occupies 1×1×1 m³, which is made possible by the X-band design. In addition, we have designed a two-fold scintillator detector based on the dual-energy X-ray concept. The Monte Carlo N-Particle transport (MCNP) code was used to design the sensor part with two scintillators, CsI and CdWO4. The customs inspection system is composed of two components, the 950 keV X-band linac and the two-fold scintillator detector, and they are operated to simulate a real situation such as a baggage check at an airport. We show the results of experiments performed with metal samples, iron and lead, as targets under several conditions.
Evaluation of a Person-Centered, Theory-Based Intervention to Promote Health Behaviors.
Worawong, Chiraporn; Borden, Mary Jo; Cooper, Karen M; Pérez, Oscar A; Lauver, Diane
Effective promotion of health behaviors requires strong interventions. Applying person-centered approaches and concepts synthesized from two motivational theories could strengthen the effects of such interventions. The aim of the study was to report the effect sizes, fidelity, and acceptability of a person-centered, health behavior intervention based on self-regulation and self-determination theories. Using a pre- and postintervention design, with a 4-week follow-up, advanced practice registered nurses made six weekly contacts with 52 volunteer participants. Most participants were educated White women. Advanced practice registered nurses elicited participant motives and particular goals for either healthy diet or physical activity behaviors. Minutes and type of activity and servings of fat and fruit/vegetables were assessed. Effect sizes for engaging in moderate aerobic activity and in fruit/vegetable and fat intake were 0.53, 0.82, and -0.57, respectively. The fidelity of delivery was 80-97% across contacts, and fidelity of participants' receipt of intervention components was supported. Participant acceptance of the intervention was supported by positive ratings on aspects of relevance and usefulness. To advance the science of health behavior change and improve client health status, person-centered approaches and concepts synthesized from motivational theories can be applied and tested with a randomized, controlled design and diverse samples to replicate and extend this promising behavioral intervention.
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
Load management as a smart grid concept for sizing and designing of hybrid renewable energy systems
NASA Astrophysics Data System (ADS)
Eltamaly, Ali M.; Mohamed, Mohamed A.; Al-Saud, M. S.; Alolah, Abdulrahman I.
2017-10-01
Optimal sizing of hybrid renewable energy systems (HRES) to satisfy load requirements with the highest reliability and lowest cost is a crucial step in building HRESs to supply electricity to remote areas. Applying smart grid concepts such as load management can reduce the size of HRES components and reduce the cost of generated energy considerably. In this article, sizing of HRES is carried out by dividing the load into high- and low-priority parts. The proposed system is formed by a photovoltaic array, wind turbines, batteries, fuel cells and a diesel generator as a back-up energy source. A smart particle swarm optimization (PSO) algorithm using MATLAB is introduced to determine the optimal size of the HRES. The simulation was carried out with and without division of the load to compare these concepts. HOMER software was also used to simulate the proposed system without dividing the loads to verify the results obtained from the proposed PSO algorithm. The results show that the percentage of division of the load is inversely proportional to the cost of the generated energy.
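Although the paper's formulation involves a detailed HRES cost model, the core of the PSO sizing step can be sketched generically: particles encode candidate component sizes and iteratively move toward the lowest-cost feasible sizing. The Python below is a toy example with an invented two-variable cost function and a crude shortfall penalty standing in for the reliability constraint; none of the coefficients come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Toy sizing objective (illustrative only): capital cost of PV area and
    battery capacity plus a penalty when supply falls short of a notional load."""
    pv, batt = x
    shortfall = max(0.0, 100.0 - (0.8 * pv + 0.5 * batt))
    return 300.0 * pv + 150.0 * batt + 1e4 * shortfall

n_particles, n_iter = 30, 200
lo, hi = np.array([0.0, 0.0]), np.array([300.0, 300.0])

x = rng.uniform(lo, hi, size=(n_particles, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([cost(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    vals = np.array([cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best sizing (PV, battery):", gbest, "cost:", round(pbest_val.min(), 1))
```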
Verhagen, Simone J. W.; Simons, Claudia J. P.; van Zelst, Catherine; Delespaul, Philippe A. E. G.
2017-01-01
Background: Mental healthcare needs person-tailored interventions. Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximize reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a ‘behavior setting’) with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior setting definitions under different sampling circumstances, for real or virtual subjects with low-, average- and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a distinct concept from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior setting combinations. Large sample sizes are necessary for reliable rQoL assessments, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. This exploration is especially relevant in clinical implementation, where often only small datasets are available. PMID:29163294
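The resampling logic can be sketched in a few lines: take a pool of ESM beeps, compute a frequency-weighted affect statistic, then draw virtual datasets of increasing size and watch the sampling spread shrink. The Python below is only an illustration; the behavior-setting labels, affect values and the rQoL-like statistic are invented, not the authors' operationalization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented ESM pool: each beep has a behavior-setting label and a positive-affect rating.
settings = rng.choice(["work", "home", "social", "transit"], size=1000, p=[0.4, 0.3, 0.2, 0.1])
affect = rng.normal(loc=5.0, scale=1.0, size=1000) + (settings == "social") * 0.8

def rqol_like(s, a):
    """Frequency-weighted positive affect across behavior settings
    (a stand-in for a momentary reward-related QoL statistic)."""
    labels, counts = np.unique(s, return_counts=True)
    weights = counts / counts.sum()
    means = np.array([a[s == lab].mean() for lab in labels])
    return float((weights * means).sum())

# Draw virtual beep datasets of increasing size and check when the statistic stabilizes.
for n in (25, 50, 100, 200):
    vals = []
    for _ in range(500):
        idx = rng.integers(0, affect.size, size=n)
        vals.append(rqol_like(settings[idx], affect[idx]))
    print(n, round(float(np.std(vals)), 3))
```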
Auracher, Jan
2017-01-01
The concept of sound iconicity implies that phonemes are intrinsically associated with non-acoustic phenomena, such as emotional expression, object size or shape, or other perceptual features. In this respect, sound iconicity is related to other forms of cross-modal associations in which stimuli from different sensory modalities are associated with each other due to the implicitly perceived correspondence of their primal features. One prominent example is the association between vowels, categorized according to their place of articulation, and size, with back vowels being associated with bigness and front vowels with smallness. However, to date the relative influence of perceptual and conceptual cognitive processing on this association is not clear. To bridge this gap, three experiments were conducted in which associations between nonsense words and pictures of animals or emotional body postures were tested. In these experiments participants had to infer the relation between visual stimuli and the notion of size from the content of the pictures, while directly perceivable features did not support, or even contradicted, the predicted association. Results show that implicit associations between articulatory-acoustic characteristics of phonemes and pictures are mainly influenced by semantic features, i.e., the content of a picture, whereas the influence of perceivable features, i.e., size or shape, is overridden. This suggests that abstract semantic concepts can function as an interface between different sensory modalities, facilitating cross-modal associations.
Song, Zhuonan; Huang, Yi; Xu, Weiwei L.; Wang, Lei; Bao, Yu; Li, Shiguang; Yu, Miao
2015-01-01
Zeolites/molecular sieves with uniform, molecular-sized pores are important for many adsorption-based separation processes. Pore size gaps, however, exist in the current zeolite family. This leads to a great challenge of separating molecules with size differences at ~0.01 nm level. Here, we report a novel concept, pore misalignment, to form a continuously adjustable, molecular-sieving “gate” at the 5A zeolite pore entrance without sacrificing the internal capacity. Misalignment of the micropores of the alumina coating with the 5A zeolite pores was related with and facilely adjusted by the coating thickness. For the first time, organic molecules with sub-0.01 nm size differences were effectively distinguished via appropriate misalignment. This novel concept may have great potential to fill the pore size gaps of the zeolite family and realize size-selective adsorption separation. PMID:26358480
[The research protocol III. Study population].
Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe
2016-01-01
The study population is defined as a set of cases that is determined, limited, and accessible, and that will constitute the subjects for the selection of the sample; it must fulfill several characteristics and distinct criteria. This manuscript focuses on specifying each of the elements required to select the participants of a research project during the elaboration of the protocol, including the concepts of study population, sample, selection criteria and sampling methods. After delineating the study population, the researcher must specify the criteria that each participant has to meet. The criteria that state these specific characteristics are called selection or eligibility criteria. These criteria are inclusion, exclusion and elimination criteria, and they delineate the eligible population. Sampling methods are divided into two large groups: 1) probabilistic or random sampling and 2) non-probabilistic sampling. The difference lies in the use of statistical methods to select the subjects. In every study, it is necessary to establish at the outset the specific number of participants to be included to achieve the objectives of the study. This number is the sample size, and it can be calculated or estimated with mathematical formulas and statistical software.
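As a concrete illustration of the last point, a common closed-form calculation is the sample size needed to estimate a proportion with a given precision. The sketch below applies the usual normal-approximation formula; the expected prevalence and margin are assumed values, and the calculation is not taken from the article itself.

```python
import math
from scipy.stats import norm

def n_for_proportion(p_expected, margin, conf=0.95):
    """Sample size to estimate a proportion within +/- margin
    at the given confidence level (normal approximation)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

print(n_for_proportion(0.30, 0.05))   # e.g., expected prevalence 30%, precision +/- 5%
```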
Kawakami, Tsuyoshi; Isama, Kazuo; Ikarashi, Yoshiaki
2015-01-01
Japan has published a safety guideline on waterproof aerosol sprays. Furthermore, the Aerosol Industry Association of Japan has adopted voluntary regulations on waterproof aerosol sprays. Aerosol particles with diameters of less than 10 µm are considered "fine particles". In order to avoid acute lung injury, this size fraction should account for less than 0.6% of the sprayed aerosol particles. In contrast, the particle size distribution of aerosols released by hand-pump sprays containing fluorine-based or silicone-based compounds has not been investigated in Japan. Thus, the present study investigated the aerosol particle size distribution of 16 household hand-pump sprays. In 4 samples, the ratio of fine particles in the aerosol exceeded 0.6%. This study confirms that several hand-pump sprays available on the Japanese market can emit fine particles. Since hand-pump sprays use water as a solvent and their ingredients may be more hydrophilic than those of aerosol sprays, the safety concepts developed for aerosol sprays do not apply to hand-pump sprays. Therefore, it may be necessary to develop a suitable method for evaluating the toxicity of hand-pump sprays and to establish a safety guideline for them.
Hall, David B; Meier, Ulrich; Diener, Hans-Cristoph
2005-06-01
The trial objective was to test whether a new mechanism of action would effectively treat migraine headaches and to select a dose range for further investigation. The motivation for a group sequential, adaptive, placebo-controlled trial design was (1) limited information about where across the range of seven doses to focus attention, (2) a need to limit sample size for a complicated inpatient treatment and (3) a desire to reduce exposure of patients to ineffective treatment. A design based on group sequential and up-and-down designs was developed, and its operational characteristics were explored by trial simulation. The primary outcome was headache response at 2 h after treatment. Groups of four treated and two placebo patients were assigned to one dose. Adaptive dose selection was based on response rates of 60% seen with other migraine treatments. If more than 60% of treated patients responded, then the next dose was the next lower dose; otherwise, the dose was increased. A stopping rule of at least five groups at the target dose and at least four groups at that dose with more than 60% response was developed to ensure that a selected dose would be statistically significantly (p=0.05) superior to placebo. Simulations indicated good characteristics in terms of control of type I error, sufficient power, modest expected sample size and modest bias in estimation. The trial design is attractive for phase 2 clinical trials when the response is acute and simple (ideally binary), a placebo comparator is required, and patient accrual is relatively slow, allowing for the collection and processing of results as a basis for the adaptive assignment of patients to dose groups. The acute migraine trial based on this design was successful in both proof of concept and dose range selection.
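The allocation rule is easy to simulate, and such simulations are how operating characteristics like expected sample size and the distribution of selected doses are explored. The sketch below is a deliberately simplified toy version: it tracks only the treated patients, uses invented dose-response probabilities, and applies a reduced form of the stopping rule, so it illustrates the idea rather than the trial's actual design.

```python
import numpy as np

rng = np.random.default_rng(2)

true_resp = np.array([0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.72])  # invented dose-response curve

def run_trial(max_groups=30, per_group=4):
    """Toy up-and-down allocation over 7 doses with a simplified stopping rule."""
    dose = 3                                # start in the middle of the range
    groups = np.zeros(7, dtype=int)         # groups assigned at each dose
    hits = np.zeros(7, dtype=int)           # groups with >60% responders at each dose
    for _ in range(max_groups):
        responders = rng.binomial(per_group, true_resp[dose])
        groups[dose] += 1
        if responders / per_group > 0.6:
            hits[dose] += 1
            dose = max(dose - 1, 0)         # step down after a "successful" group
        else:
            dose = min(dose + 1, 6)         # step up otherwise
        done = (groups >= 5) & (hits >= 4)  # simplified version of the stopping rule
        if done.any():
            return int(np.argmax(done))     # selected dose index
    return None                             # no dose selected within the cap

picks = [run_trial() for _ in range(1000)]
print({d: picks.count(d) for d in sorted(set(picks), key=str)})
```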
Marijuana as a 'concept' flavour for cigar products: availability and price near California schools.
Henriksen, Lisa; Schleicher, Nina C; Ababseh, Kimberly; Johnson, Trent O; Fortmann, Stephen P
2017-10-12
To assess the retail availability of cigar products that refer to marijuana and the largest package size of cigarillos available for ≤$1. Trained data collectors conducted marketing surveillance in a random sample of licensed tobacco retailers that sold little cigars/cigarillos (LCCs) (n=530) near a statewide sample of middle and high schools (n=132) in California. Multilevel models examined the presence of marijuana co-marketing and cigarillo pack size as a function of school/neighbourhood characteristics and adjusted for store type. Of stores that sold LCCs, approximately 62% contained at least one form of marijuana co-marketing: 53.2% sold cigar wraps marketed as blunt wraps, 27.2% sold cigarillos marketed as blunts and 26.0% sold at least one LCC with a marijuana-related 'concept' flavour. Controlling for store type, marijuana co-marketing was more prevalent in school neighbourhoods with a higher proportion of young residents (ages 5-17 years) and with lower median household income. Nearly all stores that sold LCCs (87.9%) offered the products for ≤$1. However, significantly larger packs at similarly low prices were available near schools in lower-income neighbourhoods and with a lower percentage of Hispanic students. Understanding how the tobacco industry manipulates cigar products and marketing to capitalise on the appeal of marijuana to youth and other priority populations is important to inform regulation, particularly for flavoured tobacco products. In addition, the retail availability of five and six packs of LCCs for ≤$1 near California schools underscores policy recommendations to establish minimum prices for multipacks. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Richter, Joseph; Siegmund, Anja
2011-01-01
Systemic counselling and therapy are usually verbal interventions. However, communication on an abstract level often exceeds the capabilities of children up to about 12 years, leaving them less involved in the therapeutic process. In contrast, symbolic play has been shown to be an effective tool for psychological formulation and intervention; however, it has not been widely used so far in family therapy. To explore its potential, a form of systemic family therapy (SB; exclusively verbal) was compared with a new concept called systemic-psychomotor family counselling (PsyFam; based on symbolic play). We found good efficacy of PsyFam, reflected in an average effect size of d = .73 (SB: d = .53), even though statistical significance of the group effect could not be shown due to the small sample size. Systemic-psychomotor family counselling is a promising new approach worth further research in controlled therapy studies.
Particle-size distribution models for the conversion of Chinese data to FAO/USDA system.
Shangguan, Wei; Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua
2014-01-01
We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16,349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski schemes with five and six data points, respectively. The adjusted coefficient of determination (r²), Akaike's information criterion (AIC), and geometric mean error ratio (GMER) were used to evaluate model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of the PSD models was affected by soil texture and by the fraction classification scheme. The performance of the PSD models also varied with the clay content of the soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models performed best.
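The conversion problem amounts to fitting a parametric cumulative PSD curve to the few measured points of one classification scheme and reading off the cumulative fractions at the USDA size boundaries. The sketch below fits a simple logistic-type PSD curve with scipy's curve_fit; the measured points, the specific functional form, and the starting values are illustrative assumptions rather than the models or data of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative PSD points (fraction finer than diameter d, in mm),
# e.g., the few size limits reported by an ISSS-style scheme.
d = np.array([0.002, 0.02, 0.2, 2.0])
frac = np.array([0.18, 0.45, 0.80, 1.00])

def logistic_psd(d, a, b):
    """Simple logistic-type PSD curve in log-diameter: 1 / (1 + a * d**(-b))."""
    return 1.0 / (1.0 + a * np.exp(-b * np.log(d)))

params, _ = curve_fit(logistic_psd, d, frac, p0=[1.0, 1.0], maxfev=10000)

# Interpolate the USDA boundaries (clay < 0.002 mm, silt 0.002-0.05 mm, sand 0.05-2 mm).
clay = logistic_psd(0.002, *params)
silt = logistic_psd(0.05, *params) - clay
sand = 1.0 - clay - silt
print("clay, silt, sand fractions:", round(clay, 3), round(silt, 3), round(sand, 3))
```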
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; Gern, Frank H.
2016-01-01
Simultaneously achieving the fuel consumption and noise reduction goals set forth by NASA's Environmentally Responsible Aviation (ERA) project requires innovative and unconventional aircraft concepts. In response, advanced hybrid wing body (HWB) aircraft concepts have been proposed and analyzed as a means of meeting these objectives. For the current study, several HWB concepts were analyzed using the Hybrid wing body Conceptual Design and structural optimization (HCDstruct) analysis code. HCDstruct is a medium-fidelity, finite element based conceptual design and structural optimization tool developed to fill the critical analysis gap between lower-order structural sizing approaches and detailed, often finite element based, sizing methods for HWB aircraft concepts. Whereas prior versions of the tool used a half-model approach in building the representative finite element model, a full wing-tip-to-wing-tip modeling capability was recently added to HCDstruct, which replaced the symmetry constraints at the model centerline with a free-flying model and allowed for more realistic center body, aft body, and wing loading and trim response. The latest version of HCDstruct was applied to two ERA reference cases, including the Boeing Open Rotor Engine Integration On an HWB (OREIO) concept and the Boeing ERA-0009H1 concept, and results agreed favorably with detailed Boeing design data and related Flight Optimization System (FLOPS) analyses. Following these benchmark cases, HCDstruct was used to size NASA's ERA HWB concepts and to perform a related scaling study.
An Alternative View of Forest Sampling
Francis A. Roesch; Edwin J. Green; Charles T. Scott
1993-01-01
A generalized concept is presented for all of the commonly used methods of forest sampling. The concept views the forest as a two-dimensional picture which is cut up into pieces like a jigsaw puzzle, with the pieces defined by the individual selection probabilities of the trees in the forest. This concept results in a finite number of independently selected sample...
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of the sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
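The adjustment strategy rests on the fact that, for a fixed degree of misfit, the fit chi-square grows roughly in proportion to N - 1, so it can be rescaled to a smaller nominal sample size; the alternative is to recompute the statistic on an actual random subsample. The sketch below shows only the rescaling step, with illustrative numbers that are not taken from the study.

```python
def adjusted_chi_square(chi2_full, n_full, n_target):
    """Rescale a test-of-fit chi-square from the full sample to a smaller nominal
    sample size, assuming the statistic grows in proportion to (N - 1)."""
    return chi2_full * (n_target - 1) / (n_full - 1)

chi2_full, n_full = 850.0, 21000          # illustrative values in the spirit of the study
for n in (10000, 5000, 1000):
    print(n, round(adjusted_chi_square(chi2_full, n_full, n), 1))
```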
Discussion of thermal extraction chamber concepts for Lunar ISRU
NASA Astrophysics Data System (ADS)
Pfeiffer, Matthias; Hager, Philipp; Parzinger, Stephan; Dirlich, Thomas; Spinnler, Markus; Sattelmayer, Thomas; Walter, Ulrich
The Exploration group of the Institute of Astronautics (LRT) of the Technische Universität München focuses on long-term scenarios and sustainable human presence in space. One of the enabling technologies in this long-term perspective is in-situ resource utilization (ISRU). When dealing with the prospect of future manned missions to the Moon and Mars, the use of ISRU seems useful and intended. The activities presented in this paper focus on Lunar ISRU. This basically incorporates both the exploitation of Lunar oxygen from natural rock and the extraction of solar wind implanted particles (SWIP) from regolith dust. Presently the group at the LRT is examining possibilities for the extraction of SWIPs, which may provide several gaseous components (such as H2 and N2) valuable to a human presence on the Moon. As a major stepping stone in the near future, a Lunar demonstrator/verification experiment payload is being designed. This experiment, LUISE (LUnar ISru Experiment), will comprise a thermal process chamber for heating regolith dust (grain size below 500 µm), a solar thermal power supply, a sample distribution unit and a trace gas analysis. The first project stage includes the detailed design and analysis of the extraction chamber concepts and the thermal process involved in the removal of SWIP from Lunar regolith dust. The technique of extracting solar wind volatiles from regolith has been outlined by several sources. Heating the material to a threshold value seems to be the most reasonable approach. The present paper gives an overview of concepts for thermal extraction chambers to be used in the LUISE project and evaluates in detail the pros and cons of each concept. The special boundary conditions set by solar thermal heating of the chambers as well as the material properties of regolith in a Lunar environment are discussed; both greatly influence the design of the extraction chamber. The performance of the chamber concepts is discussed with respect to the desired target temperature using ESARAD/ESATAN software. Additionally, a value for the homogeneity of heating of the sample, as a measure of the effectiveness of each concept, is presented and discussed.
Variation of mercury in fish from Massachusetts lakes based on ecoregion and lake trophic status
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, J.; Hutcheson, M.; West, C.R.
1995-12-31
Twenty-four of the state's least-impacted waterbodies were sampled for sediment, water, physical characteristics and 3 species of fish to determine the extent of, and patterns of variation in, mercury contamination. Sampling effort was apportioned among three different ecological subregions of the state, as defined by EPA, and among lakes of differing trophic status. The authors sought to partition the variance to discover if these broadly defined concepts are suitable predictors of mercury levels in fish. Mean fish mercury was 0.14 ppm wet weight in samples of 168 of the bottom-feeding brown bullheads (Ameiurus nebulosus) (range = 0.01-0.79 ppm); 0.3 ppm in 199 of the omnivorous yellow perch (Perca flavescens) (range = 0.01-0.75 ppm); and 0.4 ppm in samples of 152 of the predaceous largemouth bass (Micropterus salmoides) (range = 0.05-1.1 ppm). Multivariate statistics are employed to determine how mercury concentrations in fish correlate with sediment chemistry, water chemistry, fish trophic status, fish size and age, lake and watershed size, the presence and extent of wetlands in the watershed, and physical characteristics of the lake. The survey design complements ongoing efforts begun in 1983 to test fish in a variety of waters, from which emanated fish advisories for impacted rivers and lakes. The study defines a baseline for fish contamination in Massachusetts lakes and ponds that serves as a template for public health decisions regarding fish consumption.
Concept development of a Mach 3.0 high-speed civil transport
NASA Technical Reports Server (NTRS)
Robins, A. Warner; Dollyhigh, Samuel M.; Beissner, Fred L., Jr.; Geiselhart, Karl; Martin, Glenn L.; Shields, E. W.; Swanson, E. E.; Coen, Peter G.; Morris, Shelby J., Jr.
1988-01-01
A baseline concept for a Mach 3.0 high-speed civil transport was developed as part of a national program whose goal is to develop concepts and technologies that will enable an effective long-range high-speed civil transport system. The Mach 3.0 concept reported here represents an aggressive application of advanced technology to achieve the design goals. The level of technology is generally considered to be that which could have a demonstrated availability date of 1995 to 2000. The results indicate that an aircraft is technically feasible that could carry 250 passengers at Mach 3.0 cruise over a 6500 nautical mile range at a size, weight and performance level that allows it to fit into the existing world airport structure. The details of the configuration development, aerodynamic design, propulsion system design and integration, mass properties, mission performance, and sizing are presented.
Considering aspects of the 3Rs principles within experimental animal biology.
Sneddon, Lynne U; Halsey, Lewis G; Bury, Nic R
2017-09-01
The 3Rs - Replacement, Reduction and Refinement - are embedded into the legislation and guidelines governing the ethics of animal use in experiments. Here, we consider the advantages of adopting key aspects of the 3Rs into experimental biology, represented mainly by the fields of animal behaviour, neurobiology, physiology, toxicology and biomechanics. Replacing protected animals with less sentient forms or species, cells, tissues or computer modelling approaches has been broadly successful. However, many studies investigate specific models that exhibit a particular adaptation, or a species that is a target for conservation, such that their replacement is inappropriate. Regardless of the species used, refining procedures to ensure the health and well-being of animals prior to and during experiments is crucial for the integrity of the results and legitimacy of the science. Although the concepts of health and welfare are developed for model organisms, relatively little is known regarding non-traditional species that may be more ecologically relevant. Studies should reduce the number of experimental animals by employing the minimum suitable sample size. This is often calculated using power analyses, which is associated with making statistical inferences based on the P-value, yet P-values often leave scientists on shaky ground. We endorse focusing on effect sizes accompanied by confidence intervals as a more appropriate means of interpreting data; in turn, sample size could be calculated based on effect size precision. Ultimately, the appropriate employment of the 3Rs principles in experimental biology empowers scientists in justifying their research, and results in higher-quality science. © 2017. Published by The Company of Biologists Ltd.
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, one or several predictor variables are usually omitted from the model if collinearities are present. Sometimes, however, for instance for medical or economic reasons, all of the predictors are important and should be included in the model. Ridge regression is commonly used in such research to cope with collinearity. In this modeling approach, weights for the predictor variables are used in estimating the parameters, and the estimation can follow the likelihood concept. A Bayesian version of the estimation is nowadays an alternative. This estimation method does not match the likelihood approach in popularity because of some difficulties, computation among them; nevertheless, with the recent improvement of computational methodology, this caveat should no longer be a problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. There are several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, while for the other settings it performs similarly to the likelihood method.
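As background, a minimal simulation in the spirit of the study can be sketched as follows; it is not the authors' code, and the coefficients, collinearity levels, and sample sizes are illustrative choices, with scikit-learn's BayesianRidge standing in for the Bayesian estimator and ordinary least squares for the likelihood method.

```python
# Minimal sketch (not the authors' code): compare Bayesian ridge and OLS
# coefficient recovery on simulated collinear predictors, varying sample size.
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(0)
beta = np.array([1.0, 2.0, -1.5])          # assumed true coefficients

def simulate(n, rho):
    """Draw n observations with pairwise predictor correlation rho."""
    cov = np.full((3, 3), rho) + (1 - rho) * np.eye(3)
    X = rng.multivariate_normal(np.zeros(3), cov, size=n)
    y = X @ beta + rng.normal(scale=1.0, size=n)
    return X, y

for n in (20, 50, 200):                    # sample sizes
    for rho in (0.5, 0.9, 0.99):           # collinearity levels
        X, y = simulate(n, rho)
        mse_bayes = np.mean((BayesianRidge().fit(X, y).coef_ - beta) ** 2)
        mse_ols = np.mean((LinearRegression().fit(X, y).coef_ - beta) ** 2)
        print(f"n={n:4d} rho={rho:.2f}  Bayes MSE={mse_bayes:.3f}  OLS MSE={mse_ols:.3f}")
```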
Statistical sampling of the distribution of uranium deposits using geologic/geographic clusters
Finch, W.I.; Grundy, W.D.; Pierson, C.T.
1992-01-01
The concept of geologic/geographic clusters was developed particularly to study grade and tonnage models for sandstone-type uranium deposits. A cluster is a grouping of mined as well as unmined uranium occurrences within an arbitrary area about 8 km across. A cluster is a statistical sample that will reflect accurately the distribution of uranium in large regions relative to various geologic and geographic features. The example of the Colorado Plateau Uranium Province reveals that only 3 percent of the total number of clusters is in the largest tonnage-size category, greater than 10,000 short tons U3O8, and that 80 percent of the clusters are hosted by Triassic and Jurassic rocks. The distributions of grade and tonnage for clusters in the Powder River Basin show a wide variation; the grade distribution is highly variable, reflecting a difference between roll-front deposits and concretionary deposits, and the Basin contains about half the number in the greater-than-10,000 tonnage-size class as does the Colorado Plateau, even though it is much smaller. The grade and tonnage models should prove useful in finding the richest and largest uranium deposits. © 1992 Oxford University Press.
Big-data-based edge biomarkers: study on dynamical drug sensitivity and resistance in individuals.
Zeng, Tao; Zhang, Wanwei; Yu, Xiangtian; Liu, Xiaoping; Li, Meiyi; Chen, Luonan
2016-07-01
Big-data-based edge biomarker is a new concept to characterize disease features based on biomedical big data in a dynamical and network manner, which also provides alternative strategies to indicate disease status in single samples. This article gives a comprehensive review on big-data-based edge biomarkers for complex diseases in an individual patient, which are defined as biomarkers based on network information and high-dimensional data. Specifically, we firstly introduce the sources and structures of biomedical big data accessible in public for edge biomarker and disease study. We show that biomedical big data are typically 'small-sample size in high-dimension space', i.e. small samples but with high dimensions on features (e.g. omics data) for each individual, in contrast to traditional big data in many other fields characterized as 'large-sample size in low-dimension space', i.e. big samples but with low dimensions on features. Then, we demonstrate the concept, model and algorithm for edge biomarkers and further big-data-based edge biomarkers. Dissimilar to conventional biomarkers, edge biomarkers, e.g. module biomarkers in module network rewiring-analysis, are able to predict the disease state by learning differential associations between molecules rather than differential expressions of molecules during disease progression or treatment in individual patients. In particular, in contrast to using the information of the common molecules or edges (i.e. molecule-pairs) across a population in traditional biomarkers including network and edge biomarkers, big-data-based edge biomarkers are specific for each individual and thus can accurately evaluate the disease state by considering the individual heterogeneity. Therefore, the measurement of big data in a high-dimensional space is required not only in the learning process but also in the diagnosing or predicting process of the tested individual. Finally, we provide a case study on analyzing the temporal expression data from a malaria vaccine trial by big-data-based edge biomarkers from module network rewiring-analysis. The illustrative results show that the identified module biomarkers can accurately distinguish vaccines with or without protection and outperformed previously reported gene signatures in terms of effectiveness and efficiency. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
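The key idea, scoring an association (edge) between two molecules rather than an expression level, can be illustrated with a toy sketch; the data and the scoring function below are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch only: score a molecule pair (an "edge") by how much its
# correlation changes between two conditions, rather than by mean expression shift.
import numpy as np

rng = np.random.default_rng(1)

def edge_score(x_ctrl, y_ctrl, x_case, y_case):
    """Absolute difference in Pearson correlation of a gene pair across conditions."""
    r_ctrl = np.corrcoef(x_ctrl, y_ctrl)[0, 1]
    r_case = np.corrcoef(x_case, y_case)[0, 1]
    return abs(r_case - r_ctrl)

# Simulated pair: similar mean expression in both groups, but the association flips.
n = 40
x_ctrl = rng.normal(size=n); y_ctrl = 0.8 * x_ctrl + rng.normal(scale=0.5, size=n)
x_case = rng.normal(size=n); y_case = -0.8 * x_case + rng.normal(scale=0.5, size=n)

print("mean shift x:", abs(x_case.mean() - x_ctrl.mean()))           # small
print("edge score  :", edge_score(x_ctrl, y_ctrl, x_case, y_case))   # large
```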
Effects of music therapy for children and adolescents with psychopathology: a meta-analysis.
Gold, Christian; Voracek, Martin; Wigram, Tony
2004-09-01
The objectives of this review were to examine the overall efficacy of music therapy for children and adolescents with psychopathology, and to examine how the size of the effect of music therapy is influenced by the type of pathology, client's age, music therapy approach, and type of outcome. Eleven studies were included for analysis, which resulted in a total of 188 subjects for the meta-analysis. Effect sizes from these studies were combined, with weighting for sample size, and their distribution was examined. After exclusion of an extreme positive outlying value, the analysis revealed that music therapy has a medium to large positive effect (ES = .61) on clinically relevant outcomes that was statistically highly significant (p < .001) and statistically homogeneous. No evidence of a publication bias was identified. Effects tended to be greater for behavioural and developmental disorders than for emotional disorders; greater for eclectic, psychodynamic, and humanistic approaches than for behavioural models; and greater for behavioural and developmental outcomes than for social skills and self-concept. Implications for clinical practice and research are discussed.
Measurement of variation in soil solute tracer concentration across a range of effective pore sizes
Harvey, Judson W.
1993-01-01
Solute transport concepts in soil are based on speculation that solutes are distributed nonuniformly within large and small pores. Solute concentrations have not previously been measured across a range of pore sizes and examined in relation to soil hydrological properties. For this study, modified pressure cells were used to measure variation in concentration of a solute tracer across a range of pore sizes. Intact cores were removed from the site of a field tracer experiment, and soil water was eluted from 10 or more discrete classes of pore size. Simultaneous changes in water content and unsaturated hydraulic conductivity were determined on cores using standard pressure cell techniques. Bromide tracer concentration varied by as much as 100% across the range of pore sizes sampled. Immediately following application of the bromide tracer on field plots, bromide was most concentrated in the largest pores; concentrations were lower in pores of progressively smaller sizes. After 27 days, bromide was most dilute in the largest pores and concentrations were higher in the smaller pores. A sharp, threefold decrease in specific water capacity during elution indicated separation of two major pore size classes at a pressure of 47 cm H2O and a corresponding effective pore diameter of 70 μm. Variation in tracer concentration, on the other hand, was spread across the entire range of pore sizes investigated in this study. A two-porosity characterization of the transport domain, based on water retention criteria, only broadly characterized the pattern of variation in tracer concentration across pore size classes during transport through a macroporous soil.
Phase II cancer clinical trials for biomarker-guided treatments.
Jung, Sin-Ho
2018-01-01
The design and analysis of cancer clinical trials with biomarkers depend on various factors, such as the trial phase, the type of biomarker, whether the biomarker has been validated, and the study objectives. In this article, we demonstrate the design and analysis of two Phase II cancer clinical trials, one with a predictive biomarker and the other with an imaging prognostic biomarker. Statistical testing methods and their sample size calculation methods are presented for each trial. We assume that the primary endpoint of these trials is a time-to-event variable, but this concept can be used for any type of endpoint.
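The abstract does not give its formulas; as generic background only, a standard Schoenfeld-type calculation of the number of events needed for a two-arm log-rank comparison of a time-to-event endpoint looks like the sketch below (the alpha, power, allocation, and hazard ratio values are illustrative, not the paper's method).

```python
# Generic Schoenfeld-style calculation (not necessarily the paper's method):
# required number of events to detect a hazard ratio with a two-arm log-rank test.
from math import log, ceil
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Events needed to detect hazard ratio `hr` at two-sided alpha with given power."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil((z_a + z_b) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))

print(required_events(hr=0.6))   # roughly 121 events for HR = 0.6 with 1:1 allocation
```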
Hybrid Wing Body Configuration Scaling Study
NASA Technical Reports Server (NTRS)
Nickol, Craig L.
2012-01-01
The Hybrid Wing Body (HWB) configuration is a subsonic transport aircraft concept with the potential to simultaneously reduce fuel burn, noise and emissions compared to conventional concepts. Initial studies focused on very large applications with capacities for up to 800 passengers. More recent studies have focused on the large, twin-aisle class with passenger capacities in the 300-450 range. Efficiently scaling this concept down to the single aisle or smaller size is challenging due to geometric constraints, potentially reducing the desirability of this concept for applications in the 100-200 passenger capacity range or less. In order to quantify this scaling challenge, five advanced conventional (tube-and-wing layout) concepts were developed, along with equivalent (payload/range/technology) HWB concepts, and their fuel burn performance compared. The comparison showed that the HWB concepts have fuel burn advantages over advanced tube-and-wing concepts in the larger payload/range classes (roughly 767-sized and larger). Although noise performance was not quantified in this study, the HWB concept has distinct noise advantages over the conventional tube-and-wing configuration due to the inherent noise shielding features of the HWB. NASA's Environmentally Responsible Aviation (ERA) project will continue to investigate advanced configurations, such as the HWB, due to their potential to simultaneously reduce fuel burn, noise and emissions.
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
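The kind of calculation underlying these statements can be reproduced with a standard power routine; the sketch below uses the review's reported average total sample size (about 153, so roughly 76 per arm) and the 0.3 and 0.5 standardized mean differences mentioned above, with statsmodels as an assumed tool rather than the reviewers' own software.

```python
# Illustrative check: power of a two-arm trial of roughly average size (~76 per arm)
# to detect a given standardized mean difference (SMD) at two-sided alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for smd in (0.3, 0.5, 0.8):
    power = analysis.power(effect_size=smd, nobs1=76, ratio=1.0, alpha=0.05)
    print(f"SMD={smd}: power={power:.2f}")

# Conversely, the per-arm sample size needed for 80% power at SMD = 0.3:
n_per_arm = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(f"n per arm for SMD 0.3: {n_per_arm:.0f}")
```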
STS mission duration enhancement study: (orbiter habitability)
NASA Technical Reports Server (NTRS)
Carlson, A. D.
1979-01-01
Habitability improvements for early flights that could be implemented with minimum impact were investigated. These included: (1) launching the water dispenser in the on-orbit position instead of in a locker; (2) the sleep pallet concept; and (3) suction cup foot restraints. Past studies that used volumetric terms and requirements for crew size versus mission duration were reviewed and common definitions of key habitability terms were established. An accurately dimensioned drawing of the orbiter mid-deck, locating all of the known major elements, was developed. Finally, it was established that orbiter mission duration and crew size can be increased with minimum modification and impact to the crew module. Preliminary concepts of the aft mid-deck, external versions of expanded tunnel adapters (ETA), and interior concepts of ETA-3 were developed and comparison charts showing the various factors of volume, weight, duration, size, impact to orbiter, and number of sleep stations were generated.
Mars Sample Return Spacecraft Before Arrival Artist Concept
2011-06-20
This artist concept of a proposed Mars sample return mission portrays an aeroshell-encased spacecraft approaching Mars. This spacecraft would put a sample-retrieving rover and an ascent vehicle onto the surface of Mars.
Basic principles and recent observations of rotationally sampled wind
NASA Technical Reports Server (NTRS)
Connell, James R.
1995-01-01
The concept of rotationally sampled wind speed is described. The unusual wind characteristics that result from rotationally sampling the wind are shown first for early measurements made using an 8-point ring of anemometers on a vertical plane array of meteorological towers. Quantitative characterization of the rotationally sampled wind is made in terms of the power spectral density function of the wind speed. Verification of the importance of the new concept is demonstrated with spectral analyses of the response of the MOD-OA blade flapwise root bending moment and the corresponding rotational analysis of the wind measured immediately upwind of the MOD-OA using a 12-point ring of anemometers on a 7-tower vertical plane array. The Pacific Northwest Laboratory (PNL) theory of the rotationally sampled wind speed power spectral density function is tested successfully against the wind spectrum measured at the MOD-OA vertical plane array. A single-tower empirical model of the rotationally sampled wind speed is also successfully tested against the measurements from the full vertical plane array. Rotational measurements of the wind velocity with hotfilm anemometers attached to rotating blades are shown to be accurate and practical for research on winds at the blades of wind turbines. Some measurements at the rotor blade of a MOD-2 turbine using the hotfilm technique in a pilot research program are shown. They are compared and contrasted to the expectations based upon application of the PNL theory of rotationally sampled wind to the MOD-2 size and rotation rate but without teeter, blade bending, or rotor induction accounted for. Finally, the importance of temperature layering and of wind modifications due to flow over complex terrain is demonstrated by the use of hotfilm anemometer data, and meteorological tower and acoustic doppler sounder data from the MOD-2 site at Goodnoe Hills, Washington.
Automated semantic indexing of figure captions to improve radiology image retrieval.
Kahn, Charles E; Rubin, Daniel L
2009-01-01
We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.
Dimensionality and the sample unit
Francis A. Roesch
2009-01-01
The sample unit and its implications for the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis program are discussed in light of a generalized three-dimensional concept of continuous forest inventories. The concept views the sampled population as a spatial-temporal cube and the sample as a finite partitioning of the cube. The sample...
Jackson MSc, Richard G.; Ball, Michael; Patel, Rashmi; Hayes, Richard D.; Dobson, Richard J.B.; Stewart, Robert
2014-01-01
Observational research using data from electronic health records (EHR) is a rapidly growing area, which promises both increased sample size and data richness - therefore unprecedented study power. However, in many medical domains, large amounts of potentially valuable data are contained within the free text clinical narrative. Manually reviewing free text to obtain desired information is an inefficient use of researcher time and skill. Previous work has demonstrated the feasibility of applying Natural Language Processing (NLP) to extract information. However, in real world research environments, the demand for NLP skills outweighs supply, creating a bottleneck in the secondary exploitation of the EHR. To address this, we present TextHunter, a tool for the creation of training data, construction of concept extraction machine learning models and their application to documents. Using confidence thresholds to ensure high precision (>90%), we achieved recall measurements as high as 99% in real world use cases. PMID:25954379
Thompson, Marilyn E; Ford, Ruth; Webster, Andrew
2011-01-01
Neurological concepts applicable to a doctorate in occupational therapy are often challenging to comprehend, and students are required to demonstrate critical reasoning skills beyond simply recalling the information. To achieve this, various learning and teaching strategies are used, including the use of technology in the classroom. The availability of technology in academic settings has allowed for diverse and active teaching approaches. This includes videos, web-based instruction, and interactive online games. In this quantitative pre-experimental analysis, the learning and retention of neuroscience concepts by 30 occupational therapy doctoral students, who participated in an interactive online learning experience, were assessed. The results suggest that student use of these tools may enhance their learning of neuroscience. Furthermore, the students felt that the sites were appropriate, beneficial to them, and easy to use. Thus, the use of online, interactive neuroscience games may be effective in reinforcing lecture materials. This needs to be further assessed in a larger sample size.
Silva de Lima, Ana Lígia; Evers, Luc J W; Hahn, Tim; Bataille, Lauren; Hamilton, Jamie L; Little, Max A; Okuma, Yasuyuki; Bloem, Bastiaan R; Faber, Marjan J
2017-08-01
Despite the large number of studies that have investigated the use of wearable sensors to detect gait disturbances such as Freezing of gait (FOG) and falls, there is little consensus regarding appropriate methodologies for how to optimally apply such devices. Here, an overview of the use of wearable systems to assess FOG and falls in Parkinson's disease (PD) and validation performance is presented. A systematic search in the PubMed and Web of Science databases was performed using a group of concept key words. The final search was performed in January 2017, and articles were selected based upon a set of eligibility criteria. In total, 27 articles were selected. Of those, 23 related to FOG and 4 to falls. FOG studies were performed in either laboratory or home settings, with sample sizes ranging from 1 to 48 PD patients presenting Hoehn and Yahr stages 2 to 4. The shin was the most common sensor location and the accelerometer was the most frequently used sensor type. Validity measures ranged from 73-100% for sensitivity and 67-100% for specificity. Falls and fall risk studies were all home-based, including sample sizes of 1 to 107 PD patients, mostly using one sensor containing accelerometers, worn at various body locations. Despite the promising validation initiatives reported in these studies, they were all performed in relatively small sample sizes, and there was a significant variability in outcomes measured and results reported. Given these limitations, the validation of sensor-derived assessments of PD features would benefit from more focused research efforts, increased collaboration among researchers, aligning data collection protocols, and sharing data sets.
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
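A sketch of the core analysis on simulated (hypothetical) data shows how a publication filter alone can induce the reported negative correlation; none of the numbers below are the authors' data.

```python
# Sketch on hypothetical data: correlate per-study effect size with sample size
# after a "publish only if significant" filter, which mimics publication bias.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
records = []
while len(records) < 200:
    n = rng.integers(10, 200)                               # per-group sample size
    x = rng.normal(0.1, 1, n); y = rng.normal(0.0, 1, n)    # tiny true effect
    t, p = stats.ttest_ind(x, y)
    d = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    if p < 0.05:                                            # publication filter
        records.append((d, 2 * n))

d_vals, n_vals = np.array(records).T
r, p = stats.pearsonr(d_vals, n_vals)
print(f"r = {r:.2f} (p = {p:.3g})")                         # typically strongly negative
```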
NASA Technical Reports Server (NTRS)
Hecht, M. H.; Meloy, T. P.; Anderson, M. S.; Buehler, M. G.; Frant, M. A.; Grannan, S. M.; Fuerstenau, S. D.; Keller, H. U.; Markiewicz, W. J.; Marshall, J.
1999-01-01
The Mars Environmental Compatibility Assessment (MECA) will evaluate the Martian environment for soil and dust-related hazards to human exploration as part of the Mars Surveyor Program 2001 Lander. The integrated MECA payload contains a wet-chemistry laboratory, a microscopy station, an electrometer to characterize the electrostatic environment, and arrays of material patches to study abrasion and adhesion. Heritage will be all-important for low cost micro-missions, and adaptations of instruments developed for the Pathfinder, '98 and '01 Landers should be strong contenders for '03 flights. This talk has three objectives: (1) Familiarize the audience with MECA instrument capabilities; (2) present concepts for stand-alone and/or mobile versions of MECA instruments; and (3) broaden the context of the MECA instruments from human exploration to a comprehensive scientific survey of Mars. Due to time limitations, emphasis will be on the chemistry and microscopy experiments. Ion-selective electrodes and related sensors in MECA's wet-chemistry laboratory will evaluate total dissolved solids, redox potential, pH, and the concentration of many soluble ions and gases in wet Martian soil. These electrodes can detect potentially dangerous heavy-metal ions, emitted pathogenic gases, and the soil's corrosive potential, and experiments will include cyclic voltammetry and anodic stripping. For experiments beyond 2001, enhancements could allow multiple use of the cells (for mobile experiments) and reagent addition (for quantitative mineralogical and exobiological analysis). MECA's microscopy station combines optical and atomic-force microscopy (AFM) in an actively focused, controlled illumination environment to image particles from millimeters to nanometers in size. Careful selection of substrates allows controlled experiments in adhesion, abrasion, hardness, aggregation, magnetic and other properties. Special tools allow primitive manipulation (brushing and scraping) of samples. Soil particle properties including size, shape, color, hardness, adhesive potential (electrostatic and magnetic), will be determined using an array of sample receptacles and collection substrates. The simple, rugged atomic-force microscope will image in the submicron size range and has the capability of performing a particle-by-particle analysis of the dust and soil. Future implementations might enhance the optical microscopy with spectroscopy, or incorporate advanced AFM techniques for thermogravimetric and chemical analysis.
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
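The paper's cost- and power-based sample size formulas are not reproduced here; as background, a textbook implementation of Yuen's trimmed-means test itself (the test those formulas are built around) might look like the following, with the trimming fraction and simulated skewed, heteroscedastic data as illustrative assumptions.

```python
# Textbook-style sketch of Yuen's two-sample test on trimmed means (two-sided p-value).
import numpy as np
from scipy import stats

def yuen_test(a, b, trim=0.2):
    """Yuen's test: trimmed means, winsorized variances, Welch-type degrees of freedom."""
    def winsorized_var(x, g):
        xs = np.sort(np.asarray(x, float))
        if g > 0:
            xs[:g] = xs[g]          # winsorize lower tail
            xs[-g:] = xs[-g - 1]    # winsorize upper tail
        return xs.var(ddof=1)

    parts = []
    for x in (a, b):
        n = len(x)
        g = int(np.floor(trim * n))
        h = n - 2 * g                                            # retained observations
        d = (n - 1) * winsorized_var(x, g) / (h * (h - 1))
        parts.append((stats.trim_mean(x, trim), d, h))
    (m1, d1, h1), (m2, d2, h2) = parts
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(7)
g1 = rng.lognormal(0.0, 0.6, 40)    # skewed group, smaller variance
g2 = rng.lognormal(0.3, 1.0, 60)    # skewed group, larger variance and larger n
t, p = yuen_test(g1, g2)
print(f"Yuen t = {t:.2f}, p = {p:.3f}")
```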
Metacognitive executive function training for young children with ADHD: a proof-of-concept study.
Tamm, Leanne; Nakonezny, Paul A
2015-09-01
Executive functions (EF) are impaired in children with attention-deficit/hyperactivity disorder (ADHD). It may be especially critical for interventions to target EF in early childhood given the developmental progression of EF deficits that may contribute to later functional impairments. This proof-of-concept study examined the initial efficacy of an intervention program on EF and ADHD. We also examined child performance on three neurocognitive tasks assessing cognitive flexibility, auditory/visual attention, and sustained/selective attention. Children with ADHD (ages 3-7) and their parents were randomized to receive an intervention targeting metacognitive EF deficits (n = 13) or to a waitlist control condition (n = 12). Linear model analysis of covariance compared groups on parent EF ratings, blinded clinician ratings of ADHD symptoms and improvement, and child performance on neurocognitive measures. Children who received the intervention significantly improved on parent ratings of attention shifting and emotion regulation in addition to clinician ratings of inattention. Moderate effect sizes showed additional intervention effects on parent ratings of inhibition, memory, and planning, and clinician ratings of hyperactivity/impulsivity and overall improvement. Small effect sizes were observed for improvement on child neurocognitive measures. Although replication with a larger sample and an active control group is needed, EF training with a metacognitive focus is a potentially promising intervention for young children with ADHD.
An Investigation of the Effectiveness of Concept Mapping on Turkish Students' Academic Success
ERIC Educational Resources Information Center
Erdogan, Yavuz
2016-01-01
This paper investigates the experimental studies which test the effectiveness of the concept mapping instructional strategy compared to the traditional teaching method. Meta-analysis was used to calculate the effect size of the concept mapping strategy on academic success. Therefore, the analysis includes experimental studies conducted in Turkey…
Developing a Hypothetical Learning Trajectory for the Sampling Distribution of the Sample Means
NASA Astrophysics Data System (ADS)
Syafriandi
2018-04-01
Sampling distributions are special types of probability distributions that are important in hypothesis testing. The concept of a sampling distribution may well be the key concept in understanding how inferential procedures work. In this paper, we will design a hypothetical learning trajectory (HLT) for the sampling distribution of the sample mean, and we will discuss how the sampling distribution is used in hypothesis testing.
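A simulation such as the following (an illustrative sketch, not taken from the paper) is one way to make the key properties of the sampling distribution of the sample mean concrete: its centre matches the population mean, its spread shrinks like sigma divided by the square root of n, and its shape becomes approximately normal.

```python
# Simulate the sampling distribution of the sample mean from a skewed population.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # skewed population

for n in (5, 30, 100):
    means = np.array([rng.choice(population, size=n, replace=False).mean()
                      for _ in range(5_000)])
    print(f"n={n:3d}  mean of means={means.mean():.2f}  "
          f"SE={means.std(ddof=1):.3f}  theory={population.std() / np.sqrt(n):.3f}")
```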
NASA Astrophysics Data System (ADS)
Tretter, Thomas R.; Jones, M. Gail; Andre, Thomas; Negishi, Atsuko; Minogue, James
2006-03-01
To reduce curricular fragmentation in science education, reform recommendations include using common, unifying themes such as scaling to enhance curricular coherence. This study involved 215 participants from five groups (grades 5, 7, 9, and 12, and doctoral students), who completed written assessments and card sort tasks related to their conceptions of size and scale, and then completed individual interviews. Results triangulated from the data sources revealed the boundaries between and characteristics of scale size ranges that are well distinguished from each other for each group. Results indicate that relative size information was more readily understood than exact size, and significant size landmarks were used to anchor this relational web of scales. The nature of past experiences situated along two dimensions - from visual to kinesthetic in one dimension, and wholistic to sequential in the other - were shown to be key to scale cognition development. Commonalities and differences between the groups are highlighted and discussed.
Small target pre-detection with an attention mechanism
NASA Astrophysics Data System (ADS)
Wang, Yuehuan; Zhang, Tianxu; Wang, Guoyou
2002-04-01
We introduce the concept of predetection based on an attention mechanism to improve the efficiency of small-target detection by limiting the image region of detection. According to the characteristics of small-target detection, local contrast is taken as the only feature in predetection and a nonlinear sampling model is adopted to make the predetection adaptive to detect small targets with different area sizes. To simplify the predetection itself and decrease the false alarm probability, neighboring nodes in the sampling grid are used to generate a saliency map, and a short-term memory is adopted to accelerate the 'pop-out' of targets. We show that the proposed approach has low computational complexity. In addition, even in a cluttered background, attention can be directed to targets within a satisfyingly small number of iterations, which ensures that the detection efficiency will not be decreased due to false alarms. Experimental results are presented to demonstrate the applicability of the approach.
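The paper's nonlinear sampling model and short-term memory are not reproduced here; the following toy sketch only illustrates the underlying feature, local contrast, used as a saliency score (the window sizes and test image are assumptions).

```python
# Illustrative only (not the paper's model): a simple local-contrast saliency map,
# scoring each pixel by how much its small neighborhood departs from its surroundings.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_saliency(img, inner=3, outer=15):
    """Contrast between a small inner window and a larger surrounding window."""
    img = img.astype(float)
    inner_mean = uniform_filter(img, size=inner)
    outer_mean = uniform_filter(img, size=outer)
    return np.abs(inner_mean - outer_mean)

# Toy frame: noisy flat background plus one dim 3x3 "target".
rng = np.random.default_rng(0)
frame = rng.normal(100, 2, size=(64, 64))
frame[30:33, 40:43] += 15
sal = local_contrast_saliency(frame)
print("peak saliency at:", np.unravel_index(sal.argmax(), sal.shape))  # near (31, 41)
```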
A Comparison of a Solar Power Satellite Concept to a Concentrating Solar Power System
NASA Technical Reports Server (NTRS)
Smitherman, David V.
2013-01-01
A comparison is made of a solar power satellite (SPS) concept in geostationary Earth orbit to a concentrating solar power (CSP) system on the ground to analyze overall efficiencies of each infrastructure from solar radiance at 1 AU to conversion and transmission of electrical energy into the power grid on the Earth's surface. Each system is sized for a 1-gigawatt output to the power grid and then further analyzed to determine primary collector infrastructure areas. Findings indicate that even though the SPS concept has a higher end-to-end efficiency, the combined space and ground collector infrastructure is still about the same size as a comparable CSP system on the ground.
The Mechanical Properties of Candidate Superalloys for a Hybrid Turbine Disk
NASA Technical Reports Server (NTRS)
Gabb, Timothy P.; MacKay, Rebecca A.; Draper, Susan L.; Sudbrack, Chantal K.; Nathal, Michael V.
2013-01-01
The mechanical properties of several cast blade superalloys and one powder metallurgy disk superalloy were assessed for potential use in a dual alloy hybrid disk concept of joined dissimilar bore and web materials. Grain size was varied for each superalloy class. Tensile, creep, fatigue, and notch fatigue tests were performed at 704 to 815 °C. Typical microstructures and failure modes were determined. Preferred materials were then selected for future study as the bore and rim alloys in this hybrid disk concept. Powder metallurgy superalloy LSHR at 15 micron grain size and single crystal superalloy LDS-1101+Hf were selected for further study, and future work is recommended to develop the hybrid disk concept.
Design of Fiber Reinforced Foam Sandwich Panels for Large Ares V Structural Applications
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.; Hopkins, Dale A.
2010-01-01
The preliminary design of three major structural components within NASA's Ares V heavy lift vehicle using a novel fiber reinforced foam composite sandwich panel concept is presented. The Ares V payload shroud, interstage, and core intertank are designed for minimum mass using this panel concept, which consists of integral composite webs separated by structural foam between two composite facesheets. The HyperSizer structural sizing software, in conjunction with NASTRAN finite element analyses, is used. However, since HyperSizer does not currently include a panel concept for fiber reinforced foam, the sizing was performed using two separate approaches. In the first, the panel core is treated as an effective (homogenized) material, whose properties are provided by the vendor. In the second approach, the panel is treated as a blade stiffened sandwich panel, with the mass of the foam added after completion of the panel sizing. Details of the sizing for each of the three Ares V components are given, and it is demonstrated that the two panel sizing approaches are in reasonable agreement for thinner panel designs, but as the panel thickness increases, the blade stiffened sandwich panel approach yields heavier panel designs. This is due to the effects of local buckling, which are not considered in the effective core property approach.
Possible Disintegrating Planet Artist Concept
2012-05-21
This artist concept depicts a comet-like tail of a possible disintegrating super Mercury-size planet candidate as it transits, or crosses, its parent star, named KIC 12557548. The results are based on data from NASA's Kepler mission.
System Concept Study for a Cargo Data Interchange System (CARDIS)
DOT National Transportation Integrated Search
1975-04-01
The report presents the analysis of functional and operational requirements of CARDIS. From these requirements, system sizing estimates are derived. Three potential CARDIS concepts are introduced for consideration in subsequent analysis. Their charac...
Effective population size and genetic conservation criteria for bull trout
Bruce E. Rieman; F. W. Allendorf
2001-01-01
Effective population size (Ne) is an important concept in the management of threatened species like bull trout Salvelinus confluentus. General guidelines suggest that effective population sizes of 50 or 500 are essential to minimize inbreeding effects or maintain adaptive genetic variation, respectively....
Symbol, Brittany; Ricci, Andrew
2018-04-23
Due to the potential for atypia (atypical ductal or lobular hyperplasia) or carcinoma (in situ or invasive) on excision, aggressive reflex surgical excision protocols following core biopsy diagnosis of papillary lesions of the breast (ie, intraductal papilloma) are commonplace. Concepts in risk stratification, including radiologic-pathologic correlation, are emerging in an effort to curb unnecessary surgeries. To this end, we examined all excised intraductal papillomas diagnosed at our institution from 2010-2015 (N = 336) and found an overall atypia rate of 20%. To investigate further, we stratified all excised papillomas according to total lesion size (range = 1-40 mm) and found that the atypia rate for lesions ≤1.2 cm (16% with atypia) was statistically significantly lower (P = .008) than the atypia rate for lesions >1.2 cm (36% with atypia). To explore the effects of radiologic-pathologic correlation on the ability of the core biopsy to accurately predict nonatypical lesions, we assessed thirteen consecutive paired nonatypical core biopsy/follow-up surgical excision specimens for the percent of the total lesion (on imaging) sampled by the core biopsy (measured histologically). None of the thirteen paired specimens showed upgrade on excision (0/13); the percent of total lesion sampled by biopsy in this cohort averaged 59%. We propose that in the absence of discordant clinical/radiological findings, small lesions (≤1.2 cm) with radiologic-pathologic concordance (>50% sampling of total lesion by core biopsy) may safely forego surgery for close clinical and radiographic follow-up. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Sandford, S. A.; Chabot, N. L.; Dello Russo, N.; Leary, J. C.; Reynolds, E. L.; Weaver, H. A.; Wooden, D. H.
2017-07-01
CORSAIR (COmet Rendezvous, Sample Acquisition, Investigation, and Return) is a mission concept submitted in response to NASA's New Frontiers 4 call. CORSAIR's proposed mission is to return comet nucleus samples to Earth for detailed analysis.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
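One of the candidate design families discussed above, unblinded sample size re-estimation, can be sketched schematically as follows; the effect sizes, cap, and power routine are illustrative assumptions, and this is not the paper's optimality criterion.

```python
# Schematic sketch of unblinded sample size re-estimation with made-up parameters.
import numpy as np
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

def reestimated_n(observed_effect, n_planned, n_max, alpha=0.05, power=0.9):
    """Recompute per-arm n from the interim effect estimate, bounded by n_planned and n_max."""
    if observed_effect <= 0:
        return n_max
    n_new = power_calc.solve_power(effect_size=observed_effect, power=power, alpha=alpha)
    return int(np.clip(np.ceil(n_new), n_planned, n_max))

# Planned for an optimistic effect of 0.5; interim data suggest only 0.35.
n_planned = int(np.ceil(power_calc.solve_power(effect_size=0.5, power=0.9, alpha=0.05)))
print("planned n/arm :", n_planned)
print("re-estimated  :", reestimated_n(observed_effect=0.35, n_planned=n_planned, n_max=400))
```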
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
A generalized sizing method for revolutionary concepts under probabilistic design constraints
NASA Astrophysics Data System (ADS)
Nam, Taewoo
Internal combustion (IC) engines that consume hydrocarbon fuels have dominated the propulsion systems of air-vehicles for the first century of aviation. In recent years, however, growing concern over rapid climate changes and national energy security has galvanized the aerospace community into delving into new alternatives that could challenge the dominance of the IC engine. Nevertheless, traditional aircraft sizing methods have significant shortcomings for the design of such unconventionally powered aircraft. First, the methods are specialized for aircraft powered by IC engines, and thus are not flexible enough to assess revolutionary propulsion concepts that produce propulsive thrust through a completely different energy conversion process. Another deficiency associated with the traditional methods is that a user of these methods must rely heavily on experts' experience and advice for determining appropriate design margins. However, the introduction of revolutionary propulsion systems and energy sources is very likely to entail an unconventional aircraft configuration, which inexorably disqualifies the conjecture of such "connoisseurs" as a means of risk management. Motivated by such deficiencies, this dissertation aims at advancing two aspects of aircraft sizing: (1) to develop a generalized aircraft sizing formulation applicable to a wide range of unconventionally powered aircraft concepts and (2) to formulate a probabilistic optimization technique that is able to quantify appropriate design margins that are tailored towards the level of risk deemed acceptable to a decision maker. A more generalized aircraft sizing formulation, named the Architecture Independent Aircraft Sizing Method (AIASM), was developed for sizing revolutionary aircraft powered by alternative energy sources by modifying several assumptions of the traditional aircraft sizing method. Along with advances in deterministic aircraft sizing, a non-deterministic sizing technique, named the Probabilistic Aircraft Sizing Method (PASM), was developed. The method allows one to quantify adequate design margins to account for the various sources of uncertainty via the application of the chance-constrained programming (CCP) strategy to AIASM. In this way, PASM can also provide insights into a good compromise between cost and safety.
Bressington, Daniel T; Wong, Wai-Kit; Lam, Kar Kei Claire; Chien, Wai Tong
2018-01-01
Student nurses are provided with a great deal of knowledge within university, but they can find it difficult to relate theory to nursing practice. This study aimed to test the appropriateness and feasibility of assessing Novak's concept mapping as an educational strategy to strengthen the theory-practice link, encourage meaningful learning and enhance learning self-efficacy in nursing students. This pilot study utilised a mixed-methods quasi-experimental design. The study was conducted in a University school of Nursing in Hong Kong. A total of 40 third-year pre-registration Asian mental health nursing students completed the study; 12 in the concept mapping (CM) group and 28 in the usual teaching methods (UTM) group. The impact of concept mapping was evaluated through analysis of quantitative changes in students' learning self-efficacy, analysis of the structure and contents of the concept maps (CM group), a quantitative measure of students' opinions about their reflective learning activities and content analysis of qualitative data from reflective written accounts (CM group). There were no significant differences in self-reported learning self-efficacy between the two groups (p=0.38). The concept mapping helped students identify their current level of understanding, but the increased awareness may cause an initial drop in learning self-efficacy. The results highlight that most CM students were able to demonstrate meaningful learning and perceived that concept mapping was a useful reflective learning strategy to help them to link theory and practice. The results provide preliminary evidence that the concept mapping approach can be useful to help mental health nursing students visualise their learning progress and encourage the integration of theoretical knowledge with clinical knowledge. Combining concept mapping data with quantitative measures and qualitative reflective journal data appears to be a useful way of assessing and understanding the effectiveness of concept mapping. Future studies should utilise a larger sample size and consider using the approach as a targeted intervention immediately before and during clinical learning placements. Copyright © 2017 Elsevier Ltd. All rights reserved.
Self-Concept and Academic Achievement: A Meta-Analysis of Longitudinal Relations
ERIC Educational Resources Information Center
Huang, Chiungjung
2011-01-01
The relation between self-concept and academic achievement was examined in 39 independent and longitudinal samples through the integration of meta-analysis and path analysis procedures. For relations with more than 3 independent samples, the mean observed correlations ranged from 0.20 to 0.27 between prior self-concept and subsequent academic…
Teaching the Concept of the Sampling Distribution of the Mean
ERIC Educational Resources Information Center
Aguinis, Herman; Branstetter, Steven A.
2007-01-01
The authors use proven cognitive and learning principles and recent developments in the field of educational psychology to teach the concept of the sampling distribution of the mean, which is arguably one of the most central concepts in inferential statistics. The proposed pedagogical approach relies on cognitive load, contiguity, and experiential…
The Rocky World of Young Planetary Systems Artist Concept
2004-10-18
This artist concept illustrates how planetary systems arise out of massive collisions between rocky bodies. Observations from NASA's Spitzer Space Telescope show that these catastrophes continue to occur around stars even after they have developed full-sized planets.
The Concept of Ionic Strength Eighty Years after Its Introduction in Chemistry
ERIC Educational Resources Information Center
Manuel E. Sastre de Vicente
2004-01-01
Some comments on the relationship of ionic strength to macroscopic concepts such as thermodynamic quantities and microscopic ones such as molecule size are presented. The meaning of ionic strength is also reviewed.
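For reference, the standard definition introduced by Lewis and Randall, which the article revisits, is

\[ I = \tfrac{1}{2}\sum_{i} c_i z_i^{2}, \]

where \(c_i\) is the molar concentration of ion \(i\) and \(z_i\) its charge number; in the Debye-Hückel limiting law the mean ionic activity coefficient then varies as \( \log\gamma_{\pm} = -A\,|z_{+}z_{-}|\sqrt{I} \), which is the usual route from this microscopic definition to the macroscopic thermodynamic quantities discussed in the article.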
Worlds on the Edge Artist Concept
2010-08-26
This artist concept illustrates the two Saturn-sized planets discovered by NASA's Kepler mission. The star system is oriented edge-on, as seen by Kepler, such that both planets cross in front, or transit, their star, named Kepler-9.
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
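The abstract does not spell out the Cells Analyzer formula; a generic normal-approximation version of the same idea (cells needed so that the mean is estimated within relative error RE at reliability RD) is sketched below, with the coefficient of variation as a hypothetical input.

```python
# Generic normal-approximation sketch (not necessarily the Cells Analyzer method):
# cells needed to estimate the mean within relative error RE at confidence level RD,
# given an assumed coefficient of variation of the measured cells.
from math import ceil
from scipy.stats import norm

def cells_needed(cv, re=0.05, rd=0.95):
    """Sample size for relative error `re` of the mean with confidence `rd`."""
    z = norm.ppf(1 - (1 - rd) / 2)
    return ceil((z * cv / re) ** 2)

# Hypothetical within-examination coefficient of variation of 30%:
print(cells_needed(cv=0.30))   # about 139 cells
```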
Advanced space solar dynamic receivers
NASA Technical Reports Server (NTRS)
Strumpf, Hal J.; Coombs, Murray G.; Lacy, Dovie E.
1988-01-01
A study has been conducted to generate and evaluate advanced solar heat receiver concepts suitable for orbital application with Brayton and Stirling engine cycles in the 7-kW size range. The generated receiver designs have thermal storage capability (to enable power production during the substantial eclipse period which accompanies typical orbits) and are lighter and smaller than state-of-the-art systems, such as the Brayton solar receiver being designed and developed by AiResearch for the NASA Space Station. Two receiver concepts have been developed in detail: a packed bed receiver and a heat pipe receiver. The packed bed receiver is appropriate for a Brayton engine; the heat pipe receiver is applicable for either a Brayton or Stirling engine. The thermal storage for both concepts is provided by the melting and freezing of a salt. Both receiver concepts offer substantial improvements in size and weight compared to baseline receivers.
Descent Assisted Split Habitat Lunar Lander Concept
NASA Technical Reports Server (NTRS)
Mazanek, Daniel D.; Goodliff, Kandyce; Cornelius, David M.
2008-01-01
The Descent Assisted Split Habitat (DASH) lunar lander concept utilizes a disposable braking stage for descent and a minimally sized pressurized volume for crew transport to and from the lunar surface. The lander can also be configured to perform autonomous cargo missions. Although a braking-stage approach represents a significantly different operational concept compared with a traditional two-stage lander, the DASH lander offers many important benefits. These benefits include improved crew egress/ingress and large-cargo unloading; excellent surface visibility during landing; elimination of the need for deep-throttling descent engines; potentially reduced plume-surface interactions and lower vertical touchdown velocity; and reduced lander gross mass through efficient mass staging and volume segmentation. This paper documents the conceptual study on various aspects of the design, including development of sortie and outpost lander configurations and a mission concept of operations; the initial descent trajectory design; the initial spacecraft sizing estimates and subsystem design; and the identification of technology needs
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
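A minimal sketch of the kind of adjustment involved: inflate an independent-observations sample size by the design effect 1 + (m - 1) * ICC for clusters of size m = 2. This is a generic Python illustration with assumed effect size, SD and ICC values, not the authors' Excel/Shiny calculator, and it treats every infant as belonging to a twin pair.

```python
import math
from scipy.stats import norm

def n_independent(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison, normal approximation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def inflate_for_twins(n, icc, cluster_size=2):
    """Apply the design effect 1 + (m - 1) * ICC for clusters of size m (here, twin pairs)."""
    return n * (1 + (cluster_size - 1) * icc)

n0 = n_independent(delta=0.5, sd=1.0)      # assumed effect and SD: about 63 infants per group
n_twins = inflate_for_twins(n0, icc=0.7)   # assumed ICC between co-twins
print(math.ceil(n0), math.ceil(n_twins))
```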
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, distribution of product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the product method is recommended in practice because it requires less computation time than the bootstrapping method. An R package has been developed for sample size determination with the product method in longitudinal mediation study designs.
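Of the three tests compared, Sobel's method is the most direct to reproduce; the sketch below (Python, illustrative path coefficients and standard errors, not values from the article) computes the Sobel z-statistic for the mediated effect a*b.

```python
from math import sqrt
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Sobel z-statistic and two-sided p-value for the mediated effect a*b."""
    se_ab = sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    z = (a * b) / se_ab
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Illustrative values: path a (predictor -> mediator) and path b (mediator -> outcome).
print(sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12))
```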
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: A survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
Hybrid propulsion technology program. Volume 1: Conceptional design package
NASA Technical Reports Server (NTRS)
Jensen, Gordon E.; Holzman, Allen L.; Leisch, Steven O.; Keilbach, Joseph; Parsley, Randy; Humphrey, John
1989-01-01
A concept design study was performed to configure two sizes of hybrid boosters; one which duplicates the advanced shuttle rocket motor vacuum thrust time curve and a smaller, quarter thrust level booster. Two sizes of hybrid boosters were configured for either pump-fed or pressure-fed oxygen feed systems. Performance analyses show improved payload capability relative to a solid propellant booster. Size optimization and fuel safety considerations resulted in a 4.57 m (180 inch) diameter large booster with an inert hydrocarbon fuel. The preferred diameter for the quarter thrust level booster is 2.53 m (96 inches). As part of the design study critical technology issues were identified and a technology acquisition and demonstration plan was formulated.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine the choice of standard deviation (SD) for sample size calculation is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
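The "60% UCL of SD" recommendation for a single available sample can be sketched as follows; the chi-square-based upper confidence limit and the normal-approximation sample size formula are standard, but the pilot SD, pilot size and target difference below are assumptions for illustration.

```python
from math import sqrt, ceil
from scipy.stats import chi2, norm

def sd_upper_confidence_limit(sample_sd, n, level=0.60):
    """One-sided upper confidence limit for the population SD (normally distributed data)."""
    return sample_sd * sqrt((n - 1) / chi2.ppf(1 - level, df=n - 1))

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample t-test, normal approximation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

pilot_sd, pilot_n = 40.0, 20   # assumed pilot values (the abstract's population SD was 44)
planning_sd = sd_upper_confidence_limit(pilot_sd, pilot_n, level=0.60)
print(round(planning_sd, 1), n_per_group(delta=22, sd=planning_sd))  # delta of 22 is half of 44 (d = 0.5)
```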
NASA Astrophysics Data System (ADS)
Sambeka, Yana; Nahadi, Sriyati, Siti
2017-05-01
The study aimed to obtain scientific information about the increase in students' concept mastery in project-based learning that used authentic assessment. The research was conducted in May 2016 at a junior high school in Bandung in the academic year 2015/2016. The research method was a weak experiment with a one-group pretest-posttest design. The sample of 24 students was selected by a random cluster sampling technique. Data were collected through several instruments: a written test, an observation sheet, and a questionnaire sheet. The concept mastery test yielded an N-Gain of 0.236, which falls in the low category. A paired-sample t-test showed that the implementation of authentic assessment in project-based learning increased students' concept mastery significantly (sig. < 0.05).
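For reference, the N-Gain figure quoted follows the usual normalized-gain definition; a tiny sketch with assumed percentage scores:

```python
def normalized_gain(pre, post, max_score=100):
    """Hake's normalized gain: fraction of the possible improvement that was achieved."""
    return (post - pre) / (max_score - pre)

# e.g. a class moving from a mean of 40 to 54.2 out of 100 gives g of about 0.24 (low category)
print(round(normalized_gain(40, 54.2), 3))
```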
SlimCS—compact low aspect ratio DEMO reactor with reduced-size central solenoid
NASA Astrophysics Data System (ADS)
Tobita, K.; Nishio, S.; Sato, M.; Sakurai, S.; Hayashi, T.; Shibama, Y. K.; Isono, T.; Enoeda, M.; Nakamura, H.; Sato, S.; Ezato, K.; Hayashi, T.; Hirose, T.; Ide, S.; Inoue, T.; Kamada, Y.; Kawamura, Y.; Kawashima, H.; Koizumi, N.; Kurita, G.; Nakamura, Y.; Mouri, K.; Nishitani, T.; Ohmori, J.; Oyama, N.; Sakamoto, K.; Suzuki, S.; Suzuki, T.; Tanigawa, H.; Tsuchiya, K.; Tsuru, D.
2007-08-01
The concept for a compact DEMO reactor named 'SlimCS' is presented. Distinctive features of the concept are low aspect ratio (A = 2.6) and use of a reduced-size centre solenoid (CS) which has the function of plasma shaping rather than poloidal flux supply. The reduced-size CS enables us to introduce a thin toroidal field coil system which contributes to reducing the weight and perhaps lessening the construction cost. Low-A has merits of vertical stability for high elongation (κ) and high normalized beta (βN), which leads to a high power density with reasonable physics requirements. This is because high κ facilitates high nGW (because of an increase in Ip), which allows efficient use of the capacity of high βN. From an engineering aspect, low-A may ensure ease in designing blanket modules robust to electromagnetic forces acting on disruptions. Thus, a superconducting low-A tokamak reactor such as SlimCS can be a promising DEMO concept with physics and engineering advantages.
Measuring β-diversity with species abundance data.
Barwell, Louise J; Isaac, Nick J B; Kunin, William E
2015-07-01
In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A ), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim , performed well overall. Only βBaselga R turn , βBaselga B-C turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
NASA Astrophysics Data System (ADS)
Parro, V.; Rivas, L. A.; Rodríguez-Manfredi, J. A.; Blanco, Y.; de Diego-Castilla, G.; Cruz-Gil, P.; Moreno-Paz, M.; García-Villadangos, M.; Compostizo, C.; Herrero, P. L.
2009-04-01
Immunosensors have been used extensively for many years for environmental monitoring. Different technological platforms allow new biosensor designs and implementations. We have reported (Rivas et al., 2008) a shotgun approach to antibody production for biomarker detection in astrobiology and environmental monitoring, the production of 150 new polyclonal antibodies against microbial strains and environmental extracts, and the construction and validation of an antibody microarray (LDCHIP200, for "Life Detector Chip") containing 200 different antibodies. We have successfully used the LDCHIP200 for the detection of biological polymers in extreme environments in different parts of the world (e.g., a deep South African mine, Antarctica's Dry Valleys, Yellowstone, Iceland, and Rio Tinto). Clustering analysis associated similar immunopatterns with samples from apparently very different environments, indicating that they indeed share similar universal biomarkers. Redundancy in the number of antibodies against different target biomarkers, apart from revealing the presence of certain biomolecules, renders a sample-specific immuno-profile, an "immuno-fingerprint", which may by itself constitute an indirect biosignature. We will present a case study of immunoprofiling different iron-sulfur-rich as well as phyllosilicate-rich samples along the Rio Tinto river banks. Based on protein microarray technology, we designed and built the concept instrument called SOLID (for "Signs Of LIfe Detector"; Parro et al., 2005; 2008a, b; http://cab.inta.es/solid) for automatic in situ analysis of soil samples and molecular biomarker detection. A field prototype, SOLID2, was successfully tested for the analysis of ground core samples during the 2005 "MARTE" campaign, a Mars drilling simulation experiment, using a sandwich microarray immunoassay (Parro et al., 2008b). We will show the new version of the instrument (SOLID3), which is able to perform both sandwich and competitive immunoassays. SOLID3 consists of two separate functional units: a Sample Preparation Unit (SPU), for ten different extractions by ultrasonication, and a Sample Analysis Unit (SAU), for fluorescent immunoassays. The SAU consists of ten different flow cells, each of which holds one antibody microarray (up to 2000 spots), and is equipped with a uniquely designed optical package for fluorescent detection. We demonstrate the performance of SOLID3 for the detection of compounds over a broad range of molecular sizes, from amino acids, peptides and proteins to whole cells and spores, with sensitivities at the ppb level. References: Parro, V., et al., 2005. Planetary and Space Science 53: 729-737. Parro, V., et al., 2008a. Space Science Reviews 135: 293-311. Parro, V., et al., 2008b. Astrobiology 8: 987-99. Rivas, L. A., et al., 2008. Analytical Chemistry 80: 7970-7979.
Iranian Adolescents' Intended Age of Marriage and Desired Family Size.
ERIC Educational Resources Information Center
Tashakkori, Abbas; And Others
1987-01-01
Examined questionnaire data pertaining to intended age of marriage and desired family size from Iranian 12th graders. Proximal factors (individual level variables such as self-concept and school success) were stronger predictors on both dependent measures than were distal factors (parental education, sibling size, and family modernity). Proximal…
NASA Astrophysics Data System (ADS)
Pardimin, H.; Arcana, N.
2018-01-01
Many types of research in the field of mathematics education apply the quasi-experimental method, with statistical analysis using the t-test. The quasi-experiment has the weakness that it is difficult to fulfil “the law of a single independent variable”. The t-test also has a weakness, in that the generalization of the conclusions obtained is less powerful. This research aimed to find ways to reduce the weaknesses of the quasi-experimental method and to improve the generalization of the research results. The method applied in the research was a non-interactive qualitative method of the concept-analysis type. The concepts analysed were the concepts of statistics, educational research methods, and research reports. The result was a way to overcome the weaknesses of quasi-experiments and the t-test: to apply a combination of Factorial Design and Balanced Design, which the authors refer to as the Factorial-Balanced Design. The advantages of this design are: (1) it almost fulfils “the law of a single independent variable”, so there is no need to test the similarity of academic ability, and (2) the sample sizes of the experimental group and the control group become larger and equal, so the design is robust to violations of the assumptions of the ANOVA test.
NASA Technical Reports Server (NTRS)
1972-01-01
Materials and design technology of the all-silica LI-900 rigid surface insulation (RSI) thermal protection system (TPS) concept for the shuttle spacecraft is presented. All results of contract development efforts are documented. Engineering design and analysis of RSI strain arrestor plate material selections, sizing, and weight studies are reported. A shuttle prototype test panel was designed, analyzed, fabricated, and delivered. Thermophysical and mechanical properties of LI-900 were experimentally established and reported. Environmental tests, including simulations of shuttle loads represented by thermal response, turbulent duct, convective cycling, and chemical tolerance tests are described and results reported. Descriptions of material test samples and panels fabricated for testing are included. Descriptions of analytical sizing and design procedures are presented in a manner formulated to allow competent engineering organizations to perform rational design studies. Results of parametric studies involving material and system variables are reported. Material performance and design data are also delineated.
NASA Astrophysics Data System (ADS)
Oktavianty, E.; Haratua, T. M. S.; Anuru, M.
2018-05-01
The purpose of this study is to compare the effects of various remediation practices in reducing the number of student misconceptions about physics concepts. This research synthesizes 68 undergraduate theses of physics education students published in the Tanjungpura University library in the 2009-2016 period. In this study, guidance in the form of a checklist for conducting the review was prepared to facilitate the understanding and assessment of the scientific work. Based on the analysis, the average effect size across all the synthesized theses is 1.13. Six forms of misconception remediation were performed by the physics education students, such as re-learning, feedback, integration of remediation in learning, physical activity, utilization of other learning resources, and interviews. In addition, sampling techniques and test reliability contributed to the effect size of the studies. Therefore, it is expected that the results of this study can be considered when preparing the remediation of misconceptions in physics learning in the future.
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
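The fixed-precision sample sizes discussed above come from standard relationships between the mean and variance of counts; the sketch below (Python) implements the Taylor, Iwao and common-k negative binomial versions with assumed parameter values, not the fitted values from the study (only the common k of 0.3742 is taken from the abstract).

```python
from math import ceil

def n_taylor(mean, a, b, precision=0.25):
    """Quadrats for fixed precision D using Taylor's power law, s^2 = a * m^b."""
    return ceil(a * mean ** (b - 2) / precision ** 2)

def n_iwao(mean, alpha, beta, precision=0.25):
    """Quadrats for fixed precision D using Iwao's patchiness regression, m* = alpha + beta * m."""
    return ceil(((alpha + 1) / mean + beta - 1) / precision ** 2)

def n_negbin(mean, k, precision=0.25):
    """Quadrats for fixed precision D assuming a negative binomial with common k."""
    return ceil((1 / mean + 1 / k) / precision ** 2)

m = 0.05  # assumed mean ticks per quadrat
print(n_taylor(m, a=1.2, b=1.3), n_iwao(m, alpha=0.1, beta=1.5), n_negbin(m, k=0.3742))
```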
An Unwelcome Place for New Stars artist concept
2006-08-23
This artist concept depicts a supermassive black hole at the center of a galaxy. NASA Galaxy Evolution Explorer found evidence that black holes, once they grow to a critical size, stifle the formation of new stars in elliptical galaxies.
Climbing the Cosmic Distance Ladder Artist Concept
2012-10-03
Astronomers using NASA Spitzer Space Telescope have greatly improved the cosmic distance ladder used to measure the expansion rate of the universe, its size and age. This artist concept symbolically shows a series of stars that have known distances.
Galactic Hearts of Glass Artist Concept
2006-02-15
This artist concept based on data from NASA Spitzer Space Telescope shows delicate greenish crystals sprinkled throughout the violent core of a pair of colliding galaxies. The white spots represent a thriving population of stars of all sizes and ages.
Advanced Placement Economics. Macroeconomics: Student Activities.
ERIC Educational Resources Information Center
Morton, John S.
This book is designed to help advanced placement students better understand macroeconomic concepts through various activities. The book contains 6 units with 64 activities, sample multiple-choice questions, sample short essay questions, and sample long essay questions. The units are entitled: (1) "Basic Economic Concepts"; (2) "Measuring Economic…
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
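Both proposed rules are easy to operationalize once a cost function is specified; the sketch below uses an assumed cost model (fixed start-up cost, linear per-subject cost, plus a quadratic term for recruitment becoming harder as the study grows), which is an illustration rather than the authors' own example.

```python
def n_min_average_cost(cost, n_max=5_000):
    """Rule 1: sample size minimizing cost(n) / n (average cost per subject)."""
    return min(range(1, n_max + 1), key=lambda n: cost(n) / n)

def n_min_cost_over_sqrt_n(cost, n_max=5_000):
    """Rule 2: sample size minimizing cost(n) / sqrt(n)."""
    return min(range(1, n_max + 1), key=lambda n: cost(n) / n ** 0.5)

# Assumed cost model (illustrative): start-up cost + per-subject cost + rising recruitment cost.
cost = lambda n: 50_000 + 200 * n + 0.5 * n ** 2

print(n_min_average_cost(cost), n_min_cost_over_sqrt_n(cost))
```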
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, widely distributed read counts and dispersions for different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-seq data. Datasets from previous, similar experiments such as The Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way for power and sample size estimation for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
Understanding the role of conscientiousness in healthy aging: where does the brain come in?
Patrick, Christopher J
2014-05-01
In reviewing this impressive series of articles, I was struck by 2 points in particular: (a) the fact that the empirically oriented articles focused on analyses of data from very large samples, with the articles by Friedman, Kern, Hampson, and Duckworth (2014) and Kern, Hampson, Goldberg, and Friedman (2014) highlighting an approach to merging existing data sets through use of "metric bridges" to address key questions not addressable through 1 data set alone, and (b) the fact that the articles as a whole included limited mention of neuroscientific (i.e., brain research) concepts, methods, and findings. One likely reason for the lack of reference to brain-oriented work is the persisting gap between smaller sample size lab-experimental and larger sample size multivariate-correlational approaches to psychological research. As a strategy for addressing this gap and bringing a distinct neuroscientific component to the National Institute on Aging's conscientiousness and health initiative, I suggest that the metric bridging approach highlighted by Friedman and colleagues could be used to connect existing large-scale data sets containing both neurophysiological variables and measures of individual difference constructs to other data sets containing richer arrays of nonphysiological variables, including data from longitudinal or twin studies focusing on personality and health-related outcomes (e.g., Terman Life Cycle study and Hawaii longitudinal studies, as described in the article by Kern et al., 2014). (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method helps ensure that the sample size is adequate to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inner cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
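For orientation, the asymptotic unconditional McNemar sample size can be computed from the expected discordant proportions of the 2 × 2 table; the sketch below uses the standard normal-approximation formula with illustrative proportions, not data from the review.

```python
from math import sqrt, ceil
from scipy.stats import norm

def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
    """Number of pairs for the asymptotic (unconditional) McNemar test.

    p10 and p01 are the expected discordant-cell proportions of the paired 2x2 table.
    """
    delta = p10 - p01            # difference in marginal proportions
    psi = p10 + p01              # total discordant proportion
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil((z_a * sqrt(psi) + z_b * sqrt(psi - delta ** 2)) ** 2 / delta ** 2)

print(mcnemar_pairs(p10=0.15, p01=0.05))   # illustrative discordant proportions
```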
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2014-01-01
A thermal design concept of attaching the thermoelectric cooler (TEC) hot side directly to the radiator and maximizing the number of TECs to cool multiple detectors in space is presented. It minimizes the temperature drop between the TECs and radiator. An ethane constant conductance heat pipe transfers heat from the detectors to a TEC cold plate which the cold side of the TECs is attached to. This thermal design concept minimizes the size of TEC heat rejection systems. Hence it reduces the problem of accommodating the radiator within a required envelope. It also reduces the mass of the TEC heat rejection system. Thermal testing of a demonstration unit in vacuum verified the thermal performance of the thermal design concept.
A Comparison Of A Solar Power Satellite Concept To A Concentrating Solar Power System
NASA Technical Reports Server (NTRS)
Smitherman, David V.
2013-01-01
A comparison is made of a Solar Power Satellite concept in geostationary Earth orbit to a Concentrating Solar Power system on the ground to analyze overall efficiencies of each infrastructure from solar radiance at 1 AU to conversion and transmission of electrical energy into the power grid on the Earth's surface. Each system is sized for a 1-gigawatt output to the power grid and then further analyzed to determine primary collector infrastructure areas. Findings indicate that even though the Solar Power Satellite concept has a higher end-to-end efficiency, the combined space and ground collector infrastructure is still about the same size as a comparable Concentrating Solar Power system on the ground.
ERIC Educational Resources Information Center
Lee, Ji-Eun; Kim, Kyoung-Tae
2016-01-01
This study aimed to explore pre-service elementary teachers' (PSTs') conceptions of effective teacher talk in mathematics instruction, which were interpreted primarily based on the concept of communicative approach. This was accomplished through a task that involves analyzing and evaluating a sample teacher-student dialogue. This study…
Use of units of measurement error in anthropometric comparisons.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should be included in analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies the approach to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (> 0) on measurements expressed in millimetres but can in units of TEM (= 0). Only 81 women can fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
Red mud flocculation process in alumina production
NASA Astrophysics Data System (ADS)
Fedorova, E. R.; Firsov, A. Yu
2018-05-01
The process of thickening and washing red mud is a bottleneck of alumina production. The existing automated systems for thickening process control involve stabilizing the parameters of the primary technological circuits of the thickener. A current direction of scientific research is the creation and improvement of models and of model-based systems for thickening process control. However, the known models do not fully consider perturbing effects, in particular the particle size distribution in the feed and the distribution of floccules by size after the aggregation process in the feed barrel. The article is devoted to the basic concepts and terms used in writing the population balance algorithm. The population balance model is implemented in the MatLab environment. The result of the simulation is the particle size distribution after the flocculation process. This model allows one to predict the size distribution of floccules after the aggregation of red mud in the feed barrel. Red mud from Jamaican bauxite served as the industrial sample; a Cytec Industries HX-3000-series flocculant at a concentration of 0.5% was used. In the simulations, model constants obtained in a tubular tank in the laboratories of CSIRO (Australia) were used.
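The population balance bookkeeping referred to above can be illustrated with the classic discrete Smoluchowski aggregation equation; the sketch below (Python, constant kernel, explicit Euler stepping) is a generic toy model rather than the authors' MatLab implementation, and all constants are assumptions.

```python
import numpy as np

def smoluchowski_step(n, kernel, dt):
    """One explicit Euler step of the discrete Smoluchowski aggregation equation.

    n[i] is the number concentration of floccules made of i+1 primary particles.
    """
    k = len(n)
    dndt = np.zeros(k)
    for i in range(k):
        for j in range(k):
            rate = kernel[i, j] * n[i] * n[j]
            dndt[i] -= rate                    # i-class floccules consumed by collisions
            if i + j + 1 < k:
                dndt[i + j + 1] += 0.5 * rate  # larger floccule formed (0.5 avoids double counting)
    return n + dt * dndt

size_classes = 20
n = np.zeros(size_classes)
n[0] = 1.0                                             # monodisperse feed of primary particles
kernel = np.full((size_classes, size_classes), 1e-2)   # assumed constant aggregation kernel
for _ in range(300):
    n = smoluchowski_step(n, kernel, dt=1.0)
print(n.sum())   # total floccule number falls as aggregation proceeds
```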
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
Development of deployable structures for large space platform systems, volume 1
NASA Technical Reports Server (NTRS)
1982-01-01
Generic deployable spacecraft configurations and deployable platform systems concepts were identified. Sizing, building block concepts, orbiter packaging, thermal analysis, cost analysis, and mass properties analysis as related to platform systems integration are considered. Technology needs are examined and the major criteria used in concept selection are delineated. Requirements for deployable habitat modules, tunnels, and OTV hangars are considered.
Influence of cornual insemination on conception in dairy cattle.
Senger, P L; Becker, W C; Davidge, S T; Hillers, J K; Reeves, J J
1988-11-01
The objective of this study was to compare conception to artificial insemination (AI) services in dairy cattle when semen was deposited into the uterine body or into both uterine horns (cornual insemination). Nine herdsman inseminators (HI) in four commercial dairy herds in Washington constituted the experimental units. Herds ranged in size from 393 cows to 964 cows. The duration of the experiment was 12 mo in three herds and 18 mo in the fourth herd. At the beginning of the experiment all inseminators were trained to deposit semen in the body of the uterus. Inseminators were instructed to use this method for 6 mo. Following employment of body deposition, the same inseminators were retrained to deposit one-half of the semen into the right uterine horn and one-half into the left uterine horn. Cornual inseminations were performed for 6 mo. A total of 4,178 services constituted the data set. Milk samples were collected from cows on the day of insemination and later were assayed for progesterone (P4). There was variation (P less than .01) in conception associated with month of insemination and insemination method (P less than .001). The monthly variation was not associated with season of the year. Least squares means for conception when semen was deposited in the uterine body was 44.7%, compared with 64.6% when cornual insemination was employed. The insemination treatment X inseminator interaction was not significant. Results suggest that cornual insemination provides an alternative to deposition of semen in the uterine body.
Comet coma sample return instrument
NASA Technical Reports Server (NTRS)
Albee, A. L.; Brownlee, Don E.; Burnett, Donald S.; Tsou, Peter; Uesugi, K. T.
1994-01-01
The sample collection technology and instrument concept for the Sample of Comet Coma Earth Return Mission (SOCCER) are described. The scientific goals of this Flyby Sample Return are to return coma dust and volatile samples from a known comet source, which will permit accurate elemental and isotopic measurements for thousands of individual solid particles and volatiles, detailed analysis of the dust structure, morphology, and mineralogy of the intact samples, and identification of the biogenic elements or compounds in the solid and volatile samples. With these intact samples, morphologic, petrographic, and phase structural features can be determined. Information on dust particle size, shape, and density can be ascertained by analyzing penetration holes and tracks in the capture medium. Time and spatial data of dust capture will provide understanding of the flux dynamics of the coma and the jets. Additional information will include the identification of cosmic ray tracks in the cometary grains, which can provide a particle's process history and perhaps even the age of the comet. The measurements will be made with the same equipment used for studying micrometeorites for decades past; hence, the results can be directly compared without extrapolation or modification. The data will provide a powerful and direct technique for comparing the cometary samples with all known types of meteorites and interplanetary dust. This sample collection system will provide the first sample return from a specifically identified primitive body and will allow, for the first time, a direct method of matching meteoritic materials captured on Earth with known parent bodies.
Gender Differences in Eating Behavior and Social Self Concept among Malaysian University Students.
Khor, Geoklin; Cobiac, Lynne; Skrzypiec, Grace
2002-03-01
University students may encounter personal, family, social, and financial stresses while trying to cope with their academic challenges. Such constraints could affect their eating behavior and health status, which, in turn, may have negative effects on their studies. In light of the limited information on this subject in Malaysia, this study was undertaken on a sample of 180 students pursuing different academic programs in a Malaysian university. The study objectives were to determine the students' eating behavior, including body weight control and the extent of fear of being fat, and their social self concept, which reflects the five selves, namely the psychological self, the social self, the sexual self, the family self and the physical self. Eating behavior and social self concept were determined based on various methods previously validated in studies on young adults in Asia and Australia. This article focuses on gender comparisons for these determinants. The results showed that psychological and emotional factors have a significant bearing on the eating behavior of university students. Uninhibited eating behavior of both the males and females showed significant and negative correlations with feelings pertaining to personal worth, the physical self, and their relationships with peers and families. Gender differences were manifested for some determinants. The females showed more restrained eating behavior than the males; the females had a significantly higher score for family relationship, which appears to be a significant factor in male students' eating behavior. Future studies on a larger sample size may help to unravel the extent to which psychological factors influence the eating behavior of students, and the underlying psychosocial basis for some of the gender differences reported in this study.
The all-on-four treatment concept: Systematic review
Soto-Penaloza, David; Zaragozí-Alonso, Regino; Penarrocha-Diago, María
2017-01-01
Objectives To systematically review the literature on the “all-on-four” treatment concept regarding its indications, surgical procedures, prosthetic protocols and technical and biological complications after at least three years in function. Study Design The three major electronic databases were screened: MEDLINE (via PubMed), EMBASE, and the Cochrane Library of the Cochrane Collaboration (CENTRAL). In addition, the ‘grey literature’ was screened electronically using the System for Information on Grey Literature in Europe - Open Grey, covering the period from January 2005 up to and including April 2016. Results A total of 728 articles were obtained from the initial screening process. Of these articles, 24 fulfilled the inclusion criteria. Methodological quality assessment showed sample size calculation to be reported by only one study, and follow-up did not include a large number of participants, a fact that may introduce bias and lead to misleading interpretations of the study results. Conclusions The all-on-four treatment concept offers a predictable way to treat the atrophic jaw in patients who prefer to avoid regenerative procedures, which increase morbidity and treatment fees. The results obtained indicate a survival rate of 99.8% at more than 24 months. However, current evidence is limited due to the scarcity of information on methodological quality, a lack of adequate follow-up, and sample attrition. Biological complications (e.g., peri-implantitis) are reported in few patients after a mean follow-up of two years. Adequate definition of the success/survival criteria is thus necessary, due to the high prevalence of peri-implant diseases. Key words: All-on-four, all-on-4, tilted implants, dental prostheses, immediate loading. PMID:28298995
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N∗^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
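The square-root behaviour can be reproduced numerically with a toy utility in which the per-patient gain from the trial improves at a rate of order 1/n; this is an assumed stand-in for the paper's exponential-family formulation, with illustrative constants, not the authors' utility function.

```python
from math import sqrt

def expected_net_gain(n, N, g0=1.0, a=50.0, cost=0.2):
    """Toy utility: (N - n) future patients each gain g0 - a/n; each trial patient costs `cost`."""
    return (N - n) * (g0 - a / n) - cost * n

def optimal_trial_size(N):
    """Grid-search the trial size maximizing the toy expected net gain."""
    return max(range(1, N), key=lambda n: expected_net_gain(n, N))

for N in (1_000, 10_000, 100_000):
    n_opt = optimal_trial_size(N)
    print(N, n_opt, round(n_opt / sqrt(N), 2))   # ratio stays roughly constant, i.e. n = O(N^1/2)
```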
NREL Senior Research Fellow Honored by The Journal of Physical Chemistry
and quantum size effects in semiconductors and carrier dynamics in semiconductor quantum dots and using hot carrier effects, size quantization, and superlattice concepts that could, in principle, enable
Development of Sample Verification System for Sample Return Missions
NASA Technical Reports Server (NTRS)
Toda, Risaku; McKinney, Colin; Jackson, Shannon P.; Mojarradi, Mohammad; Trebi-Ollennu, Ashitey; Manohara, Harish
2011-01-01
This paper describes the development of a proof-of-concept sample verification system (SVS) for in-situ mass measurement of planetary rock and soil samples in future robotic sample return missions. Our proof-of-concept SVS device contains a 10 cm diameter pressure-sensitive elastic membrane placed at the bottom of a sample canister. The membrane deforms under the weight of accumulating planetary sample. The membrane is positioned in proximity to an opposing substrate with a narrow gap. The deformation of the membrane narrows the gap, resulting in increased capacitance between the two nearly parallel plates. Capacitance readout circuitry on a nearby printed circuit board (PCB) transmits data via a low-voltage differential signaling (LVDS) interface. The fabricated SVS proof-of-concept device has successfully demonstrated approximately a 1 pF/gram capacitance change.
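The sensing principle can be illustrated with the ideal parallel-plate relationship C = ε0·A/d; in the sketch below only the 10 cm membrane diameter comes from the abstract, while the nominal gap and the assumed uniform deflection are illustrative guesses.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(diameter_m, gap_m):
    """Ideal parallel-plate capacitance of a circular membrane facing a flat substrate."""
    area = math.pi * (diameter_m / 2) ** 2
    return EPS0 * area / gap_m

c_empty = parallel_plate_capacitance(0.10, 100e-6)     # assumed 100 um nominal gap
c_loaded = parallel_plate_capacitance(0.10, 99.85e-6)  # assumed ~0.15 um average deflection
print(f"{c_empty * 1e12:.1f} pF -> {c_loaded * 1e12:.1f} pF "
      f"(change {(c_loaded - c_empty) * 1e12:.2f} pF)")
```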
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation might not be easy for them. This review paper provides sample size tables with regard to sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. The approaches on how to use the tables are also discussed. PMID:27891446
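Tables of this kind are typically generated from the normal-approximation precision formula, with the required number of diseased (or healthy) subjects scaled up by the expected prevalence; the sketch below uses that standard approach with illustrative inputs and is not taken from the paper's tables.

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sens, precision, prevalence, alpha=0.05):
    """Total subjects needed to estimate sensitivity to within +/- precision."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
    return ceil(n_diseased / prevalence)

def n_for_specificity(spec, precision, prevalence, alpha=0.05):
    """Total subjects needed to estimate specificity to within +/- precision."""
    z = norm.ppf(1 - alpha / 2)
    n_healthy = z ** 2 * spec * (1 - spec) / precision ** 2
    return ceil(n_healthy / (1 - prevalence))

# e.g. expected sensitivity 0.90 and specificity 0.85, +/-0.05 precision, 20% prevalence
print(n_for_sensitivity(0.90, 0.05, 0.20), n_for_specificity(0.85, 0.05, 0.20))
```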
2013-01-01
Background Obesity and mental health problems are prevalent among indigenous children in Canada and the United States. In this cross-sectional study the associations between adiposity and body size satisfaction, body image and self-concept were examined in indigenous children in grades four to six living in Cree communities in the Province of Quebec (Canada). Methods Weight status and body mass index (BMI) z-scores were derived from children’s measured height and weight using the World Health Organization growth reference. Multivariate regression models that included child’s age and sex were used to assess the association between (a) weight status and physical appearance satisfaction using pictorial and verbal body rating measures in 202 of 263 children, and (b) BMI z-score and self-concept measured using the Piers-Harris Children’s Self-Concept Scale in a subset of 78 children. Results Children (10.67 ± 0.98 years) were predominantly overweight (28.2%) or obese (45.0%). Many (40.0%) children had low global self-concept indicating that they had serious doubts about their self-worth and lacked confidence. About one-third (34.7%) of children did not like the way they looked and 46.3% scored low on the physical appearance and attributes domain of self-concept indicating poor self-esteem in relation to their body image and physical strength, feeling unattractive, or being bothered by specific aspects of their physical appearance. Compared to normal weight children, overweight and obese children were more likely to desire being smaller (OR=4.3 and 19.8, respectively), say their body size was too big (OR=7.7 and 30.6, respectively) and not liking the way they looked (OR=2.4 and 7.8, respectively). Higher BMI z-score was associated with lower scores for global self-concept (β=−1.3), intellectual and school status (β=−1.5) and physical appearance and attributes (β=−1.3) indicating negative self-evaluations in these areas. Despite comparable weight status to boys, girls were more likely to have lower scores for global self-concept (β=−3.8), physical appearance and attributes (β=−4.2), desiring to be smaller (OR=4.3) and not liking the way they looked (OR=2.3). Conclusions The psychosocial correlates of obesity are important considerations for indigenous children, particularly girls, given that poor self-concept and body size dissatisfaction negatively impact mental and emotional qualities of life. PMID:23937909
Curiosity Sky Crane Maneuver, Artist Concept
2011-10-03
This artist concept shows the sky crane maneuver during the descent of NASA Curiosity rover to the Martian surface. The sheer size of the rover, over one ton (900 kilograms), would preclude it from taking advantage of an airbag-assisted landing.
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
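The consequence of assuming a larger effect than is eventually observed can be quantified directly; the sketch below (normal approximation, illustrative standardized effect sizes) plans a trial at an effect of 0.5 SD and then computes the power actually attained if the true effect is 0.35 SD.

```python
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison at a standardized effect size."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / effect_size) ** 2)

def achieved_power(effect_size, n_per_arm, alpha=0.05):
    """Power actually attained if the true effect differs from the planning assumption."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect_size * sqrt(n_per_arm / 2) - z_a)

n = n_per_group(0.50)                          # planned assuming an effect of 0.5 SD
print(n, round(achieved_power(0.35, n), 2))    # power drops to roughly 50% if the real effect is 0.35 SD
```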
Trajectory Design for a Single-String Impactor Concept
NASA Technical Reports Server (NTRS)
Dono Perez, Andres; Burton, Roland; Stupl, Jan; Mauro, David
2017-01-01
This paper introduces a trajectory design for a secondary spacecraft concept to augment science return in interplanetary missions. The concept consists of a single-string probe with a kinetic impactor on board that generates an artificial plume to perform in-situ sampling. The trajectory design was applied to a particular case study that samples ejecta particles from the Jovian moon Europa. Results were validated using statistical analysis. Details regarding the navigation, targeting and disposal challenges related to this concept are presented herein.
Liu, Fang
2016-01-01
In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
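The abstract does not spell out the priors or formulas used, so the sketch below shows the simplest version of such a framework: a conjugate Beta-Binomial posterior for the AE incidence proportion p, from which the quantities (1) and (2) above can be read off directly. The Beta(1, 1) prior and the example numbers are assumptions for illustration only.

```python
from scipy.stats import beta

def ae_posterior_summary(n_patients, n_with_ae, threshold,
                         prior_a=1.0, prior_b=1.0, conf=0.95):
    """Conjugate Beta-Binomial summary of an AE incidence proportion p."""
    post = beta(prior_a + n_with_ae, prior_b + n_patients - n_with_ae)
    return {
        "P(p > threshold)": 1 - post.cdf(threshold),
        "P(p < threshold)": post.cdf(threshold),
        "upper bound on p": post.ppf(conf),
        "lower bound on p": post.ppf(1 - conf),
    }

# Example: 1 adverse event among 30 patients, threshold of interest 10%.
print(ae_posterior_summary(n_patients=30, n_with_ae=1, threshold=0.10))
```

The quantity (3), the minimum or maximum allowable number of patients with an AE, follows by evaluating the same posterior over a range of counts until the stated confidence level is crossed.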
Adjustable Nyquist-rate System for Single-Bit Sigma-Delta ADC with Alternative FIR Architecture
NASA Astrophysics Data System (ADS)
Frick, Vincent; Dadouche, Foudil; Berviller, Hervé
2016-09-01
This paper presents a new smart and compact system dedicated to controlling the output sampling frequency of an analogue-to-digital converter (ADC) based on a single-bit sigma-delta (ΣΔ) modulator. This system dramatically improves the spectral analysis capabilities of power network analysers (power meters) by adjusting the ADC's sampling frequency to the input signal's fundamental frequency with a few parts per million accuracy. The trade-off between straightforwardness and performance that motivated the choice of the ADC's architecture is discussed first. The paper then presents design considerations for an ultra-steep direct-form FIR filter that is optimised in terms of size and operating speed. Thanks to a compact standard VHDL language description, the architecture of the proposed system is particularly suitable for application-specific integrated circuit (ASIC) implementation-oriented low-power and low-cost power meter applications. Field programmable gate array (FPGA) prototyping and experimental results validate the adjustable sampling frequency concept. They also show that the system can perform better in terms of implementation and power capabilities compared to dedicated IP resources.
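The paper's FIR architecture and frequency-locking logic are not reproduced here; purely as a hedged illustration of the underlying signal chain, the sketch below generates a single-bit stream with a first-order sigma-delta modulator and recovers the signal with a direct-form FIR low-pass filter followed by decimation, the decimation ratio being the knob that an adjustable-rate system would tie to the input's fundamental frequency. All rates and filter parameters are assumed values.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs, f_in, n = 1_000_000, 50.0, 200_000            # assumed rates, not from the paper
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f_in * t)

# First-order single-bit sigma-delta modulator (error-feedback form).
bits = np.empty(n)
acc = 0.0
for i in range(n):
    acc += x[i] - (bits[i - 1] if i else 0.0)     # accumulate quantisation error
    bits[i] = 1.0 if acc >= 0 else -1.0

# Direct-form FIR low-pass plus decimation; changing `decim` changes the
# output sampling frequency, which is what the proposed system adjusts.
decim = 256
taps = firwin(numtaps=511, cutoff=0.8 / decim)    # cutoff normalised to Nyquist
y = lfilter(taps, 1.0, bits)[::decim]
```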
NASA Astrophysics Data System (ADS)
Farics, Éva; Farics, Dávid; Kovács, József; Haas, János
2017-10-01
The main aim of this paper is to determine the depositional environments of an Upper-Eocene coarse-grained clastic succession in the Buda Hills, Hungary. First, we measured several commonly used sample parameters (size, amount, roundness and sphericity) in a faster and more objective way than traditional measurement approaches allow, using the newly developed Rock Analyst application. For the multivariate data obtained, we applied Combined Cluster and Discriminant Analysis (CCDA) in order to determine homogeneous groups of the sampling locations based on the quantitative composition of the conglomerate as well as the shape parameters (roundness and sphericity). The result is the spatial pattern of these groups, which assists with the interpretation of the depositional processes. According to our concept, sampling sites which belong to the same homogeneous group were likely formed under similar geological circumstances and by similar geological processes. In the Buda Hills, we were able to distinguish various sedimentological environments within the area based on the results: fan, intermittent stream or marine.
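CCDA is a specific published method and is not reproduced here. As a rough, stand-in illustration of the workflow, the sketch below groups sampling sites by hierarchical clustering of composition and shape variables and then checks how cleanly the groups separate with a linear discriminant analysis; the site data, feature names and number of groups are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical per-site measurements: clast-type fractions plus shape scores.
rng = np.random.default_rng(0)
sites = pd.DataFrame(
    rng.random((12, 4)),
    columns=["dolomite_frac", "chert_frac", "roundness", "sphericity"],
    index=[f"site_{i}" for i in range(12)],
)

# Step 1: hierarchical clustering of sites into candidate homogeneous groups.
groups = fcluster(linkage(sites, method="ward"), t=3, criterion="maxclust")

# Step 2: discriminant analysis to see how well those groups separate.
lda = LinearDiscriminantAnalysis().fit(sites, groups)
print(pd.Series(groups, index=sites.index))
print("resubstitution accuracy:", lda.score(sites, groups))
```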
Transfer function concept for ultrasonic characterization of material microstructures
NASA Technical Reports Server (NTRS)
Vary, A.; Kautz, H. E.
1986-01-01
The approach given depends on treating material microstructures as elastomechanical filters that have analytically definable transfer functions. These transfer functions can be defined in terms of the frequency dependence of the ultrasonic attenuation coefficient. The transfer function concept provides a basis for synthesizing expressions that characterize polycrystalline materials relative to microstructural factors such as mean grain size, grain-size distribution functions, and grain boundary energy transmission. Although the approach is nonrigorous, it leads to a rational basis for combining the previously mentioned diverse and fragmented equations for ultrasonic attenuation coefficients.
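As a worked illustration of the frequency dependence mentioned above, the sketch below fits a hypothetical attenuation spectrum to a two-term textbook model: a linear absorption term plus an f^4 Rayleigh-scattering term whose coefficient scales with mean grain volume. This functional form and the numbers are assumptions and are not the authors' transfer-function expressions.

```python
import numpy as np
from scipy.optimize import curve_fit

def attenuation(f, c1, c4):
    """Assumed model: absorption ~ f plus Rayleigh grain scattering ~ f**4."""
    return c1 * f + c4 * f**4

# Hypothetical attenuation coefficients (Np/cm) measured from 5 to 50 MHz.
f = np.linspace(5, 50, 25)
alpha = attenuation(f, 2e-3, 3e-8) + np.random.default_rng(1).normal(0, 5e-4, f.size)

(c1_hat, c4_hat), _ = curve_fit(attenuation, f, alpha)
# Under the Rayleigh assumption the fitted c4 scales with mean grain volume,
# so comparing c4 between samples gives a relative grain-size indicator.
print(c1_hat, c4_hat)
```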
NASA Technical Reports Server (NTRS)
Clement, J. D.; Kirby, K. D.
1973-01-01
Exploratory calculations were performed for several gas core breeder reactor configurations. The computational method involved the use of the MACH-1 one dimensional diffusion theory code and the THERMOS integral transport theory code for thermal cross sections. Computations were performed to analyze thermal breeder concepts and nonbreeder concepts. Analysis of breeders was restricted to the (U-233)-Th breeding cycle, and computations were performed to examine a range of parameters. These parameters include U-233 to hydrogen atom ratio in the gaseous cavity, carbon to thorium atom ratio in the breeding blanket, cavity size, and blanket size.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≥ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
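The RUMM software's internal computations are not given in this abstract, so the sketch below only illustrates the general idea of an algebraic sample size adjustment as it is commonly described: a chi-square item-fit statistic is rescaled in proportion to a nominal sample size before its p-value is computed. The proportional rescaling, the statistic and the numbers are assumptions; the program's actual adjustment may differ.

```python
from scipy.stats import chi2

def adjusted_fit_p(chi2_obs, df, n_actual, n_adjusted):
    """p-value of a chi-square fit statistic after an algebraic sample size
    adjustment, here taken as simple proportional rescaling (an assumption)."""
    return chi2.sf(chi2_obs * n_adjusted / n_actual, df)

# A fit statistic that signals misfit at N = 2500 ...
print(adjusted_fit_p(18.0, df=8, n_actual=2500, n_adjusted=2500))  # ~0.02
# ... looks unremarkable once adjusted downward to an effective N = 500.
print(adjusted_fit_p(18.0, df=8, n_actual=2500, n_adjusted=500))   # ~0.89
```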
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glatard, Anaïs; Berges, Aliénor; Sahota, Tarjinder
The no-observed-adverse-effect level (NOAEL) of a drug defined from animal studies is important for inferring a maximal safe dose in human. However, several issues are associated with its concept, determination and application. It is confined to the actual doses used in the study; becomes lower with increasing sample size or dose levels; and reflects the risk level seen in the experiment rather than what may be relevant for human. We explored a pharmacometric approach in an attempt to address these issues. We first used simulation to examine the behaviour of the NOAEL values as determined by current common practice; and then fitted the probability of toxicity as a function of treatment duration and dose to data collected from all applicable toxicology studies of a test compound. Our investigation was in the context of an irreversible toxicity that is detected at the end of the study. Simulations illustrated NOAEL's dependency on experimental factors such as dose and sample size, as well as the underlying uncertainty. Modelling the probability as a continuous function of treatment duration and dose simultaneously to data from multiple studies allowed the estimation of the dose, along with its confidence interval, for a maximal risk level that might be deemed as acceptable for human. The model-based data integration also reconciled between-study inconsistency and explicitly provided maximised estimation confidence. Such an alternative NOAEL determination method should be explored for its more efficient data use, more quantifiable insight into toxic doses, and the potential for more relevant animal-to-human translation. - Highlights: • Simulations revealed issues with NOAEL concept, determination and application. • Probabilistic modelling was used to address these issues. • The model integrated time-dose-toxicity data from multiple studies. • The approach uses data efficiently and may allow more meaningful human translation.
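A minimal sketch of the kind of model described: the probability of an irreversible toxicity finding modelled as a logistic function of dose and treatment duration across pooled study data, then inverted for the dose corresponding to a maximal acceptable risk level. The grouped data, the logit link, the linear covariate form and the 10% risk target are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import logit

# Grouped toxicity findings pooled across (fabricated) studies:
# dose in mg/kg, treatment duration in weeks, animals with the finding out of n.
dose = np.array([0, 10, 30, 100, 300, 0, 10, 30, 100, 300], float)
weeks = np.array([4, 4, 4, 4, 4, 13, 13, 13, 13, 13], float)
n_tox = np.array([0, 0, 1, 3, 7, 0, 0, 2, 6, 9])
n_all = np.full(10, 10)

X = sm.add_constant(np.column_stack([dose, weeks]))
fit = sm.GLM(np.column_stack([n_tox, n_all - n_tox]), X,
             family=sm.families.Binomial()).fit()

# Invert the fitted logit for the dose giving an assumed acceptable 10% risk
# after 13 weeks; a confidence interval would follow from the parameter
# covariance (e.g. delta method or simulation), as the abstract suggests.
b0, b_dose, b_weeks = fit.params
d_accept = (logit(0.10) - b0 - b_weeks * 13.0) / b_dose
print(fit.params, d_accept)
```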
NASA Astrophysics Data System (ADS)
Dunlop, Katherine M.; Jarvis, Toby; Benoit-Bird, Kelly J.; Waluk, Chad M.; Caress, David W.; Thomas, Hans; Smith, Kenneth L.
2018-04-01
Benthopelagic animals are an important component of the deep-sea ecosystem, yet are notoriously difficult to study. Multibeam echosounders (MBES) deployed on autonomous underwater vehicles (AUVs) represent a promising technology for monitoring this elusive fauna at relatively high spatial and temporal resolution. However, application of this remote-sensing technology to the study of small (relative to the sampling resolution), dispersed and mobile animals at depth does not come without significant challenges with respect to data collection, data processing and vessel avoidance. As a proof of concept, we used data from a downward-looking RESON SeaBat 7125 MBES mounted on a Dorado-class AUV to detect and characterise the location and movement of backscattering targets (which were likely to have been individual fish or squid) within 50 m of the seafloor at 800 m depth in Monterey Bay, California. The targets were detected and tracked, enabling their numerical density and movement to be characterised. The results revealed a consistent movement of targets downwards away from the AUV that we interpreted as an avoidance response. The large volume and complexity of the data presented a computational challenge, while reverberation and noise, spatial confounding and a marginal sampling resolution relative to the size of the targets caused difficulties for reliable and comprehensive target detection and tracking. Nevertheless, the results demonstrate that an AUV-mounted MBES has the potential to provide unique and detailed information on the in situ abundance, distribution, size and behaviour of both individual and aggregated deep-sea benthopelagic animals. We provide detailed data-processing information for those interested in working with MBES water-column data, and a critical appraisal of the data in the context of aquatic ecosystem research. We consider future directions for deep-sea water-column echosounding, and reinforce the importance of measures to mitigate vessel avoidance in studies of aquatic ecosystems.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
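A minimal re-creation of the kind of simulated experiment described, assuming normally distributed data and a two-sided two-sample t-test; the effect size, the distributions and the number of replicates are illustrative choices rather than the authors' settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def error_rates(n, effect, n_sim=5_000, alpha=0.05):
    """Monte Carlo Type I and Type II error rates of the two-sample t-test."""
    type1 = type2 = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, 1.0, n)
        sham = rng.normal(0.0, 1.0, n)         # exposure with no real effect
        exposed = rng.normal(effect, 1.0, n)   # exposure with a real effect
        type1 += stats.ttest_ind(control, sham).pvalue < alpha
        type2 += stats.ttest_ind(control, exposed).pvalue >= alpha
    return type1 / n_sim, type2 / n_sim

for n in (3, 6, 9):
    print(n, error_rates(n, effect=1.0))   # both error rates shrink as n grows
```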
Analysis of the Touch-And-Go Surface Sampling Concept for Comet Sample Return Missions
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Bayard, David S.; Blackmore, Lars
2012-01-01
This paper studies the Touch-and-Go (TAG) concept for enabling a spacecraft to take a sample from the surface of a small primitive body, such as an asteroid or comet. The idea behind the TAG concept is to let the spacecraft descend to the surface, make contact with the surface for several seconds, and then ascend to a safe location. Sampling would be accomplished by an end-effector that is active during the few seconds of surface contact. The TAG event is one of the most critical events in a primitive body sample-return mission. The purpose of this study is to evaluate the dynamic behavior of a representative spacecraft during the TAG event, i.e., immediately prior, during, and after surface contact of the sampler. The study evaluates the sample-collection performance of the proposed sampling end-effector, in this case a brushwheel sampler, while acquiring material from the surface during the contact. A main result of the study is a guidance and control (G&C) validation of the overall TAG concept, in addition to specific contributions to demonstrating the effectiveness of using nonlinear clutch mechanisms in the sampling arm joints, and increasing the length of the sampling arms to improve robustness.
NASA Astrophysics Data System (ADS)
Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.
2009-10-01
It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.
[History of pharmaceutical packaging in modern Japan. II--Package size of pharmaceuticals].
Hattori, Akira
2014-01-01
When planning pharmaceutical packaging, the package size for the product is important for determining the basic package concept. Initially, the sales unit for herbal medicines was their weight; however, in 1868, around the early part of the Meiji era, both Japanese and Western units were in use and the sales unit was a source of confusion. Since the Edo era, the packing size for OTC medicines had been specified by weight, number of units, dosage or treatment period. These were devised in various ways in consideration of convenience for the consumer, but the concept was not simple. In 1887, from the time that the first edition of the Japanese Pharmacopoeia came out, use of the metric system began to spread in Japan. Its use spread gradually to the package size of pharmaceutical products. At the time, the number of pharmaceutical units (i.e., tablets) became the sales unit, which is easy for the purchaser to understand.
Measuring sperm backflow following female orgasm: a new method
King, Robert; Dempsey, Maria; Valentine, Katherine A.
2016-01-01
Background Human female orgasm is a vexed question in the field, while there is credible evidence of cryptic female choice that has many hallmarks of orgasm in other species. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, a realm of oxytocin-mediated sperm retention mechanisms seems to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. Method A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced, combined with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. Results The (simulated) sperm flowback was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t (5)=7.02, p=0.001; Cohen's d=3.97, effect size r=0.89, indicating a very large effect. Conclusions This method could allow females to test an aspect of sexual response that has been linked to lowered fertility in a home setting with minimal training. It needs to be replicated with a larger sample size. PMID:27799082
Fish mercury distribution in Massachusetts, USA lakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, J.; Hutcheson, M.S.; West, C.R.
1999-07-01
The sediment, water, and three species of fish from 24 of Massachusetts' (relatively) least-impacted water bodies were sampled to determine the patterns of variation in edible tissue mercury concentrations and the relationships of these patterns to characteristics of the water, sediment, and water bodies (lake, wetland, and watershed areas). Sampling was apportioned among three different ecological subregions and among lakes of differing trophic status. The authors sought to partition the variance to discover if these broadly defined concepts are suitable predictors of mercury levels in fish. Average muscle mercury concentrations were 0.15 mg/kg wet weight in the bottom-feeding brown bullheads (Ameiurus nebulosus); 0.31 mg/kg in the omnivorous yellow perch (Perca flavescens); and 0.39 mg/kg in the predaceous largemouth bass (Micropterus salmoides). Statistically significant differences in fish mercury concentrations between ecological subregions in Massachusetts, USA, existed only in yellow perch. The productivity level of the lakes (as deduced from Carlson's Trophic Status Index) was not a strong predictor of tissue mercury concentrations in any species. pH was a highly (inversely) correlated environmental variable with yellow perch and brown bullhead tissue mercury. Largemouth bass tissue mercury concentrations were most highly correlated with the weight of the fish (+), lake size (+), and source area sizes (+). Properties of individual lakes appear more important for determining fish tissue mercury concentrations than do small-scale ecoregional differences. Species that show major mercury variation with size or trophic level may not be good choices for use in evaluating the importance of environmental variables.
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control ensuing statistical inference over parasite rates and not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
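The first calculator transforms a confidence interval for SP into one for SCR when SRR is known. As a hedged illustration of that idea, the sketch below inverts the reversible catalytic model at a single representative age; the published calculator works with a full age distribution, and the survey size, age, observed SP and SRR used here are assumed values.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def seroprev(scr, srr, age):
    """Reversible catalytic model: expected seroprevalence at a given age."""
    rate = scr + srr
    return (scr / rate) * (1.0 - np.exp(-rate * age))

def scr_from_sp(sp, srr, age):
    """Invert the catalytic model for SCR at one age, given a known SRR."""
    return brentq(lambda scr: seroprev(scr, srr, age) - sp, 1e-8, 10.0)

# Assumed survey: n = 300, mean age 8 years, observed SP = 20%, known SRR = 0.01.
n, age, srr, sp_hat = 300, 8.0, 0.01, 0.20
half_width = norm.ppf(0.975) * np.sqrt(sp_hat * (1 - sp_hat) / n)
ci = [scr_from_sp(sp, srr, age) for sp in (sp_hat - half_width, sp_hat + half_width)]
print(scr_from_sp(sp_hat, srr, age), ci)
```

Increasing n narrows the SP interval and, through the same transformation, the SCR interval, which is the quantity the proposed calculators size the survey for.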
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
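To make one of the paired plan/estimator combinations concrete, the sketch below draws a simple random sample of units and compares the simple expansion estimator with the ratio estimator that uses unit area as the auxiliary variable; the frame, areas and clustered counts are fabricated, not the study's pronghorn data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fabricated frame: 120 sampling units with known areas and latent animal counts.
area = rng.uniform(2.0, 12.0, 120)                           # km^2, known for all units
counts = rng.poisson(0.8 * area * rng.gamma(0.3, 3.0, 120))  # clumped, area-related

# Simple random sampling without replacement at ~33% intensity.
sample = rng.choice(120, size=40, replace=False)

# Simple expansion estimator vs. ratio estimator with area as auxiliary variable.
expansion_est = counts[sample].mean() * 120
ratio_est = counts[sample].sum() / area[sample].sum() * area.sum()
print(int(counts.sum()), round(expansion_est), round(ratio_est))
```

Repeating the draw many times and recording the spread of each estimator around the true total reproduces, in miniature, the accuracy and coverage comparisons reported above.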
An Updated Equilibrium Machine
NASA Astrophysics Data System (ADS)
Schultz, Emeric
2008-08-01
A device that can demonstrate equilibrium, kinetic, and thermodynamic concepts is described. The device consists of a leaf blower attached to a plastic container divided into two chambers by a barrier of variable size and form. Styrofoam balls can be exchanged across the barrier when the leaf blower is turned on and various air pressures are applied. Equilibrium can be approached from different distributions of balls in the container under different conditions. The Le Châtelier principle can be demonstrated. Kinetic concepts can be demonstrated by changing the nature of the barrier, either by changing its height or by having various-sized holes in it. Thermodynamic concepts can be demonstrated by taping over some or all of the openings and restricting air flow into the container on either side of the barrier.
NASA Technical Reports Server (NTRS)
Davis, S. J.; Rosenstein, H.
1975-01-01
The Comprehensive Airship Sizing and Performance Computer Program (CASCOMP), which was developed and used in the design and evaluation of advanced lighter-than-air (LTA) craft, is described. The program defines design details such as engine size and number, component weight buildups, required power, and the physical dimensions of airships which are designed to meet specified mission requirements. The program is used in a comparative parametric evaluation of six advanced lighter-than-air concepts. The results indicate that fully buoyant conventional airships have the lightest gross lift required when designed for speeds less than 100 knots, and that the partially buoyant concepts are superior above 100 knots. When compared on the basis of specific productivity, which is a measure of the direct operating cost, the partially buoyant lifting body/tilting prop-rotor concept is optimum.
Application of active control landing gear technology to the A-10 aircraft
NASA Technical Reports Server (NTRS)
Ross, I.; Edson, R.
1983-01-01
Two concepts which reduce the A-10 aircraft's wing/gear interface forces as a result of applying active control technology to the main landing gear are described. In the first concept, referred to as the alternate concept, a servovalve in a closed pressure control loop configuration effectively varies the size of the third-stage spool valve orifice which is embedded in the strut. This action allows the internal energy in the strut to shunt hydraulic flow around the metering orifice. The command signal to the loop is the reference strut pressure, which is compared to the measured strut pressure, the difference being the loop error. Thus, the loop effectively varies the spool valve orifice size to maintain the referenced strut pressure, and therefore minimizes the wing/gear interface force.
Porosity, permeability and 3D fracture network characterisation of dolomite reservoir rock samples
Voorn, Maarten; Exner, Ulrike; Barnhoorn, Auke; Baud, Patrick; Reuschlé, Thierry
2015-01-01
With fractured rocks making up an important part of hydrocarbon reservoirs worldwide, detailed analysis of fractures and fracture networks is essential. However, common analyses on drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative for full fracture networks. In this paper, we therefore explore the use of an additional method – non-destructive 3D X-ray micro-Computed Tomography (μCT) – to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. We process the 3D μCT data in this study by a Hessian-based fracture filtering routine and can successfully extract porosity, fracture aperture, fracture density and fracture orientations – in bulk as well as locally. Additionally, thin sections made from selected plug samples provide 2D information with a much higher detail than the μCT data. Finally, gas- and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) towards more realistic reservoir conditions. This study shows that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that although there are limitations, several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Permeability experiments under confining pressure also provide important additional insights. Combining these and other methods can therefore be a powerful approach in microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner. PMID:26549935
NASA Astrophysics Data System (ADS)
Voorn, Maarten; Barnhoorn, Auke; Exner, Ulrike; Baud, Patrick; Reuschlé, Thierry
2015-04-01
Fractured reservoir rocks make up an important part of the hydrocarbon reservoirs worldwide. A detailed analysis of fractures and fracture networks in reservoir rock samples is thus essential to determine the potential of these fractured reservoirs. However, common analyses on drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative for full fracture networks. In this study, we therefore explore the use of an additional method - non-destructive 3D X-ray micro-Computed Tomography (μCT) - to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna Basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. 3D μCT data is used to extract porosity, fracture aperture, fracture density and fracture orientations - in bulk as well as locally. The 3D analyses are complemented with thin sections made to provide some 2D information with a much higher detail than the μCT data. Finally, gas- and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) of the µCT results towards more realistic reservoir conditions. Our results show that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Also permeability experiments under confining pressure provide important additional insights. Combining these and other methods can therefore be a powerful approach in microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner.
HEKATE-A novel grazing incidence neutron scattering concept for the European Spallation Source.
Glavic, Artur; Stahn, Jochen
2018-03-01
Structure and magnetism at surfaces and buried interfaces on the nanoscale can only be accessed by few techniques, one of which is grazing incidence neutron scattering. While the technique has its strongest limitation in a low signal and large background, due to the low scattering probability and need for high resolution, it can be expected that the high intensity of the European Spallation Source in Lund, Sweden, will make many more such studies possible, warranting a dedicated beamline for this technique. We present an instrument concept, Highly Extended K range And Tunable Experiment (HEKATE), for surface scattering that combines the advantages of two Selene neutron guides with unique capabilities of spatially separated distinct wavelength frames. With this combination, it is not only possible to measure large specular reflectometry ranges, even on free liquid surfaces, but also to use two independent incident beams with tunable sizes and resolutions that can be optimized for the specifics of the investigated samples. Further the instrument guide geometry is tuned for reduction of high energy particle background and only uses low to moderate supermirror coatings for high reliability and affordable cost.
Sample Acquisition and Caching architecture for the Mars Sample Return mission
NASA Astrophysics Data System (ADS)
Zacny, K.; Chu, P.; Cohen, J.; Paulsen, G.; Craft, J.; Szwarc, T.
This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for the three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, we chose to give the reduction of mission risk greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits could be returned to Earth with the cores inside them with only a modest increase to the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. An added advantage is faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of a 1) Rotary-Percussive Core Drill, 2) Bit Storage Carousel, 3) Cache, 4) Robotic Arm, and 5) Rock Abrasion and Brushing Bit (RABBit), which is deployed using the Drill. The system also includes PreView bits (for viewing of cores prior to caching) and Powder bits for acquisition of regolith or cuttings. The SAC total system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.
Nondestructive Analysis of Astromaterials by Micro-CT and Micro-XRF Analysis for PET Examination
NASA Technical Reports Server (NTRS)
Zeigler, R. A.; Righter, K.; Allen, C. C.
2013-01-01
An integral part of any sample return mission is the initial description and classification of returned samples by the preliminary examination team (PET). The goal of the PET is to characterize and classify returned samples and make this information available to the larger research community, who then conduct more in-depth studies on the samples. The PET tries to minimize the impact its work has on the sample suite, which has in the past limited PET work to largely visual, nonquantitative measurements (e.g., optical microscopy). More modern techniques can also be utilized by a PET to nondestructively characterize astromaterials in a much more rigorous way. Here we discuss our recent investigations into the applications of micro-CT and micro-XRF analyses with Apollo samples and ANSMET meteorites and assess the usefulness of these techniques in future PET work. Results: The application of micro-computed tomography (micro-CT) to astromaterials is not a new concept. The technique involves scanning samples with high-energy x-rays and constructing 3-dimensional images of the density of materials within the sample. The technique can routinely measure large samples (up to approx. 2700 cu cm) with a small individual voxel size (approx. 30 microns), and has the sensitivity to distinguish the major rock-forming minerals and identify clast populations within brecciated samples. We have recently run a test sample of a terrestrial breccia with a carbonate matrix and multiple igneous clast lithologies. The test results are promising and we will soon analyze an approx. 600 g piece of Apollo sample 14321 to map out the clast population within the sample. Benchtop micro x-ray fluorescence (micro-XRF) instruments can rapidly scan large areas (approx. 100 sq cm) with a small pixel size (approx. 25 microns) and measure the (semi)quantitative composition of largely unprepared surfaces for all elements between Be and U, often with sensitivity on the order of approx. 100 ppm. Our recent testing of meteorite and Apollo samples on micro-XRF instruments has shown that they can easily detect small zircons and phosphates (approx. 10 microns), distinguish different clast lithologies within breccias, and identify different lithologies within small rock fragments (2-4 mm Apollo soil fragments).
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
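A sketch of the basic multiplier calculation N = M / P with an approximate confidence interval that inflates the binomial variance of P by a respondent-driven sampling design effect and propagates it by the delta method; the numbers and the design effect are assumptions, and the paper's own variance approach may differ in detail.

```python
import numpy as np
from scipy.stats import norm

def multiplier_ci(M, p_hat, n, design_effect=2.0, level=0.95):
    """Population size N = M / P with a delta-method CI for the survey proportion."""
    var_p = design_effect * p_hat * (1 - p_hat) / n
    n_hat = M / p_hat
    se_n = M / p_hat**2 * np.sqrt(var_p)       # delta method: |dN/dP| * se(P)
    z = norm.ppf(0.5 + level / 2)
    return n_hat, (n_hat - z * se_n, n_hat + z * se_n)

# Assumed example: 1200 unique objects distributed, and 15% of an RDS sample of
# 400 report having received one.
print(multiplier_ci(M=1200, p_hat=0.15, n=400, design_effect=2.0))
```

Re-running the calculation over candidate survey sizes n (and plausible values of P and the design effect) gives the kind of sample size exploration the authors recommend, and shows how quickly the interval widens when P is small.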
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
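A sketch of the two-step adjustment described, under standard assumptions: the individually randomized requirement is computed first, then the mean cluster size is solved for with a design effect of 1 + ((cv^2 + 1)m - 1) * ICC to allow for variable cluster sizes. That inflation formula is the common approximation from the literature, not the authors' noncentrality-based relative efficiency, and the example numbers are assumed.

```python
import math
from scipy.stats import norm

def required_mean_cluster_size(d, icc, clusters_per_arm, cv=0.0,
                               alpha=0.05, power=0.80):
    """Mean cluster size per arm for a two-arm cluster RCT with a normal outcome,
    holding the number of clusters fixed and allowing for cluster-size variation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = 2 * (z / d) ** 2                        # per arm, ignoring clustering
    denom = clusters_per_arm - n_ind * icc * (cv**2 + 1)
    if denom <= 0:
        raise ValueError("more clusters are needed at this ICC and cluster-size CV")
    return math.ceil(n_ind * (1 - icc) / denom)

# Assumed example: standardized effect 0.3, ICC 0.02, 20 clusters per arm.
print(required_mean_cluster_size(0.3, 0.02, 20, cv=0.0))   # equal cluster sizes
print(required_mean_cluster_size(0.3, 0.02, 20, cv=0.7))   # variable cluster sizes
```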
A model of litter size distribution in cattle.
Bennett, G L; Echternkamp, S E; Gregory, K E
1998-07-01
Genetic increases in twinning of cattle could result in increased frequency of triplet or higher-order births. There are no estimates of the incidence of triplets in populations with genetic levels of twinning over 40% because these populations either have not existed or have not been documented. A model of the distribution of litter size in cattle is proposed. Empirical estimates of ovulation rate distribution in sheep were combined with biological hypotheses about the fate of embryos in cattle. Two phases of embryo loss were hypothesized. The first phase is considered to be preimplantation. Losses in this phase occur independently (i.e., the loss of one embryo does not affect the loss of the remaining embryos). The second phase occurs after implantation. The loss of one embryo in this stage results in the loss of all embryos. Fewer than 5% triplet births are predicted when 50% of births are twins and triplets. Above 60% multiple births, increased triplets accounted for most of the increase in litter size. Predictions were compared with data from 5,142 calvings by 14 groups of heifers and cows with average litter sizes ranging from 1.14 to 1.36 calves. The predicted number of triplets was not significantly different (chi2 = 16.85, df = 14) from the observed number. The model also predicted differences in conception rates. A cow ovulating two ova was predicted to have the highest conception rate in a single breeding cycle. As mean ovulation rate increased, predicted conception to one breeding cycle increased. Conception to two or three breeding cycles decreased as mean ovulation increased because late-pregnancy failures increased. An alternative model of the fate of ova in cattle based on embryo and uterine competency predicts very similar proportions of singles, twins, and triplets but different conception rates. The proposed model of litter size distribution in cattle accurately predicts the proportion of triplets found in cattle with genetically high twinning rates. This model can be used in projecting efficiency changes resulting from genetically increasing the twinning rate in cattle.
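A simulation sketch of the two-phase model as described: an ovulation count drawn from an assumed distribution, independent preimplantation survival of each embryo, and an all-or-none postimplantation failure. The ovulation distribution and the loss probabilities below are assumed values, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_litters(n_cows, p_ovulation=(0.55, 0.35, 0.09, 0.01),
                     p_embryo_survives=0.75, p_pregnancy_survives=0.85):
    """Two-phase litter-size model: independent preimplantation loss of each
    embryo (phase 1), then an all-or-none postimplantation loss (phase 2)."""
    ovulations = rng.choice([1, 2, 3, 4], size=n_cows, p=p_ovulation)
    implanted = rng.binomial(ovulations, p_embryo_survives)    # phase 1
    carried = rng.random(n_cows) < p_pregnancy_survives        # phase 2
    litter = np.where(carried, implanted, 0)
    return np.bincount(litter, minlength=5)                    # counts of 0..4 calves

counts = simulate_litters(100_000)
births = counts[1:].sum()
print("fraction of cows calving:", births / counts.sum())
print("twin share:", counts[2] / births, "triplet-or-more share:", counts[3:].sum() / births)
```

Raising the mean ovulation rate in this model increases the twin share well before triplets become common, which is the qualitative pattern the paper reports below roughly 50% multiple births.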
Not-So-Bright Bulbs Artist Concept
2008-12-10
This artist's concept, based on data from NASA's Spitzer, shows the dimmest star-like bodies currently known -- twin brown dwarfs referred to as 2M 0939. The twins, which are about the same size, are drawn as if they were viewed close to one of the bodies.
Attainment of Selected Earth Science Concepts by Texas High School Seniors.
ERIC Educational Resources Information Center
Rollins, Mavis M.; And Others
1983-01-01
Attainment of five earth science concepts by high school seniors depended on the amount of previous science coursework by the students and on the size of their school's enrollment. Seniors in Texas high schools were subjects of the study. (Author/PP)
NASA Technical Reports Server (NTRS)
Lieblein, S.; Gaugeon, M.; Thomas, G.; Zueck, M.
1982-01-01
As part of a program to reduce wind turbine costs, an evaluation was conducted of a laminated wood composite blade for the Mod-OA 200 kW wind turbine. The effort included the design and fabrication concept for the blade, together with cost and load analyses. The blade structure is composed of laminated Douglas fir veneers for the primary spar and nose sections, and honeycomb cored plywood panels for the trailing edges sections. The attachment of the wood blade to the rotor hub was through load takeoff studs bonded into the blade root. Tests were conducted on specimens of the key structural components to verify the feasibility of the concept. It is concluded that the proposed wood composite blade design and fabrication concept is suitable for Mod-OA size turbines (125-ft diameter rotor) at a cost that is very competitive with other methods of manufacture.
Wide operating window spin-torque majority gate towards large-scale integration of logic circuits
NASA Astrophysics Data System (ADS)
Vaysset, Adrien; Zografos, Odysseas; Manfrini, Mauricio; Mocuta, Dan; Radu, Iuliana P.
2018-05-01
Spin Torque Majority Gate (STMG) is a logic concept that inherits the non-volatility and the compact size of MRAM devices. In the original STMG design, the operating range was restricted to very small size and anisotropy, due to the exchange-driven character of domain expansion. Here, we propose an improved STMG concept where the domain wall is driven with current. Thus, input switching and domain wall propagation are decoupled, leading to higher energy efficiency and allowing greater technological optimization. To ensure majority operation, pinning sites are introduced. We observe through micromagnetic simulations that the new structure works for all input combinations, regardless of the initial state. Contrary to the original concept, the working condition is given only by the threshold and depinning currents. Moreover, cascading is now possible over long distances and fan-out is demonstrated. Therefore, this improved STMG concept is ready for building complete Boolean circuits in the absence of external magnetic fields.
Yang, X. M.; Drury, C. F.; Reynolds, W. D.; Yang, J. Y.
2016-01-01
We test the common assumption that organic carbon (OC) storage occurs on sand-sized soil particles only after the OC storage capacity on silt- and clay-sized particles is saturated. Soil samples from a Brookston clay loam in Southwestern Ontario were analysed for the OC concentrations in bulk soil, and on the clay (<2 μm), silt (2–53 μm) and sand (53–2000 μm) particle size fractions. The OC concentrations in bulk soil ranged from 4.7 to 70.8 g C kg−1 soil. The OC concentrations on all three particle size fractions were significantly related to the OC concentration of bulk soil. However, OC concentration increased slowly toward an apparent maximum on silt and clay, but this maximum was far greater than the maximum predicted by established C sequestration models. In addition, significant increases in OC associated with sand occurred when the bulk soil OC concentration exceeded 30 g C kg−1, but this increase occurred when the OC concentration on silt + clay was still far below the predicted storage capacity for silt and clay fractions. Since the OC concentrations in all fractions of Brookston clay loam soil continued to increase with increasing C (bulk soil OC content) input, we concluded that the concept of OC storage capacity requires further investigation. PMID:27251365
NASA Astrophysics Data System (ADS)
Yang, X. M.; Drury, C. F.; Reynolds, W. D.; Yang, J. Y.
2016-06-01
We test the common assumption that organic carbon (OC) storage occurs on sand-sized soil particles only after the OC storage capacity on silt- and clay-sized particles is saturated. Soil samples from a Brookston clay loam in Southwestern Ontario were analysed for the OC concentrations in bulk soil, and on the clay (<2 μm), silt (2-53 μm) and sand (53-2000 μm) particle size fractions. The OC concentrations in bulk soil ranged from 4.7 to 70.8 g C kg-1 soil. The OC concentrations on all three particle size fractions were significantly related to the OC concentration of bulk soil. However, OC concentration increased slowly toward an apparent maximum on silt and clay, but this maximum was far greater than the maximum predicted by established C sequestration models. In addition, significant increases in OC associated with sand occurred when the bulk soil OC concentration exceeded 30 g C kg-1, but this increase occurred when the OC concentration on silt + clay was still far below the predicted storage capacity for silt and clay fractions. Since the OC concentrations in all fractions of Brookston clay loam soil continued to increase with increasing C (bulk soil OC content) input, we concluded that the concept of OC storage capacity requires further investigation.
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
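The zero-acceptance calculation behind the quoted example can be reproduced with a few lines of code. The sketch below is a generic hypergeometric search written in Python, not the HYPERSAMP spreadsheet itself; rounding the allowed nonconformances to a whole number is an assumption.

    from scipy.stats import hypergeom

    def min_sample_size(lot_size, fraction_nonconforming, confidence):
        """Smallest n such that a zero-defect sample gives the stated confidence
        that the lot does not exceed the given fraction nonconforming
        (acceptance number = 0, hypergeometric model)."""
        defectives = round(lot_size * fraction_nonconforming)
        risk = 1.0 - confidence  # consumer's risk
        for n in range(1, lot_size + 1):
            # probability of drawing zero nonconforming items in a sample of n
            p_accept = hypergeom.pmf(0, lot_size, defectives, n)
            if p_accept <= risk:
                return n
        return lot_size

    # reproduces the example quoted in the abstract: lot of 400, 1% nonconforming, 99% confidence
    print(min_sample_size(400, 0.01, 0.99))   # -> 273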
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
NASA Astrophysics Data System (ADS)
Shahrestani, Shahed; Mokhtari, Ahmad Reza
2017-04-01
Stream sediment sampling is a well-known technique used to discover geochemical anomalies in regional exploration activities. In the upstream catchment basin of a stream sediment sample, the geochemical signals originating from probable mineralization can be diluted by mixing with weathering material coming from non-anomalous sources. Hawkes's equation (1976) was an attempt to overcome this problem, in which the area size of the catchment basin was used to remove dilution from geochemical anomalies. However, the metal content of a stream sediment sample can be linked to several geomorphological, sedimentological, climatic and geological factors; the area size is not by itself a comprehensive representative of the dilution taking place in a catchment basin. The aim of the present study was to consider a number of geomorphological factors affecting sediment supply, transportation processes, storage and, in general, the geochemistry of stream sediments, and to incorporate them in the dilution correction procedure. This was organized by employing the concepts of sediment yield and sediment delivery ratio and linking such characteristics to the dilution phenomenon in a catchment basin. Main stream slope (MSS), relief ratio (RR) and area size (Aa) of the catchment basin were selected as the important proxies (PSDRa) for the sediment delivery ratio and then entered into Hawkes's equation. Hawkes's equation and the new equations were then applied to the stream sediment dataset collected from the Takhte-Soleyman district, west of Iran, for Au, As and Sb values. A number of large and small gold, antimony and arsenic mineral occurrences were used to evaluate the results. Anomaly maps based on the new equations showed improved anomaly delineation, taking the spatial distribution of mineral deposits into account, and identified new catchment basins containing known mineralization as anomalous, especially in the case of Au and As. Four catchment basins with Au and As mineralization were added to the anomaly class, and one catchment basin with a known As occurrence was highlighted as anomalous using the new approach. The results demonstrate the usefulness of considering geomorphological parameters when dealing with the dilution phenomenon in a catchment basin.
Brief Report: Evidence for Normative Resting-State Physiology in Autism
ERIC Educational Resources Information Center
Nuske, Heather J.; Vivanti, Giacomo; Dissanayake, Cheryl
2014-01-01
Although the conception of autism as a disorder of abnormal resting-state physiology has a long history, the evidence remains mixed. Using state-of-the-art eye-tracking pupillometry, resting-state (tonic) pupil size was measured in children with and without autism. No group differences in tonic pupil size were found, and tonic pupil size was not…
ERIC Educational Resources Information Center
Noll, Jennifer; Hancock, Stacey
2015-01-01
This research investigates what students' use of statistical language can tell us about their conceptions of distribution and sampling in relation to informal inference. Prior research documents students' challenges in understanding ideas of distribution and sampling as tools for making informal statistical inferences. We know that these…
NASA Technical Reports Server (NTRS)
Tucker, Michael; Meredith, Oliver; Brothers, Bobby
1986-01-01
Several concepts of chemical-propulsion Space Vehicles (SVs) for manned Mars landing missions are presented. For vehicle sizing purposes, several specific missions were chosen from opportunities in the late 1990's and early 2000's, and a vehicle system concept is then described which is applicable to the full range of missions and opportunities available. In general, missions utilizing planetary conjunction alignments can be done with smaller vehicles than those utilizing planetary opposition alignments. The conjunction missions have a total mission time of about 3 years, including a required stay-time of about 60 days. Both types of missions might be desirable during a Mars program, the opposition type for early low-risk missions and/or for later unmanned cargo missions, and the conjunction type for more extensive science/exploration missions and/or for Mars base activities. Since the opposition missions appeared to drive the SV size more severely, more cases were examined for them. Some of the concepts presented utilize all-propulsive braking, some utilize an all-aerobraking approach, and some are hybrids. Weight statements are provided for various cases. Most of the work was done on 0-g vehicle concepts, but partial-g and 1-g concepts are also provided and discussed. Several options for habitable elements are shown, such as large-diameter modules and space station (SS) types of modules.
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
United States Standards for Grades of Pecans in the Shell--Sample for Grade or Size Determination. § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
Micromega IR, an infrared hyperspectral microscope for space exploration
NASA Astrophysics Data System (ADS)
Pilorget, C.; Bibring, J.-P.; Berthe, M.; Hamm, V.
2017-11-01
The coupling between imaging and spectrometry has proved to be one of the most promising ways to study planetary objects remotely [1][2]. The next step is to use this concept for in situ analyses. MicrOmega IR has been developed within this scope. It is an ultra-miniaturized near-infrared hyperspectral microscope dedicated to in situ analyses, selected to be part of the ESA/ExoMars rover and RKA/Phobos Grunt lander payloads. The goal of this instrument is to characterize the composition of samples at almost their grain-size scale, in a nondestructive way. Coupled to the mapping information, it provides unique clues to trace back the history of the parent body (planet, satellite or small body) [3][4].
Multilevel Modeling in Psychosomatic Medicine Research
Myers, Nicholas D.; Brincks, Ahnalee M.; Ames, Allison J.; Prado, Guillermo J.; Penedo, Frank J.; Benedict, Catherine
2012-01-01
The primary purpose of this manuscript is to provide an overview of multilevel modeling for Psychosomatic Medicine readers and contributors. The manuscript begins with a general introduction to multilevel modeling. Multilevel regression modeling at two-levels is emphasized because of its prevalence in psychosomatic medicine research. Simulated datasets based on some core ideas from the Familias Unidas effectiveness study are used to illustrate key concepts including: communication of model specification, parameter interpretation, sample size and power, and missing data. Input and key output files from Mplus and SAS are provided. A cluster randomized trial with repeated measures (i.e., three-level regression model) is then briefly presented with simulated data based on some core ideas from a cognitive behavioral stress management intervention in prostate cancer. PMID:23107843
Damage Detection in Rotorcraft Composite Structures Using Thermography and Laser-Based Ultrasound
NASA Technical Reports Server (NTRS)
Anastasi, Robert F.; Zalameda, Joseph N.; Madaras, Eric I.
2004-01-01
New rotorcraft structural composite designs incorporate lower structural weight, reduced manufacturing complexity, and improved threat protection. These new structural concepts require nondestructive evaluation inspection technologies that can potentially be field-portable and able to inspect complex geometries for damage or structural defects. Two candidate technologies were considered: Thermography and Laser-Based Ultrasound (Laser UT). Thermography and Laser UT have the advantage of being non-contact inspection methods, with Thermography being a full-field imaging method and Laser UT a point-scanning technique. These techniques were used to inspect composite samples that contained both embedded flaws and impact damage of various sizes and shapes. Results showed that the inspection techniques were able to detect both embedded and impact damage with varying degrees of success.
NASA Technical Reports Server (NTRS)
Liu, Dahai; Goodrich, Ken; Peak, Bob
2006-01-01
This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on single pilot performance (SPP). Specifically, we evaluated the benefits and interactions of two levels of terrain portrayal, guidance symbology, and control-system response type on SPP in the context of lower-landing minima (LLM) approaches. Performance measures consisted of flight technical error (FTE) and pilot perceived workload. In this study, pilot rating, control type, and guidance symbology were not found to significantly affect FTE or workload. It is likely that transfer from prior experience, limited scope of the evaluation task, specific implementation limitations, and limited sample size were major factors in obtaining these results.
Community Resilience of Civilians at War: A New Perspective.
Eshel, Yohanan; Kimhi, Shaul
2016-01-01
A new concept of community resilience pertaining to the community's post adversity strength to vulnerability ratio was associated with five determinants: individual resilience, national resilience, well-being, community size, and sense of coherence. The data was collected four months after Israel's war in the Gaza Strip in 2014. Participants were 251 adult civilians living in southern Israel who have recently been threatened by massive missile attacks, and 259 adults living in northern Israel, which has not been under missile fire recently. The investigated variables predicted community resilience, and their effects were mediated by sense of coherence. Results which were similar for both samples were discussed in terms of the nature of resilience and in terms of proximal and distal exposure to war.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of the calculation as well as on the accuracy of the sample size calculation. We examine the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) percentage difference between the reported and recalculated sample size was 0.0% (IQR -4.6%; 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
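As a rough illustration of the procedure being evaluated (not Lu's exact-distribution algorithm), the following Monte Carlo sketch re-estimates the per-arm sample size from the blinded one-sample variance of an internal pilot and then applies the standard two-sample t-test at the final analysis; the pilot size, target effect and normal-approximation sample-size formula are assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulate(delta, sigma, n_pilot=20, target_delta=0.5,
                 alpha=0.05, power=0.80, n_sims=5000):
        """Blinded re-estimation: pool the internal pilot (ignoring arm labels),
        use the one-sample variance to recompute the per-arm sample size, then
        test with the standard two-sample t-test at the final analysis."""
        z = stats.norm.ppf
        rejections = 0
        for _ in range(n_sims):
            a = rng.normal(delta, sigma, n_pilot)
            b = rng.normal(0.0, sigma, n_pilot)
            # blinded (one-sample) variance estimate from the pooled pilot data
            s2_blind = np.var(np.concatenate([a, b]), ddof=1)
            n_new = 2 * s2_blind * (z(1 - alpha / 2) + z(power)) ** 2 / target_delta ** 2
            n_final = max(n_pilot, int(np.ceil(n_new)))
            a = np.concatenate([a, rng.normal(delta, sigma, n_final - n_pilot)])
            b = np.concatenate([b, rng.normal(0.0, sigma, n_final - n_pilot)])
            rejections += stats.ttest_ind(a, b).pvalue < alpha
        return rejections / n_sims

    print("empirical type I error:", simulate(delta=0.0, sigma=1.0))
    print("empirical power       :", simulate(delta=0.5, sigma=1.0))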
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination is usually taught on the basis of theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were given in the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.
Kyle, Greg J; Nissen, Lisa; Tett, Susan
2008-12-01
Prescription medicine samples provided by pharmaceutical companies are predominantly newer and more expensive products. The range of samples provided to practices may not represent the drugs that the doctors desire to have available. Few studies have used a qualitative design to explore the reasons behind sample use. The aim of this study was to explore the opinions of a variety of Australian key informants about prescription medicine samples, using a qualitative methodology. Twenty-three organizations involved in quality use of medicines in Australia were identified, based on the authors' previous knowledge. Each organization was invited to nominate 1 or 2 representatives to participate in semistructured interviews utilizing seeding questions. Each interview was recorded and transcribed verbatim. Leximancer v2.25 text analysis software (Leximancer Pty Ltd., Jindalee, Queensland, Australia) was used for textual analysis. The top 10 concepts from each analysis group were interrogated back to the original transcript text to determine the main emergent opinions. A total of 18 key interviewees representing 16 organizations participated. Samples, patient, doctor, and medicines were the major concepts among general opinions about samples. The concept drug became more frequent and the concept companies appeared when marketing issues were discussed. The Australian Pharmaceutical Benefits Scheme and cost were more prevalent in discussions about alternative sample distribution models, indicating interviewees were cognizant of budgetary implications. Key interviewee opinions added richness to the single-word concepts extracted by Leximancer. Participants recognized that prescription medicine samples have an influence on quality use of medicines and play a role in the marketing of medicines. They also believed that alternative distribution systems for samples could provide benefits. The cost of a noncommercial system for distributing samples or starter packs was a concern. These data will be used to design further research investigating alternative models for distribution of samples.
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
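A minimal sketch of the Monte Carlo approach (written in Python rather than R, and with an assumed one-predictor model, slope and error variance) is shown below: simulate data at a candidate sample size, fit the regression, and take the rejection rate as the estimated power.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    def power_at_n(n, beta1=0.3, sigma=1.0, alpha=0.05, n_sims=2000):
        """Monte Carlo power for testing a single regression slope at sample size n.
        The generating model (one predictor, normal errors) is an illustrative assumption."""
        hits = 0
        for _ in range(n_sims):
            x = rng.normal(size=n)
            y = 1.0 + beta1 * x + rng.normal(scale=sigma, size=n)
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            hits += fit.pvalues[1] < alpha
        return hits / n_sims

    # scan candidate sample sizes until the target power (e.g. 0.80) is reached
    for n in (40, 60, 80, 100, 120):
        print(n, round(power_at_n(n), 3))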
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situation such as sample size calculation due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic and nomograms to eliminate the inconvenience of using a mathematical formula were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
Parandavar, Nehleh; Rahmanian, Afifeh; Badiyepeymaie Jahromi, Zohreh
2015-07-31
Commitment to ethics usually results in nurses' better professional performance and advancement. Professional self-concept of nurses refers to their information and beliefs about their roles, values, and behaviors. The objective of this study is to analyze the relationship between nurses' professional self-concept and professional ethics in hospitals affiliated to Jahrom University of Medical Sciences. This cross-sectional analytical study was conducted in 2014. The 270 participants were practicing nurses and head-nurses at the teaching hospitals of Peimanieh and Motahari in Jahrom University of Medical Science. Sampling was based on the census method. Data were collected using Cowin's Nurses' Self-Concept Questionnaire (NSCQ) and a researcher-made questionnaire of professional ethics. The average of the sample's professional self-concept score was 6.48±0.03 out of 8. The average of the sample's commitment to professional ethics score was 4.08±0.08 out of 5. Based on Pearson's correlation test, there is a significant relationship between professional ethics and professional self-concept (P=0.01, r=0.16). In view of the correlation between professional self-concept and professional ethics, it is recommended that nurses' self-concept, which can boost their commitment to ethics, be given more consideration.
Parandavar, Nehleh; Rahmanian, Afifeh; Jahromi, Zohreh Badiyepeymaie
2016-01-01
Background: Commitment to ethics usually results in nurses' better professional performance and advancement. Professional self-concept of nurses refers to their information and beliefs about their roles, values, and behaviors. The objective of this study is to analyze the relationship between nurses' professional self-concept and professional ethics in hospitals affiliated to Jahrom University of Medical Sciences. Methods: This cross-sectional analytical study was conducted in 2014. The 270 participants were practicing nurses and head-nurses at the teaching hospitals of Peimanieh and Motahari in Jahrom University of Medical Science. Sampling was based on the census method. Data were collected using Cowin's Nurses' Self-Concept Questionnaire (NSCQ) and a researcher-made questionnaire of professional ethics. Results: The average of the sample's professional self-concept score was 6.48±0.03 out of 8. The average of the sample's commitment to professional ethics score was 4.08±0.08 out of 5. Based on Pearson's correlation test, there is a significant relationship between professional ethics and professional self-concept (P=0.01, r=0.16). Conclusion: In view of the correlation between professional self-concept and professional ethics, it is recommended that nurses' self-concept, which can boost their commitment to ethics, be given more consideration. PMID:26573035
Apparatus for Sizing and Rewinding Graphite Fibers
NASA Technical Reports Server (NTRS)
Wilson, M. L.; Stanfield, C. E.
1986-01-01
Equipment ideally suited for research and development of new sizing solutions. Designed especially for applying thermoplastic sizing solutions to graphite tow consisting of 3,000 to 12,000 filaments per tow, but accommodates other solutions, filament counts, and materials other than graphite. Closed system contains highly volatile methylene chloride vapors. Also ventilation system directly over resin reservoir. Concept used to apply sizing compounds on fiber tows or yarn-type reinforcement materials used in composite technology. Sizing solutions consist of compounds compatible with thermosets as well as thermoplastics.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
Perceptions of self-concept and self-presentation by procrastinators: further evidence.
Ferrari, Joseph R; Díaz-Morales, Juan Francisco
2007-05-01
Two samples of university students completed self-report measures of chronic procrastination and either self-concept variables (Sample 1, n = 233) or self-presentational styles (Sample 2, n = 210). Results indicated that procrastination was significantly related to a self-concept of oneself as dominated by issues related to task performance, and to self-presentation strategies that reflected a person as continually justifying and excusing task delays and being "needy" of others' approval. It seems that men and women procrastinate in order to improve their social standing by making their accomplishments seem greater than they really are.
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
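Rather than reproducing the paper's closed-form formulas, the sketch below checks the power of Yuen's trimmed-mean test at candidate sample sizes by simulation, using SciPy's ttest_ind with the trim argument (available in SciPy 1.7 and later); the distributional settings are illustrative, not the paper's scenarios.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def yuen_power(n1, n2, mu1, mu2, sd1, sd2, trim=0.2, alpha=0.05, n_sims=5000):
        """Simulated power of Yuen's trimmed-mean test under unequal variances
        and unequal sample sizes."""
        hits = 0
        for _ in range(n_sims):
            a = rng.normal(mu1, sd1, n1)
            b = rng.normal(mu2, sd2, n2)
            res = stats.ttest_ind(a, b, equal_var=False, trim=trim)  # Yuen's test
            hits += res.pvalue < alpha
        return hits / n_sims

    # increase the smaller group until the target power (say 0.80) is reached
    for n1 in (20, 30, 40, 50):
        print(n1, round(yuen_power(n1, n2=2 * n1, mu1=0.0, mu2=0.5, sd1=1.0, sd2=2.0), 3))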
Remote Sensing of Aerosol using MODIS, MODIS+CALIPSO and with the AEROSAT Concept
NASA Technical Reports Server (NTRS)
Kaufman, Yoram J.
2002-01-01
In the talk I shall review the MODIS use of spectral information to derive aerosol size distribution, optical thickness and reflected spectral flux. The accuracy and validation of the MODIS products will be discussed. A few applications will be shown: inversion of combined MODIS+lidar data, anthropogenic aerosol direct forcing, and dust deposition in the Atlantic Ocean. I shall also discuss the aerosol information that MODIS is measuring: real refractive index, single scattering albedo, size of fine and coarse modes, and describe the AEROSAT concept that uses bright desert and glint to derive aerosol absorption.
Weights assessment for orbit-on-demand vehicles
NASA Technical Reports Server (NTRS)
Macconochie, I. O.; Martin, J. A.; Breiner, C. A.; Cerro, J. A.
1985-01-01
Future manned, reusable earth-to-orbit vehicles may be required to reach orbit within hours or even minutes of a mission decision. A study has been conducted to consider vehicles with such a capability. In the initial phase of the study, 11 vehicles were sized for deployment of 5000 lbs to a polar orbit. From this matrix, two of the most promising concepts were resized for a modified mission and payload. A key feature of the study was the use of consistent mass estimating techniques for a broad range of concepts, allowing direct comparisons of sizes and weights.
NASA Astrophysics Data System (ADS)
Viennet, D.; Fournier, M.; Copard, Y.; Dupont, J. P.
2017-12-01
Source-to-sink is one of the main concepts in Earth Sciences for improving knowledge of hydrosystem dynamics. Regarding this issue, the present-day challenge consists in characterizing, by in-situ measurements, the nature and origin of suspended particulate matter (SPM). Few methods can fully cover such requirements and, among them, the methodology using the form of particles deserves to be developed. Indeed, the morphometry of particles is widely used in sedimentology to identify different sedimentary stocks, source-to-sink transport and sedimentation mechanisms. Currently, morphometry analyses are carried out by scanning electron microscopy coupled to image analysis to measure various size and shape descriptors on particles, such as flatness, elongation, circularity, sphericity, bluntness and fractal dimension. However, the complexity and time of analysis are the main limitations of this technique for long-term monitoring of SPM transfers. Here we present an experimental morphometric approach using a morphogranulometer (a CCD camera coupled to a peristaltic pump). The camera takes pictures while the sample is circulating through a flow cell, leading to the analysis of numerous particles in a short time. The image analysis provides size and shape information discriminating various particle stocks according to their nature and origin by statistical analyses. Measurements were carried out on standard samples of particles commonly found in natural waters. The size and morphological distributions of the different mineral fractions (clay, sand, oxides, etc.), biologic samples (microalgae, pollen, etc.) and organic samples (peat, coal, soil organic matter, etc.) are statistically independent and can be discriminated on a 4D graph. The next step will be in situ field measurements in a sink-spring network to understand the transfer of particle stocks inside this simple karstic network. Such a development would be promising for the characterisation of natural hydrosystems.
NASA Astrophysics Data System (ADS)
Pilorget, C.; Bibring, J. P.; Berthe, M.
2011-10-01
The coupling between imaging and spectrometry has proved to be one of the most promising ways to study planetary objects remotely [1][2]. The next step is to use this concept for in situ analyses. MicrOmega IR has been developed within this scope in the framework of the ExoMars mission (Pasteur payload). It is an ultra-miniaturized near-infrared hyperspectral microscope dedicated to in situ analyses, with the goal of characterizing the composition of the Mars soil at almost its grain-size scale, in a non-destructive way. It will provide unique clues to trace back the history of Mars, and will contribute to assessing Mars' past and present astrobiological potential by detecting possible organic compounds within the samples. Results obtained on the ground, both with a representative breadboard of the instrument and with a demonstrator developed in the scope of the Phobos Grunt mission, will be presented during the conference to demonstrate the instrument capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, BJH and BET tests. The overall results showed that: (1) The mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples. (2) An exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm. (3) Applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples. (4) Both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature had an increasing effect on the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ had an increasing effect on the textural properties of ZIF-8 samples.
Roy, Gilles; Roy, Nathalie
2008-03-20
A multiple-field-of-view (MFOV) lidar is used to characterize the size and optical depth of low-concentration bioaerosol clouds. The concept relies on the measurement of forward-scattered light using the background aerosols at various distances behind a subvisible cloud. It also relies on the subtraction of the background aerosol forward-scattering contribution and on the partial attenuation of the first-order backscattering. The validity of the concept developed to retrieve the effective diameter and the optical depth of low-concentration bioaerosol clouds with good precision is demonstrated using simulation results and experimental MFOV lidar measurements. Calculations are also done to show that the method presented can be extended to the retrieval of clouds with small optical depth.
Space Spider - A concept for fabrication of large structures
NASA Technical Reports Server (NTRS)
Britton, W. R.; Johnston, J. D.
1978-01-01
The Space Spider concept for the automated fabrication of large space structures involves a specialized machine which roll-forms thin gauge material such as aluminum and develops continuous spiral structures with radial struts to sizes of 600-1,000 feet in diameter by 15 feet deep. This concept allows the machine and raw material to be integrated using the Orbiter capabilities, then boosting the rigid system to geosynchronous equatorial orbit (GEO) without high sensitivity to acceleration forces. As a teleoperator controlled device having repetitive operations, the fabrication process can be monitored and verified from a ground-based station without astronaut involvement in GEO. The resultant structure will be useful as an intermediate size platform or as a structural element to be used with other elements such as the space-fabricated beams or composite nested tubes.
NASA Technical Reports Server (NTRS)
1969-01-01
A tilt-proprotor proof-of-concept aircraft design study has been conducted. The results are presented. The objective of the contract is to advance the state of proprotor technology through design studies and full-scale wind-tunnel tests. The specific objective is to conduct preliminary design studies to define a minimum-size tilt-proprotor research aircraft that can perform proof-of-concept flight research. The aircraft that results from these studies is a twin-engine, high-wing aircraft with 25-foot, three-bladed tilt proprotors mounted on pylons at the wingtips. Each pylon houses a Pratt and Whitney PT6C-40 engine with a takeoff rating of 1150 horsepower. Empty weight is estimated at 6876 pounds. The normal gross weight is 9500 pounds, and the maximum gross weight is 12,400 pounds.
Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried
2008-01-27
In many research areas it is necessary to find differences between treatment groups with respect to several variables. For example, studies of microarray data seek, for each variable, a significant difference in location parameters from zero (or from one for ratios). However, in some studies a significant deviation of the difference in locations from zero (or of the ratio from one) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that motivate the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space in between these two limits has to be accepted as the null hypothesis. The first algorithm to be discussed uses a permutation approach and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, in which multiplicity is corrected according to a concept of data-driven ordering of hypotheses.
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as a binary, ordinal or continuous exposure. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
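The same kind of calculation can be approximated by simulation. The sketch below (Python, and not the R package's own API) estimates power for a binary exposure effect in an unmatched case-control study with one binary confounder, analysed by multivariate logistic regression; all odds ratios, prevalences and the source-population construction are assumptions made for illustration.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)

    def power_case_control(n_cases, n_controls, or_exposure=1.8, or_confounder=1.5,
                           p_exp=0.30, p_conf=0.40, baseline_risk=0.05,
                           alpha=0.05, n_sims=300, pop_size=50_000):
        """Monte Carlo power for a scalar (binary) exposure effect in an unmatched
        case-control study, adjusting for one binary confounder in a logistic model."""
        hits = 0
        b0 = np.log(baseline_risk / (1 - baseline_risk))
        for _ in range(n_sims):
            # source population in which the confounder also raises the odds of exposure
            conf = rng.binomial(1, p_conf, pop_size)
            p_e = 1 / (1 + np.exp(-(np.log(p_exp / (1 - p_exp)) + 0.5 * conf)))
            expo = rng.binomial(1, p_e)
            risk = 1 / (1 + np.exp(-(b0 + np.log(or_exposure) * expo
                                     + np.log(or_confounder) * conf)))
            case = rng.binomial(1, risk)
            # draw the unmatched case-control sample
            cases = rng.choice(np.flatnonzero(case == 1), n_cases, replace=False)
            ctrls = rng.choice(np.flatnonzero(case == 0), n_controls, replace=False)
            idx = np.concatenate([cases, ctrls])
            X = sm.add_constant(np.column_stack([expo[idx], conf[idx]]))
            fit = sm.Logit(case[idx], X).fit(disp=0)
            hits += fit.pvalues[1] < alpha   # test on the exposure coefficient
        return hits / n_sims

    print(power_case_control(n_cases=300, n_controls=300))

Increasing n_cases and n_controls until the estimated power reaches the target (e.g. 0.80) gives a simulation-based counterpart of the package's closed-form calculation.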
Teaching Aerobic Fitness Concepts.
ERIC Educational Resources Information Center
Sander, Allan N.; Ratliffe, Tom
2002-01-01
Discusses how to teach aerobic fitness concepts to elementary students. Some of the K-2 activities include location, size, and purpose of the heart and lungs; the exercise pulse; respiration rate; and activities to measure aerobic endurance. Some of the 3-6 activities include: definition of aerobic endurance; heart disease risk factors;…
Space transfer concepts and analyses for exploration missions: Technical directive 10
NASA Technical Reports Server (NTRS)
Woodcock, Gordon R.
1992-01-01
The current technical effort is part of the third phase of a broad-scoped and systematic study of space transfer concepts for human lunar and Mars missions. The study addressed issues that were raised during the previous phases, focusing specifically on launch vehicle size trades and MEV options.
Preliminary geologic map of the Thaniyat Turayf Quadrangle, sheet 29C, Kingdom of Saudi Arabia
Meissner, C.R.; Dini, S.M.; Farasani, A.M.; Riddler, G.P.; Smith, G.H.; Griffin, M.B.; Van Eck, Marcel
1990-01-01
A new structural concept introduced in this report extends the Wadi as Sirhan graben complex southeastward into the An Nafud. This concept increases the size of the potentially oil-and-gas-bearing Wadi as Sirhan region to include the An Nafud.
Fan Size and Foil Type in Recognition Memory.
ERIC Educational Resources Information Center
Walls, Richard T.; And Others
An experiment involving 20 graduate and undergraduate students (7 males and 13 females) at West Virginia University (Morgantown) assessed "fan network structures" of recognition memory. A fan in network memory structure occurs when several facts are connected into a single node (concept). The more links from that concept to various…
Agile beam laser radar using computational imaging for robotic perception
NASA Astrophysics Data System (ADS)
Powers, Michael A.; Stann, Barry L.; Giza, Mark M.
2015-05-01
This paper introduces a new concept that applies computational imaging techniques to laser radar for robotic perception. We observe that nearly all contemporary laser radars for robotic (i.e., autonomous) applications use pixel-basis scanning where there is a one-to-one correspondence between world coordinates and the measurements directly produced by the instrument. In such systems this is accomplished through beam scanning and/or the imaging properties of focal-plane optics. While these pixel-basis measurements yield point clouds suitable for straightforward human interpretation, the purpose of robotic perception is the extraction of meaningful features from a scene, making human interpretability and its attendant constraints mostly unnecessary. The imposing size, weight, power and cost of contemporary systems are problematic, and relief from factors that increase these metrics is important to the practicality of robotic systems. We present a system concept free from pixel-basis sampling constraints that promotes efficient and adaptable sensing modes. The cornerstone of our approach is agile and arbitrary beam formation that, when combined with a generalized mathematical framework for imaging, is suited to the particular challenges and opportunities of robotic perception systems. Our hardware concept looks toward future systems with optical device technology closely resembling modern electronically-scanned-array radar that may be years away from practicality. We present the design concept and results from a prototype system constructed and tested in a laboratory environment using a combination of developed hardware and surrogate devices for beam formation. The technological status and prognosis for key components in the system is discussed.
A stand density management diagram for sawtimber-sized mixed upland central hardwoods
J.A., Jr. Kershaw; B.C. Fischer
1991-01-01
Data from 190 CFI plots located in southern and west-central Indiana are used to develop a stand density diagram for sawtimber-sized mixed upland hardwoods in the Central States. The stand density diagram utilizes the concepts of self-thinning to establish a maximum size-density curve, and the stocking standards of Gingrich (1967) to formulate intermediate stocking...
In the early 1970s, it was understood that combustion particles were formed mostly in sizes below 1 um diameter, and windblown dust was suspended in sizes mostly above 1 um diameter. However, particle size distribution was thought of as a single mode. Particles were thought to f...
Extracting quantitative measures from EAP: a small clinical study using BFOR.
Hosseinbor, A Pasha; Chung, Moo K; Wu, Yu-Chien; Fleming, John O; Field, Aaron S; Alexander, Andrew L
2012-01-01
The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents, and hence providing rich information about complex tissue microstructure properties. Bessel Fourier orientation reconstruction (BFOR) is one of several analytical, non-Cartesian EAP reconstruction schemes employing multiple shell acquisitions that have recently been proposed. Such modeling bases have not yet been fully exploited in the extraction of rotationally invariant q-space indices that describe the degree of diffusion anisotropy/restrictivity. Such quantitative measures include the zero-displacement probability (P(o)), mean squared displacement (MSD), q-space inverse variance (QIV), and generalized fractional anisotropy (GFA), and all are simply scalar features of the EAP. In this study, a general relationship between MSD and q-space diffusion signal is derived and an EAP-based definition of GFA is introduced. A significant part of the paper is dedicated to utilizing BFOR in a clinical dataset, comprised of 5 multiple sclerosis (MS) patients and 4 healthy controls, to estimate P(o), MSD, QIV, and GFA of corpus callosum, and specifically, to see if such indices can detect changes between normal appearing white matter (NAWM) and healthy white matter (WM). Although the sample size is small, this study is a proof of concept that can be extended to larger sample sizes in the future.
ERIC Educational Resources Information Center
Brown, David S.
2002-01-01
Recommends the use of concept mapping in science teaching and proposes that it be presented as a creative activity. Includes a sample lesson plan of a potato stamp concept mapping activity for astronomy. (DDR)
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
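The flavour of the 'basic' two-stage scheme can be illustrated with a short simulation. The sketch below is not the paper's scheme or the Welfare Quality sample sizes: the full sample size, the pass/fail threshold and the early-stopping margin are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(4)

    def two_stage_scheme(true_prev, full_n=60, threshold=0.15, margin=0.05, n_sims=10_000):
        """Score half of the full sample first; stop early if the estimated lameness
        prevalence is clearly above or below the pass/fail threshold, otherwise
        score the second half and classify on the full sample."""
        half = full_n // 2
        lame = rng.binomial(1, true_prev, size=(n_sims, full_n))   # simulated cow scores
        est1 = lame[:, :half].mean(axis=1)
        undecided = np.abs(est1 - threshold) < margin              # continue sampling
        n_used = np.where(undecided, full_n, half)
        final_est = np.where(undecided, lame.mean(axis=1), est1)
        return (final_est > threshold).mean(), n_used.mean()

    fail_rate, avg_n = two_stage_scheme(true_prev=0.20)
    print(f"classified 'fail' in {fail_rate:.0%} of runs, average cows scored: {avg_n:.1f}")

Running the simulation across a range of true prevalences shows the trade-off reported above: classification accuracy close to the fixed-size scheme, but with a much smaller average number of cows scored per farm.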
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
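The post hoc calculation described above amounts to solving a one-sample t-test power equation for n at the estimated effect size and at its confidence limits. A small sketch follows; the ES values are illustrative placeholders, not the study's estimates.

    import numpy as np
    from statsmodels.stats.power import TTestPower

    def n_for_effect(es, alpha=0.05, power=0.80):
        """Patients needed for a one-sample t-test of H0: ES = 0 at the given power."""
        n = TTestPower().solve_power(effect_size=es, alpha=alpha, power=power,
                                     alternative='two-sided')
        return int(np.ceil(n))

    # illustrative point estimate and confidence limits for Cohen's ES
    for label, es in [("ES^ ", 0.63), ("ES^L", 0.20), ("ES^U", 1.05)]:
        print(label, n_for_effect(es))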
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
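A minimal sketch of the Monte Carlo subsampling analysis, assuming synthetic per-tree sap flux densities in place of the 58 measured trees: subsamples of increasing size are drawn repeatedly and the spread of the subsample mean (the 'potential error') is tracked until it stops improving.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-tree sap flux densities standing in for the 58 measured trees;
# the real analysis would use the observed Fd data set.
fd_all = rng.lognormal(mean=2.0, sigma=0.4, size=58)
stand_mean = fd_all.mean()

def relative_error(values, n, n_draws=10000, q=95):
    """Half-width (as % of the stand mean) of the central q% interval of
    subsample means of size n, drawn without replacement."""
    means = np.array([rng.choice(values, size=n, replace=False).mean()
                      for _ in range(n_draws)])
    lo, hi = np.percentile(means, [(100 - q) / 2, 100 - (100 - q) / 2])
    return 100 * max(stand_mean - lo, hi - stand_mean) / stand_mean

for n in (5, 10, 15, 20, 30):
    print(f"n = {n:2d}: potential error ~ {relative_error(fd_all, n):.1f}% of stand mean")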
Liu, Ning; Chen, Yiting; Yang, Xiangdong; Hu, Yi
2017-01-01
Different family compositions and sizes may affect child development through the different modes of interaction between family members. Previous studies have compared only children with non-only children in cognitive/non-cognitive outcomes. However, relatively little research has systematically investigated the potential moderators among them. Using a large and representative sample of Chinese students (Grades 7–8; N = 5,752), this study examines the roles of demographic characteristics, such as gender, region, parental educational level, parental expectations, family socio-economic status and family structure, in the associations between only child status and cognitive/non-cognitive outcomes. For the cognitive outcomes, only child status exerts an influence on the students' academic performance in Chinese and mathematics in the sample of three districts' students. The examined associations between only child status and cognitive outcomes differ by region, parental education, parental expectations and family structure, whereas gender and family socio-economic status do not moderate them. For the non-cognitive outcomes, only child status exerts an influence on the students' school well-being, academic self-efficacy, academic self-concept, and internal academic motivation in the full sample of students, but not on external academic motivation. Further, the examined associations between only child status and non-cognitive outcomes differ by region, parental education, family socio-economic status and family structure, whereas gender and parental expectations do not moderate them. These findings suggest that the associations between only child status and cognitive/non-cognitive outcomes are heterogeneous in terms of some of the demographic characteristics. Possible explanations are proposed in terms of regional and family-environment characteristics in China. PMID:28421006
Directionality theory and the evolution of body size.
Demetrius, L
2000-12-07
Directionality theory, a dynamic theory of evolution that integrates population genetics with demography, is based on the concept of evolutionary entropy, a measure of the variability in the age of reproducing individuals in a population. The main tenets of the theory are three principles relating the response to the ecological constraints a population experiences, with trends in entropy as the population evolves under mutation and natural selection. (i) Stationary size or fluctuations around a stationary size (bounded growth): a unidirectional increase in entropy; (ii) prolonged episodes of exponential growth (unbounded growth), large population size: a unidirectional decrease in entropy; and (iii) prolonged episodes of exponential growth (unbounded growth), small population size: random, non-directional change in entropy. We invoke these principles, together with an allometric relationship between entropy, and the morphometric variable body size, to provide evolutionary explanations of three empirical patterns pertaining to trends in body size, namely (i) Cope's rule, the tendency towards size increase within phyletic lineages; (ii) the island rule, which pertains to changes in body size that occur as species migrate from mainland populations to colonize island habitats; and (iii) Bergmann's rule, the tendency towards size increase with increasing latitude. The observation that these ecotypic patterns can be explained in terms of the directionality principles for entropy underscores the significance of evolutionary entropy as a unifying concept in forging a link between micro-evolution, the dynamics of gene frequency change, and macro-evolution, dynamic changes in morphometric variables.
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff" elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues on the external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through a multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission but those invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
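A simplified power-by-simulation sketch in the spirit of the proposed calculator, assuming no seroreversion and a known change point (so only the pre- and post-change seroconversion rates are estimated); all rates, the change point and the age range are illustrative values, not those of the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(42)

def seroprev(ages, lam1, lam2, tau):
    """P(seropositive | age) when the SCR dropped from lam1 to lam2, tau years ago
    (reverse catalytic model simplified to zero seroreversion)."""
    exposure = lam1 * np.clip(ages - tau, 0, None) + lam2 * np.minimum(ages, tau)
    return 1.0 - np.exp(-exposure)

def nll(params, ages, pos, tau, stable):
    lam1, lam2 = (params[0], params[0]) if stable else params
    p = np.clip(seroprev(ages, lam1, lam2, tau), 1e-9, 1 - 1e-9)
    return -np.sum(pos * np.log(p) + (1 - pos) * np.log(1 - p))

def power(n, lam1=0.05, lam2=0.02, tau=10, n_sim=300, alpha=0.05):
    rejections = 0
    for _ in range(n_sim):
        ages = rng.uniform(1, 60, size=n)
        pos = rng.random(n) < seroprev(ages, lam1, lam2, tau)
        fit0 = minimize(nll, x0=[0.03], args=(ages, pos, tau, True),
                        bounds=[(1e-6, 5)], method="L-BFGS-B")
        fit1 = minimize(nll, x0=[0.03, 0.03], args=(ages, pos, tau, False),
                        bounds=[(1e-6, 5)] * 2, method="L-BFGS-B")
        lrt = 2 * (fit0.fun - fit1.fun)               # likelihood ratio test, 1 df
        rejections += chi2.sf(lrt, df=1) < alpha
    return rejections / n_sim

for n in (100, 250, 500, 1000):
    print(f"n = {n:4d}: power to detect the SCR reduction ~ {power(n):.2f}")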
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
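The subsampling experiment can be sketched as follows, with synthetic log-log measurements standing in for the Alligator skull data and an assumed allometric slope of 1.15; the Type II error is the failure to reject isometry (slope = 1) in a subsample.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic stand-in for the skull data set: log trait vs log skull length with a
# mildly positive allometric slope (assumed value of 1.15).
n_full = 200
log_length = rng.uniform(1.0, 2.5, size=n_full)
log_trait = 1.15 * log_length + 0.1 + rng.normal(0, 0.08, size=n_full)

def type2_rate(n, n_draws=2000, alpha=0.05):
    """Proportion of subsamples of size n in which H0: slope = 1 (isometry)
    is NOT rejected, even though the underlying slope is allometric."""
    misses = 0
    for _ in range(n_draws):
        idx = rng.choice(n_full, size=n, replace=False)
        fit = stats.linregress(log_length[idx], log_trait[idx])
        t = (fit.slope - 1.0) / fit.stderr
        p = 2 * stats.t.sf(abs(t), df=n - 2)
        misses += p >= alpha
    return misses / n_draws

for n in (8, 15, 30, 60, 120):
    print(f"n = {n:3d}: Type II error rate ~ {type2_rate(n):.2f}")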
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
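As a concrete illustration of how these determinants combine, the sketch below computes the per-group sample size for a two-group comparison at several standardized effect sizes, using the familiar normal-approximation formula alongside statsmodels' t-test-based solver (a generic textbook calculation, not code from the module itself).

from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

alpha, power = 0.05, 0.80
for d in (0.2, 0.5, 0.8):            # standardized effect sizes (difference / SD)
    # Normal approximation: n per group = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_approx = 2 * z**2 / d**2
    # t-test based solution for comparison
    n_t = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)
    print(f"d = {d}: n per group ~ {n_approx:.0f} (normal approx), {n_t:.0f} (t-test)")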
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, covering both quantitative and qualitative data; the estimates are obtained from sample size formulas and from the POWER procedure of SAS software. In addition, this article presents worked examples, which can guide researchers in implementing the repetition principle during the research design phase.
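For the qualitative-data case a parallel calculation applies to two proportions; the sketch below uses statsmodels rather than the SAS POWER procedure, with assumed event rates (not values from the article).

# Sample size estimate for qualitative (binary) data under a one-factor,
# two-level design, using Cohen's h and a normal-approximation power solver.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control, p_treated = 0.30, 0.50        # assumed event rates in the two groups
h = proportion_effectsize(p_treated, p_control)   # Cohen's h (arcsine transform)
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80,
                                 alternative="two-sided")
print(f"Cohen's h = {h:.3f}; required n per group ~ {n:.0f}")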
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is publicly controversial. Thus, from a biometrical point of view, an optimal sample size should be the aim of these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not reliable, or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
Self-esteem, academic self-concept, and aggression at school.
Taylor, Laramie D; Davis-Kean, Pamela; Malanchuk, Oksana
2007-01-01
The present study explores the relation between academic self-concept, self-esteem, and aggression at school. Longitudinal data from a racially diverse sample of middle-school students were analyzed to explore how academic self-concept influenced the likelihood of aggressing at school and whether high self-concept exerted a different pattern of influence when threatened. Data include self-reported academic self-concept, school-reported academic performance, and parent-reported school discipline. Results suggest that, in general, students with low self-concept in achievement domains are more likely to aggress at school than those with high self-concept. However, there is a small sample of youth who, when they receive contradictory information that threatens their reported self-concept, do aggress. Global self-esteem was not found to be predictive of aggression. These results are discussed in the context of recent debates on whether self-esteem is a predictor of aggression and the use of a more proximal vs. general self-measure in examining the self-esteem and aggression relation. Copyright 2006 Wiley-Liss, Inc.
Towards a routine application of Top-Down approaches for label-free discovery workflows.
Schmit, Pierre-Olivier; Vialaret, Jerome; Wessels, Hans J C T; van Gool, Alain J; Lehmann, Sylvain; Gabelle, Audrey; Wood, Jason; Bern, Marshall; Paape, Rainer; Suckau, Detlev; Kruppa, Gary; Hirtz, Christophe
2018-03-20
Thanks to proteomics investigations, our vision of the role of different protein isoforms in the pathophysiology of diseases has largely evolved. The idea that protein biomarkers like tau, amyloid peptides, ApoE, cystatin, or neurogranin are represented in body fluids as single species is obviously over-simplified, as most proteins are present in different isoforms and subjected to numerous processing and post-translational modifications. Measuring the intact mass of proteins by MS has the advantage of providing information on the presence and relative amount of the different proteoforms. Such Top-Down approaches typically require a high degree of sample pre-fractionation to allow the MS system to deliver optimal performance in terms of dynamic range, mass accuracy and resolution. In clinical studies, however, the requirements for pre-analytical robustness and sample size large enough for statistical power restrict the routine use of a high degree of sample pre-fractionation. In this study, we have investigated the capacities of current-generation Ultra-High Resolution Q-Tof systems to deal with high complexity intact protein samples and have evaluated the approach on a cohort of patients suffering from neurodegenerative disease. Statistical analysis has shown that several proteoforms can be used to distinguish Alzheimer disease patients from patients suffering from other neurodegenerative diseases. Top-down approaches have an extremely high biological relevance, especially when it comes to biomarker discovery, but the necessary pre-fractionation constraints are not easily compatible with the robustness requirements and the size of clinical sample cohorts. We have demonstrated that intact protein profiling studies could be run on UHR-Q-ToF with limited pre-fractionation. The proteoforms that have been identified as candidate biomarkers in the proof-of-concept study are derived from proteins known to play a role in the pathophysiology process of Alzheimer disease. Copyright © 2017 Elsevier B.V. All rights reserved.
Laborda, Francisco; Bolea, Eduardo; Cepriá, Gemma; Gómez, María T; Jiménez, María S; Pérez-Arantegui, Josefina; Castillo, Juan R
2016-01-21
The increasing demand of analytical information related to inorganic engineered nanomaterials requires the adaptation of existing techniques and methods, or the development of new ones. The challenge for the analytical sciences has been to consider the nanoparticles as a new sort of analytes, involving both chemical (composition, mass and number concentration) and physical information (e.g. size, shape, aggregation). Moreover, information about the species derived from the nanoparticles themselves and their transformations must also be supplied. Whereas techniques commonly used for nanoparticle characterization, such as light scattering techniques, show serious limitations when applied to complex samples, other well-established techniques, like electron microscopy and atomic spectrometry, can provide useful information in most cases. Furthermore, separation techniques, including flow field flow fractionation, capillary electrophoresis and hydrodynamic chromatography, are moving to the nano domain, mostly hyphenated to inductively coupled plasma mass spectrometry as element specific detector. Emerging techniques based on the detection of single nanoparticles by using ICP-MS, but also coulometry, are in their way to gain a position. Chemical sensors selective to nanoparticles are in their early stages, but they are very promising considering their portability and simplicity. Although the field is in continuous evolution, at this moment it is moving from proofs-of-concept in simple matrices to methods dealing with matrices of higher complexity and relevant analyte concentrations. To achieve this goal, sample preparation methods are essential to manage such complex situations. Apart from size fractionation methods, matrix digestion, extraction and concentration methods capable of preserving the nature of the nanoparticles are being developed. This review presents and discusses the state-of-the-art analytical techniques and sample preparation methods suitable for dealing with complex samples. Single- and multi-method approaches applied to solve the nanometrological challenges posed by a variety of stakeholders are also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
ERIC Educational Resources Information Center
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
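The quoted 13% and 70% figures follow from a short calculation: with a Bonferroni-style threshold of α/m for m tests, the required sample size scales as ((z_{1-α/(2m)} + z_power)/(z_{1-α/2} + z_power))^2. The sketch below reproduces those relative costs; it assumes this Bonferroni scaling, and the paper's own calculator may differ in detail.

from scipy.stats import norm

alpha, power = 0.05, 0.80
z_power = norm.ppf(power)

def relative_n(n_tests, baseline_tests=1):
    """Sample size needed to keep the same power under a Bonferroni-corrected
    threshold, relative to the sample size needed for `baseline_tests` tests."""
    z_m = norm.isf(alpha / (2 * n_tests))          # per-test critical value
    z_b = norm.isf(alpha / (2 * baseline_tests))
    return ((z_m + z_power) / (z_b + z_power)) ** 2

print(f"10 tests vs 1 test:  {100 * (relative_n(10) - 1):.0f}% more samples")
print(f"1e7 tests vs 1e6:    {100 * (relative_n(1e7, 1e6) - 1):.0f}% more samples")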
A thrust-sheet propulsion concept using fissionable elements
NASA Technical Reports Server (NTRS)
Moeckel, W. E.
1976-01-01
A space propulsion concept is proposed and analyzed which consists of a thin sheet coated on one side with fissionable material, so that nuclear power is converted directly into propulsive power. Thrust is available both from ejected fission fragments and from thermal radiation. Optimum thicknesses are determined for the active and substrate layers. This concept is shown to have potential mission capability (in terms of velocity increments) superior to that of all other advanced propulsion concepts for which performance estimates are available. A suitable spontaneously fissioning material such as Cf254 could provide an extremely high-performance first stage beyond earth orbit. In contrast with some other advanced nuclear propulsion concepts, there is no minimum size below which this concept is infeasible.
Karayiannis, Nikos Ch.; Kröger, Martin
2009-01-01
We review the methodology, algorithmic implementation and performance characteristics of a hierarchical modeling scheme for the generation, equilibration and topological analysis of polymer systems at various levels of molecular description: from atomistic polyethylene samples to random packings of freely-jointed chains of tangent hard spheres of uniform size. Our analysis focuses on hitherto less discussed algorithmic details of the implementation of both the Monte Carlo (MC) procedure for system generation and equilibration, and a postprocessing step, where we identify the underlying topological structure of the simulated systems in the form of primitive paths. In order to demonstrate our arguments, we study how molecular length and packing density (volume fraction) affect the performance of the MC scheme built around chain-connectivity altering moves. In parallel, we quantify the effect of finite system size, of polydispersity, and of the definition of the number of entanglements (and related entanglement molecular weight) on the results about the primitive path network. Along these lines we confirm main concepts that had been previously proposed in the literature. PMID:20087477
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.
2016-01-01
The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.
Improving inferences in population studies of rare species that are detected imperfectly
MacKenzie, D.I.; Nichols, J.D.; Sutton, N.; Kawanishi, K.; Bailey, L.L.
2005-01-01
For the vast majority of cases, it is highly unlikely that all the individuals of a population will be encountered during a study. Furthermore, it is unlikely that a constant fraction of the population is encountered over times, locations, or species to be compared. Hence, simple counts usually will not be good indices of population size. We recommend that detection probabilities (the probability of including an individual in a count) be estimated and incorporated into inference procedures. However, most techniques for estimating detection probability require moderate sample sizes, which may not be achievable when studying rare species. In order to improve the reliability of inferences from studies of rare species, we suggest two general approaches that researchers may wish to consider that incorporate the concept of imperfect detectability: (1) borrowing information about detectability or the other quantities of interest from other times, places, or species; and (2) using state variables other than abundance (e.g., species richness and occupancy). We illustrate these suggestions with examples and discuss the relative benefits and drawbacks of each approach.
The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
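A small simulation, under an assumed single-spike covariance model, illustrates the role of the ratio between dimension and (sample size × spike size): as the ratio grows, the leading sample eigenvector sits at an increasingly wide angle to the population spike direction. The parameter values are illustrative only.

import numpy as np

rng = np.random.default_rng(3)

def mean_angle_deg(n, d, spike, n_rep=50):
    """Average angle between the leading sample eigenvector and the population
    spike direction, for data from a single-spike covariance model."""
    u = np.zeros(d)
    u[0] = 1.0                                        # population eigenvector
    angles = []
    for _ in range(n_rep):
        # X_i = sqrt(spike) * z_i * u + noise, so Cov = spike * u u^T + I
        X = (np.sqrt(spike) * rng.standard_normal((n, 1))) * u + rng.standard_normal((n, d))
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        v = vt[0]                                     # leading sample eigenvector
        angles.append(np.degrees(np.arccos(min(1.0, abs(v @ u)))))
    return np.mean(angles)

n, spike = 50, 100.0
for d in (100, 1000, 5000, 20000):
    ratio = d / (n * spike)
    print(f"d = {d:6d}, d/(n*spike) = {ratio:5.2f}: mean angle ~ {mean_angle_deg(n, d, spike):.1f} deg")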
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
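The simulation can be sketched outside a GIS with a few lines of numpy: benthic items are scattered (randomly or in clumps) over a plot, circular cores are placed at random, and the bias and spread of the density estimate are recorded. All densities, plot and core dimensions below are assumed illustrative values.

import numpy as np

rng = np.random.default_rng(11)

def simulate(density_per_m2=500, plot_m=5.0, core_cm2=50.0, n_cores=20,
             clumped=False, n_rep=500):
    """Bias and relative spread of core-sample density estimates for one plot."""
    area_m2 = plot_m ** 2
    n_items = rng.poisson(density_per_m2 * area_m2)
    if clumped:
        # Items scattered around a few parent points (a crude clumped pattern)
        parents = rng.uniform(0, plot_m, size=(15, 2))
        items = parents[rng.integers(0, 15, size=n_items)] + rng.normal(0, 0.2, (n_items, 2))
        items = np.mod(items, plot_m)
    else:
        items = rng.uniform(0, plot_m, size=(n_items, 2))
    true_density = n_items / area_m2
    core_area_m2 = core_cm2 / 1e4
    core_r = np.sqrt(core_area_m2 / np.pi)
    ests = np.empty(n_rep)
    for i in range(n_rep):
        centres = rng.uniform(core_r, plot_m - core_r, size=(n_cores, 2))
        d2 = ((items[None, :, :] - centres[:, None, :]) ** 2).sum(axis=2)
        counts = (d2 <= core_r ** 2).sum(axis=1)      # items captured per core
        ests[i] = counts.mean() / core_area_m2        # estimated items per m2
    return ests.mean() - true_density, ests.std() / true_density

for clumped in (False, True):
    bias, rel_spread = simulate(clumped=clumped)
    print(f"clumped={clumped}: bias ~ {bias:+.1f} items/m2, relative spread ~ {rel_spread:.2f}")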
TWO-PHASE FORMATION IN SOLUTIONS OF TOBACCO MOSAIC VIRUS AND THE PROBLEM OF LONG-RANGE FORCES
Oster, Gerald
1950-01-01
In a nearly salt-free medium, a dilute tobacco mosaic virus solution of rod-shaped virus particles of uniform length forms two phases; the bottom optically anisotropic phase has a greater virus concentration than has the top optically isotropic phase. For a sample containing particles of various lengths, the bottom phase contains longer particles than does the top and the concentrations top and bottom are nearly equal. The longer the particles the less the minimum concentration necessary for two-phase formation. Increasing the salt concentration increases the minimum concentration. The formation of two phases is explained in terms of geometrical considerations without recourse to the concept of long-range attractive forces. The minimum concentration for two-phase formation is that concentration at which correlation in orientation between the rod-shaped particles begins to take place. This concentration is determined by the thermodynamically effective size and shape of the particles as obtained from the concentration dependence of the osmotic pressure of the solutions measured by light scattering. The effective volume of the particles is introduced into the theory of Onsager for correlation of orientation of uniform size rods and good agreement with experiment is obtained. The theory is extended to a mixture of non-uniform size rods and to the case in which the salt concentration is varied, and agreement with experiment is obtained. The thermodynamically effective volume of the particles and its dependence on salt concentration are explained in terms of the shape of the particles and the electrostatic repulsion between them. Current theories of the hydration of proteins and of long-range forces are critically discussed. The bottom layer of freshly purified tobacco mosaic virus samples shows Bragg diffraction of visible light. The diffraction data indicate that the virus particles in solution form three-dimensional crystals approximately the size of crystalline inclusion bodies found in the cells of plants suffering from the disease. PMID:15422102
Kim, Gloria; Chu, Renxin; Yousuf, Fawad; Tauhid, Shahamat; Stazzone, Lynn; Houtchens, Maria K; Stankiewicz, James M; Severson, Christopher; Kimbrough, Dorlan; Quintana, Francisco J; Chitnis, Tanuja; Weiner, Howard L; Healy, Brian C; Bakshi, Rohit
2017-11-01
The subcortical deep gray matter (DGM) develops selective, progressive, and clinically relevant atrophy in progressive forms of multiple sclerosis (PMS). This patient population is the target of active neurotherapeutic development, requiring the availability of outcome measures. We tested a fully automated MRI analysis pipeline to assess DGM atrophy in PMS. Consistent 3D T1-weighted high-resolution 3T brain MRI was obtained over one year in 19 consecutive patients with PMS [15 secondary progressive, 4 primary progressive, 53% women, age (mean±SD) 50.8±8.0 years, Expanded Disability Status Scale (median 5.0, range 2.0-6.5)]. DGM segmentation applied the fully automated FSL-FIRST pipeline (http://fsl.fmrib.ox.ac.uk). Total DGM volume was the sum of the caudate, putamen, globus pallidus, and thalamus. On-study change was calculated using a random-effects linear regression model. We detected one-year decreases in raw [mean (95% confidence interval): -0.749 ml (-1.455, -0.043), p = 0.039] and annualized [-0.754 ml/year (-1.492, -0.016), p = 0.046] total DGM volumes. A treatment trial for an intervention that would show a 50% reduction in DGM brain atrophy would require a sample size of 123 patients for a single-arm study (one-year run-in followed by one-year on-treatment). For a two-arm placebo-controlled one-year study, 242 patients would be required per arm. The use of DGM fraction required more patients. The thalamus, putamen, and globus pallidus showed smaller effect sizes in their on-study changes than the total DGM; however, for the caudate, the effect sizes were somewhat larger. DGM atrophy may prove efficient as a short-term outcome for proof-of-concept neurotherapeutic trials in PMS.
NASA Astrophysics Data System (ADS)
Shah, S. M.; Crawshaw, J. P.; Gray, F.; Yang, J.; Boek, E. S.
2017-06-01
In the last decade, the study of fluid flow in porous media has developed considerably due to the combination of X-ray Micro Computed Tomography (micro-CT) and advances in computational methods for solving complex fluid flow equations directly or indirectly on reconstructed three-dimensional pore space images. In this study, we calculate porosity and single phase permeability using micro-CT imaging and Lattice Boltzmann (LB) simulations for 8 different porous media: beadpacks (with bead sizes 50 μm and 350 μm), sandpacks (LV60 and HST95), sandstones (Berea, Clashach and Doddington) and a carbonate (Ketton). Combining the observed porosity and calculated single phase permeability, we shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging. Our study applies the concept of the 'Convex Hull' to calculate the REV by considering the two main macroscopic petrophysical parameters, porosity and single phase permeability, simultaneously. The shape of the hull can be used to identify strong correlation between the parameters or greatly differing convergence rates. To further enhance computational efficiency we note that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size so that only a few small simulations are needed to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
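A sketch of the convex-hull criterion, assuming synthetic (porosity, log10 permeability) estimates whose scatter shrinks with sub-sample size, in place of actual micro-CT/LB results: the hull area of the estimates from several sub-volumes is used as the convergence measure.

# Sketch of the 'convex hull' REV criterion. The (porosity, log10 k) estimates
# below are synthetic placeholders, not simulation output; only the hull-area
# calculation itself is the point of the example.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
phi_true, logk_true = 0.20, 2.5          # assumed full-sample porosity and log10 k

def hull_area(subsample_voxels, n_subvolumes=30):
    spread = 1.0 / np.sqrt(subsample_voxels / 1e5)    # synthetic convergence rate
    phi = phi_true + rng.normal(0, 0.02 * spread, n_subvolumes)
    logk = logk_true + rng.normal(0, 0.15 * spread, n_subvolumes)
    pts = np.column_stack([phi, logk])
    return ConvexHull(pts).volume        # in 2-D, .volume is the enclosed area

for voxels in (1e5, 1e6, 1e7, 1e8):
    print(f"sub-sample size {voxels:.0e} voxels: hull area ~ {hull_area(voxels):.4f}")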
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
Test evaluation of potential heatshield contamination of an outer planet probe's gas sampling system
NASA Technical Reports Server (NTRS)
Kessler, W. C.
1975-01-01
The feasibility of retaining the heat shield for outer planet probes was investigated as a potential source of atmospheric sample contamination by outgassing. The onboard instruments which are affected by the concept are the pressure sensor, temperature sensor, IR detector, nephelometer, and gas sampling instruments. It was found that: (1) The retention of the charred heatshield and the baseline atmospheric sampling concepts are compatible with obtaining noncontaminated atmospheric samples. (2) Increasing the sampling tube length so that it extends beyond the viscous boundary layer eliminates contamination of the atmospheric sample. (3) The potential for contamination increases with angle of attack.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
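The gain from stratification can be sketched with a synthetic single-layer moisture field: the variance of the mean under simple random sampling is compared with stratified sampling under Neyman (optimal) allocation. Strata means, spreads and the total sample size are assumed values, not those of the study.

import numpy as np

rng = np.random.default_rng(2)

# Synthetic soil-moisture values for one depth layer, split into three strata
# (e.g. landscape positions) with different means and variabilities.
strata = [rng.normal(m, s, size=n) for m, s, n in
          [(12.0, 2.0, 400), (20.0, 4.0, 400), (30.0, 6.0, 200)]]
field = np.concatenate(strata)
n_total = 30                                           # total samples to allocate

# Simple random sampling: variance of the mean is S^2 / n (ignoring the fpc)
var_srs = field.var(ddof=1) / n_total

# Stratified sampling with Neyman allocation: n_h proportional to N_h * S_h
N_h = np.array([len(s) for s in strata])
S_h = np.array([s.std(ddof=1) for s in strata])
W_h = N_h / N_h.sum()
n_h = np.maximum(1, np.round(n_total * (N_h * S_h) / (N_h * S_h).sum())).astype(int)
var_strat = np.sum(W_h**2 * S_h**2 / n_h)

print("allocation per stratum:", n_h)
print(f"variance of mean: SRS = {var_srs:.3f}, stratified (Neyman) = {var_strat:.3f}")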
Preliminary Structural Sizing and Alternative Material Trade Study of CEV Crew Module
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steve M.; Collier, Craig S.; Yarrington, Phillip W.
2007-01-01
This paper presents the results of a preliminary structural sizing and alternate material trade study for NASA's Crew Exploration Vehicle (CEV) Crew Module (CM). This critical CEV component will house the astronauts during ascent, docking with the International Space Station, reentry, and landing. The alternate material design study considers three materials beyond the standard metallic (aluminum alloy) design that resulted from an earlier NASA Smart Buyer Team analysis. These materials are graphite/epoxy composite laminates, discontinuously reinforced SiC/Al (DRA) composites, and a novel integrated panel material/concept known as WebCore. Using the HyperSizer (Collier Research and Development Corporation) structural sizing software and NASTRAN finite element analysis code, a comparison is made among these materials for the three composite CM concepts considered by the 2006 NASA Engineering and Safety Center Composite Crew Module project.
Small image laser range finder for planetary rover
NASA Technical Reports Server (NTRS)
Wakabayashi, Yasufumi; Honda, Masahisa; Adachi, Tadashi; Iijima, Takahiko
1994-01-01
A variety of technical subjects need to be solved before planetary rover navigation could be a part of future missions. The sensors which will perceive terrain environment around the rover will require critical development efforts. The image laser range finder (ILRF) discussed here is one of the candidate sensors because of its advantage in providing range data required for its navigation. The authors developed a new compact-sized ILRF which is a quarter of the size of conventional ones. Instead of the current two directional scanning system which is comprised of nodding and polygon mirrors, the new ILRF is equipped with the new concept of a direct polygon mirror driving system, which successfully made its size compact to accommodate the design requirements. The paper reports on the design concept and preliminary technical specifications established in the current development phase.
The spatial structure and temporal synchrony of water quality in stream networks
NASA Astrophysics Data System (ADS)
Abbott, Benjamin; Gruau, Gerard; Zarneske, Jay; Barbe, Lou; Gu, Sen; Kolbe, Tamara; Thomas, Zahra; Jaffrezic, Anne; Moatar, Florentina; Pinay, Gilles
2017-04-01
To feed nine billion people in 2050 while maintaining viable aquatic ecosystems will require an understanding of nutrient pollution dynamics throughout stream networks. Most regulatory frameworks such as the European Water Framework Directive and U.S. Clean Water Act, focus on nutrient concentrations in medium to large rivers. This strategy is appealing because large rivers integrate many small catchments and total nutrient loads drive eutrophication in estuarine and oceanic ecosystems. However, there is growing evidence that to understand and reduce downstream nutrient fluxes we need to look upstream. While headwater streams receive the bulk of nutrients in river networks, the relationship between land cover and nutrient flux often breaks down for small catchments, representing an important ecological unknown since 90% of global stream length occurs in catchments smaller than 15 km2. Though continuous monitoring of thousands of small streams is not feasible, what if we could learn what we needed about where and when to implement monitoring and conservation efforts with periodic sampling of headwater catchments? To address this question we performed repeat synoptic sampling of 56 nested catchments ranging in size from 1 to 370 km2 in western France. Spatial variability in carbon and nutrient concentrations decreased non-linearly as catchment size increased, with thresholds in variance for organic carbon and nutrients occurring between 36 and 68 km2. While it is widely held that temporal variance is higher in smaller streams, we observed consistent temporal variance across spatial scales and the ranking of catchments based on water quality showed strong synchrony in the water chemistry response to seasonal variation and hydrological events. We used these observations to develop two simple management frameworks. The subcatchment leverage concept proposes that mitigation and restoration efforts are more likely to succeed when implemented at spatial scales expressing high variability in the target parameter, which indicates decreased system inertia and demonstrates that alternative system responses are possible. The subcatchment synchrony concept suggests that periodic sampling of headwaters can provide valuable information about pollutant sources and inherent resilience in subcatchments and that if agricultural activity were redistributed based on this assessment of catchment vulnerability to nutrient loading, water quality could be improved while maintaining crop yields.
Influence of Casting Defects on S-N Fatigue Behavior of Ni-Al Bronze
NASA Astrophysics Data System (ADS)
Sarkar, Aritra; Chakrabarti, Abhishek; Nagesha, A.; Saravanan, T.; Arunmuthu, K.; Sandhya, R.; Philip, John; Mathew, M. D.; Jayakumar, T.
2015-02-01
Nickel-aluminum bronze (NAB) alloys have been used extensively in marine applications such as propellers, couplings, pump casings, and pump impellers due to their good mechanical properties such as tensile strength, creep resistance, and corrosion resistance. However, there have been several instances of in-service failure of the alloy due to high cycle fatigue (HCF). The present paper aims at characterizing the casting defects in this alloy through X-ray radiography and X-ray computed tomography, classifying them into distinct defect groups of particular defect size and location. HCF tests were carried out on each defect group of as-cast NAB at room temperature by varying the mean stress. A significant decrease in the HCF life was observed with an increase in the tensile mean stress, irrespective of the defect size. Further, a considerable drop in the HCF life was observed with an increase in the size of defects and proximity of the defects to the surface. However, the surface proximity indicated by location of the defect in the sample was seen to override the influence of defect size and maximum cyclic stress. This leads to large scatter in the S-N curve. For a detailed quantitative analysis of defect size and location, an empirical model is developed which was able to minimize the scatter to a significant extent. Further, a concept of critical distance is proposed, beyond which the defect would not have a deleterious consequence on the fatigue behavior. Such an approach was found to be suitable for generating S-N curves for cast NAB.
Goligher, Ewan C; Amato, Marcelo B P; Slutsky, Arthur S
2017-09-01
In clinical trials of therapies for acute respiratory distress syndrome (ARDS), the average treatment effect in the study population may be attenuated because individual patient responses vary widely. This inflates sample size requirements and increases the cost and difficulty of conducting successful clinical trials. One solution is to enrich the study population with patients most likely to benefit, based on predicted patient response to treatment (predictive enrichment). In this perspective, we apply the precision medicine paradigm to the emerging use of extracorporeal CO2 removal (ECCO2R) for ultraprotective ventilation in ARDS. ECCO2R enables reductions in tidal volume and driving pressure, key determinants of ventilator-induced lung injury. Using basic physiological concepts, we demonstrate that dead space and static compliance determine the effect of ECCO2R on driving pressure and mechanical power. This framework might enable prediction of individual treatment responses to ECCO2R. Enriching clinical trials by selectively enrolling patients with a significant predicted treatment response can increase treatment effect size and statistical power more efficiently than conventional enrichment strategies that restrict enrollment according to the baseline risk of death. To support this claim, we simulated the predicted effect of ECCO2R on driving pressure and mortality in a preexisting cohort of patients with ARDS. Our computations suggest that restricting enrollment to patients in whom ECCO2R allows driving pressure to be decreased by 5 cm H2O or more can reduce sample size requirement by more than 50% without increasing the total number of patients to be screened. We discuss potential implications for trial design based on this framework.
Associations of Quality of Life with Service Satisfaction in Psychotic Patients: A Meta-Analysis
Petkari, Eleni; Pietschnig, Jakob
2015-01-01
Background: Quality of life (QoL) has gained increasing attention as a desired outcome of psychosocial treatments targeting psychotic patients. Yet, the relationship between the patients’ satisfaction with services and QoL has not been clearly established, perhaps due to the multidimensionality of the QoL concept and the variability in its assessment. Aim: This is the first systematic meta-analysis of all available evidence assessing the relationship between QoL and service satisfaction. Methods: In all, 19 studies reporting data of 21 independent samples (N = 5,337) were included in the present meta-analysis. In moderator analyses, effects of age, sex, diagnoses (schizophrenia vs. other psychoses), treatment context (inpatients vs. outpatients), study design (cross-sectional vs. longitudinal), and QoL domain (subjective vs. health-related) were examined. Results: Analyses revealed a highly significant medium-sized effect (r = .30, p < .001) for the associations of QoL and service satisfaction. Effect sizes were significantly stronger for subjective than health-related quality of life (r = .35 vs. r = .14, respectively). Moreover, associations with subjective QoL remained largely robust when accounting for moderating variables, although there was a trend of stronger associations for outpatients compared to inpatients. In contrast, effect sizes for health-related QoL were small and only observable for samples with longitudinal designs. Conclusion: Associations between QoL and service satisfaction appear to be robust but are differentiated in regard to QoL domain. Our findings suggest that agents responsible for service design and implementation need to take the patients’ perception of the service adequacy for achieving QoL enhancement into account. PMID:26275139
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
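The Jensen's-inequality argument above can be illustrated numerically. The sketch below is not the authors' analysis or data: it assumes a hypothetical three-stage projection matrix with binomially sampled vital rates, and simply shows how the dominant eigenvalue (lambda) estimated from small samples drifts away from the true value as sampling variance grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" vital rates for a 3-stage plant population (not the paper's data).
true_survival = np.array([0.5, 0.6, 0.7])   # low survival, as in the biased case
true_growth   = np.array([0.3, 0.2, 0.0])   # probability of advancing a stage
fecundity     = np.array([0.0, 1.0, 3.0])   # offspring per individual

def build_matrix(surv, grow):
    """Assemble a simple stage-structured projection matrix."""
    A = np.zeros((3, 3))
    A[0, :] = fecundity
    for i in range(3):
        A[i, i] += surv[i] * (1.0 - grow[i])      # stay in stage
        if i < 2:
            A[i + 1, i] = surv[i] * grow[i]       # advance a stage
    return A

def dominant_eigenvalue(A):
    # For a nonnegative matrix the dominant eigenvalue is real (Perron-Frobenius).
    return np.max(np.real(np.linalg.eigvals(A)))

true_lambda = dominant_eigenvalue(build_matrix(true_survival, true_growth))

# Estimate lambda from binomially sampled vital rates at several sample sizes.
for n in (10, 25, 50, 100, 500):
    lambdas = []
    for _ in range(2000):
        surv_hat = rng.binomial(n, true_survival) / n
        grow_hat = rng.binomial(n, true_growth) / n
        lambdas.append(dominant_eigenvalue(build_matrix(surv_hat, grow_hat)))
    bias = np.mean(lambdas) - true_lambda
    print(f"n = {n:4d}: mean lambda = {np.mean(lambdas):.4f}, bias = {bias:+.4f}")
```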
Artist concept of SIM PlanetQuest Artist Concept
2002-12-21
Artist's concept of the current mission configuration. SIM PlanetQuest (formerly called Space Interferometry Mission), currently under development, will determine the positions and distances of stars several hundred times more accurately than any previous program. This accuracy will allow SIM to determine the distances to stars throughout the galaxy and to probe nearby stars for Earth-sized planets. SIM will open a window to a new world of discoveries. http://photojournal.jpl.nasa.gov/catalog/PIA04248
The T/R modules for phased-array antennas
NASA Astrophysics Data System (ADS)
Peignet, Colette; Mancuso, Yves; Resneau, J. Claude
1990-09-01
The concept of phased array radar is critically dependent on the availability of compact, reliable and low power consuming Transmitter/Receiver (T/R) modules. An overview is given of two major programs currently at the development stage within the Thomson group and of three major development axes (electrical concept optimization, packaging, and size reduction). The technical feasibility of the concept was proven and the three major axes were highlighted, based on reliability, power added efficiency, and RF test optimization.
Quality Management and Qualification Needs 1: Quality and Personnel Concepts of SMEs in Europe.
ERIC Educational Resources Information Center
Koper, Johannes; Zaremba, Hans Jurgen
This book examines how quality management is implemented in small and medium-sized enterprises (SMEs) in Germany, Finland, Greece, Ireland, Portugal, Sweden, and the United Kingdom. It presents the survey results as two sector studies. Competitive and specialization tendencies of the sectors and company concepts of "quality" and…
Experienced and Novice Teachers' Concepts of Spatial Scale
ERIC Educational Resources Information Center
Jones, M. Gail; Tretter, Thomas; Taylor, Amy; Oppewal, Tom
2008-01-01
Scale is one of the thematic threads that runs through nearly all of the sciences and is considered one of the major prevailing ideas of science. This study explored novice and experienced teachers' concepts of spatial scale with a focus on linear sizes from very small (nanoscale) to very large (cosmic scale). Novice teachers included…
PopGen Fishbowl: A Free Online Simulation Model of Microevolutionary Processes
ERIC Educational Resources Information Center
Jones, Thomas C.; Laughlin, Thomas F.
2010-01-01
Natural selection and other components of evolutionary theory are known to be particularly challenging concepts for students to understand. To help illustrate these concepts, we developed a simulation model of microevolutionary processes. The model features all the components of Hardy-Weinberg theory, with population size, selection, gene flow,…
ERIC Educational Resources Information Center
Forbes-Lorman, Robin M.; Harris, Michelle A.; Chang, Wesley S.; Dent, Erik W.; Nordheim, Erik V.; Franzen, Margaret A.
2016-01-01
Understanding how basic structural units influence function is identified as a foundational/core concept for undergraduate biological and biochemical literacy. It is essential for students to understand this concept at all size scales, but it is often more difficult for students to understand structure-function relationships at the molecular…
Structures for the 3rd Generation Reusable Concept Vehicle
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.
2001-01-01
A major goal of NASA is to create an advanced space transportation system that provides a safe, affordable highway through the air and into space. The long-term plans are to reduce the risk of crew loss to 1 in 1,000,000 missions and reduce the cost of reaching low-Earth orbit by a factor of 100 from today's costs. A third generation reusable concept vehicle (RCV) was developed to assess technologies required to meet NASA's space access goals. The vehicle will launch from Cape Kennedy carrying a 25,000 lb. payload to the International Space Station (ISS). The system is an air breathing launch vehicle (ABLV), a hypersonic lifting body with rockets, and uses triple point hydrogen and liquid oxygen propellant. The focus of this paper is on the structural concepts and analysis methods used in developing the third generation reusable launch vehicle (RLV). Member sizes, concepts and material selections will be discussed as well as analysis methods used in optimizing the structure. Analysis based on the HyperSizer structural sizing software will be discussed. Design trades required to optimize structural weight will be presented.
Parametric Weight Comparison of Current and Proposed Thermal Protection System (TPS) Concepts
NASA Technical Reports Server (NTRS)
Myers, David E.; Martin, Carl J.; Blosser, Max L.
1999-01-01
A parametric weight assessment of advanced metallic panel, ceramic blanket, and ceramic tile thermal protection systems (TPS) was conducted using an implicit, one-dimensional (1-D) thermal finite element sizing code. This sizing code contained models to account for coatings, fasteners, adhesives, and strain isolation pads. Atmospheric entry heating profiles for two vehicles, the Access to Space (ATS) rocket-powered single-stage-to-orbit (SSTO) vehicle and a proposed Reusable Launch Vehicle (RLV), were used to ensure that the trends were not unique to a particular trajectory. Eight TPS concepts were compared for a range of applied heat loads and substructural heat capacities to identify general trends. This study found the blanket TPS concepts have the lightest weights over the majority of their applicable ranges, and current technology ceramic tiles and metallic TPS concepts have similar weights. A proposed, state-of-the-art metallic system which uses a higher temperature alloy and efficient multilayer insulation was predicted to be significantly lighter than the ceramic tile systems and approaches blanket TPS weights for higher integrated heat loads.
Low NOx heavy fuel combustor concept program. Phase 1: Combustion technology generation
NASA Astrophysics Data System (ADS)
Lew, H. G.; Carl, D. R.; Vermes, G.; Dezubay, E. A.; Schwab, J. A.; Prothroe, D.
1981-10-01
The viability of low emission nitrogen oxide (NOx) gas turbine combustors for industrial and utility applications was investigated. Thirteen different concepts were evolved and most were tested. Acceptable performance was demonstrated for four of the combustors using ERBS fuel, and ultralow NOx emissions were obtained for lean catalytic combustion. Residual oil and coal derived liquids containing fuel bound nitrogen (FBN) were also used as test fuels, and it was shown that staged rich/lean combustion was effective in minimizing the conversion of FBN to NOx. The rich/lean concept was tested with both modular and integral combustors. While the ceramic lined modular configuration produced the best results, the advantages of the all metal integral burners make them candidates for future development. An example of scaling the laboratory sized combustor to a 100 MW size engine is included in the report, as are recommendations for future work.
Design studies of continuously variable transmissions for electric vehicles
NASA Technical Reports Server (NTRS)
Parker, R. J.; Loewenthal, S. H.; Fischer, G. K.
1981-01-01
Preliminary design studies were performed on four continuously variable transmission (CVT) concepts for use with a flywheel equipped electric vehicle of 1700 kg gross weight. Requirements of the CVT's were a maximum torque of 450 N-m (330 lb-ft), a maximum output power of 75 kW (100 hp), and a flywheel speed range of 28,000 to 14,000 rpm. Efficiency, size, weight, cost, reliability, maintainability, and controls were evaluated for each of the four concepts which included a steel V-belt type, a flat rubber belt type, a toroidal traction type, and a cone roller traction type. All CVT's exhibited relatively high calculated efficiencies (68 percent to 97 percent) over a broad range of vehicle operating conditions. Estimated weight and size of these transmissions were comparable to or less than those of an equivalent automatic transmission. The design of each concept was carried through the design layout stage.
76 FR 56141 - Notice of Intent To Request New Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...
Kepler-20f -- An Earth-size World Artist Concept
2011-12-20
Kepler-20f is the closest object to the Earth in terms of size ever discovered. With an orbital period of 20 days and a surface temperature of 800 degrees Fahrenheit (430 degrees Celsius), it is too hot to host life as we know it.
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
NASA Astrophysics Data System (ADS)
Sponable, Jess M.
2000-01-01
Identifies spaceplane technical concepts proposed by both large and small companies. Highlights that the size and complexity of spaceplanes can be bounded by two well-defined concepts: the Lockheed Martin X-33/Venturestar concept and the Boeing Military Spaceplane vehicle. Also identifies a number of spaceplane concepts being proposed by small commercially financed companies. Reviews possible policy, regulatory, technology, financing and market catalysts that the government and Congress can establish to improve the business climate and encourage investment in spaceplane ventures. Argues that the government has a role and an obligation to help open the spaceways through a prudent mix of government investments and business incentives.
[Practical aspects regarding sample size in clinical research].
Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S
1996-01-01
Knowledge of the right sample size lets us judge whether the published results in medical papers had a suitable design and reached a proper conclusion according to the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula should be used, we must define what kind of study we have, that is, whether it is a prevalence study, a study of mean values, or a comparative one. In this paper we explain some basic topics of statistics and we describe four simple examples of sample size estimation.
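For the comparative-study case mentioned above, the usual normal-approximation formula for comparing two means is n per group = 2(z_{1-α/2} + z_{1-β})² σ² / δ². The sketch below is a generic illustration of that formula, not taken from the paper; the numerical inputs are hypothetical.

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means,
    assuming equal variances and a normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Example: detect a difference of 5 units when the SD is 10,
# with alpha = 0.05 and 80% power -> about 63 subjects per group.
print(round(n_per_group(delta=5, sigma=10)))
```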
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
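The diminishing-returns point can be made concrete with the familiar 1/sqrt(n) scaling of the standard error of a mean; the toy numbers below are illustrative only and are not part of the authors' argument.

```python
import numpy as np

# Standard error of a mean scales as 1/sqrt(n): each doubling of the sample
# buys progressively less precision -- the "diminishing marginal returns"
# the authors refer to.
sigma = 1.0
for n in (10, 20, 40, 80, 160, 320):
    se = sigma / np.sqrt(n)
    print(f"n = {n:3d}  SE = {se:.3f}")
```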
Diet around conception and during pregnancy--effects on fetal and neonatal outcomes.
Kind, Karen L; Moore, Vivienne M; Davies, Michael J
2006-05-01
Substrate supply to the fetus is a major regulator of prenatal growth. Maternal nutrition influences the availability of nutrients for transfer to the fetus. Animal experiments demonstrate that restriction of maternal protein or energy intake can retard fetal growth. Effects of maternal nutrition vary with the type and timing of the restriction and the species studied. Maternal undernutrition before conception and/or in early pregnancy can alter fetal physiology in late gestation, and influence postnatal function, often without measurable effects on birth size. In contrast, to date, observational and intervention studies in humans provide limited support for a major role of maternal nutrition in determining birth size, except where women are quite malnourished. However, recent studies report associations between newborn size and the balance of macronutrients in women's diets in Western settings. Associations between maternal dietary composition and adult blood pressure of the offspring are also reported in human populations. Most studies in women have focused on dietary content or supplementation during mid-late pregnancy. Further investigation of how maternal dietary composition, before conception and throughout pregnancy, affects fetal physiology and health of the baby will increase the understanding of how maternal diet and nutritional status influence fetal, neonatal and longer-term outcomes.
Synthetic Vision Enhances Situation Awareness and RNP Capabilities for Terrain-Challenged Approaches
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III
2003-01-01
The Synthetic Vision Systems (SVS) Project of the Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-Up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation / Terrain Awareness and Warning System displays. These independent variables were evaluated for situation awareness, path error, and workload while making approaches to Runways 25 and 07 and during simulated engine-out Cottonwood 2 and KREMM departures. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the pathway and pursuit guidance used within the SVS concepts achieved required navigation performance (RNP) criteria.
What is the optimum sample size for the study of peatland testate amoeba assemblages?
Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J
2017-10-01
Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
Trujillo-de Santiago, Grissel; Portales-Cabrera, Cynthia Guadalupe; Portillo-Lara, Roberto; Araiz-Hernández, Diana; Del Barone, Maria Cristina; García-López, Erika; Rojas-de Gante, Cecilia; de los Angeles De Santiago-Miramontes, María; Segoviano-Ramírez, Juan Carlos; García-Lara, Silverio; Rodríguez-González, Ciro Ángel; Alvarez, Mario Moisés; Di Maio, Ernesto; Iannace, Salvatore
2015-01-01
Background Foams are high porosity and low density materials. In nature, they are a common architecture. Some of their relevant technological applications include heat and sound insulation, lightweight materials, and tissue engineering scaffolds. Foams derived from natural polymers are particularly attractive for tissue culture due to their biodegradability and bio-compatibility. Here, the foaming potential of an extensive list of materials was assayed, including slabs elaborated from whole flour, the starch component only, or the protein fraction only of maize seeds. Methodology/Principal Findings We used supercritical CO2 to produce foams from thermoplasticized maize derived materials. Polyethylene-glycol, sorbitol/glycerol, or urea/formamide were used as plasticizers. We report expansion ratios, porosities, average pore sizes, pore morphologies, and pore size distributions for these materials. High porosity foams were obtained from zein thermoplasticized with polyethylene glycol, and from starch thermoplasticized with urea/formamide. Zein foams had a higher porosity than starch foams (88% and 85%, respectively) and a narrower and more evenly distributed pore size. Starch foams exhibited a wider span of pore sizes and a larger average pore size than zein (208.84 vs. 55.43 μm2, respectively). Proof-of-concept cell culture experiments confirmed that mouse fibroblasts (NIH 3T3) and two different prostate cancer cell lines (22RV1, DU145) attached to and proliferated on zein foams. Conclusions/Significance We conducted screening and proof-of-concept experiments on the fabrication of foams from cereal-based bioplastics. We propose that a key indicator of foamability is the strain at break of the materials to be foamed (as calculated from stress vs. strain rate curves). Zein foams exhibit attractive properties (average pore size, pore size distribution, and porosity) for cell culture applications; we were able to establish and sustain mammalian cell cultures on zein foams for extended time periods. PMID:25859853
ERIC Educational Resources Information Center
Beasley, Emily Kristin; Garn, Alex C.
2013-01-01
This study examined the relationships among identified regulation, physical self-concept, global self-concept, and leisure-time physical activity with a sample of middle and high school girls (N = 319) enrolled in physical education. Based on Marsh's theory of self-concept, it was hypothesized that a) physical self-concept would mediate the…
Illustrating Sampling Distribution of a Statistic: Minitab Revisited
ERIC Educational Resources Information Center
Johnson, H. Dean; Evans, Marc A.
2008-01-01
Understanding the concept of the sampling distribution of a statistic is essential for the understanding of inferential procedures. Unfortunately, this topic proves to be a stumbling block for students in introductory statistics classes. In efforts to aid students in their understanding of this concept, alternatives to a lecture-based mode of…
NASA Astrophysics Data System (ADS)
Rotjanakunnatam, Boonthida; Chayaburakul, Kanokporn
2018-01-01
The aim of this research study was to develop a conceptual instructional design with the Inquiry-Based Instruction Model (IBIM) for secondary students at the 10th grade level on the topic of the digestive system and cellular degradation, covering the degradation of cellular nutrients both with and without oxygen, with a sample of 45 secondary students at the 10th grade level. Data were collected by asking students to complete a questionnaire before and after the learning process. The questionnaire consists of two main parts: a questionnaire on students' perceptions and a questionnaire asking students to explain the answers they selected. The 10-item Conceptual Thinking Test (CTT) assessed students' conceptual thinking and covered two main concepts, namely nutrient degradation with oxygen and nutrient degradation without oxygen. The data were analyzed by classifying students' answers into 5 groups and reporting the frequency and percentage of students' performance before and after the learning activities with the Inquiry-Based Instruction Model. The results of this research found that, after the learning activities with the IBIM, most students developed correct and complete concepts of nutrient degradation both with and without oxygen, while the number of students holding a wrong concept or no concept was clearly reduced. However, the results still show that some students hold misconceptions, such as the direction of electron motion and the formation of ATP in the bioactivities of life. This may be due to the nature of the content, its complexity, continuity and dynamics, and the time constraints of the classroom. Based on this research, it is suggested that some students may need more time than the limited classroom time allows, and that learning activities with content creation, content binding and dramatic storytelling can be expanded in a relaxed classroom learning environment.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
Abbreviations: AICc, Akaike's Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater…; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic…. Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample
ERIC Educational Resources Information Center
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
Kim, Manuela; Stripeikis, Jorge; Iñón, Fernando; Tudino, Mabel
2007-05-15
A simple and sensitive HPLC post-derivatization method with colorimetric detection has been developed for the determination of N-nitroso glyphosate in samples of technical glyphosate. Separation of the analyte was accomplished using an anionic exchange resin (2.50mmx4.00mm i.d., 15mum particle size, functional group: quaternary ammonium salt) with Na(2)SO(4) 0.0075M (pH 11.5) (flow rate: 1.0mLmin(-1)) as mobile phase. After separation, the eluate was derivatized with a colorimetric reagent containing sulfanilamide 0.3% (w/v), [N-(1-naphtil)ethilendiamine] 0.03% (w/v) and HCl 4.5M in a thermostatized bath at 95 degrees C. Detection was performed at 546nm. All stages of the analytical procedure were optimized taking into account the concept of analytical minimalism: less operation times and costs; lower sample, reagents and energy consumption and minimal waste. The limit of detection (k=3) calculated for 10 blank replicates was 0.04mgL(-1) (0.8mgkg(-1)) in the solid sample which is lower than the maximum tolerable accepted by the Food and Agriculture Organization of the United Nations.
Principles and Applications of the qPlus Sensor
NASA Astrophysics Data System (ADS)
Giessibl, Franz J.
The concept of the atomic force microscope (AFM) is a very simple one: map the surface of a sample with a sharp probe that scans over the surface, similar to the finger of a blind person reading Braille characters. In AFM, the role of that finger is taken by the probe tip, which senses the presence of the sample surface by detecting the force between the tip of the probe and a sample. The qPlus sensor is a self-sensing cantilever based on a quartz tuning fork that supplements the traditional microfabricated cantilevers made of silicon. Quartz tuning forks are used in the watch industry in quantities of billions annually, with the corresponding positive effects on quality and precision. Three properties of these quartz-based sensors simplify the AFM significantly: (1) the piezoelectricity of quartz allows simple self-sensing, (2) the mechanical properties of quartz show very small variations with temperature, and (3) the given stiffness of many quartz tuning forks is close to the ideal stiffness of cantilevers. The key properties of the qPlus sensor are a large stiffness that allows small amplitude operation, the large size that allows single-crystal probe tips to be mounted, and the self-sensing piezoelectric detection mechanism.
Characterization studies of prototype ISOL targets for the RIA
NASA Astrophysics Data System (ADS)
Greene, John P.; Burtseva, Tatiana; Neubauer, Janelle; Nolen, Jerry A.; Villari, Antonio C. C.; Gomes, Itacil C.
2005-12-01
Targets employing refractory compounds are being developed for the rare isotope accelerator (RIA) facility to produce ion species far from stability. With the 100 kW beams proposed for the production targets, dissipation of heat becomes a challenging issue. In our two-step target design, neutrons are generated in a refractory primary target, inducing fission in the surrounding uranium carbide. The interplay of density, grain size, thermal conductivity and diffusion properties of the UC2 needs to be well understood before fabrication. Thin samples of uranium carbide were prepared for thermal conductivity measurements using an electron beam to heat the sample and an optical pyrometer to observe the thermal radiation. Release efficiencies and independent thermal analysis on these samples are being undertaken at Oak Ridge National Laboratory (ORNL). An alternate target concept for RIA, the tilted slab approach, promises to be simple with fast ion release and capable of withstanding high beam intensities while providing considerable yields via spallation. A proposed small business innovative research (SBIR) project will design a prototype tilted target, exploring the materials needed for fabrication and testing at an irradiation facility to address issues of heat transfer and stresses within the target.
Advancing microwave technology for dehydration processing of biologics.
Cellemme, Stephanie L; Van Vorst, Matthew; Paramore, Elisha; Elliott, Gloria D
2013-10-01
Our prior work has shown that microwave processing can be effective as a method for dehydrating cell-based suspensions in preparation for anhydrous storage, yielding homogenous samples with predictable and reproducible drying times. In the current work an optimized microwave-based drying process was developed that expands upon this previous proof-of-concept. Utilization of a commercial microwave (CEM SAM 255, Matthews, NC) enabled continuous drying at variable low power settings. A new turntable was manufactured from Ultra High Molecular Weight Polyethylene (UHMW-PE; Grainger, Lake Forest, IL) to provide for drying of up to 12 samples at a time. The new process enabled rapid and simultaneous drying of multiple samples in containment devices suitable for long-term storage and aseptic rehydration of the sample. To determine sample repeatability and consistency of drying within the microwave cavity, a concentration series of aqueous trehalose solutions were dried for specific intervals and water content assessed using Karl Fischer titration at the end of each processing period. Samples were dried on Whatman S-14 conjugate release filters (Whatman, Maidstone, UK), a glass fiber membrane used currently in clinical laboratories. The filters were cut to size for use in a 13 mm Swinnex(®) syringe filter holder (Millipore(™), Billerica, MA). Samples of 40 μL volume could be dehydrated to the equilibrium moisture content by continuous processing at 20% power with excellent sample-to-sample repeatability. The microwave-assisted procedure enabled high throughput, repeatable drying of multiple samples, in a manner easily adaptable for drying a wide array of biological samples. Depending on the tolerance for sample heating, the drying time can be altered by changing the power level of the microwave unit.
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.
Hydropedology of a mildly-arid loess covered area, southern Israel
NASA Astrophysics Data System (ADS)
Yair, Aaron; Goldshleger, Naftali
2016-04-01
Extensive loess covered areas characterize the mildly arid areas of western Israel, where average annual rainfall is 280 mm. Hydrological data available point to a peculiar hydrological behavior of the ephemeral streams. The frequency of channel flow is very high. Four to eight flows are recorded annually. However, even in extreme rain events peak discharges are extremely low, representing 0.002-0.005% of the rain amount received by the basin at peak flow. In addition, hydrographs are usually characterized by very steep rising and falling limbs, representative of saturated or nearly saturated areas, extending over a limited part of the watershed. Following this observation we advanced the hypothesis that storm channel runoff originated in the channel itself, with negligible contribution from the adjoining hillslopes. The study was based on two complementary approaches. The hydrological approach was based on the detailed analysis of rainfall-runoff relationships in a small watershed (11 km2). The second approach was based on the toposequence concept. According to this concept, soil properties are closely related to the position of a soil along a slope. Constituents and water lost by the upper part of the slope accumulate in its lower part, which is richer in clay and better leached. Several boreholes were dug along a hillslope 400 m long. Soil samples were collected for chemical and particle size analysis. In addition, samples for soil moisture data were taken following each major rain event. Chemical data obtained show no significant observable difference in the downslope direction. Similar results were also obtained for the particle size distribution and soil moisture content. However, particle size distribution in the active channel reveals very high clay content down to 60 cm. Data obtained lead to two main conclusions. 1. Data presented perfectly fit the concept of "Partial Area Contribution", in its narrow sense, as it presents an extreme case of hydrological discontinuity at the hillslope-channel interface. The high water absorption of the clayey alluvium limits infiltration depth, resulting in a very high frequency of channel flow, even at low intensity rain events. The limited wet channel area is responsible for the low peak discharges, and for the steep shapes of most hydrographs. 2. The lack of pedological trends in the downslope direction is an additional indication of the limited connectivity between the hillslopes and the adjoining channel. The limited connectivity is attributed to the prevalence of low rain intensities in the study area. 90-95% of the rains are below 10 mm/hr., whereas final infiltration rates of the loamy-clayey soils are 10-15 mm/hr. Higher rain intensities do exist, but their duration is extremely short, drastically limiting flow distances and overland flow contribution to the channel. The present study is also relevant to our understanding of pedological processes in dry-land areas. The high frequency of the intermittent low intensity rainstorms limits runoff generation and flow distances, and casts doubt on the general application of the toposequence approach.
P and W propulsion systems studies results/status
NASA Technical Reports Server (NTRS)
Smith, Martin G., Jr.; Champagne, George A.
1992-01-01
The topics covered include the following: Pratt and Whitney (P&W) propulsion systems studies - NASA funded efforts to date; P&W engine concepts; P&W combustor focus - rich burn quick quench (RBQQ) concept; mixer ejector nozzle concept - large flow entrainment reduces jet noise; technology impact on NO(x) emissions - mature RBQQ combustor reduces NO(x) up to 85 percent; technology impact on sideline noise characteristics of Mach 2.4 turbine bypass engines (TBE's) - 600 lb/sec airflow size; technology impact on takeoff gross weight (TOGW) - provides up to 12 percent TOGW reduction; HSCT quiet engine concepts; TBE inlet valve/ejector nozzle concept schematic; mixed flow turbofan study; and exhaust nozzle conceptual design.
The topomer-sampling model of protein folding
Debe, Derek A.; Carlson, Matt J.; Goddard, William A.
1999-01-01
Clearly, a protein cannot sample all of its conformations (e.g., ≈3^100 ≈ 10^48 for a 100 residue protein) on an in vivo folding timescale (<1 s). To investigate how the conformational dynamics of a protein can accommodate subsecond folding time scales, we introduce the concept of the native topomer, which is the set of all structures similar to the native structure (obtainable from the native structure through local backbone coordinate transformations that do not disrupt the covalent bonding of the peptide backbone). We have developed a computational procedure for estimating the number of distinct topomers required to span all conformations (compact and semicompact) for a polypeptide of a given length. For 100 residues, we find ≈3 × 10^7 distinct topomers. Based on the distance calculated between different topomers, we estimate that a 100-residue polypeptide diffusively samples one topomer every ≈3 ns. Hence, a 100-residue protein can find its native topomer by random sampling in just ≈100 ms. These results suggest that subsecond folding of modest-sized, single-domain proteins can be accomplished by a two-stage process of (i) topomer diffusion: random, diffusive sampling of the 3 × 10^7 distinct topomers to find the native topomer (≈0.1 s), followed by (ii) intratopomer ordering: nonrandom, local conformational rearrangements within the native topomer to settle into the precise native state. PMID:10077555
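The headline numbers in this abstract can be reproduced with a few lines of arithmetic; the constants below (3 conformations per residue, 3 × 10^7 topomers, ≈3 ns per topomer) are taken directly from the abstract, and everything else is just bookkeeping.

```python
import math

residues = 100
conformations = 3 ** residues                 # ~3^100 possible backbone conformations
print(f"{float(conformations):.2e}")          # ~5e47, i.e. on the order of 10^48

n_topomers = 3e7                              # distinct topomers estimated in the paper
t_per_topomer = 3e-9                          # seconds per topomer (≈3 ns, from the paper)
print(f"topomer search time ≈ {n_topomers * t_per_topomer:.2f} s")   # ≈0.1 s
```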
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Wu, K, Chauncey; Smith, Russell W.
2008-01-01
The Lunar Architecture Team Phase 2 study defined and assessed architecture options for a Lunar Outpost at the Moon's South Pole. The Habitation Focus Element Team was responsible for developing concepts for all of the Habitats and pressurized logistics modules particular to each of the architectures, and defined the shapes, volumes and internal layouts considering human factors, surface operations and safety requirements, as well as Lander mass and volume constraints. The Structures Subsystem Team developed structural concepts, sizing estimates and mass estimates for the primary Habitat structure. In these studies, the primary structure was decomposed into a more detailed list of components to be sized to gain greater insight into concept mass contributors. Structural mass estimates were developed that captured the effect of major design parameters such as internal pressure load. Analytical and empirical equations were developed for each structural component identified. Over 20 different hard-shell, hybrid expandable and inflatable soft-shell Habitat and pressurized logistics module concepts were sized and compared to assess structural performance and efficiency during the study. Habitats were developed in three categories; Mini Habs that are removed from the Lander and placed on the Lunar surface, Monolithic habitats that remain on the Lander, and Habitats that are part of the Mobile Lander system. Each category of Habitat resulted in structural concepts with advantages and disadvantages. The same modular shell components could be used for the Mini Hab concept, maximizing commonality and minimizing development costs. Larger Habitats had higher volumetric mass efficiency and floor area than smaller Habitats (whose mass was dominated by fixed items such as domes and frames). Hybrid and pure expandable Habitat structures were very mass-efficient, but the structures technology is less mature, and the ability to efficiently package and deploy internal subsystems remains an open issue.
Hofer, Philipp; Fiegl, Heidi; Angerer, Justina; Mueller-Holzner, Elisabeth; Chamson, Martina; Klocker, Helmut; Steiner, Eberhardt; Hauffe, Helga; Zschocke, Johannes; Goebel, Georg
2014-01-01
Knowledge about the quality of samples and associated clinical data in biospecimen collections is a prerequisite for clinical research. An electronic biosample register aims to facilitate the discovery of information about biosample collections in a hospital. Moreover, it might improve scientific collaboration and research quality through a shared access to harmonized sample collection description data. The aim of this paper is to present a concept of a web-based biosample register of the existing biosample collections at the Medical University of Innsbruck. A uniform description model is built based on an analysis of the sample collection data of independent sample management systems from two departments within the hospital. An extended set of attributes of the minimum dataset used by the Swedish sample collection register (MIABIS) has been applied to all biosample collections as a common description model. The results of the analysis and the data model are presented together with a first concept of a sample collection search register.
Electrical and magnetic properties of nano-sized magnesium ferrite
NASA Astrophysics Data System (ADS)
T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.
2015-02-01
Nano-sized magnesium ferrite was synthesized using sol-gel techniques. Structural characterization was done using an X-ray diffractometer and a Fourier Transform Infrared Spectrometer. A Vibrating Sample Magnetometer was used to record the magnetic measurements. XRD analysis reveals the prepared sample is single phase without any impurity. Particle size calculation shows the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. Magnetic measurement study shows that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.
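The abstract does not state how the 19 nm crystallite size was obtained; the common route for XRD data is the Scherrer equation, D = Kλ/(β cos θ). The snippet below is a generic illustration with hypothetical peak parameters chosen to land near that size, not the authors' calculation.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size (nm) from XRD peak broadening via the Scherrer equation
    D = K * lambda / (beta * cos(theta)), with beta the FWHM in radians.
    Cu K-alpha wavelength and K = 0.9 are assumed defaults."""
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical peak: FWHM of 0.45 degrees at 2-theta = 35.5 degrees -> ~18-19 nm.
print(f"{scherrer_size(0.45, 35.5):.1f} nm")
```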
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent and provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by the bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculations by mathematical formulas (normal distribution assumption) for the identical data are also carried out. Consequently, the power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the process of bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during the course of bootstrap sample size estimation, provided there are historical true data available that are well representative of the population to which the proposed trial is planning to extrapolate.
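A minimal sketch of the bootstrap approach described here, assuming a hypothetical skewed pilot dataset and using the Wilcoxon rank-sum (Mann-Whitney) test inside the resampling loop, as the paper recommends for non-normal data; the per-arm sizes and distributions are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000, alpha=0.05):
    """Estimate power at a given per-arm sample size by resampling two pilot
    arms with replacement and testing each bootstrap pair with the Wilcoxon
    rank-sum (Mann-Whitney) test."""
    rejections = 0
    for _ in range(n_boot):
        a = rng.choice(pilot_a, size=n_per_arm, replace=True)
        b = rng.choice(pilot_b, size=n_per_arm, replace=True)
        if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
            rejections += 1
    return rejections / n_boot

# Hypothetical skewed pilot data (log-normal), violating the normality assumption.
pilot_a = rng.lognormal(mean=0.0, sigma=0.8, size=40)
pilot_b = rng.lognormal(mean=0.4, sigma=0.8, size=40)

for n in (20, 40, 80, 160):
    print(n, round(bootstrap_power(pilot_a, pilot_b, n), 3))
```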
Table-sized matrix model in fractional learning
NASA Astrophysics Data System (ADS)
Soebagyo, J.; Wahyudin; Mulyaning, E. C.
2018-05-01
This article explains a fraction learning model, the Table-Sized Matrix model, in which fraction representation and its operations are symbolized by a matrix. The Table-Sized Matrix is employed to develop problem solving capabilities, as is the area model. The Table-Sized Matrix model referred to in this article is used to develop elementary school students' understanding of the fraction concept, which can then be generalized into procedural fluency (algorithms) in solving fraction problems and their operations.
NASA Astrophysics Data System (ADS)
Shah, S.; Gray, F.; Yang, J.; Crawshaw, J.; Boek, E.
2016-12-01
Advances in 3D pore-scale imaging and computational methods have allowed an exceptionally detailed quantitative and qualitative analysis of the fluid flow in complex porous media. A fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, starting from the smallest pore spaces. In this study, a novel method is presented for determining the representative elementary volume (REV) of a rock for several parameters simultaneously. We calculate the two main macroscopic petrophysical parameters, porosity and single-phase permeability, using micro CT imaging and Lattice Boltzmann (LB) simulations for 14 different porous media, including sandpacks, sandstones and carbonates. The concept of the `Convex Hull' is then applied to calculate the REV for both parameters simultaneously using a plot of the area of the convex hull as a function of the sub-volume, capturing the different scales of heterogeneity from the pore-scale imaging. The results also show that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size suggesting a computationally efficient way to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
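The convex-hull REV idea can be sketched as follows. The data here are synthetic stand-ins (the spread of sub-volume porosity and log-permeability is simply assumed to shrink with sub-volume size); only the hull-area bookkeeping mirrors the approach described in the abstract.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)

def hull_area(porosity, log_perm):
    """Area of the 2D convex hull of (porosity, log k) points from sub-volumes."""
    pts = np.column_stack([porosity, log_perm])
    return ConvexHull(pts).volume   # for 2D points, .volume is the enclosed area

# Synthetic stand-in for sub-volume measurements: the spread (and hence the
# hull area) is assumed to shrink as the sub-volume edge length L grows.
for L in (50, 100, 200, 400):
    n_sub = 64
    spread = 1.0 / np.sqrt(L)                # assumed scaling, for illustration only
    porosity = 0.20 + spread * rng.standard_normal(n_sub) * 0.05
    log_perm = 2.0 + spread * rng.standard_normal(n_sub) * 0.3
    print(f"L = {L:3d} voxels  hull area = {hull_area(porosity, log_perm):.2e}")
```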
Quantification of soil structure based on Minkowski functions
NASA Astrophysics Data System (ADS)
Vogel, H.-J.; Weller, U.; Schlüter, S.
2010-10-01
The structure of soils and other geologic media is a complex three-dimensional object. Most of the physical material properties including mechanical and hydraulic characteristics are immediately linked to the structure given by the pore space and its spatial distribution. It is an old dream and still a formidable challenge to relate structural features of porous media to their functional properties. Using tomographic techniques, soil structure can be directly observed at a range of spatial scales. In this paper we present a scale-invariant concept to quantify complex structures based on a limited set of meaningful morphological functions. They are based on d+1 Minkowski functionals as defined for d-dimensional bodies. These basic quantities are determined as a function of pore size or aggregate size obtained by filter procedures using mathematical morphology. The resulting Minkowski functions provide valuable information on the size of pores and aggregates, the pore surface area and the pore topology having the potential to be linked to physical properties. The theoretical background and the related algorithms are presented and the approach is demonstrated for the pore structure of an arable soil and the pore structure of a sand both obtained by X-ray micro-tomography. We also analyze the fundamental problem of limited resolution which is critical for any attempt to quantify structural features at any scale using samples of different size recorded at different resolutions. The results demonstrate that objects smaller than 5 voxels are critical for quantitative analysis.
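As a rough illustration of the quantities involved, the sketch below estimates the two lowest-order Minkowski functionals (pore volume fraction and a voxel-face surface-area density) for a binary 3D image by direct voxel counting. This is a deliberately crude stand-in for the filter-based Minkowski functions described in the paper, and it omits the curvature and topology terms.

```python
import numpy as np

def minkowski_3d(binary):
    """Crude voxel-based estimates of the first two Minkowski functionals of a
    3D binary pore image: pore volume fraction and specific surface area
    (exposed voxel faces per unit volume). Higher-order functionals (mean
    curvature, Euler characteristic) need more careful counting and are omitted."""
    binary = binary.astype(bool)
    volume_fraction = binary.mean()
    padded = np.pad(binary, 1, mode="constant", constant_values=False)
    faces = 0
    for axis in range(3):
        # Count pore/solid transitions along each axis: one transition = one face.
        diff = padded != np.roll(padded, 1, axis=axis)
        faces += diff.sum()
    surface_per_volume = faces / binary.size
    return volume_fraction, surface_per_volume

# Toy example: a spherical pore inside a 64^3 sample.
n = 64
x, y, z = np.indices((n, n, n))
pore = (x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2 < (n / 4) ** 2
phi, s = minkowski_3d(pore)
print(f"porosity = {phi:.3f}, surface area per voxel volume = {s:.4f}")
```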
Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.
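One way to probe the minimum-count-per-sample question numerically is to rarefy a community matrix to different counts and check how well the resulting dissimilarity structure matches the full data. The sketch below uses a hypothetical 20 x 30 community matrix and Bray-Curtis distances; it illustrates the general idea and is not the authors' meta-analytic or simulation protocol.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

def rarefy(counts, n):
    """Randomly draw n individuals (without replacement) from one sample's counts."""
    pool = np.repeat(np.arange(counts.size), counts)
    drawn = rng.choice(pool, size=n, replace=False)
    return np.bincount(drawn, minlength=counts.size)

# Hypothetical community matrix: 20 samples x 30 taxa with uneven abundances.
full = rng.poisson(lam=np.geomspace(40, 0.5, 30), size=(20, 30))

d_full = pdist(full, metric="braycurtis")
for n in (30, 58, 100, 200):
    sub = np.array([rarefy(row, min(n, row.sum())) for row in full])
    d_sub = pdist(sub, metric="braycurtis")
    r, _ = pearsonr(d_full, d_sub)
    print(f"n = {n:3d} individuals/sample  correlation with full data = {r:.3f}")
```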
Frictional behaviour of sandstone: A sample-size dependent triaxial investigation
NASA Astrophysics Data System (ADS)
Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus
2017-01-01
Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.
Structural Configuration Systems Analysis for Advanced Aircraft Fuselage Concepts
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek; Welstead, Jason R.; Quinlan, Jesse R.; Guynn, Mark D.
2016-01-01
Structural configuration analysis of an advanced aircraft fuselage concept is investigated. This concept is characterized by a double-bubble section fuselage with rear mounted engines. Based on lessons learned from structural systems analysis of unconventional aircraft, high-fidelity finite-element models (FEM) are developed for evaluating structural performance of three double-bubble section configurations. Structural sizing and stress analysis are applied for design improvement and weight reduction. Among the three double-bubble configurations, the double-D cross-section fuselage design was found to have a relatively lower structural weight. The structural FEM weights of these three double-bubble fuselage section concepts are also compared with several cylindrical fuselage models. Since these fuselage concepts are different in size, shape and material, the fuselage structural FEM weights are normalized by the corresponding passenger floor area for a relative comparison. This structural systems analysis indicates that an advanced composite double-D section fuselage may have a relative structural weight ratio advantage over a conventional aluminum fuselage. Ten commercial and conceptual aircraft fuselage structural weight estimates, which are empirically derived from the corresponding maximum takeoff gross weight, are also presented and compared with the FEM-based estimates for possible correlation. A conceptual full vehicle FEM model with a double-D fuselage is also developed for preliminary structural analysis and weight estimation.
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Let's Learn about Colors, Shapes, and Sizes. Preschool-2nd Grade.
ERIC Educational Resources Information Center
Courson, Diana
The focus of this booklet is on matching, recognizing, and identifying colors, shapes, and sizes. Concept development as well as vocabulary learning are goals. A variety of materials are used in the activities. Activities for children from preschool through grade 2 are grouped by topic. (MNS)
A computer program for sample size computations for banding studies
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
1989-01-01
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
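The precision-driven logic of such a program can be illustrated with a deliberately simplified sketch: treating the annual survival estimate as a binomial proportion is an assumption made here only for illustration; the actual program is built on band-recovery model variances, which are more detailed.

```python
import math

def bands_needed(survival, target_cv):
    """Banded birds needed so that the coefficient of variation of a
    binomial survival estimate, sqrt((1 - S) / (n * S)), is at or below
    target_cv. Simplified stand-in for the band-recovery variances
    used by the actual program."""
    return math.ceil((1 - survival) / (survival * target_cv ** 2))

# Example: annual survival near 0.6 estimated with a 10% CV needs ~67 birds.
print(bands_needed(0.6, 0.10))
```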
Probability of coincidental similarity among the orbits of small bodies - I. Pairing
NASA Astrophysics Data System (ADS)
Jopek, Tadeusz Jan; Bronikowska, Małgorzata
2017-09-01
The probability of coincidental clustering among the orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it differs for groups of 2, 3, 4, … members. Because the probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. For the user's convenience, we have also obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose an original two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot-size cases. The first case is a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
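The full TRASP design is a two-rank optimization, but the two distributional cases described above can be illustrated with a brute-force single-rank sketch; the AQL, limiting quality level (LQL), risk levels and lot size below are illustrative assumptions, not values from the paper.

```python
from scipy.stats import hypergeom, poisson

def accept_prob(n, c, lot_size, defect_rate, small_lot=True):
    """Probability that a lot is accepted: P(defects found in sample <= c)."""
    if small_lot:  # small lots: hypergeometric nonconformity model
        defects = round(defect_rate * lot_size)
        return hypergeom.cdf(c, lot_size, defects, n)
    return poisson.cdf(c, n * defect_rate)  # large lots: Poisson model

def design_plan(lot_size, aql, lql, alpha=0.05, beta=0.10, small_lot=True):
    """Brute-force search for the smallest sample size n and acceptance number c
    with producer's risk <= alpha at the AQL and consumer's risk <= beta at the LQL."""
    for n in range(1, lot_size + 1):
        for c in range(n + 1):
            if (accept_prob(n, c, lot_size, aql, small_lot) >= 1 - alpha and
                    accept_prob(n, c, lot_size, lql, small_lot) <= beta):
                return n, c
    return None

print(design_plan(lot_size=500, aql=0.01, lql=0.08))
```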
Herzog, Sereina A; Low, Nicola; Berghold, Andrea
2015-06-19
The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.
Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review
Morris, Tom; Gray, Laura
2017-01-01
Objectives: To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: Any, not limited to healthcare settings. Participants: Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures: The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results: Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions: Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration, and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green-body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the optimization of the drying step, large-size spinel samples were obtained.
Jorgenson, Andrew K; Clark, Brett
2013-01-01
This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.
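A minimal sketch of this kind of elasticity model, assuming a log-log specification with country and year fixed effects on toy data; the data, variable names and elasticity value below are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: ln(CO2 emissions) on ln(population) with two-way fixed effects;
# the coefficient on ln_pop is the population elasticity of emissions.
rng = np.random.default_rng(0)
rows = [{"country": c, "year": y,
         "ln_pop": 10 + 0.02 * (y - 1960) + rng.normal(0, 0.1)}
        for c in range(30) for y in range(1960, 2006, 5)]
df = pd.DataFrame(rows)
df["ln_co2"] = 0.8 * df["ln_pop"] + rng.normal(0, 0.2, len(df))

fe = smf.ols("ln_co2 ~ ln_pop + C(country) + C(year)", data=df).fit()
print(fe.params["ln_pop"])  # estimated elasticity (true value 0.8 in this toy)
```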
A generative model for scientific concept hierarchies.
Datta, Srayan; Adar, Eytan
2018-01-01
In many scientific disciplines, each new 'product' of research (method, finding, artifact, etc.) is often built upon previous findings, leading to extension and branching of scientific concepts over time. We aim to understand the evolution of scientific concepts by placing them in phylogenetic hierarchies where scientific keyphrases from a large, longitudinal academic corpus are used as a proxy for scientific concepts. These hierarchies exhibit various important properties, including a power-law degree distribution, a power-law component size distribution, the existence of a giant component, and a lower probability of extending an older concept. We present a generative model based on preferential attachment to simulate the graphical and temporal properties of these hierarchies, which helps us understand the underlying process behind scientific concept evolution and may be useful in simulating and predicting scientific evolution.
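A hedged sketch of the kind of generative process described, growing a toy hierarchy by preferential attachment with an ageing penalty so that older concepts are less likely to be extended; the attachment kernel and parameter values are assumptions for illustration, not the paper's exact model.

```python
import random
from collections import Counter

def grow_concept_tree(n_nodes, age_decay=0.95, seed=0):
    """Each new keyphrase node attaches to an existing node with probability
    proportional to degree * age_decay**age: degree-based preferential
    attachment damped for older concepts (illustrative parameterisation)."""
    random.seed(seed)
    degree, birth, parent = {0: 1}, {0: 0}, {}
    for new in range(1, n_nodes):
        weights = [degree[v] * age_decay ** (new - birth[v]) for v in range(new)]
        chosen = random.choices(range(new), weights=weights, k=1)[0]
        parent[new] = chosen
        degree[chosen] += 1
        degree[new], birth[new] = 1, new
    return parent

tree = grow_concept_tree(1000)
print(Counter(tree.values()).most_common(3))  # most frequently extended concepts
```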
A Vygotskian analysis of preservice teachers' conceptions of dissolving and density
NASA Astrophysics Data System (ADS)
Shaker elJishi, Ziad
The purpose of this study was to examine the content knowledge of 64 elementary preservice teachers for the concepts of dissolving and density. Vygotsky's (1987) theory of concept development was used as a framework to categorize concepts and misconceptions from evidence of preservice teacher knowledge, including pre/post concept maps, writing artifacts, pre/post face-to-face interviews, examination results, and drawings. Statistically significant differences were found between pre- and post-concept map scores for dissolving (t = -5.773, p < 0.001) and density (t = -2.948, p = 0.005). As measured using Cohen's d values, increases in mean scores showed a medium-large effect size for dissolving and a small effect size for density. The triangulated results using all data types revealed that preservice teachers held several robust misconceptions about dissolving, including the explanation that dissolving is a breakdown of substances, a formation of mixtures, and/or involves chemical change. Most preservice teachers relied on concrete concepts (such as rate or solubility) to explain dissolving. With regard to density, preservice teachers held two robust misconceptions: confusing density with buoyancy to explain the phenomena of floating and sinking, and confusing density with heaviness, mass, and weight. Most preservice teachers gained one concept for density, the density algorithm. Most preservice teachers who participated in this study demonstrated Vygotsky's notion of complex thinking and were unable to transform their thinking to the scientific conceptual level. That is, they were unable to articulate an understanding of either the process of dissolving or density that included a unified system of knowledge characterized as abstract, generalizable and hierarchical. Results suggest the need to instruct preservice elementary science teachers about the particulate nature of matter, intermolecular forces, and Archimedes' principle.
Schmidt, Christian; Öner, Alper; Mann, Miriam; Krockenberger, Katja; Abbondanzieri, Melanie; Brandewiede, Bernard; Brüge, Armin; Hostenkamp, Gisela; Kaiser, Axel; Neumeyer, Henriette; Ziegler, Andreas
2018-02-20
Cardiovascular diseases are the major cause of death globally and represent a major economic burden on health care systems. Positive effects of disease management programs have been shown for patients with heart failure (HF). Remote monitoring and telemonitoring with active intervention are beneficial in atrial fibrillation (AF) and therapy-resistant hypertension (TRH), respectively. For these patients, we have developed a novel integrated care concept (NICC) which combines telemedicine with intensive support by a care center, including a call center, an integrated care network of inpatient and outpatient care providers, and guideline therapy for patients. The aim of the study is to demonstrate the superiority of NICC over guideline therapy alone. The trial is designed as an open-label, bi-center, parallel-group trial with two groups and a blinded observer. Patients will be included if they are either inpatients or if they are referred to the outpatient clinic of the hospitals by their treating physician. Randomization will be done individually with stratification by cardiovascular disease (AF, HF, TRH), center and admission type. Primary endpoints are based on the 1-year observation period after randomization. The first primary endpoint is the composite endpoint consisting of mortality, stroke and myocardial infarction. The number of hospitalizations forms the second primary endpoint. The third primary endpoint is identical to the first primary endpoint plus cardiac decompensation. Adjustments for multiple testing are done using a fall-back strategy. Secondary endpoints include patient adherence, health care costs, quality of life, and safety. A sample size of 2930 gives 80% power at the two-sided 2.5% test level for the first primary endpoint. The power for the second primary endpoint is 99.8% at this sample size, and it is 80% with 1086 patients. This study will inform care providers whether quality of care can be improved by an integrated care concept providing telemedicine through a round-the-clock call center approach. We expect that the cost of the NICC will be lower than that of standard care because of reduced hospitalizations. If the study has a positive result, NICC is planned to be rolled out immediately in the federal state of Mecklenburg-West Pomerania and other federal states in Germany. The trial will also guide additional research to disentangle the effects of this complex intervention. DRKS, ID: DRKS00013124. Registered on 5 October 2017; ClinicalTrials.gov, ID: NCT03317951. Registered on 17 October 2017.
Using an FPLC to promote active learning of the principles of protein structure and purification.
Robinson, Rebekah L; Neely, Amy E; Mojadedi, Wais; Threatt, Katie N; Davis, Nicole Y; Weiland, Mitch H
2017-01-02
The concepts of protein purification are often taught in undergraduate biology and biochemistry lectures and reinforced during laboratory exercises; however, very few reported activities allow students to directly gain experience using modern protein purification instruments, such as Fast Protein Liquid Chromatography (FPLC). This laboratory exercise uses size exclusion chromatography (SEC) and ion exchange (IEX) chromatography to separate a mixture of four different proteins. Students use an SEC chromatogram and corresponding SDS-PAGE gel to understand how protein conformations change under different conditions (i.e. native and non-native). Students explore strategies to separate co-eluting proteins by IEX chromatography. Using either cation or anion exchange, one protein is bound to the column while the other is collected in the flow-through. In this exercise, undergraduate students gain hands-on experience with experimental design, buffer and sample preparation, and implementation of instrumentation that is commonly used by experienced researchers while learning and applying the fundamental concepts of protein structure, protein purification, and SDS-PAGE. © 2016 by The International Union of Biochemistry and Molecular Biology, 45(1):60-68, 2017. © 2016 The International Union of Biochemistry and Molecular Biology.
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
Oral literacy demand of preventive dental visits in a pediatric medical office: a pilot study.
Kranz, Ashley M; Pahel, Bhavna T; Rozier, R Gary
2013-01-01
The purpose of this study was to examine the oral literacy demands placed on parents of young children during preventive dental visits in a pediatric medical office. Transcripts of audio recordings for 15 pediatric medical visits were analyzed to assess the oral literacy demand of the visit, as measured by use of terminology, language complexity, and structural characteristics of the dialogue. Parent-completed surveys were used to determine recall of dental concepts discussed during the visit. Pearson's correlation coefficients were calculated to identify relationships among these measures and parental recall of the visit. Visits were interactive and used limited jargon and uncomplicated language. Oral literacy demand measures were associated with each other. Parental recall of the visit was associated with measures of high oral literacy demand. Assessing measures of oral literacy demand is a novel method for examining provider communication used during preventive dental visits in a pediatric medical office. Providers displayed low oral literacy demand when communicating with parents. Parental recall of dental concepts, however, was associated unexpectedly with higher oral literacy demand. Further research should examine a larger sample size and the effect of measures of oral literacy demand among low- and high-literacy patients.
Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.
Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth
2018-01-01
Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.
Micromachined edge illuminated optically transparent automotive light guide panels
NASA Astrophysics Data System (ADS)
Ronny, Rahima Afrose; Knopf, George K.; Bordatchev, Evgueni; Tauhiduzzaman, Mohammed; Nikumb, Suwas
2012-03-01
Edge-lit backlighting has been used extensively for a variety of small and medium-sized liquid crystal displays (LCDs). The shape, density and spatial distribution pattern of the micro-optical elements imprinted on the surface of the flat light-guide panel (LGP) are often "optimized" to improve the overall brightness and luminance uniformity. A similar concept can be used to develop interior convenience lighting panels and exterior tail lamps for automotive applications. However, costly diffusive sheeting and brightness enhancement films are not considered for these applications because absolute luminance uniformity and the minimization of Moiré fringe effects are not significant factors in assessing the quality of automotive lighting. A new design concept that involves micromilling cylindrical micro-optical elements on optically transparent plastic substrates is described in this paper. The variable parameter that controls illumination over the active regions of the panel is the depth of the individual cylindrical micro-optical elements. LightTools™ is the optical simulation tool used to explore how changing the micro-optical element depth can alter the local and global luminance. Numerical simulations and microfabrication experiments are performed on several (100 mm × 100 mm × 6 mm) polymethylmethacrylate (PMMA) test samples in order to verify the illumination behavior.
Genetics and Cinema: Personal Misconceptions That Constitute Obstacles to Learning
ERIC Educational Resources Information Center
Muela, Francisco Javier; Abril, Ana María
2014-01-01
The primary objective of this paper is to find out whether the genetic concepts conveyed by cinema could encourage students' personal misconceptions in this area. To that end, two sources of conceptions were compared: the students' personal concepts (from a consolidated bibliography and from an experimental sample) and the concepts conveyed by…
The Development of the Sexual Self-Concept Inventory for Early Adolescent Girls
ERIC Educational Resources Information Center
O'Sullivan, Lucia F.; Meyer-Bahlburg, Heino F. L.; McKeague, Ian W.
2006-01-01
The Sexual Self-Concept Inventory (SSCI) was developed to assess sexual self-concept in an ethnically diverse sample of urban early adolescent girls. Three scales (Sexual Arousability, Sexual Agency, and Negative Sexual Affect) were shown to be distinct and reliable dimensions of girls' sexual self-concepts. Validity was established through…
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for reference interval (RI) construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample sizes. The purpose of the study was to evaluate the performance of normality tests in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best-performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample sizes, normality tests may lead to the erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
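The simulation logic can be sketched in a few lines with scipy; the lognormal shape parameter and the labelling of the two rates below are assumptions for illustration, and the study's own definitions of sensitivity and specificity should be taken from the paper itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps, alpha = 30, 1000, 0.05

# Fraction of truly Gaussian samples that Shapiro-Wilk retains as Gaussian.
kept = 0
for _ in range(reps):
    _, p = stats.shapiro(rng.normal(size=n))
    kept += p >= alpha

# Fraction of lognormal samples correctly flagged as non-Gaussian.
flagged = 0
for _ in range(reps):
    _, p = stats.shapiro(rng.lognormal(sigma=0.5, size=n))
    flagged += p < alpha

print(kept / reps, flagged / reps)
```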
How Many "Friends" Do You Need? Teaching Students How to Network Using Social Media
ERIC Educational Resources Information Center
Sacks, Michael Alan; Graves, Nikki
2012-01-01
Student reliance on social media is undeniable. However, while we largely regard social media as a new phenomenon, the concepts underlying it come directly from social network theory in sociology and organizational behavior. In this article, the authors examine how the social network concepts of size, quality, complexity, diffusion, and distance…
The Effects of Word-Learning Biases on Children's Concept of Angle
ERIC Educational Resources Information Center
Gibson, Dominic J.; Congdon, Eliza L.; Levine, Susan C.
2015-01-01
Despite evidence that young children are sensitive to differences in angle measure, older students frequently struggle to grasp this important mathematical concept. When making judgments about the size of angles, children often rely on erroneous dimensions such as the length of the angles' sides. The present study tested the possibility that…
DOT National Transportation Integrated Search
2012-10-31
This zip file contains 45 files of data to support FHWA-JPO-13-063 Response, Emergency Staging, Communications, Uniform Management, and Evacuation (R.E.S.C.U.M.E.) : Concept of Operations. Zip size is 9.9 MB. The files have been uploaded as-is; no fu...
Projectile Activity for the Laboratory: A Safe and Inexpensive Approach to Several Concepts
ERIC Educational Resources Information Center
Farkas, N.; Ramsier, R. D.
2006-01-01
We present a simple laboratory activity for introductory-level physics students which involves rolling balls down pipes and analysing their subsequent flight trajectories. Using balls of equal size but different mass allows students to confront their misconceptions of a mass dependence of the exit speed of the balls from the pipes. The concepts of…
NASA Astrophysics Data System (ADS)
Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan
2002-02-01
Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.
Gussy, M; Kilpatrick, N
2006-09-01
To pilot the use of a multidimensional/hierarchical measurement instrument called the self-description questionnaire II to determine whether specific areas of self-concept in a group of adolescents with cleft lip and palate would be affected by their condition when compared with a normative sample. The self-concept of 23 adolescents with a cleft of the lip and palate was compared to an Australian normative sample. Adolescents attending the dental department of a paediatric hospital in Australia. The main outcome measure was a self-report questionnaire (102 items) with 10 domain-specific scales and a global measure of general self-concept. When compared to the normative data the study group showed significant differences in 4 of the 11 domain-specific scales: Parent Relations (P < 0.001), Physical Abilities (P < 0.001), Opposite-Sex Relations (P < 0.01) and Physical Appearance (P < 0.01) self-concepts. These differences were in a positive direction. Global self-concept as measured by the General Self scale was not significantly different from the normative sample. These results suggest that adolescents with clefts of the lip and palate have normative if not better self-concept than their peers. The study also suggests that having a cleft of the lip and palate has specific rather than broad associations with psychosocial adjustment. This justifies the use of instruments designed to assess specific areas of self-concept rather than more global measures.
NASA Astrophysics Data System (ADS)
Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine
2018-05-01
The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
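A minimal sketch of the margin-of-error criterion for the minimum number of realizations at a fixed RVE size, assuming a pilot estimate of the standard deviation of the property across realizations and a t-based confidence interval; the numerical values are illustrative.

```python
import math
from scipy import stats

def min_realizations(sample_std, target_margin, confidence=0.95):
    """Smallest number of realizations n such that the confidence-interval
    half-width t * s / sqrt(n) falls at or below the target margin of error."""
    n = 2
    while True:
        t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        if t * sample_std / math.sqrt(n) <= target_margin:
            return n
        n += 1

# Pilot runs give s = 0.08 GPa; require the mean modulus within +/- 0.02 GPa.
print(min_realizations(sample_std=0.08, target_margin=0.02))
```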
Lower-Cost, Relocatable Lunar Polar Lander and Lunar Surface Sample Return Probes
NASA Technical Reports Server (NTRS)
Amato, G. Michael; Garvin, James B.; Burt, I. Joseph; Karpati, Gabe
2011-01-01
Key science and exploration objectives of lunar robotic precursor missions can be achieved with the Lunar Explorer (LEx) low-cost, robotic surface mission concept described herein. Selected elements of the LEx concept can also be used to create a lunar surface sample return mission that we have called Boomerang.
An alternative view of continuous forest inventories
Francis A. Roesch
2008-01-01
A generalized three-dimensional concept of continuous forest inventories applicable to all common forest sample designs is presented and discussed. The concept recognizes the forest through time as a three-dimensional population, two dimensions in land area and the third in time. The sample is selected from a finite three-dimensional partitioning of the population. The...
Using large volume samplers for the monitoring of particle bound micro pollutants in rivers
NASA Astrophysics Data System (ADS)
Kittlaus, Steffen; Fuchs, Stephan
2015-04-01
The requirements of the WFD as well as substance emission modelling at the river basin scale require stable monitoring data for micro pollutants. The monitoring concepts applied by local authorities as well as by many scientists use single sampling techniques. Samples from water bodies are usually taken in volumes of about one litre, either at predetermined time steps or triggered by discharge thresholds. For predominantly particle-bound micro pollutants, the small sample size of about one litre yields only a very small amount of suspended particles. Measuring micro pollutant concentrations in such samples is demanding and results in high uncertainty, if the concentration is above the detection limit in the first place. In many monitoring programs most of the measured values were below the detection limit, which leads to high uncertainty when river loads are calculated from these data sets. The authors propose a different approach to obtain stable concentration values for particle-bound micro pollutants from river monitoring: a mixed sample of about 1000 L is pumped into a tank with a dirty-water pump. Sampling is usually discharge-dependent, using a gauge signal as input for the control unit. After the discharge event is over, or the tank is completely filled, the suspended solids are left to settle in the tank for 2 days, after which a clear separation of water and solids can be observed. A sample (1 L) from the water phase and the total mass of the settled solids (about 10 L) are taken to the laboratory for analysis. While the micro pollutants can hardly be detected in the water phase, the signal from the sediment is well above the detection limit and therefore very stable. From the pollutant concentration in the solid phase and the total tank volume, the initial pollutant concentration in the sample can be calculated. If the concentration in the water phase is detectable, it can be used to correct the total load. This relatively low-cost approach (lower analysis costs because of the small number of samples) makes it possible to quantify the pollutant load, to derive dissolved-solid partition coefficients, and to quantify the pollutant load in different particle-size classes.
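The back-calculation from the settled solids to the original large-volume sample is a simple mass balance; a sketch, with units and numbers assumed for illustration.

```python
def whole_water_concentration(tank_volume_l, solids_mass_kg,
                              conc_solids_mg_per_kg, conc_water_mg_per_l=0.0):
    """Pollutant concentration in the original large-volume sample from the
    settled-solids analysis, optionally corrected with the dissolved-phase
    result when it is above the detection limit (simple mass balance)."""
    total_mass_mg = (solids_mass_kg * conc_solids_mg_per_kg
                     + tank_volume_l * conc_water_mg_per_l)
    return total_mass_mg / tank_volume_l  # mg per litre of river water

# 1000 L sample, 0.25 kg of dry solids at 4 mg/kg, dissolved phase not detected
print(whole_water_concentration(1000, 0.25, 4.0))  # 0.001 mg/L
```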
ERIC Educational Resources Information Center
Khan, Suhail Ahmed
2010-01-01
The self-concept is the sum of all your thoughts, feelings and beliefs about yourself. The self-concept may be positive or negative. This paper focuses on the self-concept of secondary school teachers and its relationship with their adjustment. The research was carried out in Aurangabad, Maharashtra, on a sample of 50 teachers. Self-concept of teachers…
Property-Based Software Engineering Measurement
NASA Technical Reports Server (NTRS)
Briand, Lionel; Morasca, Sandro; Basili, Victor R.
1995-01-01
Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysis, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact, and rigorous, because it is based on precise mathematical concepts. This framework defines several important measurement concepts (size, length, complexity, cohesion, coupling). It is not intended to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalism and properties we introduce are convenient and intuitive. In addition, we have reviewed the literature on this subject and compared it with our work. This framework contributes constructively to a firmer theoretical ground of software measurement.
Multi-pack Disposal Concepts for Spent Fuel (Rev. 0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Hardin, Ernest; Matteo, Edward N.
2015-12-01
At the initiation of the Used Fuel Disposition (UFD) R&D campaign, international geologic disposal programs and past work in the U.S. were surveyed to identify viable disposal concepts for crystalline, clay/shale, and salt host media (Hardin et al., 2012). Concepts for disposal of commercial spent nuclear fuel (SNF) and high-level waste (HLW) from reprocessing are relatively advanced in countries such as Finland, France, and Sweden. The UFD work quickly showed that these international concepts are all “enclosed,” whereby waste packages are emplaced in direct or close contact with natural or engineered materials. Alternative “open” modes (emplacement tunnels are kept open after emplacement for extended ventilation) have been limited to the Yucca Mountain License Application Design (CRWMS M&O, 1999). Thermal analysis showed that, if “enclosed” concepts are constrained by peak package/buffer temperature, waste package capacity is limited to 4 PWR assemblies (or 9-BWR) in all media except salt. This information motivated separate studies: 1) extend the peak temperature tolerance of backfill materials, which is ongoing; and 2) develop small canisters (up to 4-PWR size) that can be grouped in larger multi-pack units for convenience of storage, transportation, and possibly disposal (should the disposal concept permit larger packages). A recent result from the second line of investigation is the Task Order 18 report: Generic Design for Small Standardized Transportation, Aging and Disposal Canister Systems (EnergySolution, 2015). This report identifies disposal concepts for the small canisters (4-PWR size) drawing heavily on previous work, and for the multi-pack (16-PWR or 36-BWR).
Multi-Pack Disposal Concepts for Spent Fuel (Revision 1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest; Matteo, Edward N.; Hadgu, Teklu
2016-01-01
At the initiation of the Used Fuel Disposition (UFD) R&D campaign, international geologic disposal programs and past work in the U.S. were surveyed to identify viable disposal concepts for crystalline, clay/shale, and salt host media. Concepts for disposal of commercial spent nuclear fuel (SNF) and high-level waste (HLW) from reprocessing are relatively advanced in countries such as Finland, France, and Sweden. The UFD work quickly showed that these international concepts are all “enclosed,” whereby waste packages are emplaced in direct or close contact with natural or engineered materials. Alternative “open” modes (emplacement tunnels are kept open after emplacement for extended ventilation) have been limited to the Yucca Mountain License Application Design. Thermal analysis showed that, if “enclosed” concepts are constrained by peak package/buffer temperature, waste package capacity is limited to 4 PWR assemblies (or 9 BWR) in all media except salt. This information motivated separate studies: 1) extend the peak temperature tolerance of backfill materials, which is ongoing; and 2) develop small canisters (up to 4-PWR size) that can be grouped in larger multi-pack units for convenience of storage, transportation, and possibly disposal (should the disposal concept permit larger packages). A recent result from the second line of investigation is the Task Order 18 report: Generic Design for Small Standardized Transportation, Aging and Disposal Canister Systems. This report identifies disposal concepts for the small canisters (4-PWR size) drawing heavily on previous work, and for the multi-pack (16-PWR or 36-BWR).
Property-Based Software Engineering Measurement
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.
1997-01-01
Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts, regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysts, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact and rigorous, because it is based on precise mathematical concepts. We use this framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling). It does not intend to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalisms and properties we introduce are convenient and intuitive. This framework contributes constructively to a firmer theoretical ground of software measurement.
The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.
Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J
2018-07-01
This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
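A hedged sketch of the resampling design on synthetic data, using the squared correlation between lesion load and a behavioural score as a stand-in for the proportion of variance explained; the data and settings are illustrative, not the study's.

```python
import numpy as np

def bootstrap_effect_sizes(lesion_load, score, sizes=(30, 60, 90, 120), reps=1000, seed=0):
    """For each sample size, repeatedly resample patients with replacement and
    record R^2 between lesion load and the behavioural score in a fixed ROI."""
    rng = np.random.default_rng(seed)
    out = {}
    for n in sizes:
        r2 = []
        for _ in range(reps):
            idx = rng.choice(len(lesion_load), size=n, replace=True)
            r = np.corrcoef(lesion_load[idx], score[idx])[0, 1]
            r2.append(r ** 2)
        out[n] = (float(np.mean(r2)), np.percentile(r2, [2.5, 97.5]))
    return out

rng = np.random.default_rng(1)                   # toy stand-in for 360 patients
load = rng.normal(size=360)
score = 0.3 * load + rng.normal(size=360)
print(bootstrap_effect_sizes(load, score))
```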
Sample size determination for equivalence assessment with multiple endpoints.
Sun, Anna; Dong, Xiaoyu; Tsong, Yi
2014-01-01
Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size for the naive method without and with correlation adjustment and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
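For a single endpoint, the TOST power and sample size under a normal approximation for a parallel design can be sketched as below; the article's exact power function additionally handles the correlation among endpoints and crossover designs, so this is only the naive building block, and the margin, SD and true difference are illustrative.

```python
import math
from scipy.stats import norm

def tost_power(n_per_arm, delta, sigma, theta=0.0, alpha=0.05):
    """Approximate power of two one-sided tests for one endpoint in a
    parallel design: equivalence margin +/-delta, true difference theta,
    within-group SD sigma (normal approximation)."""
    se = sigma * math.sqrt(2.0 / n_per_arm)
    z = norm.ppf(1 - alpha)
    return max(0.0, norm.cdf((delta - theta) / se - z)
                  + norm.cdf((delta + theta) / se - z) - 1)

def tost_sample_size(delta, sigma, theta=0.0, alpha=0.05, power=0.8):
    n = 2
    while tost_power(n, delta, sigma, theta, alpha) < power:
        n += 1
    return n

print(tost_sample_size(delta=0.2, sigma=0.35, theta=0.05))  # n per arm
```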
Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B
2018-06-01
The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
Brock, Kim; Haase, Gerlinde; Rothacher, Gerhard; Cotton, Susan
2011-10-01
To compare the short-term effects of two physiotherapy approaches for improving ability to walk in different environments following stroke: (i) interventions based on the Bobath concept, in conjunction with task practice, compared to (ii) structured task practice alone. Design: Randomized controlled trial. Setting: Two rehabilitation centres. Participants: Twenty-six participants between four and 20 weeks post-stroke, able to walk with supervision indoors. Both groups received six one-hour physiotherapy sessions over a two-week period. One group received physiotherapy based on the Bobath concept, including one hour of structured task practice. The other group received six hours of structured task practice. The primary outcome was an adapted six-minute walk test, incorporating a step, ramp and uneven surface. Secondary measures were gait velocity and the Berg Balance Scale. Measures were assessed before and after the intervention period. Following the intervention, there was no significant difference in improvement between the two groups for the adapted six-minute walk test (89.9 (standard deviation (SD) 73.1) m Bobath versus 41 (40.7) m task practice, P = 0.07). However, walking velocity showed significantly greater increases in the Bobath group (26.2 (SD 17.2) m/min versus 9.9 (SD = 12.9) m/min, P = 0.01). No significant differences between groups were recorded for the Berg Balance Scale (P = 0.2). This pilot study indicates a short-term benefit of interventions based on the Bobath concept for improving walking velocity in people with stroke. A sample size of 32 participants per group is required for a definitive study.
Progressive Staging of Pilot Studies to Improve Phase III Trials for Motor Interventions
Dobkin, Bruce H.
2014-01-01
Based on the suboptimal research pathways that finally led to multicenter randomized clinical trials (MRCTs) of treadmill training with partial body weight support and of robotic assistive devices, strategically planned successive stages are proposed for pilot studies of novel rehabilitation interventions. Stage 1, consideration-of-concept studies, drawn from animal experiments, theories, and observations, delineate the experimental intervention in a small convenience sample of participants, so the results must be interpreted with caution. Stage 2, development-of-concept pilots, should optimize the components of the intervention, settle on the most appropriate outcome measures, and examine dose-response effects. A well-designed study that reveals no efficacy should be published to counterweight the confirmation bias of positive trials. Stage 3, demonstration-of-concept pilots, can build out from what has been learned to test at least 15 participants in each arm, using random assignment and blinded outcome measures. A control group should receive an active practice intervention aimed at the same primary outcome. A third arm could receive a substantially larger dose of the experimental therapy or a combinational intervention. If only one site performed this trial, a different investigative group should aim to reproduce positive outcomes based on the optimal dose of motor training. Stage 3 studies ought to suggest an effect size of 0.4 or higher, so that approximately 50 participants in each arm will be the number required to test for efficacy in a stage 4, proof-of-concept MRCT. By developing a consensus around acceptable and necessary practices for each stage, similar to CONSORT recommendations for the publication of phase III clinical trials, better-quality pilot studies may move quickly into better designed and more successful MRCTs of experimental interventions. PMID:19240197
Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam
2012-01-15
Feature selection is a key concept in machine learning for microarray datasets, where the number of features, represented by probesets, is typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, allowing clinicians to make more informed decisions regarding patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution, and its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance compared with that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
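The FSR method itself is not reproduced here. As a generic illustration of the p >> n setting the abstract describes, the sketch below runs a simple univariate filter step followed by a linear classifier on synthetic data; SelectKBest, f_classif, and the synthetic dimensions are all stand-ins, not the authors' pipeline.

```python
# Generic filter-based feature selection on a very high-dimensional dataset.
# This is NOT the authors' FSR method; it only illustrates the kind of
# dimensionality reduction step such pipelines rely on.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100_000))   # 200 samples, 100k synthetic features
y = rng.integers(0, 3, size=200)      # three synthetic "subtypes"

# Keep the 500 features with the largest ANOVA F-scores, then classify.
clf = make_pipeline(SelectKBest(f_classif, k=500), LinearSVC(dual=False))
print(cross_val_score(clf, X, y, cv=5).mean())
```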
40 CFR 80.127 - Sample size guidelines.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40—Protection of Environment, Volume 16 (2010-07-01). ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), REGULATION OF FUELS AND FUEL ADDITIVES, Attest Engagements. § 80.127 Sample size guidelines. In performing the...
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
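The simplest approach described above, inflating an individually randomized sample size by a design effect, can be written down directly. The sketch below uses the standard design effect 1 + (m − 1) × ICC; the effect size, cluster size, and ICC values are illustrative assumptions only.

```python
# Simple design-effect inflation for a two-arm, parallel-group CRT:
# compute the individually randomized sample size, then multiply by
# DE = 1 + (m - 1) * ICC.  All numeric inputs are assumed for illustration.
from math import ceil
from statsmodels.stats.power import TTestIndPower

n_individual = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05,
                                           power=0.80)        # per arm
m, icc = 20, 0.05                    # average cluster size and ICC (assumed)
design_effect = 1 + (m - 1) * icc    # = 1.95 here
n_clustered = ceil(n_individual * design_effect)               # per arm
n_clusters = ceil(n_clustered / m)                             # clusters per arm
print(round(n_individual), design_effect, n_clustered, n_clusters)
```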
Kristunas, Caroline A; Smith, Karen L; Gray, Laura J
2017-03-07
The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would have on the power of the trial and whether any inflation of the sample size would be required.
The Size and Scope of Collegiate Athletic Training Facilities and Staffing.
Gallucci, Andrew R; Petersen, Jeffrey C
2017-08-01
Athletic training facilities have been described in terms of general design concepts and from operational perspectives. However, the size and scope of athletic training facilities, along with staffing at different levels of intercollegiate competition, have not been quantified. To define the size and scope of athletic training facilities and staffing levels at various levels of intercollegiate competition. To determine if differences existed in facilities (eg, number of facilities, size of facilities) and staffing (eg, full time, part time) based on the level of intercollegiate competition. Cross-sectional study. Web-based survey. Athletic trainers (ATs) who were knowledgeable about the size and scope of athletic training programs. Athletic training facility size in square footage; the AT's overall facility satisfaction; athletic training facility component spaces, including satellite facilities, game-day facilities, offices, and storage areas; and staffing levels, including full-time ATs, part-time ATs, and undergraduate students. The survey was completed by 478 ATs (response rate = 38.7%) from all levels of competition. Sample means for facilities were 3124.7 ± 4425 ft² (290.3 ± 411 m²) for the central athletic training facility, 1013 ± 1521 ft² (94 ± 141 m²) for satellite athletic training facilities, 1272 ± 1334 ft² (118 ± 124 m²) for game-day athletic training facilities, 388 ± 575 ft² (36 ± 53 m²) for athletic training offices, and 424 ± 884 ft² (39 ± 82 m²) for storage space. Sample staffing means were 3.8 ± 2.5 full-time ATs, 1.6 ± 2.5 part-time ATs, 25 ± 17.6 athletic training students, and 6.8 ± 7.2 work-study students. Division I schools had greater resources in multiple categories (P < .001). Differences among other levels of competition were not as well defined. Expansion or renovation of facilities in recent years was common, and almost half of ATs reported that upgrades have been approved for the near future. This study provides benchmark descriptive data on athletic training staffing and facilities. The results (1) suggest that the ATs were satisfied with their facilities and (2) highlight the differences in resources among competition levels.
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
Extravehicular Activity Asteroid Exploration and Sample Collection Capability
NASA Technical Reports Server (NTRS)
Scoville, Zebulon; Sipila, Stephanie; Bowie, Jonathan
2014-01-01
NASA's Asteroid Redirect Crewed Mission (ARCM) is challenged with primary mission objectives of demonstrating deep space Extravehicular Activity (EVA) and tools, and obtaining asteroid samples to return to Earth for further study. Although the Modified Advanced Crew Escape Suit (MACES) is used for the EVAs, it has limited mobility, which increases fatigue and decreases the crew's capability to perform EVA tasks. Furthermore, previous Shuttle and International Space Station (ISS) spacewalks have benefited from EVA interfaces which have been designed and manufactured on Earth. Rigid, structurally mounted handrails and tools with customized interfaces and restraints optimize EVA performance. For ARCM, some vehicle interfaces and tools can leverage heritage designs and experience. However, when the crew ventures onto an asteroid capture bag to explore the asteroid and collect rock samples, EVA complexity increases due to the uncertainty of the asteroid properties. The variability of rock size, shape and composition, as well as bunching of the fabric bag, will complicate EVA translation, tool restraint and body stabilization. The unknown asteroid hardness and brittleness will complicate tool use. The rock surface will introduce added safety concerns for cut gloves and debris control. Feasible solutions to meet ARCM EVA objectives were identified using experience gained during Apollo, Shuttle, and ISS EVAs, terrestrial mountaineering practices, the NASA Extreme Environment Mission Operations (NEEMO) 16 mission, and Neutral Buoyancy Laboratory testing in the MACES suit. The proposed concept utilizes expandable booms and integrated features of the asteroid capture bag to position and restrain the crew at the asteroid worksite. These methods enable the crew to perform both the finesse and high-load tasks necessary to collect samples for scientific characterization of the asteroid. This paper will explore the design trade space and options that were examined for EVA, the overall concept for the EVAs including translation paths and body restraint methods, potential tools used to extract the samples, design implications for the Asteroid Redirect Vehicle (ARV) for EVA, the results of early development testing of potential EVA tasks, and extensibility of the EVA architecture to NASA's exploration missions.
Design concepts for low-cost composite engine frames
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1983-01-01
Design concepts for low-cost, lightweight composite engine frames were applied to the design requirements for the frame of commercial, high-bypass turbine engines. The concepts consist of generic-type components and subcomponents that could be adapted for use in different locations in the engine and to different engine sizes. A variety of materials and manufacturing methods were assessed with a goal of having the lowest number of parts possible at the lowest possible cost. The evaluation of the design concepts resulted in the identification of a hybrid composite frame that would weigh about 70 percent as much as the state-of-the-art metal frame and cost about 60 percent as much.
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
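The sub-sampling design described above is easy to emulate. The sketch below draws progressively larger subsets from a synthetic feature matrix and scores ridge regression by cross-validated correlation between predicted and observed values; the data, feature count, and sample sizes are assumptions standing in for the HCP rsFC features, not the authors' pipeline.

```python
# Illustration of the sample-size effect on prediction accuracy: sub-sample a
# dataset at several sizes and evaluate ridge regression with cross-validation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X_full = rng.normal(size=(700, 2000))                 # subjects x connectivity features
beta = rng.normal(size=2000) * (rng.random(2000) < 0.05)   # sparse true weights
y_full = X_full @ beta + rng.normal(scale=5.0, size=700)   # synthetic score

for n in (20, 50, 100, 300, 700):
    idx = rng.choice(700, size=n, replace=False)
    X, y = X_full[idx], y_full[idx]
    pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
    print(n, round(np.corrcoef(pred, y)[0, 1], 3))    # accuracy grows with n
```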
Neuromuscular dose-response studies: determining sample size.
Kopman, A F; Lien, C A; Naguib, M
2011-02-01
Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
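The stated figures can be reproduced with a standard one-sample t-test power calculation, treating the allowable error divided by the COV as the standardized effect size. The sketch below follows that reading of the method described above; the exact computational route the authors used is not given in the abstract.

```python
# Reconstructing the sample-size logic described above: with a coefficient of
# variation of 25% for the ED50 and an allowable error of +/-15%, a one-sample
# two-tailed t-test at 80% power gives roughly n = 24.
from statsmodels.stats.power import TTestPower

cov = 0.25                       # coefficient of variation of the ED50
allowable_error = 0.15           # +/-15% allowable error in the mean ED50
effect_size = allowable_error / cov   # standardized detectable difference = 0.6
n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
print(round(n))                  # roughly 24 subjects
```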
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in literature. However, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of mixed structure of observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible to accommodate different missing patterns, magnitude of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
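The abstract notes that under complete observations the GEE-based estimate reduces to the paired t-test sample size. That special case is easy to compute; the sketch below uses an assumed standardized mean change and within-pair correlation, since the abstract gives no numeric inputs, and does not reproduce the authors' closed-form GEE formula.

```python
# Complete-data special case: paired t-test sample size, with an assumed
# within-pair correlation of 0.6 and a standardized mean change of 0.4.
from statsmodels.stats.power import TTestPower

delta, sd, rho = 0.4, 1.0, 0.6                 # assumed values for illustration
sd_diff = sd * (2 * (1 - rho)) ** 0.5          # SD of the within-pair difference
n_pairs = TTestPower().solve_power(effect_size=delta / sd_diff,
                                   alpha=0.05, power=0.80)
print(round(n_pairs))                          # required number of complete pairs
```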
KEPLER Mission: development and overview
NASA Astrophysics Data System (ADS)
Borucki, William J.
2016-03-01
The Kepler Mission is a space observatory launched in 2009 by NASA to monitor 170 000 stars over a period of four years to determine the frequency of Earth-size and larger planets in and near the habitable zone of Sun-like stars, the size and orbital distributions of these planets, and the types of stars they orbit. Kepler is the tenth in the series of NASA Discovery Program missions that are competitively-selected, PI-directed, medium-cost missions. The Mission concept and various instrument prototypes were developed at the Ames Research Center over a period of 18 years starting in 1983. The development of techniques to do the 10 ppm photometry required for Mission success took years of experimentation, several workshops, and the exploration of many ‘blind alleys’ before the construction of the flight instrument. Beginning in 1992 at the start of the NASA Discovery Program, the Kepler Mission concept was proposed five times before its acceptance for mission development in 2001. During that period, the concept evolved from a photometer in an L2 orbit that monitored 6000 stars in a 50 sq deg field-of-view (FOV) to one that was in a heliocentric orbit that simultaneously monitored 170 000 stars with a 105 sq deg FOV. Analysis of the data to date has detected over 4600 planetary candidates which include several hundred Earth-size planetary candidates, over a thousand confirmed planets, and Earth-size planets in the habitable zone (HZ). These discoveries provide the information required for estimates of the frequency of planets in our galaxy. The Mission results show that most stars have planets, many of these planets are similar in size to the Earth, and that systems with several planets are common. Although planets in the HZ are common, many are substantially larger than Earth.
NASA Technical Reports Server (NTRS)
McGuire, Mary Kathleen
2011-01-01
NASA has been recently updating design reference missions for the human exploration of Mars and evaluating the technology investments required to do so. The first of these started in January 2007 and developed the Mars Design Reference Architecture 5.0 (DRA5). As part of DRA5, Thermal Protection System (TPS) sizing analysis was performed on a mid L/D rigid aeroshell undergoing a dual heat pulse (aerocapture and atmospheric entry) trajectory. The DRA5 TPS subteam determined that using traditional monolithic ablator systems would be mass expensive. They proposed a new dual-layer TPS concept utilizing an ablator atop a low thermal conductivity insulative substrate to address the issue. Using existing thermal response models for an ablator and insulative tile, preliminary hand analysis of the dual layer concept at a few key heating points indicated that the concept showed potential to reduce TPS masses and warranted further study. In FY09, the follow-on Entry, Descent and Landing Systems Analysis (EDL-SA) project continued the work by focusing on Exploration-class cargo or crewed missions requiring 10 to 50 metric tons of landed payload. The TPS subteam advanced the preliminary dual-layer TPS analysis by developing a new process and updated TPS sizing code to rapidly evaluate mass-optimized, full body sizing for a dual layer TPS that is capable of dual heat pulse performance. This paper describes the process and presents the results of the EDL-SA FY09 dual-layer TPS analyses on the rigid mid L/D aeroshell. Additionally, several trade studies were conducted with the sizing code to evaluate the impact of various design factors, assumptions and margins.
How Large Should a Statistical Sample Be?
ERIC Educational Resources Information Center
Menil, Violeta C.; Ye, Ruili
2012-01-01
This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
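The study's original program was written in C++ and is not reproduced here. A minimal sketch of the kind of calculation it tabulates, the standard sample size for estimating a population proportion with a finite-population correction, is shown below; the numeric inputs are illustrative.

```python
# Sample size for estimating a population proportion, given the population
# size, margin of error, and confidence level (finite-population correction).
from math import ceil
from scipy.stats import norm

def sample_size(N, margin_of_error, confidence, p=0.5):
    z = norm.ppf(1 - (1 - confidence) / 2)
    n0 = z**2 * p * (1 - p) / margin_of_error**2    # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / N))            # finite-population correction

print(sample_size(N=10_000, margin_of_error=0.05, confidence=0.95))  # about 370
```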
Size and modal analyses of fines and ultrafines from some Apollo 17 samples
NASA Technical Reports Server (NTRS)
Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.
1975-01-01
Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.
Sample size, confidence, and contingency judgement.
Clément, Mélanie; Mercier, Pierre; Pastò, Luigi
2002-06-01
According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.
2018-04-01
Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
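The abstract does not name the non-parametric test used to compare the cumulative aspect-ratio distributions. A two-sample Kolmogorov-Smirnov test is one common choice and is shown below as a sketch on synthetic data, constructed so that the two samples share a location but differ in spread, mirroring the finding that scale parameters were similar while width parameters differed.

```python
# Compare two aspect-ratio distributions with a two-sample KS test
# (an assumed choice of non-parametric test; inputs are synthetic).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
aspect_ratio_a = rng.normal(loc=3.8, scale=0.5, size=1000)  # synthetic sample A
aspect_ratio_b = rng.normal(loc=3.8, scale=0.8, size=1000)  # same mean, wider spread

stat, p_value = ks_2samp(aspect_ratio_a, aspect_ratio_b)
print(stat, p_value)   # with these inputs the wider spread is typically detected
```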
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
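The reported reductions of roughly 44%, 56%, and 61% can be reproduced under one reading of the compound-symmetry case: if ANCOVA is applied to the mean of k follow-up measures with a common correlation ρ among baseline and all follow-ups, the design factor relative to a single-measure two-sample t-test is (1 + (k − 1)ρ)/k − ρ², and the conservative choice of ρ maximizes this factor. This derivation is an assumption on my part, not a formula quoted from the paper, so the sketch below is illustrative only.

```python
# Conservative design factor for repeated measures ANCOVA under compound
# symmetry, maximized over the (unknown) common correlation rho.
import numpy as np

def conservative_design_factor(k):
    rho = np.linspace(0, 1, 100_001)
    factor = (1 + (k - 1) * rho) / k - rho**2   # assumed design factor
    return factor.max()                          # worst case over rho

for k in (2, 3, 4):
    f = conservative_design_factor(k)
    print(k, round(f, 4), f"reduction ~{round((1 - f) * 100)}%")
# prints reductions of about 44%, 56%, and 61% for 2, 3, and 4 follow-ups
```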
Science Data Center concepts for moderate-sized NASA missions
NASA Technical Reports Server (NTRS)
Price, R.; Han, D.; Pedelty, J.
1991-01-01
The paper describes the approaches taken by the NASA Science Data Operations Center to the concepts for two future NASA moderate-sized missions, the Orbiting Solar Laboratory (OSL) and the Tropical Rainfall Measuring Mission (TRMM). The OSL space science mission will be a free-flying spacecraft with a complement of science instruments, placed in a high-inclination, sun synchronous orbit to allow continuous study of the sun for extended periods. The TRMM is planned to be a free-flying satellite for measuring tropical rainfall and its variations. Both missions will produce 'standard' data products for the benefit of their communities, and both depend upon their own scientific community to provide algorithms for generating the standard data products.
NASA Technical Reports Server (NTRS)
Rehder, J. J.; Wurster, K. E.
1978-01-01
Techniques for sizing electrically or chemically propelled orbit transfer vehicles and analyzing fleet requirements are used in a comparative analysis of the two concepts for various levels of traffic to geosynchronous orbit. The vehicle masses, fuel requirements, and fleet sizes are determined and translated into launch vehicle payload requirements. Technology projections beyond normal growth are made and their effect on the comparative advantages of the concepts is determined. A preliminary cost analysis indicates that although electric propulsion greatly reduces launch vehicle requirements, substantial improvements in the cost and reusability of power systems must occur to make an electrically propelled vehicle competitive.
The effects of an early intervention music curriculum on prereading/writing.
Register, D
2001-01-01
This study evaluated the effects of music sessions using a curriculum designed to enhance the prereading and writing skills of 25 children aged 4 to 5 years who were enrolled in Early Intervention and Exceptional Student Education programs. This study was a replication of the work of Standley and Hughes (1997) and utilized a larger sample size (n = 50) in order to evaluate the efficacy of a music curriculum designed specifically to teach prereading and writing skills versus one that focuses on all developmental areas. Both the experimental (n = 25) and control (n = 25) groups received two 30-minute sessions each week for an entire school year for a minimum of 60 sessions per group. The differentiating factors between the two groups were the structure and components of the musical activities. The fall sessions for the experimental group were focused primarily on writing skills while the spring sessions taught reading/book concepts. Music sessions for the control group were based purely on the thematic material, as determined by the classroom teacher with purposeful exclusion of all preliteracy concepts. All participants were pretested at the beginning of the school year and posttested before the school year ended. Overall, results demonstrated that music sessions significantly enhanced both groups' abilities to learn prewriting and print concepts. However, the experimental group showed significantly higher results on the logo identification posttest and the word recognition test. Implications for curriculum design and academic and social applications of music in Early Intervention programs are discussed.