Science.gov

Sample records for experimental design techniques

  1. Experimental-design techniques in reliability-growth assessment

    NASA Astrophysics Data System (ADS)

    Benski, H. C.; Cabau, Emmanuel

    Several recent statistical methods, including a Bayesian technique, have been proposed to detect the presence of significant effects in unreplicated factorials. These techniques were developed for s-normally distributed responses, which may or may not be the case for times between failures. In fact, for homogeneous Poisson processes (HPPs), these times are exponentially distributed. Still, response-data transformations can be applied to these times so that, at least approximately, these procedures can be used. It was therefore considered important to determine how well these different techniques perform in terms of power. The results of an extensive Monte Carlo simulation are presented in which the power of these techniques is analyzed. The actual details of a fractional factorial design applied in the context of reliability growth are described. Finally, power comparison results are presented.
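    The power comparison described above can be sketched in miniature: a Monte Carlo estimate of the power of Lenth's pseudo-standard-error method on a 2^3 unreplicated factorial whose responses are exponential times, log-transformed toward normality. The critical multiplier 2.30 (an approximate 5% individual-error-rate value for seven effects) and all other numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lenth_pse(effects):
    """Lenth's pseudo standard error for a set of effect estimates."""
    s0 = 1.5 * np.median(np.abs(effects))
    trimmed = np.abs(effects)[np.abs(effects) < 2.5 * s0]
    return 1.5 * np.median(trimmed)

def simulate_power(effect_size=1.0, n_sim=2000, crit=2.30):
    # 2^3 unreplicated factorial: 8 runs, 7 orthogonal contrasts
    levels = np.array([[(run >> k) & 1 for k in range(3)]
                       for run in range(8)]) * 2 - 1
    A, B, C = levels.T
    contrasts = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])
    hits = 0
    for _ in range(n_sim):
        # exponential times between failures; factor A shifts the log-mean
        log_mean = effect_size * A / 2.0
        y = rng.exponential(np.exp(log_mean))
        z = np.log(y)                        # transform toward normality
        effects = contrasts.T @ z / 4.0      # estimated effects
        pse = lenth_pse(effects)
        if abs(effects[0]) > crit * pse:     # test the active effect A
            hits += 1
    return hits / n_sim

print(simulate_power(effect_size=2.0))   # detection power for a strong effect
```

With `effect_size=0` the rejection rate stays near the nominal error rate, which is how such a simulation calibrates and then compares the competing tests.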

  2. Nonmedical influences on medical decision making: an experimental technique using videotapes, factorial design, and survey sampling.

    PubMed Central

    Feldman, H A; McKinlay, J B; Potter, D A; Freund, K M; Burns, R B; Moskowitz, M A; Kasten, L E

    1997-01-01

    OBJECTIVE: To study nonmedical influences on the doctor-patient interaction. A technique using simulated patients and "real" doctors is described. DATA SOURCES: A random sample of physicians, stratified on such characteristics as demographics, specialty, or experience, and selected from commercial and professional listings. STUDY DESIGN: A medical appointment is depicted on videotape by professional actors. The patient's presenting complaint (e.g., chest pain) allows a range of valid interpretation. Several alternative versions are taped, featuring the same script with patient-actors of different age, sex, race, or other characteristics. Fractional factorial design is used to select a balanced subset of patient characteristics, reducing costs without biasing the outcome. DATA COLLECTION: Each physician is shown one version of the videotape appointment and is asked to describe how he or she would diagnose or treat such a patient. PRINCIPAL FINDINGS: Two studies using this technique have been completed to date, one involving chest pain and dyspnea and the other involving breast cancer. The factorial design provided sufficient power, despite limited sample size, to demonstrate with statistical significance various influences of the experimental and stratification variables, including the patient's gender and age and the physician's experience. Persistent recruitment produced a high response rate, minimizing selection bias and enhancing validity. CONCLUSION: These techniques permit us to determine, with a degree of control unattainable in observational studies, whether medical decisions as described by actual physicians and drawn from a demographic or professional group of interest, are influenced by a prescribed set of nonmedical factors. PMID:9240285
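    The fractional-factorial idea in the study design can be illustrated with a toy sketch (the factor names are invented for illustration, not the study's actual variables): a 2^(4-1) half fraction generated by D = ABC yields 8 balanced videotape versions instead of the 16 a full factorial would require.

```python
from itertools import product

# Half fraction of a 2^4 design via the generator D = A*B*C: main effects
# stay mutually unconfounded while halving the number of tape versions.
factors = ["age", "sex", "race", "insurance"]   # hypothetical -1/+1 attributes
runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append(dict(zip(factors, (a, b, c, a * b * c))))

for r in runs:
    print(r)
```

Each attribute appears at each level in exactly half the versions, which is what keeps cost reduction from biasing the estimated effects.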

  3. Synthesis of designed materials by laser-based direct metal deposition technique: Experimental and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Qi, Huan

    Direct metal deposition (DMD), a laser-cladding-based solid freeform fabrication technique, is capable of depositing multiple materials at desired compositions, which makes it a flexible method for fabricating heterogeneous components or functionally-graded structures. The inherently rapid cooling rate associated with the laser cladding process enables extended solid solubility in nonequilibrium phases, offering the possibility of tailoring new materials with advanced properties. This technical advantage opens the area of synthesizing a new class of materials, designed by the topology optimization method, which have performance-based material properties. For better understanding of the fundamental phenomena occurring in multi-material laser cladding with coaxial powder injection, a self-consistent 3-D transient model was developed. Physical phenomena including laser-powder interaction, heat transfer, melting, solidification, mass addition, liquid metal flow, and species transport were modeled and solved with a controlled-volume finite difference method. A level-set method was used to track the evolution of the liquid free surface. The distribution of species concentration in the cladding layer was obtained using a nonequilibrium partition coefficient model. Simulation results were compared with experimental observations and found to be in reasonable agreement. Multi-phase material microstructures which have negative coefficients of thermal expansion were studied for their DMD manufacturability. The pixel-based topology-optimal designs are boundary-smoothed by Bezier functions to facilitate toolpath design. It is found that the inevitable diffusion interface between different material phases degrades the negative thermal expansion property of the whole microstructure. A new design method is proposed for DMD manufacturing. Experimental approaches include identification of laser beam characteristics during different laser-powder-substrate interaction conditions, an

  4. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  5. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Technical Reports Server (NTRS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-01-01

    Creation of an antenna system that could autonomously adapt the contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize the performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing offset Cassegrain antenna system was retrofitted with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  6. Axisymmetric and non-axisymmetric exhaust jet induced effects on a V/STOL vehicle design. Part 3: Experimental technique

    NASA Technical Reports Server (NTRS)

    Schnell, W. C.

    1982-01-01

    The jet-induced effects of several exhaust nozzle configurations (axisymmetric, and vectoring/modulating variants) on the aeropropulsive performance of a twin-engine V/STOL fighter design were determined. A 1/8-scale model was tested in an 11 ft transonic tunnel at static conditions and over a range of Mach numbers from 0.4 to 1.4. The experimental aspects of the static and wind-on programs are discussed. Jet-effects test techniques in general, flow-through balance calibrations and tare force corrections, ASME nozzle thrust and mass flow calibrations, and test problems and solutions are emphasized.

  7. Modern Experimental Techniques in Turbine Engine Testing

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Bruckner, R. J.; Bencic, T. J.; Braunscheidel, E. P.

    1996-01-01

    The paper describes application of two modern experimental techniques, thin-film thermocouples and pressure sensitive paint, to measurement in turbine engine components. A growing trend of using computational codes in turbomachinery design and development requires experimental techniques to refocus from overall performance testing to acquisition of detailed data on flow and heat transfer physics to validate these codes for design applications. The discussed experimental techniques satisfy this shift in focus. Both techniques are nonintrusive in practical terms. The thin-film thermocouple technique improves accuracy of surface temperature and heat transfer measurements. The pressure sensitive paint technique supplies areal surface pressure data rather than discrete point values only. The paper summarizes our experience with these techniques and suggests improvements to ease the application of these techniques for future turbomachinery research and code verifications.
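    Pressure-sensitive paint data are typically reduced with a Stern-Volmer style calibration, I_ref/I = A + B·(p/p_ref), inverted per pixel to recover surface pressure. The sketch below is generic; the coefficients A and B are invented calibration values, not ones from this work.

```python
# Invert the Stern-Volmer relation I_ref / I = A + B * (p / p_ref)
# to recover pressure from a paint intensity image, pixel by pixel.
def pressure_from_intensity(I, I_ref, p_ref, A=0.2, B=0.8):
    """A and B come from a wind-off calibration (values here are assumed)."""
    return p_ref * ((I_ref / I) - A) / B

# a pixel that darkens to 80% of the wind-off reference intensity
p = pressure_from_intensity(I=0.8, I_ref=1.0, p_ref=101.325)
print(p)   # surface pressure in kPa at that pixel
```

The same inversion applied over the whole image is what turns the paint technique into areal rather than discrete-point pressure data.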

  8. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. PMID:25541547

  9. Experimental techniques for multiphase flows

    NASA Astrophysics Data System (ADS)

    Powell, Robert L.

    2008-04-01

    This review discusses experimental techniques that provide an accurate spatial and temporal measurement of the fields used to describe multiphase systems for a wide range of concentrations, velocities, and chemical constituents. Five methods are discussed: magnetic resonance imaging (MRI), ultrasonic pulsed Doppler velocimetry (UPDV), electrical impedance tomography (EIT), x-ray radiography, and neutron radiography. All of the techniques are capable of measuring the distribution of solids in suspensions. The most versatile technique is MRI, which can be used for spatially resolved measurements of concentration, velocity, chemical constituents, and diffusivity. The ability to measure concentration allows for the study of sedimentation and shear-induced migration. One-dimensional and two-dimensional velocity profiles have been measured with suspensions, emulsions, and a range of other complex liquids. Chemical shift MRI can discriminate between different constituents in an emulsion, where diffusivity measurements allow the particle size to be determined. UPDV is an alternative technique for velocity measurement. There are some limitations on the ability to map complex flow fields as a result of the attenuation of the ultrasonic wave in concentrated systems that have high viscosities or where multiple scattering effects may be present. When combined with measurements of the pressure drop, both MRI and UPDV can provide local values of viscosity in pipe flow. EIT is a low-cost means of measuring concentration profiles and has been used to study shear-induced migration in pipe flow. Both x-ray and neutron radiography are used to image structures in flowing suspensions, but both require highly specialized facilities.
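    The pipe-flow viscometry point above can be sketched numerically: given a measured velocity profile v(r) (as MRI or UPDV would supply) and the pressure drop per unit length, the local shear stress is tau(r) = (dP/L)·r/2 and the local viscosity eta(r) = tau(r)/|dv/dr|. The profile below is a synthetic Poiseuille flow with assumed numbers, so the sketch simply recovers the viscosity that generated it.

```python
import numpy as np

R = 0.01                     # pipe radius, m (assumed)
dP_per_L = 800.0             # pressure drop per unit length, Pa/m (assumed)
eta_true = 0.05              # Pa*s, Newtonian test fluid (assumed)

r = np.linspace(0.0, R, 101)
v = dP_per_L / (4 * eta_true) * (R**2 - r**2)   # synthetic Poiseuille profile

shear_rate = np.abs(np.gradient(v, r))          # |dv/dr| from the profile
tau = dP_per_L * r / 2.0                        # momentum balance in a pipe
eta = tau[1:] / shear_rate[1:]                  # skip r = 0 (0/0)
print(eta.mean())                               # recovers ~0.05 Pa*s
```

For a non-Newtonian suspension the same arithmetic yields eta(r) varying across the pipe, which is the point of combining the velocimetry with a pressure-drop measurement.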

  10. Experimental and Quasi-Experimental Design.

    ERIC Educational Resources Information Center

    Cottrell, Edward B.

    With an emphasis on the problems of control of extraneous variables and threats to internal and external validity, the arrangement or design of experiments is discussed. The purpose of experimentation in an educational institution, and the principles governing true experimentation (randomization, replication, and control) are presented, as are…

  11. Model for Vaccine Design by Prediction of B-Epitopes of IEDB Given Perturbations in Peptide Sequence, In Vivo Process, Experimental Techniques, and Source or Host Organisms

    PubMed Central

    González-Díaz, Humberto; Pérez-Montoto, Lázaro G.; Ubeira, Florencio M.

    2014-01-01

    Perturbation methods add variation terms to a known experimental solution of one problem to approach the solution of a related problem without a known exact solution. One problem of this type in immunology is the prediction of the possible action of an epitope of one peptide after a perturbation or variation in the structure of a known peptide and/or other boundary conditions (host organism, biological process, and experimental assay). However, to the best of our knowledge, there are no reports of general-purpose perturbation models to solve this problem. In a recent work, we introduced a new quantitative structure-property relationship theory for the study of perturbations in complex biomolecular systems. In this work, we developed the first model able to classify more than 200,000 cases of perturbations with accuracy, sensitivity, and specificity >90% in both training and validation series. The perturbations include structural changes in >50,000 peptides determined in experimental assays with boundary conditions involving >500 source organisms, >50 host organisms, >10 biological processes, and >30 experimental techniques. The model may be useful for the prediction of new epitopes or the optimization of known peptides towards computational vaccine design. PMID:24741624

  12. Design and experimental demonstration of low-power CMOS magnetic cell manipulation platform using charge recycling technique

    NASA Astrophysics Data System (ADS)

    Niitsu, Kiichi; Yoshida, Kohei; Nakazato, Kazuo

    2016-03-01

    We present the world’s first charge-recycling-based low-power technique for complementary metal-oxide-semiconductor (CMOS) magnetic cell manipulation. CMOS magnetic cell manipulation using magnetic beads is a promising tool for on-chip biomedical-analysis applications such as drug screening, because CMOS can integrate control electronics and electrochemical sensors. However, conventional CMOS cell manipulation requires considerable power. In this work, by concatenating multiple unit circuits and recycling electric charge among them, power consumption is reduced by a factor of the number of concatenated unit circuits (1/N). To verify the effectiveness, a test chip was fabricated in a 0.6-µm CMOS process. The chip successfully manipulates magnetic microbeads while achieving a 49% power reduction (from 51 to 26.2 mW). Even considering the additional series resistance of the concatenated inductors, a nearly theoretical power reduction is confirmed.
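    The 1/N scaling claimed above is simple to check arithmetically. A minimal sketch, using only the supply figure quoted in the abstract (the function itself is an idealization that ignores the series-resistance loss the authors mention):

```python
# Ideal charge recycling: concatenating N unit circuits lets the same
# supply charge drive all N in series, so total power scales as 1/N.
def recycled_power(p_conventional_mw: float, n_units: int) -> float:
    """Ideal power (mW) of N concatenated units with charge recycling."""
    return p_conventional_mw / n_units

p_ideal = recycled_power(51.0, 2)   # two concatenated units
print(p_ideal)   # 25.5 mW ideal, vs 26.2 mW measured (series-resistance loss)
```

The gap between 25.5 mW ideal and 26.2 mW measured is consistent with the abstract's note about the added series resistance of the concatenated inductors.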

  13. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  14. Experimental design of a waste glass study

    SciTech Connect

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

    A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150 °C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.
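    The optimal-design step can be sketched with a generic greedy D-optimal search over random candidate compositions (this is an illustration of the idea, not the actual CVS algorithm, constraints, or mixture region): repeatedly add the candidate run that most increases det(XᵀX) for a linear mixture model.

```python
import numpy as np

rng = np.random.default_rng(1)
cands = rng.dirichlet(np.ones(4), size=200)   # candidate 4-component glasses

def greedy_d_optimal(X, n_runs):
    """Greedily pick rows of X maximizing det(M^T M) (D-optimality)."""
    chosen = []
    for _ in range(n_runs):
        best, best_det = None, -1.0
        for i in range(len(X)):
            if i in chosen:
                continue
            M = X[chosen + [i]]
            d = np.linalg.det(M.T @ M)
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
    return chosen

design = greedy_d_optimal(cands, 8)
print(len(design))   # 8 selected compositions
```

Exchange algorithms (Fedorov and variants) refine this greedy start; the principle of concentrating runs where they most sharpen the fitted mixture model is the same.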

  15. Experimental Techniques for Thermodynamic Measurements of Ceramics

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra

    1999-01-01

    Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed: gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.
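    The Knudsen cell method mentioned above reduces, in its simplest form, to the Knudsen effusion relation: vapor pressure from the measured mass loss through a small orifice, p = (dm/dt)/(αA)·sqrt(2πRT/M). A sketch with invented numbers (the orifice size, temperature, and molar mass below are illustrative, not from this report):

```python
import math

R = 8.314   # gas constant, J/(mol K)

def knudsen_pressure(mass_loss_kg, time_s, area_m2, T, molar_mass, alpha=1.0):
    """Equilibrium vapor pressure (Pa) from effusion mass loss.

    alpha is the Clausing (orifice transmission) factor, taken as 1 here.
    """
    rate = mass_loss_kg / time_s
    return rate / (alpha * area_m2) * math.sqrt(2 * math.pi * R * T / molar_mass)

# e.g. 1 mg lost in 1 h through a 1 mm diameter orifice at 1800 K, M = 0.060 kg/mol
p = knudsen_pressure(1e-6, 3600.0, math.pi * (0.5e-3) ** 2, 1800.0, 0.060)
print(p)   # vapor pressure in Pa
```

In practice the measured pressure over a known reaction then yields the partial molar Gibbs energy of the volatile component.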

  16. New experimental techniques for solar cells

    NASA Technical Reports Server (NTRS)

    Lenk, R.

    1993-01-01

    Solar cell capacitance has special importance for an array controlled by shunting. Experimental measurements of solar cell capacitance in the past have shown disagreements of orders of magnitude. Correct measurement technique depends on maintaining the excitation voltage less than the thermal voltage. Two different experimental methods are shown to match theory well, and two effective capacitances are defined for quantifying the effect of the solar cell capacitance on the shunting system.
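    The thermal-voltage criterion above is easy to make concrete. A small sketch using standard physical constants (the 10 mV and 50 mV excitation figures are illustrative choices, not values from the paper):

```python
# A solar cell is a large-area diode, so its small-signal capacitance is
# only well defined while the AC excitation stays below the thermal
# voltage V_T = kT/q (~26 mV at 300 K); above that the response is nonlinear.
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C

def thermal_voltage(T=300.0):
    return k_B * T / q

def excitation_ok(v_ac, T=300.0):
    """True when the AC excitation amplitude keeps the measurement linear."""
    return v_ac < thermal_voltage(T)

print(thermal_voltage())        # ~0.0259 V at 300 K
print(excitation_ok(0.010))     # 10 mV excitation is safe
print(excitation_ok(0.050))     # 50 mV is not
```

Violating this bound is one plausible reason historical capacitance measurements disagreed by orders of magnitude.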

  17. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
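    A minimal sketch of the Bayesian design criterion at work, in the simplest conjugate (linear-Gaussian) setting rather than the peakshape problem itself: the expected information gain of a design is half the log ratio of prior to posterior variance, so the configuration with lower effective noise on the parameter of interest is the more informative experiment. All numbers below are assumptions for illustration.

```python
import math

def info_gain(prior_var, noise_var, n_obs):
    """Expected information gain (nats) for a Gaussian prior and
    n_obs Gaussian observations of the parameter with variance noise_var."""
    post_var = 1.0 / (1.0 / prior_var + n_obs / noise_var)
    return 0.5 * math.log(prior_var / post_var)

sharp = info_gain(prior_var=1.0, noise_var=0.1, n_obs=10)   # high resolution
broad = info_gain(prior_var=1.0, noise_var=1.0, n_obs=10)   # low resolution
print(sharp > broad)   # the sharper peakshape carries more information
```

For non-conjugate models the same expectation is estimated by Monte Carlo over simulated data, but the ranking logic is identical.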

  18. Linear-dichroic infrared spectroscopy—Validation and experimental design of the new orientation technique of solid samples as suspension in nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Ivanova, B. B.; Simeonov, V. D.; Arnaudov, M. G.; Tsalev, D. L.

    2007-05-01

    A validation of a newly developed orientation method for solid samples, as a suspension in a nematic liquid crystal (NLC), applied in linear-dichroic infrared (IR-LD) spectroscopy, has been carried out using the model system DL-isoleucine (DL-isoleu). Accuracy, precision, and the influence of the liquid crystal medium on the peak positions and integral absorbances of guest molecules are presented, and the experimental conditions have been optimized. An experimental design has also been studied for quantitative evaluation of the impact on the spectroscopic signal, at five different frequencies indicating important specificities of the system, of four input factors: the number of scans, the rubbing-out of the KBr pellets, the amount of the studied compound included in the liquid crystal medium, and the ratio of Lorentzian to Gaussian peak functions in the curve-fitting procedure.

  19. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims. PMID:16161223

  20. Teaching experimental design to biologists.

    PubMed

    Zolman, J F

    1999-12-01

    The teaching of research design and data analysis to our graduate students has been a persistent problem. A course is described in which students, early in their graduate training, obtain extensive practice in designing experiments and interpreting data. Lecture-discussions on the essentials of biostatistics are given, and then these essentials are repeatedly reviewed by illustrating their applications and misapplications in numerous research design problems. Students critique these designs and prepare similar problems for peer evaluation. In most problems the treatments are confounded by extraneous variables, proper controls may be absent, or data analysis may be incorrect. For each problem, students must decide whether the researchers' conclusions are valid and, if not, must identify a fatal experimental flaw. Students learn that an experiment is a well-conceived plan for data collection, analysis, and interpretation. They enjoy the interactive evaluations of research designs and appreciate the repetitive review of common flaws in different experiments. They also benefit from their practice in scientific writing and in critically evaluating their peers' designs. PMID:10644236

  1. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after a study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on the animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in variation. Factors causing bias or changes in variation are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend/working-day cycle, the cage-change/room-sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles which may be affected by husbandry procedures. Other issues to be considered are the consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to the animals, the olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including its capabilities for monitoring the biological and physicochemical environment. PMID:25541541
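    One of the safeguards listed above, randomization of cage location, can be sketched directly (rack layout and group names are invented for illustration): shuffling treatment labels over cage positions prevents confounding with rack gradients in light, airflow, or noise.

```python
import random

# Randomly assign equal-sized treatment groups to physical cage positions
# so that group membership is not confounded with position in the rack.
random.seed(42)   # record the seed so the allocation is reproducible
cages = [f"rack{r}-pos{p}" for r in range(1, 4) for p in range(1, 5)]
treatments = ["control", "treated"] * (len(cages) // 2)
random.shuffle(treatments)
assignment = dict(zip(cages, treatments))

for cage, grp in sorted(assignment.items()):
    print(cage, grp)
```

Recording the seed in the study log makes the allocation auditable, which fits the article's plea for reporting husbandry detail.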

  2. Heat capacity measurements - Progress in experimental techniques

    NASA Astrophysics Data System (ADS)

    Lakshmikumar, S. T.; Gopal, E. S. R.

    1981-11-01

    The heat capacity of a substance is related to the structure and constitution of the material and its measurement is a standard technique of physical investigation. In this review, the classical methods are first analyzed briefly and their recent extensions are summarized. The merits and demerits of these methods are pointed out. The newer techniques such as the a.c. method, the relaxation method, the pulse methods, the laser flash calorimetry and other methods developed to extend the heat capacity measurements to newer classes of materials and to extreme conditions of sample geometry, pressure and temperature are comprehensively reviewed. Examples of recent work and details of the experimental systems are provided for each method. The introduction of automation in control systems for the monitoring of the experiments and for data processing is also discussed. Two hundred and eight references and 18 figures are used to illustrate the various techniques.
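    Among the methods surveyed, the relaxation method has a particularly compact data reduction: after a small heat pulse the sample temperature decays toward the bath with time constant τ = C/K, where K is the conductance of the thermal link, so C = τK. A sketch on synthetic data (all values are assumptions for illustration):

```python
import numpy as np

K = 2.0e-7                  # thermal link conductance, W/K (assumed)
C_true = 1.0e-6             # heat capacity to recover, J/K (assumed)

t = np.linspace(0, 30, 300)                  # s
dT = 0.05 * np.exp(-t * K / C_true)          # simulated decay, tau = 5 s

# slope of ln(dT) versus t is -1/tau; then C = tau * K
tau = -1.0 / np.polyfit(t, np.log(dT), 1)[0]
C = tau * K
print(C)   # recovers ~1e-6 J/K
```

Real data require subtracting the bath temperature and weighting the fit, but the τ-to-C step is exactly this.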

  3. Single-crystal nickel-based superalloys developed by numerical multi-criteria optimization techniques: design based on thermodynamic calculations and experimental validation

    NASA Astrophysics Data System (ADS)

    Rettig, Ralf; Ritter, Nils C.; Helmer, Harald E.; Neumeier, Steffen; Singer, Robert F.

    2015-04-01

    A method is proposed for finding optimum alloy compositions considering a large number of property requirements and constraints by systematic exploration of large composition spaces. It is based on a numerical multi-criteria global optimization algorithm (a multistart solver using Sequential Quadratic Programming), which delivers the exact optimum considering all constraints. The CALPHAD method is used to provide the thermodynamic equilibrium properties, and the creep strength of the alloys is predicted with a qualitative numerical model considering the solid-solution strengthening of the matrix by the elements Re, Mo, and W and the optimum morphology and fraction of the γ′ phase. The calculated alloy properties required as input for the optimization algorithm are provided via very fast Kriging surrogate models. This greatly reduces the total calculation time of the optimization to the order of minutes on a personal computer. The capability of the multi-criteria optimization method was experimentally verified with two new single-crystal superalloys, whose compositions were designed such that the content of expensive elements was reduced. One of the newly designed alloys, termed ERBO/13, is found to possess a creep strength only 14 K below that of CMSX-4 in the high-temperature/low-stress regime, although it is a Re-free alloy.
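    The optimization machinery described can be sketched in miniature with SciPy's SLSQP (an SQP solver): minimize a cost proxy for expensive elements subject to a constraint from a stand-in "strength" model. The cost vector, the linear surrogate, and the bounds below are invented for illustration and are nothing like the authors' CALPHAD-based Kriging models.

```python
import numpy as np
from scipy.optimize import minimize

cost = np.array([50.0, 5.0, 3.0])    # assumed relative costs of (Re, Mo, W)

def objective(x):
    return cost @ x                   # alloy-cost proxy to minimize

def strength_surrogate(x):            # toy stand-in for a Kriging model
    return 10 * x[0] + 4 * x[1] + 3 * x[2]

cons = [{"type": "ineq", "fun": lambda x: strength_surrogate(x) - 2.0}]
bounds = [(0.0, 0.6)] * 3             # per-element composition limits

res = minimize(objective, x0=[0.2, 0.2, 0.2], bounds=bounds,
               constraints=cons, method="SLSQP")
print(res.x)   # cheapest composition meeting the strength target
```

The multistart aspect mentioned in the abstract would simply rerun this from many random `x0` values and keep the best feasible optimum.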

  4. Design for reliability of BEoL and 3-D TSV structures - A joint effort of FEA and innovative experimental techniques

    NASA Astrophysics Data System (ADS)

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-01

    Copper TSVs for 3-D IC integration generate novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria for combined loading, including residual stress, interface delamination, cracking, and fatigue issues. The thermal expansion mismatch between copper and silicon leads to a stress situation in the silicon surrounding the TSVs which influences the electron mobility and, as a result, the transient behavior of transistors. Furthermore, pumping and protrusion of copper is a challenge for Back-end of Line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend strongly on the temperature-dependent elastic-plastic behavior of the TSV copper and on the residual stresses determined by the electrodeposition chemistry and annealing conditions. The authors therefore pursued combined simulation/experimental approaches to extract the Young's modulus, initial yield stress, and hardening coefficients of copper TSVs from nanoindentation experiments, as well as the temperature-dependent initial yield stress and hardening coefficients from bow measurements of electroplated thin copper films on silicon under thermal cycling. A FIB trench technique combined with digital image correlation is furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used to investigate the pumping and protrusion of copper TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.
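    The modulus-extraction step in nanoindentation commonly rests on Oliver-Pharr analysis, which can be sketched as follows (the stiffness and contact-area numbers are illustrative choices that land near copper's modulus; the paper's own inverse procedure for yield and hardening parameters is more involved):

```python
import math

def reduced_modulus(S, A_contact, beta=1.034):
    """E_r = sqrt(pi) * S / (2 * beta * sqrt(A)); beta ~1.034 for Berkovich.

    S is the unloading stiffness dP/dh (N/m), A_contact the contact area (m^2).
    """
    return math.sqrt(math.pi) * S / (2.0 * beta * math.sqrt(A_contact))

def sample_modulus(E_r, E_i=1141e9, nu_i=0.07, nu_s=0.34):
    """Invert 1/E_r = (1-nu_s^2)/E_s + (1-nu_i^2)/E_i (diamond indenter)."""
    inv = 1.0 / E_r - (1.0 - nu_i**2) / E_i
    return (1.0 - nu_s**2) / inv

# Illustrative copper-like numbers: S = 1.5e5 N/m, contact area 1 um^2
E_r = reduced_modulus(1.5e5, 1.0e-12)
E_cu = sample_modulus(E_r)
print(E_cu / 1e9)   # ~128 GPa, in the range expected for copper
```

In the combined simulative/experimental approach described, such closed-form estimates typically seed an FE-based inverse fit of the full elastic-plastic curve.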

  5. Orbit determination error analysis and comparison of station-keeping costs for Lissajous and halo-type libration point orbits and sensitivity analysis using experimental design techniques

    NASA Technical Reports Server (NTRS)

    Gordon, Steven C.

    1993-01-01

    Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may differ. For this spacecraft tracking and control simulation problem, experimental design methods can also be used to determine the most significant uncertainties. That is, these methods can identify the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
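    The screening idea in the last sentences can be sketched: run a coded two-level factorial over the candidate error sources and fit a first-order model whose coefficients approximate the input-output relationship and rank the sources by influence. The three error sources and the toy cost function below are invented stand-ins for the station-keeping simulation.

```python
import numpy as np

# Coded -1/+1 two-level full factorial over three error sources.
levels = np.array([[(run >> k) & 1 for k in range(3)]
                   for run in range(8)]) * 2 - 1
# columns: solar-pressure error, planetary-mass error, tracking noise

def cost_simulation(x):
    """Toy stand-in for the station-keeping cost simulation."""
    srp, mass, track = x
    return 10.0 + 3.0 * srp + 0.5 * mass + 1.5 * track

y = np.array([cost_simulation(row) for row in levels])
X = np.column_stack([np.ones(8), levels])       # intercept + main effects
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # fitted [intercept, srp, mass, track] coefficients
```

Because the design is orthogonal, the fitted coefficients recover the toy model exactly, and their magnitudes show solar-pressure error dominating, which is precisely the screening output the abstract describes.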

  6. Design for reliability of BEoL and 3-D TSV structures – A joint effort of FEA and innovative experimental techniques

    SciTech Connect

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-19

    Copper TSVs for 3-D IC integration generate novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria for combined loading, including residual stress, interface delamination, cracking, and fatigue. For example, the thermal expansion mismatch between copper and silicon leads to a stress state in the silicon surrounding the TSVs that influences electron mobility and, as a result, the transient behavior of transistors. Furthermore, pumping and protrusion of copper is a challenge for back-end-of-line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend strongly on the temperature-dependent elastic-plastic behavior of the TSV copper and on the residual stresses determined by the electrodeposition chemistry and annealing conditions. The authors therefore pursued combined simulation/experimental approaches to extract the Young's modulus, initial yield stress, and hardening coefficients of copper TSVs from nanoindentation experiments, as well as the temperature-dependent initial yield stress and hardening coefficients from bow measurements of electroplated thin copper films on silicon under thermal cycling. A FIB trench technique combined with digital image correlation is furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used to investigate the pumping and protrusion of copper TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.

  7. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  8. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.
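
The gist of sequential (adaptive) experimental design can be sketched with a deliberately simple surrogate: start from a coarse design and repeatedly add the training point where the surrogate looks least trustworthy. The curvature criterion below is a toy stand-in for the paper's distribution-adaptive DA-SED criterion, and the "experiment" is an invented function.

```python
import bisect

def run_experiment(x):        # expensive model evaluation (stand-in)
    return x ** 3 - x

# Coarse initial design on [0, 1]; the surrogate is piecewise linear.
xs = [0.0, 0.5, 1.0]
ys = [run_experiment(x) for x in xs]

def add_point():
    # Score each interior knot by the change of slope around it, a crude
    # proxy for surrogate error, then bisect the wider adjacent interval.
    best_i, best_score = None, -1.0
    for i in range(1, len(xs) - 1):
        s_left = (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1])
        s_right = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
        if abs(s_right - s_left) > best_score:
            best_i, best_score = i, abs(s_right - s_left)
    if xs[best_i + 1] - xs[best_i] >= xs[best_i] - xs[best_i - 1]:
        x_new = 0.5 * (xs[best_i] + xs[best_i + 1])
    else:
        x_new = 0.5 * (xs[best_i - 1] + xs[best_i])
    bisect.insort(xs, x_new)
    ys.insert(xs.index(x_new), run_experiment(x_new))

for _ in range(4):
    add_point()               # xs now clusters points where f(x) bends
```

Each new point is chosen from what the surrogate already knows, which is the defining feature of a sequential design versus a fixed, one-shot design.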

  9. Design techniques for multivariable flight control systems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Techniques which address the multi-input closely coupled nature of advanced flight control applications and digital implementation issues are described and illustrated through flight control examples. The techniques described seek to exploit the advantages of traditional techniques in treating conventional feedback control design specifications and the simplicity of modern approaches for multivariable control system design.

  10. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...
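
The blocking idea mentioned here can be illustrated with a randomized complete block design (RCBD): every treatment appears once in each block, with the order randomized independently within each block so spatial variation is absorbed by the block effect. Treatment labels and the block count below are invented.

```python
import random

random.seed(42)
treatments = ["A", "B", "C", "D"]   # hypothetical treatments
n_blocks = 5                        # hypothetical field blocks

# Randomized complete block design: one replicate of every treatment
# per block, randomized independently within each block.
layout = []
for _ in range(n_blocks):
    order = treatments[:]
    random.shuffle(order)
    layout.append(order)
```

In the subsequent analysis, block is fitted as a factor so that block-to-block (spatial) variation does not inflate the error term used to test treatments.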

  11. Telecommunications Systems Design Techniques Handbook

    NASA Technical Reports Server (NTRS)

    Edelson, R. E. (Editor)

    1972-01-01

    The Deep Space Network (DSN) increasingly supports deep space missions sponsored and managed by organizations without long experience in DSN design and operation. The document is intended as a textbook for those DSN users inexperienced in the design and specification of a DSN-compatible spacecraft telecommunications system. For experienced DSN users, the document provides a reference source of telecommunication information which summarizes knowledge previously available only in a multitude of sources. Extensive references are quoted for those who wish to explore specific areas more deeply.

  12. Quasi experimental designs in pharmacist intervention research.

    PubMed

    Krass, Ines

    2016-06-01

    Background In the field of pharmacist intervention research it is often difficult to conform to the rigorous requirements of "true experimental" models, especially the requirement of randomization. When randomization is not feasible, a practice-based researcher can choose from a range of "quasi-experimental designs", i.e., non-randomised and at times non-controlled designs. Objective The aim of this article was to provide an overview of quasi-experimental designs, discuss their strengths and weaknesses, and investigate their application in pharmacist intervention research over the previous decade. Results In the literature, quasi-experimental studies may be classified into five broad categories: quasi-experimental designs without control groups; quasi-experimental designs that use control groups with no pre-test; quasi-experimental designs that use control groups and pre-tests; interrupted time series; and stepped wedge designs. Quasi-experimental study design has consistently featured in the evolution of pharmacist intervention research. The most commonly applied quasi-experimental designs in the practice-based research literature are the one-group pre-post-test design and the non-equivalent control group design (i.e., untreated control group with dependent pre-tests and post-tests); these have been used to test the impact of pharmacist interventions in general medications management as well as in specific disease states. Conclusion Quasi-experimental studies have a role to play as proof of concept, in the pilot phases of interventions, and when testing different intervention components, especially in complex interventions. They serve to develop an understanding of possible intervention effects: while in isolation they yield weak evidence of clinical efficacy, taken collectively, they help build a body of evidence in support of the value of pharmacist interventions across different practice settings and countries. However, when a traditional RCT is not feasible for
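
The one-group pre-post-test design named in the abstract reduces, analytically, to a paired comparison. A minimal sketch with invented adherence scores: the paired t statistic tests whether the mean change differs from zero, but, with no control group, history and maturation effects remain alternative explanations.

```python
import math
import statistics

# Hypothetical pre/post medication-adherence scores for 8 patients in a
# one-group pre-test/post-test quasi-experimental design.
pre  = [62, 58, 71, 65, 60, 68, 63, 70]
post = [70, 64, 75, 66, 69, 74, 68, 77]

diffs = [b - a for a, b in zip(pre, post)]
mean_d = statistics.mean(diffs)                     # mean improvement
sd_d = statistics.stdev(diffs)                      # SD of the changes
t_stat = mean_d / (sd_d / math.sqrt(len(diffs)))    # paired t, df = n - 1
```

A t statistic this large would be "significant", yet the design itself, not the statistic, limits the causal claim that can be made.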

  13. Experimental Investigation of Centrifugal Compressor Stabilization Techniques

    NASA Technical Reports Server (NTRS)

    Skoch, Gary J.

    2003-01-01

    Results from a series of experiments to investigate techniques for extending the stable flow range of a centrifugal compressor are reported. The research was conducted in a high-speed centrifugal compressor at the NASA Glenn Research Center. The stabilizing effect of steadily flowing air-streams injected into the vaneless region of a vane-island diffuser through the shroud surface is described. Parametric variations of injection angle, injection flow rate, number of injectors, injector spacing, and injection versus bleed were investigated for a range of impeller speeds and tip clearances. Both the compressor discharge and an external source were used for the injection air supply. The stabilizing effect of flow obstructions created by tubes that were inserted into the diffuser vaneless space through the shroud was also investigated. Tube immersion into the vaneless space was varied in the flow obstruction experiments. Results from testing done at impeller design speed and tip clearance are presented. Surge margin improved by 1.7 points using injection air that was supplied from within the compressor. Externally supplied injection air was used to return the compressor to stable operation after being throttled into surge. The tubes, which were capped to prevent mass flux, provided 9.3 points of additional surge margin over the baseline surge margin of 11.7 points.

  14. GCFR shielding design and supporting experimental programs

    SciTech Connect

    Perkins, R.G.; Hamilton, C.J.; Bartine, D.

    1980-05-01

    The shielding for the conceptual design of the gas-cooled fast breeder reactor (GCFR) is described, and the component exposure design criteria which determine the shield design are presented. The experimental programs for validating the GCFR shielding design methods and data (which have been in existence since 1976) are also discussed.

  15. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  16. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G(exp 2)) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects, including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  17. Experimental Design for the Evaluation of Detection Techniques of Hidden Corrosion Beneath the Thermal Protective System of the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Kemmerer, Catherine C.; Jacoby, Joseph A.; Lomness, Janice K.; Hintze, Paul E.; Russell, Richard W.

    2007-01-01

    The detection of corrosion beneath the Space Shuttle Orbiter thermal protective system is traditionally accomplished by removing the Reusable Surface Insulation tiles and performing a visual inspection of the aluminum substrate and corrosion protection system. This process is time-consuming and has the potential to damage high-cost tiles. To evaluate non-intrusive NDE methods, a Proof of Concept (PoC) experiment was designed and test panels were manufactured. The objective of the test plan was three-fold: establish the ability to detect corrosion hidden from view by tiles; determine the key factors affecting detectability; and roughly quantify the detection threshold. The plan consisted of artificially inducing dimensionally controlled corrosion spots in two panels and rebonding tile over the spots to model the thermal protective system of the orbiter. The corrosion spot diameter ranged from 0.100" to 0.600" and the depth ranged from 0.003" to 0.020". One panel consisted of a complete factorial array of corrosion spots with and without tile coverage. The second panel consisted of randomized factorial points replicated and hidden by tile. Conventional methods such as ultrasonics, infrared, eddy current and microwave methods have shortcomings. Ultrasonics and IR cannot sufficiently penetrate the tiles, while eddy current and microwaves have inadequate resolution. As such, the panels were interrogated using Backscatter Radiography and Terahertz Imaging. The terahertz system successfully detected artificially induced corrosion spots under orbiter tile, and functional testing is in work in preparation for implementation.

  18. AN EXPERIMENTALLY ROBUST TECHNIQUE FOR HALO MEASUREMENT

    SciTech Connect

    Amundson, J.; Pellico, W.; Spentzouris, P.; Sullivan, T.; Spentzouris, Linda; /IIT, Chicago

    2006-03-01

    We propose a model-independent quantity, L/G, to characterize non-Gaussian tails in beam profiles observed with the Fermilab Booster Ion Profile Monitor. This quantity can be considered a measure of beam halo in the Booster. We use beam dynamics and detector simulations to demonstrate that L/G is superior to kurtosis as an experimental measurement of beam halo when realistic beam shapes, detector effects and uncertainties are taken into account. We include the rationale and method of calculation for L/G in addition to results of the experimental studies in the Booster where we show that L/G is a useful halo discriminator.

  19. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
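
The design-optimization idea can be sketched in miniature: to discriminate a power-law from an exponential forgetting model, pick the retention lags at which their predictions disagree most, so the resulting data are maximally diagnostic. The brute-force grid search and all parameter values below are invented stand-ins for the paper's sampling-based search.

```python
import math

def power_model(t, a=0.9, b=0.4):      # power-law forgetting (toy params)
    return a * (1.0 + t) ** (-b)

def expo_model(t, a=0.9, b=0.12):      # exponential forgetting (toy params)
    return a * math.exp(-b * t)

# Candidate retention lags; choose the three most diagnostic ones.
candidates = [0.5 * k for k in range(1, 81)]          # lags 0.5 .. 40.0
best = sorted(candidates,
              key=lambda t: abs(power_model(t) - expo_model(t)),
              reverse=True)[:3]
```

Short lags, where both models predict similar retention, are nearly useless for discrimination; the search concentrates measurements where the models diverge.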

  20. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism. PMID:27090148

  1. Helioseismology in a bottle: an experimental technique

    NASA Astrophysics Data System (ADS)

    Triana, S. A.; Zimmerman, D. S.; Nataf, H.; Thorette, A.; Cabanes, S.; Roux, P.; Lekic, V.; Lathrop, D. P.

    2013-12-01

    Measurement of the differential rotation of the Sun's interior is one of the great achievements of helioseismology, providing important constraints for stellar physics. The technique relies on observing and analyzing rotationally-induced splittings of p-modes in the star. Here we demonstrate the first use of the technique in a laboratory setting. We apply it in a spherical cavity with a spinning central core (spherical Couette flow) to determine the azimuthal velocity of the air filling the cavity. We excite a number of acoustic resonances (analogous to p-modes in the Sun) using a speaker and record the response with an array of small microphones and/or accelerometers on the outer sphere. Many observed acoustic modes show rotationally-induced splittings which allow us to perform an inversion to determine the air's azimuthal velocity as a function of both radius and latitude. We validate the method by comparing the velocity field obtained through inversion against the velocity profile measured with a calibrated hot film anemometer. The technique has great potential for laboratory setups involving rotating fluids in axisymmetric cavities, and we hope it will be especially useful in liquid metals. (Figure caption: acoustic spectra showing rotationally induced splittings; top, a microphone near the equator; bottom, a microphone at high latitude. Color indicates the core's rotation rate in Hz.)

  2. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  3. Experimental and numerical techniques to assess catalysis

    NASA Astrophysics Data System (ADS)

    Herdrich, G.; Fertig, M.; Petkow, D.; Steinbeck, A.; Fasoulas, S.

    2012-01-01

    Catalytic heating can be a significant portion of the thermal load experienced by a body during re-entry. Under the auspices of the NATO Research and Technology Organisation Applied Vehicle Technologies Panel Task Group AVT-136 an assessment of the current state-of-the-art in the experimental characterization and numerical simulation of catalysis on high-temperature material surfaces has been conducted. This paper gives an extraction of the final report for this effort, showing the facilities and capabilities worldwide to assess catalysis data. A corresponding summary for the modeling activities is referenced in this article.

  4. GAS CURTAIN EXPERIMENTAL TECHNIQUE AND ANALYSIS METHODOLOGIES

    SciTech Connect

    J. R. KAMM; ET AL

    2001-01-01

    The qualitative and quantitative relationship of numerical simulation to the physical phenomena being modeled is of paramount importance in computational physics. If the phenomena are dominated by irregular (i.e., nonsmooth or disordered) behavior, then pointwise comparisons cannot be made and statistical measures are required. The problem we consider is the gas curtain Richtmyer-Meshkov (RM) instability experiments of Rightley et al. (13), which exhibit complicated, disordered motion. We examine four spectral analysis methods for quantifying the experimental data and computed results: Fourier analysis, structure functions, fractal analysis, and continuous wavelet transforms. We investigate the applicability of these methods for quantifying the details of fluid mixing.

  5. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities. PMID:23949665
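
Two of the selection criteria described above, "most representative" and "maximize diversity", can be sketched with toy geometry. Each sample below is an invented point in a two-dimensional community-composition space; this is an illustration of the criteria, not the microPITA implementation.

```python
import math

# Invented 2-D community profiles for six surveyed samples.
samples = {
    "s1": (0.10, 0.90), "s2": (0.20, 0.80), "s3": (0.15, 0.85),
    "s4": (0.90, 0.10), "s5": (0.50, 0.50), "s6": (0.05, 0.95),
}

centroid = tuple(sum(p[i] for p in samples.values()) / len(samples)
                 for i in (0, 1))

# "Representative" criterion: the sample closest to the survey centroid.
representative = min(samples, key=lambda k: math.dist(samples[k], centroid))

# "Maximize diversity" criterion: greedy max-min selection of 3 follow-up
# samples, seeded here with the representative one.
chosen = [representative]
while len(chosen) < 3:
    nxt = max((k for k in samples if k not in chosen),
              key=lambda k: min(math.dist(samples[k], samples[c])
                                for c in chosen))
    chosen.append(nxt)
```

The toy result mirrors the abstract's caveat: the diversity criterion deliberately over-samples outlying community configurations, while the representative criterion stays near the bulk of the survey.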

  6. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure is established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique are then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The proceduredeveloped is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  7. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and widely applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469]. PMID:19786177
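
The tutorial's complaint about one-variable-at-a-time (OVAT) "optimization" can be made concrete with a toy response surface (invented, in coded units): when factors interact, OVAT can miss the optimum that even the smallest full factorial finds.

```python
from itertools import product

# Invented response with a strong temp x pH interaction (coded units).
def response(temp, ph):
    return 50 + 2 * temp + 2 * ph + 8 * temp * ph

# OVAT: vary temp at ph = -1, then vary ph at the "best" temp found.
# The interaction hides the true optimum from this path.
best_temp = max((-1, 1), key=lambda t: response(t, -1))
ovat_best = max(response(best_temp, p) for p in (-1, 1))

# A full 2^2 factorial visits all four corners and finds the optimum.
factorial_best = max(response(t, p) for t, p in product((-1, 1), repeat=2))
```

Here OVAT settles on the (-1, -1) corner, while the factorial reaches (+1, +1), a strictly better condition, with the same number of additional runs.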

  8. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.

  9. New experimental techniques with the split Hopkinson pressure bar

    SciTech Connect

    Frantz, C.E.; Follansbee, P.S.; Wright, W.J.

    1984-01-01

    The split Hopkinson pressure bar or Kolsky bar has provided for many years a technique for performing compression tests at strain rates approaching 10^4 s^-1. At these strain rates, the small dimensions possible in a compression test specimen give an advantage over a dynamic tensile test by allowing the stress within the specimen to equilibrate within the shortest possible time. The maximum strain rates possible with this technique are limited by stress wave propagation in the elastic pressure bars as well as in the deforming specimen. This subject is reviewed in this paper, and it is emphasized that a slowly rising excitation is preferred to one that rises steeply. Experimental techniques for pulse shaping and a numerical procedure for correcting the raw data for wave dispersion in the pressure bars are presented. For tests at elevated temperature, a bar mover apparatus has been developed which effectively brings the cold pressure bars into contact with the specimen, which is heated with a specially designed furnace, shortly before the pressure wave arrives. This procedure has been used successfully in tests at temperatures as high as 1000°C.
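
The data reduction behind this technique follows the classical one-wave Kolsky relations: specimen strain rate from the reflected pulse, stress from the transmitted pulse. A minimal sketch, with all material and geometry numbers invented for illustration:

```python
# Classical one-wave Kolsky-bar data reduction (standard relations; the
# numbers are invented). eps_r and eps_t are the reflected and transmitted
# strain-gauge signals after dispersion correction, sampled at interval dt.
E = 200e9                         # bar elastic modulus, Pa
c0 = 5000.0                       # elastic wave speed in the bar, m/s
A_bar, A_spec = 1.0e-4, 2.0e-5    # bar / specimen cross-sections, m^2
L_spec = 5.0e-3                   # specimen gauge length, m
dt = 1.0e-6                       # sampling interval, s

eps_r = [-0.001] * 10             # reflected pulse (constant => constant rate)
eps_t = [0.0005] * 10             # transmitted pulse

strain_rate = [-2.0 * c0 / L_spec * e for e in eps_r]    # 1/s
strain = [0.0]
for rate in strain_rate:                                 # Euler time integral
    strain.append(strain[-1] + rate * dt)
stress = [E * A_bar / A_spec * e for e in eps_t]         # Pa
```

With these toy signals the specimen deforms at a constant 2000 s^-1, reaching 2% strain at a flow stress of 500 MPa; real records require the dispersion correction the abstract describes before these formulas apply.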

  10. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  11. FPGAs in Space Environment and Design Techniques

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Day, John H. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of Field Programmable Gate Arrays (FPGA) in the space environment and design techniques. Details are given on the effects of the space radiation environment, total radiation dose, single event upset, single event latchup, single event transient, antifuse technology and gate rupture, proton upsets and sensitivity, and loss of functionality.

  12. Austin chalk stimulation techniques and design

    SciTech Connect

    Parker, C.D.; Weber, D.; Garza, D.; Swaner, S.

    1982-01-01

    This study presents the completion and stimulation design techniques being used in the Austin Chalk Formation in the Giddings field and Gonzales County, Texas. As background, the history of the Giddings field and the development of the Austin Chalk are discussed. The main purpose is to examine the factors affecting fracture treatment design, including fracture height, pump rates, types of fracturing fluids, proppant concentrations, and leak-off controls, to ensure an effective and successful stimulation treatment. Possible alternative design considerations for future fracture treatments are also discussed.

  13. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same…
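
    The inference/inquiry loop described in this abstract can be sketched in a few lines. The following is an illustrative toy, not the thesis code: two hypothetical candidate models, two candidate experiments with made-up outcome likelihoods, and an inquiry step that picks the experiment whose predictive distribution over outcomes has maximum entropy.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (in nats)."""
    return -sum(q * math.log(q) for q in p if q > 0)

def predictive(prior, likelihoods):
    """Distribution over outcomes, marginalised over the candidate models."""
    n_outcomes = len(likelihoods[0])
    return [sum(prior[m] * likelihoods[m][o] for m in range(len(prior)))
            for o in range(n_outcomes)]

def select_experiment(prior, experiments):
    """Index of the experiment whose predicted outcomes have maximum entropy."""
    scores = [entropy(predictive(prior, lik)) for lik in experiments]
    return scores.index(max(scores))

# Two candidate models, equally probable a priori (invented numbers).
prior = [0.5, 0.5]
# Experiment 0: both models predict the same outcomes -> uninformative.
exp0 = [[0.9, 0.1], [0.9, 0.1]]
# Experiment 1: the models disagree sharply -> informative.
exp1 = [[0.9, 0.1], [0.1, 0.9]]

best = select_experiment(prior, [exp0, exp1])
```

    For flat priors, maximum predictive entropy coincides with maximum expected information gain, which is why the inquiry engine can use entropy as its selection criterion.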

  14. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  15. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  16. Evaluation of Advanced Retrieval Techniques in an Experimental Online Catalog.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1992-01-01

    Discusses subject searching problems in online library catalogs; explains advanced information retrieval (IR) techniques; and describes experiments conducted on a test collection database, CHESHIRE (California Hybrid Extended SMART for Hypertext and Information Retrieval Experimentation), which was created to evaluate IR techniques in online…

  17. A Novel Technique for Experimental Flow Visualization of Mechanical Valves.

    PubMed

    Huang Zhang, Pablo S; Dalal, Alex R; Kresh, J Yasha; Laub, Glenn W

    2016-01-01

    The geometry of the hinge region in mechanical heart valves has been postulated to play an important role in the development of thromboembolic events (TEs). This study describes a novel technique developed to visualize washout characteristics in mechanical valve hinge areas. A dairy-based colloidal suspension (DBCS) was used as a high-contrast tracer. It was introduced directly into the hinge-containing sections of two commercially available valves mounted in laser-milled fluidic channels and subsequently washed out at several flow rates. Time-lapse images were analyzed to determine the average washout rate and generate intensity topography maps of the DBCS clearance. As flow increased, washout improved and clearance times were shorter in all cases. Significantly different washout rate time constants were observed between valves, average >40% faster clearance (p < 0.01). The topographic maps revealed that each valve had a characteristic pattern of washout. The technique proved reproducible with a maximum recorded standard error of mean (SEM) of ±3.9. Although the experimental washout dynamics have yet to be correlated with in vivo visualization studies, the methodology may help identify key flow features influencing TEs. This visualization methodology can be a useful tool to help evaluate stagnation zones in new and existing heart valve hinge designs. PMID:26554553

  18. An Experimental Study for Effectiveness of Super-Learning Technique at Elementary Level in Pakistan

    ERIC Educational Resources Information Center

    Shafqat, Hussain; Muhammad, Sarwar; Imran, Yousaf; Naemullah; Inamullah

    2010-01-01

    The objective of the study was to experience the effectiveness of super-learning technique of teaching at elementary level. The study was conducted with 8th grade students at a public sector school. Pre-test and post-test control group designs were used. Experimental and control groups were formed randomly, the experimental group (N = 62),…

  19. Rational Experimental Design for Electrical Resistivity Imaging

    NASA Astrophysics Data System (ADS)

    Mitchell, V.; Pidlisecky, A.; Knight, R.

    2008-12-01

    Over the past several decades advances in the acquisition and processing of electrical resistivity data, through multi-channel acquisition systems and new inversion algorithms, have greatly increased the value of these data to near-surface environmental and hydrological problems. There has, however, been relatively little advancement in the design of actual surveys. Data acquisition still typically involves using a small number of traditional arrays (e.g. Wenner, Schlumberger) despite a demonstrated improvement in data quality from the use of non-standard arrays. While optimized experimental design has been widely studied in applied mathematics and the physical and biological sciences, it is rarely implemented for non-linear problems, such as electrical resistivity imaging (ERI). We focus specifically on using ERI in the field for monitoring changes in the subsurface electrical resistivity structure. For this application we seek an experimental design method that can be used in the field to modify the data acquisition scheme (spatial and temporal sampling) based on prior knowledge of the site and/or knowledge gained during the imaging experiment. Some recent studies have investigated optimized design of electrical resistivity surveys by linearizing the problem or with computationally-intensive search algorithms. We propose a method for rational experimental design based on the concept of informed imaging, the use of prior information regarding subsurface properties and processes to develop problem-specific data acquisition and inversion schemes. Specifically, we use realistic subsurface resistivity models to aid in choosing source configurations that maximize the information content of our data. Our approach is based on first assessing the current density within a region of interest, in order to provide sufficient energy to the region of interest to overcome a noise threshold, and then evaluating the direction of current vectors, in order to maximize the

  20. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation of the SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.
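
    The contrast between one-factor-at-a-time search and a designed experiment can be sketched as follows. Everything here is invented for illustration (the factor names, levels, and the toy response surface are not from the review): a full-factorial design covers every level combination and therefore cannot miss an interaction between pH and aggregating-salt concentration.

```python
import itertools

def toy_response(ph, salt_mM, colloid_ratio):
    # Stand-in for a measured SERS signal; note the interaction term:
    # the best pH depends on the salt level.
    return -(ph - (4 + 0.1 * salt_mM)) ** 2 - (colloid_ratio - 1.0) ** 2

# Hypothetical factor levels for a colloidal SERS optimisation.
levels = {
    "ph": [3, 4, 5, 6],
    "salt_mM": [0, 10, 20],
    "colloid_ratio": [0.5, 1.0, 2.0],
}

# Full factorial: every combination of levels (4 * 3 * 3 = 36 runs).
runs = list(itertools.product(*levels.values()))
best_run = max(runs, key=lambda r: toy_response(*r))
```

    Evolutionary methods, as discussed in the review, replace the exhaustive enumeration with a population that is mutated and recombined toward higher response, which scales better when the factorial grid becomes too large to run.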

  1. Simulation as an aid to experimental design

    SciTech Connect

    Frazer, J.W.; Balaban, D.J.; Wang, J.L.

    1983-05-01

    Simulators of chemical reactions can aid the scientist in experimental design and are of great value when studying enzyme kinetics. One such simulator is a numerical ordinary-differential-equation solver that uses interactive graphics to give the user the capability to simulate an extremely wide range of enzyme reaction conditions for many types of single-substrate reactions. The concentration vs. time profiles of any subset or all nine states of a complex reaction can be displayed with and without simulated instrumental noise. Thus the user can estimate the practicality of any proposed experiment given known instrumental noise. The experimenter can readily determine which state provides the most information related to the proposed kinetic parameters and mechanism. A general discussion of the program, including the nondimensionalization of the set of differential equations, is included. Finally, several simulation examples are shown and the results discussed.
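
    A minimal sketch of the kind of simulation described, not the original program (which handled nine states and interactive graphics): forward-Euler integration of the basic Michaelis-Menten scheme E + S ⇌ ES → E + P, with optional Gaussian noise standing in for instrumental noise. All rate constants and concentrations are invented.

```python
import random

def simulate(k1=1.0, k_1=0.5, k2=0.3, e0=1.0, s0=10.0,
             dt=0.001, steps=20000, noise_sd=0.0, seed=0):
    """Return the observed product trace P(t), optionally with noise."""
    rng = random.Random(seed)
    e, s, es, p = e0, s0, 0.0, 0.0
    trace = []
    for _ in range(steps):
        v_bind = k1 * e * s - k_1 * es   # net formation of the ES complex
        v_cat = k2 * es                  # catalytic step ES -> E + P
        e += (-v_bind + v_cat) * dt
        s += -v_bind * dt
        es += (v_bind - v_cat) * dt
        p += v_cat * dt
        obs = p + (rng.gauss(0.0, noise_sd) if noise_sd else 0.0)
        trace.append(obs)
    return trace

clean = simulate()
noisy = simulate(noise_sd=0.05)
```

    Comparing `clean` and `noisy` traces is exactly the judgment the abstract describes: whether a proposed measurement of a given state would still be informative once realistic instrumental noise is added.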

  2. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  3. Parameter estimation and optimal experimental design.

    PubMed

    Banga, Julio R; Balsa-Canto, Eva

    2008-01-01

    Mathematical models are central in systems biology and provide new ways to understand the function of biological systems, helping in the generation of novel and testable hypotheses and supporting a rational framework for possible ways of intervention, such as genetic engineering, drug development, or the treatment of diseases. Since the amount and quality of experimental 'omics' data continue to increase rapidly, there is a great need for model-building methods that can handle this complexity. In the present chapter we review two key steps of the model-building process, namely parameter estimation (model calibration) and optimal experimental design. Parameter estimation aims to find the unknown parameters of the model which give the best fit to a set of experimental data. Optimal experimental design aims to devise the dynamic experiments which provide the maximum information content for subsequent non-linear model identification, estimation and/or discrimination. We place emphasis on the need for robust global optimization methods for the proper solution of these problems, and we present a motivating example considering a cell signalling model. PMID:18793133
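
    The parameter-estimation step can be sketched on a toy problem (the chapter's cell-signalling example is far richer). The saturating model, the "true" parameters, and the crude multistart random search standing in for a robust global optimiser are all invented for illustration.

```python
import random

def model(t, a, b):
    # Toy saturating response, e.g. a signal that plateaus at level a.
    return a * t / (b + t)

true_a, true_b = 2.0, 1.5
times = [0.5, 1, 2, 4, 8]
data = [model(t, true_a, true_b) for t in times]   # noiseless synthetic data

def sse(a, b):
    # Least-squares objective for parameter estimation (model calibration).
    return sum((model(t, a, b) - y) ** 2 for t, y in zip(times, data))

# Crude global search: 20,000 random candidate parameter pairs.
rng = random.Random(1)
best = min(((rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0))
            for _ in range(20000)), key=lambda p: sse(*p))
```

    Even on this toy, the objective surface has a long correlated valley in (a, b), which is why the chapter stresses global rather than purely local optimisation.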

  4. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small-animal experimental models are frequently used, since many questions about the biological and biochemical effects of ionizing radiation remain unsolved. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV linac. The rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. Comparisons between Pencil Beam and Monte Carlo techniques were used to evaluate the accuracy of the dose calculated with a commercial planning system. Challenges in this murine model are discussed.

  5. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    Some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response are unified. The techniques have the potential for reducing the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.
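
    The idea of choosing test points to minimize prediction error can be shown on the simplest case, a hedged sketch rather than the report's derivation: for the first-order model y = b0 + b1·x on x in [-1, 1], the scaled prediction variance at x is (1, x)(XᵀX)⁻¹(1, x)ᵀ, and spreading the runs to the endpoints (the classical optimal choice) beats clustering them at the centre.

```python
def prediction_variance(xs, x):
    """Scaled variance of the predicted response at x for design points xs,
    for the straight-line model with regressor columns [1, x]."""
    n = len(xs)
    sx = sum(v for v in xs)
    sxx = sum(v * v for v in xs)
    det = n * sxx - sx * sx            # determinant of X'X
    # Closed-form inverse of the 2x2 matrix X'X.
    inv = ((sxx / det, -sx / det), (-sx / det, n / det))
    row = (1.0, x)
    return sum(row[i] * inv[i][j] * row[j] for i in range(2) for j in range(2))

endpoint_design = [-1, -1, 1, 1]
clustered_design = [-0.2, -0.1, 0.1, 0.2]

# Worst-case prediction variance over the region [-1, 1].
v_end = max(prediction_variance(endpoint_design, x / 10) for x in range(-10, 11))
v_clu = max(prediction_variance(clustered_design, x / 10) for x in range(-10, 11))
```

    Minimizing the worst-case value of this variance over the region is one standard criterion (G-optimality); the report's second-order model extends the same computation to quadratic regressor columns.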

  6. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.

  7. Pipelining and dataflow techniques for designing supercomputers

    SciTech Connect

    Su, S.P.

    1982-01-01

    Extensive research has been conducted over the last two decades in developing supercomputers to meet the demand for high computational performance. This thesis investigates some pipelining and dataflow techniques for designing supercomputers. In the pipelining area, new techniques are developed for scheduling vector instructions in a multi-pipeline supercomputer and for constructing VLSI matrix arithmetic pipelines for large-scale matrix computations. In the dataflow area, a new approach is proposed to dispatch high-level functions for dependence-driven computations. A parallel task scheduling model is proposed for multi-pipeline vector supercomputers. This model can be applied to explore maximal concurrencies in vector supercomputers with a structure generalized from the CRAY-1, CYBER-205, and TI-ASC. The optimization problem of simultaneously scheduling multiple pipelines is proved to be NP-complete. Thus, heuristic scheduling algorithms for some restricted classes of vector task systems are developed. Nearly optimal performance can be achieved with the proposed parallel pipeline scheduling method. Simulation results on randomly generated task systems are presented to verify the analytical performance bounds. For dependence-driven computations, a dataflow controller is used to perform run-time scheduling of compound functions. The scheduling problem is shown to be NP-complete. Several heuristic scheduling strategies are proposed based on the time and resource demands of compound functions.

  8. Aeroshell Design Techniques for Aerocapture Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Dyke, R. Eric; Hrinda, Glenn A.

    2004-01-01

    A major goal of NASA's In-Space Propulsion Program is to shorten trip times for scientific planetary missions. To meet this challenge, arrival speeds will increase, requiring significant braking for orbit insertion and thus an increased deceleration propellant mass that may exceed launch lift capabilities. A technology called aerocapture has been developed to expand the mission potential of exploratory probes destined for planets with suitable atmospheres. Aerocapture inserts a probe into planetary orbit via a single pass through the atmosphere, using the probe's aeroshell drag to reduce velocity. The benefit of an aerocapture maneuver is a large reduction in propellant mass, which may result in smaller, less costly missions and reduced mission cruise times. The methodology used to design rigid aerocapture aeroshells is presented, with an emphasis on a new systems tool under development. Efforts within NASA to develop fast, efficient methods for evaluating the structural systems of exploratory vehicles destined for planets and moons within our solar system have had limited success. Many of the systems tools that have been attempted applied structural mass estimation techniques based on historical data and curve fitting, which are difficult and cumbersome to apply to new vehicle concepts and missions. The resulting vehicle aeroshell mass may be incorrectly estimated or carry high margins to account for uncertainty. The new tool will reduce the guesswork previously found in conceptual aeroshell mass estimations.

  9. Automatic Molecular Design using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    Molecular nanotechnology is the precise, three-dimensional control of materials and devices at the atomic scale. An important part of nanotechnology is the design of molecules for specific purposes. This paper describes early results using genetic software techniques to automatically design molecules under the control of a fitness function. The fitness function must be capable of determining which of two arbitrary molecules is better for a specific task. The software begins by generating a population of random molecules. The population is then evolved towards greater fitness by randomly combining parts of the better individuals to create new molecules. These new molecules then replace some of the worst molecules in the population. The unique aspect of our approach is that we apply genetic crossover to molecules represented by graphs, i.e., sets of atoms and the bonds that connect them. We present evidence suggesting that crossover alone, operating on graphs, can evolve any possible molecule given an appropriate fitness function and a population containing both rings and chains. Prior work evolved strings or trees that were subsequently processed to generate molecular graphs. In principle, genetic graph software should be able to evolve other graph representable systems such as circuits, transportation networks, metabolic pathways, computer networks, etc.
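
    The graph-crossover idea described above can be illustrated on toy molecules. This sketch is not the paper's implementation: atoms are bare element labels, bonds are index pairs, valence is ignored, and the operator simply cuts one bond in each parent, keeps one fragment from each, and joins the fragments with a new bond at the cut sites.

```python
import random

def fragments(atoms, bonds, cut):
    """Split a connected graph into the two components left by removing
    the bond `cut`; returns (component containing cut[0], the rest)."""
    adj = {i: set() for i in range(len(atoms))}
    for a, b in bonds:
        if (a, b) != cut:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = {cut[0]}, [cut[0]]
    while stack:
        for n in adj[stack.pop()]:
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return seen, set(range(len(atoms))) - seen

def crossover(p1, p2, rng):
    """Child = fragment of parent 1 + fragment of parent 2, joined by a bond."""
    (a1, b1), (a2, b2) = p1, p2
    cut1, cut2 = rng.choice(b1), rng.choice(b2)
    keep1, _ = fragments(a1, b1, cut1)
    keep2, _ = fragments(a2, b2, cut2)
    # Re-index the kept fragments into one child graph.
    map1 = {i: k for k, i in enumerate(sorted(keep1))}
    map2 = {i: k + len(map1) for k, i in enumerate(sorted(keep2))}
    atoms = [a1[i] for i in sorted(keep1)] + [a2[i] for i in sorted(keep2)]
    bonds = ([(map1[a], map1[b]) for a, b in b1
              if a in keep1 and b in keep1 and (a, b) != cut1] +
             [(map2[a], map2[b]) for a, b in b2
              if a in keep2 and b in keep2 and (a, b) != cut2])
    # New bond joins the two dangling cut sites.
    bonds.append((map1[cut1[0]], map2[cut2[0]]))
    return atoms, bonds

rng = random.Random(0)
ethanol = (["C", "C", "O"], [(0, 1), (1, 2)])   # heavy-atom skeleton only
propane = (["C", "C", "C"], [(0, 1), (1, 2)])
child = crossover(ethanol, propane, rng)
```

    Because both parents here are acyclic, each kept fragment is a tree, so the child has exactly one bond fewer than it has atoms; real molecular GAs add valence checks and ring-aware cuts on top of this skeleton.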

  10. Nonlinear potential analysis techniques for supersonic-hypersonic aerodynamic design

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Clever, W. C.

    1984-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a conceptual-configuration-design level of effort. Second-order small-disturbance and full-potential theories were utilized to meet this objective. Numerical codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes.

  11. An Experimental Investigation of a Technique for Predicting Gains from a Special Reading Program.

    ERIC Educational Resources Information Center

    Gill, Patrick Ralston

    This study was an experimental investigation designed to ascertain the effectiveness of a technique for predicting student success in a special reading program. The disparity between a student's score on a reading test taken silently and his score on an equivalent form which was read orally by the investigator as the student read it silently was…

  12. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the question of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of one given design via the objective function) and stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
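
    The multi-objective selection step can be sketched conceptually. This is not the authors' algorithm: the "resolution" proxy, the cost model, and the candidate survey parameters are all made up. It shows the core mechanic a multi-objective genetic algorithm applies between generations: score each design on several objectives and keep the Pareto-non-dominated set.

```python
import random

def objectives(n_receivers, spacing_m):
    """Two hypothetical objectives for a candidate survey."""
    resolution = n_receivers * spacing_m ** 0.5   # toy information proxy (maximise)
    cost = 1000 + 150 * n_receivers               # toy cost model (minimise)
    return resolution, cost

def dominates(p, q):
    """p dominates q if it is no worse on both objectives and strictly
    better on at least one (maximise resolution, minimise cost)."""
    return (p[0] >= q[0] and p[1] <= q[1]) and (p[0] > q[0] or p[1] < q[1])

# Random candidate surveys: (number of receivers, receiver spacing in metres).
rng = random.Random(3)
designs = [(rng.randint(4, 40), rng.choice([50, 100, 200])) for _ in range(30)]
scored = [(d, objectives(*d)) for d in designs]

# Keep designs not dominated by any other candidate.
pareto = [d for d, s in scored
          if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]
```

    A genetic algorithm would now breed the next generation from this front; note that, as the abstract emphasises, no field data are needed to run the search.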

  13. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by deriving inductively selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution…

  14. Computational procedures for optimal experimental design in biological systems.

    PubMed

    Balsa-Canto, E; Alonso, A A; Banga, J R

    2008-07-01

    Mathematical models of complex biological systems, such as metabolic or cell-signalling pathways, usually consist of sets of nonlinear ordinary differential equations which depend on several non-measurable parameters that can hopefully be estimated by fitting the model to experimental data. However, the success of this fitting is largely conditioned by the quantity and quality of data. Optimal experimental design (OED) aims to design the scheme of actuations and measurements which will result in data sets with the maximum amount and/or quality of information for the subsequent model calibration. New methods and computational procedures for OED in the context of biological systems are presented. The OED problem is formulated as a general dynamic optimisation problem where the time-dependent stimuli profiles, the location of sampling times, the duration of the experiments and the initial conditions are regarded as design variables. Its solution is approached using the control vector parameterisation method. Since the resultant nonlinear optimisation problem is in most cases non-convex, the use of a robust global nonlinear programming solver is proposed. To compare different experimental schemes, a Monte-Carlo-based identifiability analysis is then suggested. The applicability and advantages of the proposed techniques are illustrated by considering an example related to a cell-signalling pathway. PMID:18681746
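
    The Monte-Carlo identifiability idea can be sketched on a toy model (the decay model, noise level, and sampling schemes below are invented, not from the paper): generate many synthetic data sets under a known "true" parameter, refit each one, and use the spread of the estimates as an identifiability measure for a candidate experimental scheme.

```python
import math
import random

def model(t, k):
    # First-order exponential decay observed at time t.
    return math.exp(-k * t)

def fit_k(times, ys):
    # One-parameter least squares by dense grid search (adequate for a toy).
    grid = [i / 200 for i in range(1, 600)]
    return min(grid, key=lambda k: sum((model(t, k) - y) ** 2
                                       for t, y in zip(times, ys)))

def mc_spread(times, true_k=0.7, noise_sd=0.05, n_runs=50, seed=7):
    """Standard deviation of refitted estimates across synthetic data sets."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_runs):
        ys = [model(t, true_k) + rng.gauss(0, noise_sd) for t in times]
        estimates.append(fit_k(times, ys))
    mean = sum(estimates) / len(estimates)
    return (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5

# A scheme that samples where the output is sensitive to k beats one that
# samples only after the signal has decayed into the noise floor.
good = mc_spread(times=[0.5, 1.0, 2.0, 3.0])
poor = mc_spread(times=[8.0, 9.0, 10.0, 11.0])
```

    Comparing `good` and `poor` is the scheme comparison the paper formalises; in the full method, the sampling times themselves become design variables of the dynamic optimisation.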

  15. [Design and experimentation of marine optical buoy].

    PubMed

    Yang, Yue-Zhong; Sun, Zhao-Hua; Cao, Wen-Xi; Li, Cai; Zhao, Jun; Zhou, Wen; Lu, Gui-Xin; Ke, Tian-Cun; Guo, Chao-Ying

    2009-02-01

    Marine optical buoys are of great value for the calibration and validation of ocean color remote sensing, scientific observation, coastal environment monitoring, etc. A marine optical buoy system was designed which consists of a main and a slave buoy. The system can synchronously measure the distribution of irradiance and radiance above the sea surface, in the layer near the sea surface, and in the euphotic zone, while also acquiring other parameters such as the spectral absorption and scattering coefficients of the water column and the velocity and direction of the wind. The buoy is positioned by GPS. A low-power integrated PC104 computer is used as the control core to collect data automatically. Data and commands are transmitted in real time over CDMA/GPRS wireless networks or by maritime satellite. Coastal marine experimentation demonstrated that the buoy has small pitch and roll rates in high sea state conditions and thus can meet the needs of underwater radiometric measurements, that the data collection and remote transmission are reliable, and that the automatic anti-biofouling devices can keep the optical sensors working effectively for a period of several months. PMID:19445253

  16. Experimental investigation of slope flows via image analysis techniques

    NASA Astrophysics Data System (ADS)

    Moroni, Monica; Giorgilli, Marco; Cenedese, Antonio

    2014-02-01

    A vessel filled with distilled water is used to simulate the local circulation in the surroundings of an urban area that is situated in a mountain valley. The purpose of this study is to establish if the experimental setup is suitable for the investigation of katabatic and anabatic flows and their interaction with an urban heat island. Flow fields are derived by means of Feature Tracking and temperature fields are directly measured with thermocouples. The technique employed allows obtaining a high spatio-temporal resolution, providing robust statistics for the characterization of the fluid-dynamic field. General qualitative comparisons are made with expectations from analytical models. It appeared that the experimental setup as used in this study can be used for reproducing the phenomena occurring in the atmospheric boundary layer.

  17. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  18. Video/Computer Techniques for Static and Dynamic Experimental Mechanics

    NASA Astrophysics Data System (ADS)

    Maddux, Gene E.

    1987-09-01

    Recent advances in video camera and processing technology, coupled with the development of relatively inexpensive but powerful mini- and micro-computers are providing new capabilities for the experimentalist. This paper will present an overview of current areas of application and an insight into the selection of video/computer systems. The application of optical techniques for most experimental mechanics efforts involves the generation of fringe patterns that can be related to the response of an object to some loading condition. The data reduction process may be characterized as a search for fringe position information. These techniques include methods such as holographic interferometry, speckle metrology, moire, and photoelasticity. Although considerable effort has been expended in developing specialized techniques to convert these patterns to useful engineering data, there are particular advantages to the video approach. Other optical techniques are used which do not produce fringe patterns. Among these is a relatively new area of video application; that of determining the time-history of the response of a structure to dynamic excitation. In particular, these systems have been used to perform modal surveys of large, flexible space structures which make the use of conventional test instrumentation difficult, if not impossible. Video recordings of discrete targets distributed on a vibrating structure can be processed to obtain displacement, velocity, and acceleration data.

  19. An infrared technique for evaluating turbine airfoil cooling designs

    SciTech Connect

    Sweeney, P.C.; Rhodes, J.F.

    2000-01-01

    An experimental approach is used to evaluate turbine airfoil cooling designs for advanced gas turbine engine applications by incorporating double-wall film-cooled design features into large-scale flat plate specimens. An infrared (IR) imaging system is used to make detailed, two-dimensional steady-state measurements of flat plate surface temperature with spatial resolution on the order of 0.4 mm. The technique employs a cooled zinc selenide window transparent to infrared radiation and calibrates the IR temperature readings to reference thermocouples embedded in each specimen, yielding a surface temperature measurement accuracy of ±4 °C. With minimal thermocouple installation required, the flat plate/IR approach is cost effective, essentially nonintrusive, and produces abundant results quickly. Design concepts can proceed from art to part to data in a manner consistent with aggressive development schedules. The infrared technique is demonstrated here by considering the effect of film hole injection angle for a staggered array of film cooling holes integrated with a highly effective internal cooling pattern. Heated free stream air and room temperature cooling air are used to produce a nominal temperature ratio of 2 over a range of blowing ratios from 0.7 to 1.5. Results were obtained at hole angles of 90 and 30 deg for two different hole spacings and are presented in terms of overall cooling effectiveness.
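
    The overall cooling effectiveness used to present such results is conventionally defined as the gas-to-wall temperature difference normalized by the gas-to-coolant difference. A minimal sketch of that standard definition (the temperatures below are illustrative, not the paper's data):

    ```python
    def overall_effectiveness(t_gas, t_wall, t_coolant):
        """Overall cooling effectiveness: phi = (Tgas - Twall) / (Tgas - Tcoolant).
        phi = 1 means the wall sits at coolant temperature; phi = 0 means no cooling."""
        return (t_gas - t_wall) / (t_gas - t_coolant)

    # Illustrative temperatures (K) with the nominal gas-to-coolant ratio of 2:
    print(overall_effectiveness(t_gas=600.0, t_wall=450.0, t_coolant=300.0))  # -> 0.5
    ```

    With a temperature ratio of 2, effectiveness values near 1 indicate that the combined film and internal cooling hold the wall close to the coolant temperature.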

  20. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
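
    The selection criterion described above can be illustrated with a toy sketch: score each candidate experiment by the Shannon entropy of its predicted outcome distribution and pick the maximizer. This brute-force loop is the baseline that nested entropy sampling is designed to beat; the experiment names and distributions are invented for illustration:

    ```python
    import math

    def shannon_entropy(probs):
        """Shannon entropy (in bits) of a discrete outcome distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def most_informative(candidates):
        """Return the candidate experiment whose predicted outcome
        distribution has the highest entropy, i.e. promises the most
        information on average."""
        return max(candidates, key=lambda name: shannon_entropy(candidates[name]))

    # Invented candidates: predicted outcome probabilities from the current model set.
    candidates = {
        "exp_A": [0.9, 0.1],   # outcome nearly certain -> little to learn
        "exp_B": [0.5, 0.5],   # maximally uncertain -> most informative
        "exp_C": [0.8, 0.2],
    }
    print(most_informative(candidates))  # -> exp_B
    ```

    In a real design space the candidate set is too large to enumerate, which is exactly the regime where the rising-threshold sampling strategy pays off.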

  1. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  2. Circular machine design techniques and tools

    SciTech Connect

    Servranckx, R.V.; Brown, K.L.

    1986-04-01

    Some of the basic optics principles involved in the design of circular accelerators such as Alternating Gradient Synchrotrons, Storage and Collision Rings, and Pulse Stretcher Rings are outlined. Typical problems facing a designer are defined, and the main references and computational tools presently available are reviewed. Two classes of problems that typically occur in accelerator design are listed: global value problems, which affect the control of parameters characteristic of the complete closed circular machine, and local value problems. Basic mathematical formulae considered useful for a first draft of a design are given. The basic optics building blocks that can be used to formulate an initial machine design are introduced, giving only their elementary properties and transfer matrices in one transverse plane. Solutions are presented for some first-order and second-order design problems. (LEW)

  3. Experimental Technique for Studying Aerosols of Lyophilized Bacteria

    PubMed Central

    Cox, Christopher S.; Derr, John S.; Flurie, Eugene G.; Roderick, Roger C.

    1970-01-01

    An experimental technique is presented for studying aerosols generated from lyophilized bacteria by using Escherichia coli B, Bacillus subtilis var. niger, Enterobacter aerogenes, and Pasteurella tularensis. An aerosol generator capable of creating fine particle aerosols of small quantities (10 mg) of lyophilized powder under controlled conditions of exposure to the atmosphere is described. The physical properties of the aerosols are investigated as to the distribution of number of aerosol particles with particle size as well as to the distribution of number of bacteria with particle size. Biologically unstable vegetative cells were quantitated physically by using 14C and Europium chelate stain as tracers, whereas the stable heat-shocked B. subtilis spores were assayed biologically. The physical persistence of the lyophilized B. subtilis aerosol is investigated as a function of size of spore-containing particles. The experimental result that physical persistence of the aerosol in a closed aerosol chamber increases as particle size is decreased is satisfactorily explained on the bases of electrostatic, gravitational, inertial, and diffusion forces operating to remove particles from the particular aerosol system. The net effect of these various forces is to provide, after a short time interval in the system (about 2 min), an aerosol of fine particles with enhanced physical stability. The dependence of physical stability of the aerosol on the species of organism and the nature of the suspending medium for lyophilization is indicated. Also, limitations and general applicability of both the technique and results are discussed. PMID:4992657

  4. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Levine, William S.; Barlow, Jewel

    1993-01-01

    This report summarizes the work that was done on the project from 1 Apr. 1992 to 31 Mar. 1993. The main goal of this research is to develop a practical tool for rotorcraft control system design based on interactive optimization tools (CONSOL-OPTCAD) and classical rotorcraft design considerations (ADOCS). This approach enables the designer to combine engineering intuition and experience with parametric optimization. The combination should make it possible to produce a better design faster than would be possible using either pure optimization or pure intuition and experience. We emphasize that the goal of this project is not to develop an algorithm. It is to develop a tool. We want to keep the human designer in the design process to take advantage of his or her experience and creativity. The role of the computer is to perform the calculation necessary to improve and to display the performance of the nominal design. Briefly, during the first year we have connected CONSOL-OPTCAD, an existing software package for optimizing parameters with respect to multiple performance criteria, to a simplified nonlinear simulation of the UH-60 rotorcraft. We have also created mathematical approximations to the Mil-specs for rotorcraft handling qualities and input them into CONSOL-OPTCAD. Finally, we have developed the additional software necessary to use CONSOL-OPTCAD for the design of rotorcraft controllers.

  5. Comparison of deaerator performance using experimental and numerical techniques

    NASA Astrophysics Data System (ADS)

    Majji, Sri Harsha

    A deaerator is a component of the integrated drive generator (IDG), used to separate air from oil. The integrated drive generator is the main power generation unit used in aircraft to generate electric power and must be cooled to give maximum efficiency. Mobil Jet Oil II is used in these IDGs as a lubricant and coolant, so, in order to obtain high-quality oil, a deaerator is used to remove entrapped air from the oil using the centrifugal principle. Air may become entrapped through the operation of the vacuum and high-pressure pumps. In this study, the performance of the 75/90 IDG generic and A320 classic deaerators was evaluated using both experimental and numerical techniques. Experimental data were collected from a deaerator test rig, and numerical data were obtained from CFD simulations (the software used was ANSYS CFX). The experimental and numerical results were compared, as were the 75/90 generic and A320 classic deaerators. A parametric study on deaerator flow separation and inner geometry was also performed. This work also includes a comparison of different multiphase models and different meshes applied to the deaerator numerical test methodology.

  6. Implementation of high throughput experimentation techniques for kinetic reaction testing.

    PubMed

    Nagy, Anton J

    2012-02-01

    Successful implementation of High Throughput Experimentation (HTE) tools has resulted in their increased acceptance as essential tools in chemical, petrochemical, and polymer R&D laboratories. This article provides a number of concrete examples of HTE systems that have been designed and successfully implemented in studies focused on deriving reaction kinetic data. The implementation of HTE tools for performing kinetic studies of both catalytic and non-catalytic systems results in significantly faster acquisition of the high-quality kinetic modeling data required to quantitatively predict the behavior of complex, multistep reactions. PMID:21902639

  7. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  8. Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory

    2007-01-01

    NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.

  9. CMOS-array design-automation techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.; Lombardt, T.

    1979-01-01

    This thirty-four-page report discusses the design of a 4,096-bit complementary metal oxide semiconductor (CMOS) read-only memory (ROM). The CMOS ROM is either mask- or laser-programmable. The report is divided into six sections: section one describes the background of ROM chips; section two presents the design goals for the chip; section three discusses chip implementation and chip statistics; conclusions and recommendations are given in sections four through six.

  10. [Early Diagnosis of Osteoarthritis: Clinical Reality and Promising Experimental Techniques].

    PubMed

    Arnscheidt, C; Meder, A; Rolauffs, B

    2016-06-01

    It is considered that the structural damage in early osteoarthritis (OA) is potentially reversible. It is therefore particularly important for orthopaedic and trauma surgery to develop strategies and technologies for diagnosing early OA processes. This review presents 3 case reports to illustrate the current clinical diagnostic procedure for OA. Experimental techniques with translational character are discussed in the context of the detection of early degenerative processes relevant to OA. Non-invasive imaging methods such as quantitative MRI, ultrasound, optical coherence tomography (OCT), scintigraphy and diffraction-enhanced synchrotron imaging (DEI), as well as biochemical methods and proteomics, are considered. Early detection of OA is reviewed with minimally invasive techniques, such as arthroscopy, as well as the combination of arthroscopic techniques with indentation, spectrometry, and multiphoton microscopy. In addition, a brief summary of macroscopic and histologic scores is presented. Finally, the spatial organisation of joint surface chondrocytes as an image-based biomarker is used to illustrate an early OA detection strategy that focuses on early changes in tissue architecture, potentially prior to damage. In summary, multiple translational techniques are able to detect early OA processes, but we do not know whether they truly represent the initial events. Moreover, at this point it is difficult to judge the future clinical relevance of these procedures and to compare their efficacy, as there have been no comparative studies. However, the expected gain in knowledge will hopefully help us to attain a more comprehensive understanding of early OA and to develop novel methods for its early diagnosis, therapy, and prevention. Overall, the clinical diagnosis of early OA remains one of the greatest challenges of our field. We still face uncharted territory. PMID:26894867

  11. EXPERIMENTAL DESIGN: STATISTICAL CONSIDERATIONS AND ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this book chapter, information on how field experiments in invertebrate pathology are designed and the data collected, analyzed, and interpreted is presented. The practical and statistical issues that need to be considered and the rationale and assumptions behind different designs or procedures ...

  12. Nonlinear potential analysis techniques for supersonic-hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    Clever, W. C.; Shankar, V.

    1983-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Second-order small-disturbance and full-potential theories were utilized to meet this objective. Numerical pilot codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher-order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one. Computational times of about one minute per case were achieved for practical aircraft arrangements.

  13. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Yudilevitch, Gil; Levine, William S.

    1994-01-01

    Over the last two and a half years we have been demonstrating a new methodology for the design of rotorcraft flight control systems (FCS) to meet handling qualities requirements. This method is based on multicriterion optimization as implemented in the optimization package CONSOL-OPTCAD (C-O). This package has been developed at the Institute for Systems Research (ISR) at the University of Maryland at College Park. This design methodology has been applied to the design of a FCS for the UH-60A helicopter in hover having the ADOCS control structure. The controller parameters have been optimized to meet the ADS-33C specifications. Furthermore, using this approach, an optimal (minimum control energy) controller has been obtained and trade-off studies have been performed.

  14. Advanced experimental techniques for transonic wind tunnels - Final lecture

    NASA Technical Reports Server (NTRS)

    Kilgore, Robert A.

    1987-01-01

    A philosophy of experimental techniques is presented, suggesting that in order to be successful, one should like what one does, have the right tools, stick to the job, avoid diversions, work hard, interact with people, be informed, keep it simple, be self sufficient, and strive for perfection. Sources of information, such as bibliographies, newsletters, technical reports, and technical contacts and meetings are recommended. It is pointed out that adaptive-wall test sections eliminate or reduce wall interference effects, and magnetic suspension and balance systems eliminate support-interference effects, while the problem of flow quality remains with all wind tunnels. It is predicted that in the future it will be possible to obtain wind tunnel results at the proper Reynolds number, and the effects of flow unsteadiness, wall interference, and support interference will be eliminated or greatly reduced.

  15. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  16. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  17. A Novel Experimental Technique to Simulate Pillar Burst in Laboratory

    NASA Astrophysics Data System (ADS)

    He, M. C.; Zhao, F.; Cai, M.; Du, S.

    2015-09-01

    Pillar burst is one type of rockburst that occurs in underground mines. Simulating the stress change and obtaining insight into the pillar burst phenomenon under laboratory conditions are essential for studying rock behavior during pillar burst in situ. To study the failure mechanism, a novel experimental technique was proposed and a series of tests was conducted on granite specimens using a true-triaxial strainburst test system. Acoustic emission (AE) sensors were used to monitor the rock fracturing process. The damage evolution process was investigated using techniques such as observation of macro and micro fracture characteristics, AE energy evolution, b value analysis, and fractal dimension analysis of cracks on fragments. The obtained results indicate that stepped loading and unloading simulated the pillar burst phenomenon well. Four deformation stages are identified: the initial stress state, unloading step I, unloading step II, and the final burst. It is observed that AE energy shows a sharp increase at the initial stress state, accumulates slowly during unloading steps I and II, and increases dramatically at peak stress. Meanwhile, the mean b values fluctuate around 3.50 for the first three deformation stages and then decrease to 2.86 at the final stage, indicating the generation of a large number of macro fractures. Before the test, the fractal dimension values are scattered, mainly between 1.10 and 1.25, whereas after failure they concentrate around 1.25-1.35.
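
    The b value analysis mentioned above is commonly performed with Aki's maximum-likelihood estimator for the Gutenberg-Richter relation. A hedged sketch of that standard estimator (the AE magnitude sets below are invented, not the paper's data):

    ```python
    import math

    def b_value(magnitudes, m_cutoff):
        """Maximum-likelihood Gutenberg-Richter b value (Aki's estimator):
        b = log10(e) / (mean(M) - Mc), over events at or above the cutoff Mc."""
        mags = [m for m in magnitudes if m >= m_cutoff]
        mean_m = sum(mags) / len(mags)
        return math.log10(math.e) / (mean_m - m_cutoff)

    # Invented AE magnitude sets: many similar small events give a high b value;
    # the emergence of larger events (macro fractures) lowers it.
    early_stage = [0.1, 0.12, 0.15, 0.1, 0.13]
    final_stage = [0.1, 0.3, 0.5, 0.2, 0.4]
    print(b_value(early_stage, 0.1), b_value(final_stage, 0.1))
    ```

    A drop in b, such as the decrease from about 3.50 to 2.86 reported here, reflects a growing proportion of large events, consistent with macro fracture generation.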

  18. Experimental techniques for studying the structure of foams and froths.

    PubMed

    Pugh, R J

    2005-06-30

    Several techniques are described in this review for studying the structure and stability of froths and foams. Image analysis has proved useful for detecting structure changes in 2-D foams and has enabled the drainage process and the gradients in bubble size distribution to be determined. However, studies on 3-D foams require more complex techniques such as multiple-light scattering methods, microphones, and optical tomography. Under dynamic foaming conditions, the Foam Scan column enables the water content of foams to be determined by conductivity analysis. It is clear that the same factors which play a role in foam stability (film thickness, elasticity, etc.) also have a decisive influence on the stability of isolated froth or foam films. Therefore, the experimental thin-film balance (developed by Bulgarian researchers) for studying the thinning of microfilms formed by a concave liquid drop suspended in a short vertical capillary tube has proved useful. Direct measurement of the thickness of the aqueous microfilm is made by a micro-reflectance method and can give fundamental information on drainage and thin-film stability. It is also important to consider the influence of mineral particles on froth stability, and it has been shown that particles of well-defined size and hydrophobicity can be introduced into the thin film, enabling stabilization/destabilization mechanisms to be proposed. It has also been shown that dynamic and static stability can be increased by a reduction in particle size and an increase in particle concentration. PMID:15913531

  19. Skin Microbiome Surveys Are Strongly Influenced by Experimental Design.

    PubMed

    Meisel, Jacquelyn S; Hannigan, Geoffrey D; Tyldsley, Amanda S; SanMiguel, Adam J; Hodkinson, Brendan P; Zheng, Qi; Grice, Elizabeth A

    2016-05-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e., gastrointestinal) and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource and cost intensive, provides evidence of a community's functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This study highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039
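
    Community comparisons of the kind described above often rest on simple summary statistics such as the Shannon diversity index. A minimal sketch over toy taxon-count tables (the counts are invented, not the study's data):

    ```python
    import math

    def shannon_diversity(counts):
        """Shannon diversity index H' = -sum(p_i * ln p_i) over taxon
        relative abundances, a common community summary in microbiome surveys."""
        total = sum(counts)
        return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

    # Invented taxon-count tables for one skin site under two protocols:
    even_counts   = [25, 25, 25, 25]   # high evenness -> higher H'
    skewed_counts = [85, 5, 5, 5]      # one dominant taxon -> lower H'
    print(shannon_diversity(even_counts), shannon_diversity(skewed_counts))
    ```

    A primer choice that systematically misses a dominant commensal (as hypervariable region 4 does for Propionibacterium) would skew exactly this kind of diversity estimate.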

  20. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  1. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  2. Creative Conceptual Design Based on Evolutionary DNA Computing Technique

    NASA Astrophysics Data System (ADS)

    Liu, Xiyu; Liu, Hong; Zheng, Yangyang

    Creative conceptual design is an important area in computer-aided innovation. Typical design methodology includes exploration and optimization by evolutionary techniques such as evolutionary computation (EC) and swarm intelligence. Although many algorithms and applications have been proposed for creative design using these techniques, the computing models are implemented mostly on the traditional von Neumann architecture. On the other hand, the possibility of using DNA as a computing medium has aroused wide interest in recent years, owing to its massive built-in parallelism and its ability to solve NP-complete problems. This computing technique is performed by biological operations on DNA molecules rather than on chips. The purpose of this paper is to propose a simulated evolutionary DNA computing model and to integrate DNA computing with creative conceptual design. The proposed technique potentially applies to large-scale, highly parallel design problems.

  3. Experimental measurements of the thermal conductivity of ash deposits: Part 1. Measurement technique

    SciTech Connect

    A. L. Robinson; S. G. Buckley; N. Yang; L. L. Baxter

    2000-04-01

    This paper describes a technique developed to make in situ, time-resolved measurements of the effective thermal conductivity of ash deposits formed under conditions that closely replicate those found in the convective pass of a commercial boiler. Since ash deposit thermal conductivity is thought to be strongly dependent on deposit microstructure, the technique is designed to minimize the disturbance of the natural deposit microstructure. Traditional techniques for measuring deposit thermal conductivity generally do not preserve the sample microstructure. Experiments are described that demonstrate the technique, quantify experimental uncertainty, and determine the thermal conductivity of highly porous, unsintered deposits. The average measured conductivity of loose, unsintered deposits is 0.14 ± 0.03 W/(m K), approximately midway between rational theoretical limits for deposit thermal conductivity.
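
    Such a measurement ultimately rests on one-dimensional Fourier conduction: with the heat flux through the deposit and the temperature drop across its thickness known, the effective conductivity follows directly. A simplified sketch, not the authors' procedure, with numbers invented to land on the reported 0.14 W/(m K):

    ```python
    def effective_conductivity(q_flux, thickness, delta_t):
        """1-D Fourier's-law estimate of effective thermal conductivity:
        k = q'' * L / dT, in W/(m K), given heat flux q'' (W/m^2),
        deposit thickness L (m), and temperature drop dT (K)."""
        return q_flux * thickness / delta_t

    # Illustrative: 7 kW/m^2 through a 2 mm deposit with a 100 K drop:
    print(effective_conductivity(7000.0, 0.002, 100.0))  # -> 0.14 W/(m K)
    ```

    The experimental difficulty lies not in this relation but in measuring the flux and temperature drop in situ without disturbing the deposit microstructure.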

  4. Verification of Experimental Techniques for Flow Surface Determination

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.; Lerch, Bradley A.; Ellis, John R.; Robinson, David N.

    1996-01-01

    The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory. However, at elevated temperatures, material response can be highly time-dependent, which is beyond the realm of classical plasticity. Viscoplastic theories have been developed for just such conditions. In viscoplastic theories, the flow law is given in terms of inelastic strain rate rather than the inelastic strain increment used in time-independent plasticity. Thus, surfaces of constant inelastic strain rate or flow surfaces are to viscoplastic theories what yield surfaces are to classical plasticity. The purpose of the work reported herein was to validate experimental procedures for determining flow surfaces at elevated temperatures. Since experimental procedures for determining yield surfaces in axial/torsional stress space are well established, they were employed -- except inelastic strain rates were used rather than total inelastic strains. In yield-surface determinations, the use of small-offset definitions of yield minimizes the change of material state and allows multiple loadings to be applied to a single specimen. The key to the experiments reported here was precise, decoupled measurement of axial and torsional strain. With this requirement in mind, the performance of a high-temperature multi-axial extensometer was evaluated by comparing its results with strain gauge results at room temperature. Both the extensometer and strain gauges gave nearly identical yield surfaces (both initial and subsequent) for type 316 stainless steel (316 SS). The extensometer also successfully determined flow surfaces for 316 SS at 650 C. Furthermore, to judge the applicability of the technique for composite materials, yield surfaces were determined for unidirectional tungsten/Kanthal (Fe-Cr-Al).
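
    The probing procedure can be illustrated on an idealized surface: load along radial directions in axial-torsional stress space until a yield (or flow) criterion is met. Here a von Mises surface stands in for the experimentally determined one, and the yield stress is illustrative rather than a measured 316 SS value:

    ```python
    import math

    def flow_surface_point(angle, sigma_y):
        """Point on a von Mises yield/flow surface in axial (sigma) vs.
        torsional (tau) stress space, probed along a radial direction:
        solve sqrt(sigma^2 + 3*tau^2) = sigma_y with
        (sigma, tau) = r * (cos(angle), sin(angle))."""
        c, s = math.cos(angle), math.sin(angle)
        r = sigma_y / math.sqrt(c * c + 3.0 * s * s)
        return r * c, r * s

    # Trace the surface with 16 radial probes, as in multi-probe surface tests:
    surface = [flow_surface_point(2 * math.pi * k / 16, 250.0) for k in range(16)]
    print(surface[0])  # pure axial probe -> (250.0, 0.0)
    ```

    In the experiments, the criterion along each probe is a small-offset inelastic strain rate rather than a closed-form function, which is why precise, decoupled axial and torsional strain measurement is the key requirement.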

  5. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of the best states to observe and the optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy are illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  6. Experimental design for improved ceramic processing, emphasizing the Taguchi Method

    SciTech Connect

    Weiser, M.W.; Fong, K.B.

    1993-12-01

    Ceramic processing often requires substantial experimentation to produce acceptable product quality and performance. This is a consequence of ceramic processes depending upon a multitude of factors, some of which can be controlled and others that are beyond the control of the manufacturer. Statistical design of experiments is a procedure that allows quick, economical, and accurate evaluation of processes and products that depend upon several variables. Designed experiments are sets of tests in which the variables are adjusted methodically. A well-designed experiment yields unambiguous results at minimal cost, whereas a poorly designed experiment may reveal little information of value even with complex analysis, wasting valuable time and resources. This article reviews the most common experimental designs, including both nonstatistical designs and the much more powerful statistical experimental designs. The Taguchi Method, developed by Genichi Taguchi and based upon fractional factorial experiments, is discussed in some detail as a powerful tool for optimizing product and process performance.
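
    As a minimal sketch of the fractional factorial idea underlying the Taguchi Method, the smallest orthogonal array, L4(2^3), estimates three main effects from only four runs; the factor names and response values below are purely illustrative, not taken from the article:

```python
import numpy as np

# L4 (2^3) orthogonal array: 4 runs covering 3 two-level factors, coded
# -1/+1. The factors are hypothetical ceramic-processing variables
# (e.g. sintering temperature, binder content, milling time).
L4 = np.array([
    [-1, -1, -1],
    [-1, +1, +1],
    [+1, -1, +1],
    [+1, +1, -1],
])

# Hypothetical fired-density responses (% of theoretical) for the 4 runs.
y = np.array([92.0, 94.5, 95.0, 96.5])

# Main effect of each factor: mean response at the high level minus the
# mean at the low level. Orthogonality of the array decouples the three
# estimates, which is what lets 4 runs stand in for a full 2^3 = 8.
effects = np.array([y[L4[:, j] == +1].mean() - y[L4[:, j] == -1].mean()
                    for j in range(3)])
print(effects)  # largest magnitude = most influential factor
```

    In a full Taguchi analysis the same machinery extends to larger arrays (L8, L9, L18) and to signal-to-noise ratios rather than raw responses.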

  7. Vibration control of piezoelectric smart structures based on system identification technique: Numerical simulation and experimental study

    NASA Astrophysics Data System (ADS)

    Dong, Xing-Jian; Meng, Guang; Peng, Juan-Chun

    2006-11-01

    The aim of this study is to investigate the efficiency of a system identification technique known as observer/Kalman filter identification (OKID) in the numerical simulation and experimental study of active vibration control of piezoelectric smart structures. Based on the structural responses determined by the finite element method, an explicit state space model of the equivalent linear system is developed by employing the OKID approach. The linear quadratic Gaussian (LQG) algorithm is employed for controller design. The control law is then incorporated into the ANSYS finite element model to perform closed-loop simulations, so that the control law performance can be evaluated in the context of a finite element environment. Furthermore, a complete active vibration control system comprising the cantilever plate, the piezoelectric actuators, the accelerometers, and the digital signal processor (DSP) board is set up to conduct the experimental investigation. A state space model characterizing the dynamics of the physical system is developed from experimental results using the OKID approach for the purpose of control law design. The controller is then implemented on a floating point TMS320VC33 DSP. Numerical examples employing the proposed simulation method, together with the experimental results obtained with the active vibration control system, demonstrate the validity and efficiency of the OKID method in the application of active vibration control of piezoelectric smart structures.
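
    The LQG design step can be sketched as follows, with a hand-built single-mode oscillator standing in for the OKID-identified state-space model (all numbers are illustrative); SciPy's continuous algebraic Riccati solver yields both the regulator and estimator gains:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One vibration mode of the plate, modeled as a lightly damped
# oscillator. These values are illustrative stand-ins: in the paper the
# state-space model comes from OKID identification, not hand-tuning.
wn, zeta = 2 * np.pi * 10.0, 0.01          # natural frequency, damping
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])                # piezoelectric actuator input
C = np.array([[1.0, 0.0]])                  # sensed displacement output

# LQR state-feedback gain K for u = -K x.
Q, R = np.diag([wn**2, 1.0]), np.array([[1e-3]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Kalman filter gain L (the dual Riccati problem) for the state estimator.
W, V = np.eye(2), np.array([[1e-4]])        # process/measurement noise
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

# LQG = the LQR gain acting on the Kalman state estimate; by the
# separation principle both closed loops must be stable on their own.
print(np.linalg.eigvals(A - B @ K).real.max(),
      np.linalg.eigvals(A - L @ C).real.max())
```
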

  8. Experimental Design for Composite Face Transplantation.

    PubMed

    Park, Jihoon; Yim, Sangjun; Eun, Seok-Chan

    2016-06-01

    Face allotransplantation represents a novel frontier in the reconstruction of complex human facial defects. To develop more refined surgical techniques and achieve good results, it is first imperative to establish a suitable animal model. A composite facial allograft model in swine is particularly appealing because the facial anatomy, including the facial nerve and vascular anatomy, is similar to that of humans. Two operative teams worked simultaneously, one assigned to harvest the donor flap and the other to prepare the recipient, in an effort to shorten operative time. The flap was harvested with the common carotid artery and external jugular vein and transferred to the recipient. After insetting of the maxilla, mandible, muscles, and skin, the anastomoses of the external jugular vein, external carotid artery, and facial nerve were performed. The total mean transplantation time was 7 hours, and most allografts survived without vascular problems. The authors document that this model is well qualified, in every aspect, to serve as a standard transplantation training model and for future research. PMID:27244198

  9. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    NASA Astrophysics Data System (ADS)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing possibility is to merge the two worlds. We believe that a tabletop form factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as the input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  10. Experimental investigation of design parameters on dry powder inhaler performance.

    PubMed

    Ngoc, Nguyen Thi Quynh; Chang, Lusi; Jia, Xinli; Lau, Raymond

    2013-11-30

    The study aims to investigate the impact of various design parameters of a dry powder inhaler on the turbulence intensities generated and on the performance of the inhaler. The flow fields and turbulence intensities in the dry powder inhaler are measured using particle image velocimetry (PIV) techniques. In vitro aerosolization and deposition of a blend of budesonide and lactose are measured using an Andersen Cascade Impactor. Design parameters such as inhaler grid hole diameter, grid voidage, and chamber length are considered. The experimental results reveal that the hole diameter on the grid has negligible impact on the turbulence intensity generated in the chamber. On the other hand, hole diameters smaller than a critical size can lead to performance degradation due to excessive particle-grid collisions. An increase in grid voidage can improve the inhaler performance, but the effect diminishes at high grid voidage. An increase in the chamber length can enhance the turbulence intensity generated but also increases the powder adhesion on the inhaler wall. PMID:24055597

  11. An investigation of radiometer design using digital processing techniques

    NASA Technical Reports Server (NTRS)

    Lawrence, R. W.

    1981-01-01

    The use of digital signal processing techniques in Dicke switching radiometer design was investigated. The general approach was to develop an analytical model of the existing analog radiometer and identify factors which adversely affect its performance. A digital processor was then proposed to verify the feasibility of using digital techniques to minimize these adverse effects and improve radiometer performance. Analysis and preliminary test results comparing the digital and analog processing approaches in radiometer design are presented.

  12. Advanced airfoil design empirically based transonic aircraft drag buildup technique

    NASA Technical Reports Server (NTRS)

    Morrison, W. D., Jr.

    1976-01-01

    To systematically investigate the potential of advanced airfoils in advanced preliminary design studies, empirical relationships were derived, based on available wind tunnel test data, through which total drag is determined while recognizing all major aircraft geometric variables. This technique assumes a single design lift coefficient and Mach number for each aircraft. Using this technique, drag polars are derived for all Mach numbers up to the design Mach number plus 0.05 and for lift coefficients from 0.40 below to 0.20 above the design lift coefficient.

  13. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    PubMed

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    Simultaneous optimization of in vitro ligand binding studies was performed using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and the relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including factors commonly encountered during experimentation (residual error, between-experiment variability, and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium, and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times, and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared with the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
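
    The D-optimality criterion at the heart of such designs can be illustrated in miniature, assuming a one-site association model with hypothetical nominal parameters (the paper itself uses PopED with nonlinear mixed effect models; this sketch shows only the Fisher-information determinant being maximized over sampling times):

```python
import numpy as np
from itertools import combinations

# One-site association kinetics y(t) = Bmax*(1 - exp(-kobs*t)), with
# hypothetical nominal parameter values; a real design would use prior
# estimates of Bmax and kobs.
Bmax, kobs = 100.0, 0.5

def sens(t):
    # Gradient of y(t) with respect to (Bmax, kobs): one Jacobian row.
    return np.array([1.0 - np.exp(-kobs * t), Bmax * t * np.exp(-kobs * t)])

def fim_det(times):
    # Determinant of the Fisher information matrix J^T J under i.i.d.
    # Gaussian error; D-optimality maximizes this determinant.
    return np.linalg.det(sum(np.outer(sens(t), sens(t)) for t in times))

# Pick the D-optimal pair of sampling times from a candidate grid.
grid = np.arange(0.5, 20.5, 0.5)
best = max(combinations(grid, 2), key=fim_det)
print(best, fim_det(best))
```

    The same search over richer design spaces (concentrations, numbers of samples) is what the optimal design software automates.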

  14. Evaluation with an Experimental Design: The Emergency School Assistance Program.

    ERIC Educational Resources Information Center

    Crain, Robert L.; York, Robert L.

    The Evaluation of the Emergency School Assistance Program (ESAP) for the 1971-72 school year is the first application of full-blown experimental design with randomized experimental and control cases in a federal evaluation of a large scale program. It is also one of the very few evaluations which has shown that federal programs can raise tested…

  15. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  16. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  17. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
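
    The notion of scoring a candidate design by the information its data would provide can be sketched in the linear-Gaussian case, where the expected information gain has a closed form; the sensor names and noise variances below are invented for illustration:

```python
import numpy as np

# Toy Bayesian design comparison for sensor placement. For the
# linear-Gaussian model
#   y = theta + eps,  theta ~ N(0, s0^2),  eps ~ N(0, s^2),
# one measurement's expected information gain (EIG) is closed-form:
#   EIG = 0.5 * ln(1 + s0^2 / s^2).
def expected_information_gain(prior_var, noise_var):
    return 0.5 * np.log(1.0 + prior_var / noise_var)

prior_var = 1.0
# Hypothetical designs: effective noise variance at each sensor location.
designs = {"near_joint": 0.05, "far_field": 0.5}
eig = {name: expected_information_gain(prior_var, v)
       for name, v in designs.items()}
best = max(eig, key=eig.get)
print(best, round(eig[best], 3))
```

    For nonlinear SHM models the EIG is no longer closed-form and must be estimated, e.g. by nested Monte Carlo, but the ranking logic is the same.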

  18. Experimental Techniques Verified for Determining Yield and Flow Surfaces

    NASA Technical Reports Server (NTRS)

    Lerch, Brad A.; Ellis, Rod; Lissenden, Cliff J.

    1998-01-01

    Structural components in aircraft engines are subjected to multiaxial loads when in service. For such components, life prediction methodologies are dependent on the accuracy of the constitutive models that determine the elastic and inelastic portions of a loading cycle. A threshold surface (such as a yield surface) is customarily used to differentiate between reversible and irreversible flow. For elastoplastic materials, a yield surface can be used to delimit the elastic region in a given stress space. The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory, but at elevated temperatures, material response can be highly time dependent. Thus, viscoplastic theories have been developed to account for this time dependency. Since the key to many of these theories is experimental validation, the objective of this work (refs. 1 and 2) at the NASA Lewis Research Center was to verify that current laboratory techniques and equipment are sufficient to determine flow surfaces at elevated temperatures. By probing many times in the axial-torsional stress space, we could define the yield and flow surfaces. A small-offset definition of yield (10 microstrain) was used to delineate the boundary between reversible and irreversible behavior so that the material state remained essentially unchanged and multiple probes could be done on the same specimen. The strain was measured with an off-the-shelf multiaxial extensometer that could measure the axial and torsional strains over a wide range of temperatures. The accuracy and resolution of this extensometer were verified by comparing its data with strain gauge data at room temperature. The extensometer was found to have sufficient resolution for these experiments. In addition, the amount of crosstalk (i.e., the accumulation of apparent strain in one direction when strain in the other direction is applied) was found to be negligible. Tubular specimens were induction heated to determine the flow

  19. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108
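
    The spiral dynamics the abstract refers to can be sketched in two dimensions as follows; the quadratic objective is an illustrative stand-in for a real filter-design error measure, and the rotation/contraction parameters are typical textbook choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def spiral_optimize(f, n_points=20, n_iter=100, r=0.95, theta=np.pi / 4):
    # 2-D spiral optimization: every search point rotates by theta about
    # the incumbent best point while contracting toward it by factor r,
    # so the population spirals in on promising regions.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    X = rng.uniform(-5.0, 5.0, size=(n_points, 2))
    best = min(X, key=f).copy()
    for _ in range(n_iter):
        X = best + r * (X - best) @ R.T      # rotate and contract
        cand = min(X, key=f)
        if f(cand) < f(best):                # greedy update of the center
            best = cand.copy()
    return best

# Illustrative stand-in for a filter-design error norm: squared distance
# of a 2-D coefficient vector from a target response.
target = np.array([1.0, -2.0])
sol = spiral_optimize(lambda x: np.sum((x - target) ** 2))
print(sol)
```

    Because the update uses no gradients, the same loop applies directly to non-smooth filter-design objectives.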

  20. Extended mapping and characteristics techniques for inverse aerodynamic design

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Qian, Y. J.

    1991-01-01

    Some ideas for using hodograph theory, mapping techniques and methods of characteristics to formulate typical aerodynamic design boundary value problems are developed. The inverse method of characteristics is shown to be a fast tool for design of transonic flow elements as well as supersonic flows with given shock waves.

  1. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    Automated validation of flight-critical embedded systems is being done at ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operationally applying automated testing techniques has accentuated flight control system features that either help or hinder the application of these techniques. The paper also discusses flight control system features which foster the use of automated testing techniques.

  2. Designing modulators of monoamine transporters using virtual screening techniques

    PubMed Central

    Mortensen, Ole V.; Kortagere, Sandhya

    2015-01-01

    The plasma-membrane monoamine transporters (MATs), including the serotonin (SERT), norepinephrine (NET) and dopamine (DAT) transporters, serve a pivotal role in limiting monoamine-mediated neurotransmission through the reuptake of their respective monoamine neurotransmitters. The transporters are the main target of clinically used psychostimulants and antidepressants. Despite the availability of several potent and selective MAT substrates and inhibitors the continuing need for therapeutic drugs to treat brain disorders involving aberrant monoamine signaling provides a compelling reason to identify novel ways of targeting and modulating the MATs. Designing novel modulators of MAT function have been limited by the lack of three dimensional structure information of the individual MATs. However, crystal structures of LeuT, a bacterial homolog of MATs, in a substrate-bound occluded, substrate-free outward-open, and an apo inward-open state and also with competitive and non-competitive inhibitors have been determined. In addition, several structures of the Drosophila DAT have also been resolved. Together with computational modeling and experimental data gathered over the past decade, these structures have dramatically advanced our understanding of several aspects of SERT, NET, and DAT transporter function, including some of the molecular determinants of ligand interaction at orthosteric substrate and inhibitor binding pockets. In addition progress has been made in the understanding of how allosteric modulation of MAT function can be achieved. Here we will review all the efforts up to date that has been made through computational approaches employing structural models of MATs to design small molecule modulators to the orthosteric and allosteric sites using virtual screening techniques. PMID:26483692

  3. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  4. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  5. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate concentration from each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
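
    The multiplex model can be sketched outside the spreadsheet as an ordinary least-squares fit, with NumPy's `lstsq` playing the role of Excel's LINEST; the sample volumes and concentrations below are invented for illustration:

```python
import numpy as np

# Multiplex titration sketch: each run titrates a mixture of several
# vinegar samples, so the titrant consumed is linear in the individual
# sample concentrations. Fitting the volume design matrix to the
# measured titrant volumes (the job LINEST does in Excel) recovers all
# three concentrations from the six runs simultaneously.
# Rows = runs; columns = mL of each sample in that run (hypothetical).
V = np.array([[10.0,  0.0,  0.0],
              [ 0.0, 10.0,  0.0],
              [ 0.0,  0.0, 10.0],
              [ 5.0,  5.0,  0.0],
              [ 0.0,  5.0,  5.0],
              [ 5.0,  0.0,  5.0]])

true_c = np.array([0.80, 0.95, 0.70])   # mol/L acetic acid (illustrative)
c_naoh = 1.00                           # mol/L titrant

# Simulated NaOH volumes (mL) with small burette noise.
rng = np.random.default_rng(1)
v_naoh = V @ true_c / c_naoh + rng.normal(0.0, 0.02, size=6)

# Ordinary least squares, equivalent to LINEST with no intercept.
c_hat, *_ = np.linalg.lstsq(V / c_naoh, v_naoh, rcond=None)
print(c_hat)
```

    The mixed runs (rows 4-6) are what make the design "multiplexed": every measurement carries information about more than one sample.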

  6. Study/Experimental/Research Design: Much More Than Statistics

    PubMed Central

    Knight, Kenneth L.

    2010-01-01

    Abstract Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054

  7. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  8. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple objective function problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced during the transformation process for each objective function. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.

  9. Design and Experimental Study on Spinning Solid Rocket Motor

    NASA Astrophysics Data System (ADS)

    Xue, Heng; Jiang, Chunlan; Wang, Zaicheng

    A study of a spinning solid rocket motor (SRM) used as the power plant of the twice-throwing structure of an aerial submunition is introduced. The SRM, which has a tangential multi-nozzle structure, consists of a combustion chamber, a propellant charge, four tangential nozzles, an ignition device, etc. Grain design, structural design, and prediction of interior ballistic performance are described, and the problems that must be considered in the design are analyzed comprehensively. Finally, to investigate the working performance of the SRM, static and dynamic tests were conducted to measure the pressure-time curve and the spin speed, and the calculated values and experimental data were compared and analyzed. The results indicate that the designed motor operates normally and that its interior ballistic performance is stable and meets the requirements. The experimental results provide guidance for the preliminary design of such SRMs.

  10. Decision-oriented Optimal Experimental Design and Data Collection

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Classical optimal experimental design is a branch of statistics that seeks to construct ("design") a data collection effort ("experiment") that minimizes ("optimal") the uncertainty associated with some quantity of interest. In many real-world problems, we are interested in these quantities to help us make a decision. Minimizing the uncertainty associated with the quantity can help inform the decision, but a more holistic approach is possible in which the experiment is designed to maximize the information that it provides to the decision-making process. The difference is subtle, but it amounts to focusing on the end goal (the decision) rather than an intermediary (the quantity). We describe one approach to decision-oriented optimal experimental design that utilizes Bayesian-Information-Gap Decision Theory, which combines probabilistic and non-probabilistic methods for uncertainty quantification. In this approach, experimental designs that have a high probability of altering the decision are deemed worthwhile. On the other hand, experimental designs that have little or no chance of altering the decision need not be performed.
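
    The "probability of altering the decision" criterion can be sketched for a linear-Gaussian toy problem; the decision rule, prior, and noise levels below are invented for illustration and are not from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Score a candidate experiment by the probability that its data would
# overturn a decision. Hypothetical decision rule: remediate if the
# estimated contaminant level exceeds 1.0.
mu0, s0 = 0.9, 0.5                 # Gaussian prior on the level theta
threshold = 1.0
prior_decision = mu0 > threshold   # with no new data: do not remediate

def p_decision_change(noise_sd, n=200_000):
    theta = rng.normal(mu0, s0, n)                 # prior draws
    y = theta + rng.normal(0.0, noise_sd, n)       # simulated measurements
    # Conjugate Gaussian update of the mean given one observation y.
    post_var = 1.0 / (1.0 / s0**2 + 1.0 / noise_sd**2)
    post_mean = post_var * (mu0 / s0**2 + y / noise_sd**2)
    return np.mean((post_mean > threshold) != prior_decision)

# A precise instrument is far more likely to overturn the decision than
# a noisy one, so it scores as the more worthwhile experiment.
print(p_decision_change(0.1), p_decision_change(2.0))
```
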

  11. Combining Usability Techniques to Design Geovisualization Tools for Epidemiology

    PubMed Central

    Robinson, Anthony C.; Chen, Jin; Lengerich, Eugene J.; Meyer, Hans G.; MacEachren, Alan M.

    2009-01-01

    Designing usable geovisualization tools is an emerging problem in GIScience software development. We are often satisfied that a new method provides an innovative window on our data, but functionality alone is insufficient assurance that a tool is applicable to a problem in situ. As extensions of the static methods they evolved from, geovisualization tools are bound to enable new knowledge creation. We have yet to learn how to adapt techniques from interaction designers and usability experts toward our tools in order to maximize this ability. This is especially challenging because there is limited existing guidance for the design of usable geovisualization tools. Their design requires knowledge about the context of work within which they will be used, and should involve user input at all stages, as is the practice in any human-centered design effort. Toward that goal, we have employed a wide range of techniques in the design of ESTAT, an exploratory geovisualization toolkit for epidemiology. These techniques include; verbal protocol analysis, card-sorting, focus groups, and an in-depth case study. This paper reports the design process and evaluation results from our experience with the ESTAT toolkit. PMID:19960106

  12. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
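
    The exterior penalty function approach described above can be sketched on a two-variable toy problem (not the stiffened-cylinder model itself), with Nelder-Mead standing in for the search algorithms compared in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Exterior penalty sketch: minimize a quadratic "weight" surrogate
# w(x) = x1^2 + x2^2 subject to a "stress" constraint
# g(x) = 2 - x1 - x2 <= 0. Violations are penalized quadratically, and
# the penalty weight rho is raised between unconstrained solves, driving
# the iterate to the constrained optimum (1, 1) from the infeasible side.
def weight(x):
    return x[0]**2 + x[1]**2

def g(x):
    return 2.0 - x[0] - x[1]        # feasible when g(x) <= 0

x = np.array([0.0, 0.0])            # exterior methods may start infeasible
for rho in (1.0, 10.0, 100.0, 1000.0):
    pen = lambda x, rho=rho: weight(x) + rho * max(g(x), 0.0)**2
    x = minimize(pen, x, method="Nelder-Mead").x
print(x)
```

    An interior penalty method would instead start feasible and add a barrier term that blows up at the constraint boundary, which is the distinction the abstract draws between the two approaches.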

  13. A new experimental flight research technique: The remotely piloted airplane

    NASA Technical Reports Server (NTRS)

    Layton, G. P.

    1976-01-01

    Results obtained so far with a remotely piloted research vehicle (RPRV), a 3/8-scale model of an F-15 airplane, are presented to assess the usefulness of the RPRV testing technique in high-risk flight testing, including spin testing. The program showed that the RPRV technique, including the use of a digital control system, is a practical method for obtaining flight research data. The spin, stability, and control data obtained with the 3/8-scale model also showed that predictions based on wind-tunnel tests were generally reasonable.

  14. Techniques for analyzing lens manufacturing data with optical design applications

    NASA Astrophysics Data System (ADS)

    Kaufman, Morris I.; Light, Brandon B.; Malone, Robert M.; Gregory, Michael K.; Frayer, Daniel K.

    2015-09-01

Optical designers assume a mathematically derived statistical distribution of the relevant design parameters for their Monte Carlo tolerancing simulations. However, there may be significant differences between the assumed distributions and the likely outcomes from manufacturing. Of particular interest for this study are the data analysis techniques and how they may be applied to optical and mechanical tolerance decisions. The effect of geometric factors and mechanical glass properties on lens manufacturability will also be presented. Although the present work concerns lens grinding and polishing, some of the concepts and analysis techniques could also be applied to other processes such as molding and single-point diamond turning.
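The gap between an assumed tolerance distribution and an as-manufactured one can be illustrated with a toy Monte Carlo run. The merit function and both distributions below are assumptions for illustration, not shop data:

```python
# Sketch of a Monte Carlo tolerance run contrasting an assumed normal
# distribution with a skewed "as-manufactured" one.
import random

random.seed(0)
N = 20000
tol = 0.05  # +/- tolerance band on some lens parameter

def merit(err):
    # Hypothetical image-quality penalty, quadratic in the error.
    return err ** 2

# Designer's assumption: symmetric normal, 2-sigma at the tolerance limit.
assumed = [random.gauss(0.0, tol / 2) for _ in range(N)]
# Manufacturing often skews toward one side of the band (e.g. material
# removal only): modeled here as a one-sided triangular distribution.
measured = [random.triangular(0.0, tol, tol * 0.8) for _ in range(N)]

mean_assumed = sum(merit(e) for e in assumed) / N
mean_measured = sum(merit(e) for e in measured) / N
# The skewed distribution yields a noticeably larger expected penalty,
# which is the kind of discrepancy the study above is concerned with.
```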

  15. Phylogenetic information and experimental design in molecular systematics.

    PubMed Central

    Goldman, N

    1998-01-01

    Despite the widespread perception that evolutionary inference from molecular sequences is a statistical problem, there has been very little attention paid to questions of experimental design. Previous consideration of this topic has led to little more than an empirical folklore regarding the choice of suitable genes for analysis, and to dispute over the best choice of taxa for inclusion in data sets. I introduce what I believe are new methods that permit the quantification of phylogenetic information in a sequence alignment. The methods use likelihood calculations based on Markov-process models of nucleotide substitution allied with phylogenetic trees, and allow a general approach to optimal experimental design. Two examples are given, illustrating realistic problems in experimental design in molecular phylogenetics and suggesting more general conclusions about the choice of genomic regions, sequence lengths and taxa for evolutionary studies. PMID:9787470
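As a minimal illustration of likelihood-based information calculations of this kind (a sketch under the simple Jukes-Cantor substitution model, not the paper's own method), the per-site Fisher information about a divergence t can be computed directly, showing that saturated sites carry almost no information about branch lengths, one driver of gene choice:

```python
# Per-site Fisher information about divergence t between two aligned
# sequences under the Jukes-Cantor (JC69) model. Illustrative only.
import math

def jc_probs(t):
    # JC69: probability the two ends match, and probability of each
    # specific mismatch, for t expected substitutions per site.
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e, 0.25 - 0.25 * e

def fisher_info(t, h=1e-5):
    p_same, p_diff = jc_probs(t)
    # Numerical derivatives of the two outcome probabilities.
    ds = (jc_probs(t + h)[0] - jc_probs(t - h)[0]) / (2 * h)
    dd = (jc_probs(t + h)[1] - jc_probs(t - h)[1]) / (2 * h)
    # Multinomial information: one match outcome plus 3 mismatch outcomes.
    return ds * ds / p_same + 3 * dd * dd / p_diff

# Information per site drops sharply as sites saturate, so fast-evolving
# regions are poor choices for deep divergences.
info = {t: fisher_info(t) for t in (0.01, 0.3, 3.0)}
```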

  16. A comparison of controller designs for an experimental flexible structure

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Maghami, P. G.; Joshi, S. M.

    1991-01-01

    Control systems design and hardware testing are addressed for an experimental structure that displays the characteristics of a typical flexible spacecraft. The results of designing and implementing various control design methodologies are described. The design methodologies under investigation include linear quadratic Gaussian control, static and dynamic dissipative controls, and H-infinity optimal control. Among the three controllers considered, it is shown, through computer simulation and laboratory experiments on the evolutionary structure, that the dynamic dissipative controller gave the best results in terms of vibration suppression and robustness with respect to modeling errors.

  17. Inverse boundary-layer technique for airfoil design

    NASA Technical Reports Server (NTRS)

    Henderson, M. L.

    1979-01-01

    A description is presented of a technique for the optimization of airfoil pressure distributions using an interactive inverse boundary-layer program. This program allows the user to determine quickly a near-optimum subsonic pressure distribution which meets his requirements for lift, drag, and pitching moment at the desired flow conditions. The method employs an inverse turbulent boundary-layer scheme for definition of the turbulent recovery portion of the pressure distribution. Two levels of pressure-distribution architecture are used - a simple roof top for preliminary studies and a more complex four-region architecture for a more refined design. A technique is employed to avoid the specification of pressure distributions which result in unrealistic airfoils, that is, those with negative thickness. The program allows rapid evaluation of a designed pressure distribution off-design in Reynolds number, transition location, and angle of attack, and will compute an airfoil contour for the designed pressure distribution using linear theory.

  18. Comparison of experimental rotor damping data-reduction techniques

    NASA Technical Reports Server (NTRS)

    Warmbrodt, William

    1988-01-01

The ability of existing data reduction techniques to determine frequency and damping from transient time-history records was evaluated. Analog data records representative of small-scale helicopter aeroelastic stability tests were analyzed. The data records were selected to provide information on the accuracy of reduced frequency and decay coefficients as a function of modal damping level, modal frequency, number of modes present in the time-history record, proximity to other modes with different frequencies, steady offset in the time history, and signal-to-noise ratio. The study utilized the results from each of the major U.S. helicopter manufacturers, the U.S. Army Aeroflightdynamics Directorate, and NASA Ames Research Center using their in-house data reduction and analysis techniques. Consequently, the accuracy of different data analysis techniques and the manner in which they were implemented were also evaluated. It was found that modal frequencies can be accurately determined even in the presence of significant random and periodic noise. Identified decay coefficients do, however, show considerable variation, particularly for highly damped modes. The manner in which the data are reduced and the role of the data analyst were shown to be important. Although several different damping determination methods were used, no clear trends were evident for the observed differences between the individual analysis techniques. It is concluded that the data reduction of modal-damping characteristics from transient time histories results in a range of damping values.
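A minimal example of the kind of time-domain reduction being compared (a zero-crossing frequency estimate plus a logarithmic-decrement decay estimate, applied to a noise-free synthetic single-mode transient with illustrative parameters):

```python
# Recover modal frequency and decay coefficient from a synthetic
# single-mode transient. All parameters are illustrative.
import math

fs = 1000.0            # sample rate, Hz
f0, sigma = 12.0, 1.5  # true modal frequency (Hz) and decay coefficient
n = 2048
x = [math.exp(-sigma * (k / fs)) * math.cos(2 * math.pi * f0 * (k / fs))
     for k in range(n)]

# Frequency from positive-going zero crossings (one per period).
crossings = [i for i in range(1, n) if x[i - 1] < 0 <= x[i]]
f_est = fs * (len(crossings) - 1) / (crossings[-1] - crossings[0])

# Decay coefficient from the log ratio of positive peaks spaced ten
# periods apart (logarithmic decrement).
peaks = [(i, v) for i, v in enumerate(x)
         if 0 < i < n - 1 and v > x[i - 1] and v > x[i + 1] and v > 0]
(i0, p0), (i1, p1) = peaks[0], peaks[10]
sigma_est = -math.log(p1 / p0) / ((i1 - i0) / fs)
```

With real multi-mode, noisy records the decay estimate degrades much faster than the frequency estimate, consistent with the findings summarized above.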

  19. Determination of dynamic fracture toughness using a new experimental technique

    NASA Astrophysics Data System (ADS)

    Cady, Carl M.; Liu, Cheng; Lovato, Manuel L.

    2015-09-01

In other studies, dynamic fracture toughness has been measured using Charpy impact and modified Hopkinson Bar techniques. In this paper results will be shown for the measurement of fracture toughness using a new test geometry. The crack propagation velocities range from ˜0.15 mm/s to 2.5 m/s. Digital image correlation (DIC) will be the technique used to measure both the strain and the crack growth rates. The boundary of the crack is determined using the correlation coefficient generated during image analysis, and with interframe timing the crack growth rate and crack opening can be determined. A comparison of static and dynamic loading experiments will be made for brittle polymeric materials. The analysis technique presented by Sammis et al. [1] is a semi-empirical solution; however, additional Linear Elastic Fracture Mechanics analysis of the strain fields generated as part of the DIC analysis allows for the more commonly used method resembling the crack tip opening displacement (CTOD) experiment. It should be noted that this technique was developed because limited amounts of material were available and crack growth rates were too fast for a standard CTOD method.

  20. Optimizing experimental design for comparing models of brain function.

    PubMed

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-11-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  1. The design of aircraft using the decision support problem technique

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Marinopoulos, Stergios; Jackson, David M.; Shupe, Jon A.

    1988-01-01

    The Decision Support Problem Technique for unified design, manufacturing and maintenance is being developed at the Systems Design Laboratory at the University of Houston. This involves the development of a domain-independent method (and the associated software) that can be used to process domain-dependent information and thereby provide support for human judgment. In a computer assisted environment, this support is provided in the form of optimal solutions to Decision Support Problems.

  2. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

Power calculation is a critical component of RNA-seq experimental design. The flexibility of the RNA-seq experiment and the wide dynamic range of transcription it measures make it an attractive technology for whole-transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity to experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER. PMID:27008024
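In the same spirit, though far simpler than PROPER, a simulation-based power assessment for a single gene might look as follows. The negative binomial parameters, the fold change, and the plain Welch test on log counts are all illustrative assumptions:

```python
# Toy simulation-based power estimate for detecting differential
# expression of one gene between two groups.
import numpy as np

rng = np.random.default_rng(1)

def nb_counts(mean, dispersion, size):
    # Negative binomial via gamma-Poisson mixture: shape = 1/dispersion.
    lam = rng.gamma(1.0 / dispersion, mean * dispersion, size)
    return rng.poisson(lam)

def welch_t(a, b):
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def power(n_per_group, fold_change, mean=100.0, disp=0.1, sims=500):
    hits = 0
    for _ in range(sims):
        a = np.log1p(nb_counts(mean, disp, n_per_group))
        b = np.log1p(nb_counts(mean * fold_change, disp, n_per_group))
        if abs(welch_t(a, b)) > 1.96:   # rough normal cutoff
            hits += 1
    return hits / sims

# Power grows with sample size for a fixed 2-fold change.
p3, p10 = power(3, 2.0), power(10, 2.0)
```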

  3. The use of experimental designs for corrosive oilfield systems

    SciTech Connect

    Biagiotti, S.F. Jr.; Frost, R.H.

    1997-08-01

A Design of Experiment approach was used to investigate the effect of hydrogen sulfide, carbon dioxide and brine composition on the corrosion rate of carbon steel. Three of the most common experimental design approaches (Full Factorial, Taguchi L{sub 4}, and Alternate Fractional) were used to evaluate the results. This work concluded that: CO{sub 2} and brine both have significant main and two-factor effects on corrosion rate, H{sub 2}S concentration has a moderate effect on corrosion rate, and higher total dissolved solids (TDS) brine compositions appear to force gases out of solution, thereby decreasing the corrosion rate of carbon steel. The Full Factorial Design correctly identified all independent variables and the significant interactions between CO{sub 2}/H{sub 2}S and CO{sub 2}/Brine on corrosion rate. The two fractional factorial experimental methods resulted in incorrect conclusions. The Taguchi L{sub 4} method gave misleading results as it did not identify H{sub 2}S as having a positive effect on corrosion rate, and only identified the strong interactions in the experimental matrix. The Alternate Fractional design also yielded incorrect interpretations with regard to the effect of brine on corrosion. This study has shown that reduced experimental designs (e.g., half fractional) may be inappropriate for distinguishing the synergistic interactions likely to form in chemically reactive systems. Therefore, based upon the size of the data set collected in this work, the authors recommend that full factorial designs be used for corrosion evaluations. When the number of experimental variables make it impractical to perform a full factorial design, the aliasing relationships should be carefully evaluated.
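The aliasing mechanism behind such incorrect conclusions is easy to demonstrate: in a 2^(3-1) half fraction generated by C = A*B, the main effect of C cannot be separated from the A*B interaction. The response function below is a hypothetical stand-in, not the corrosion data:

```python
# Demonstration of aliasing in a half-fraction factorial design.
from itertools import product

def response(a, b, c):
    # Hypothetical response: strong A*B interaction, no true C effect.
    return 10 + 3 * a + 2 * b + 4 * a * b

full = [(a, b, c) for a, b, c in product([-1, 1], repeat=3)]
half = [(a, b, a * b) for a, b in product([-1, 1], repeat=2)]  # C = A*B

def effect(runs, idx):
    # Main effect: mean response at +1 minus mean response at -1.
    hi = [response(*r) for r in runs if r[idx] == 1]
    lo = [response(*r) for r in runs if r[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

c_full = effect(full, 2)   # 0.0: the full design sees no C effect
c_half = effect(half, 2)   # 8.0: the A*B interaction masquerades as C
```

In the fraction, factor C appears highly significant even though the response does not depend on it at all, exactly the kind of misinterpretation the study reports for the reduced designs.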

  4. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This limitation of the available analyses will continue to be a major restraint on the application of optimization to vehicle design.

  5. Experimental Validation of Simulations Using Full-field Measurement Techniques

    SciTech Connect

    Hack, Erwin

    2010-05-28

The calibration by reference materials of dynamic full-field measurement systems is discussed, together with their use to validate numerical simulations of structural mechanics. The discussion addresses three challenges faced in these processes: how to calibrate a measuring instrument that (i) provides full-field data and (ii) is dynamic, and (iii) how to compare data from simulation and experimentation.

  6. Low Cost Gas Turbine Off-Design Prediction Technique

    NASA Astrophysics Data System (ADS)

    Martinjako, Jeremy

This thesis seeks to further explore off-design point operation of gas turbines and to examine the capabilities of GasTurb 12 as a tool for off-design analysis. It is a continuation of previous thesis work which initially explored the capabilities of GasTurb 12. The research is conducted in order to: 1) validate GasTurb 12 and 2) predict off-design performance of the Garrett GTCP85-98D located at the Arizona State University Tempe campus. GasTurb 12 is validated as an off-design point tool by using the program to predict performance of an LM2500+ marine gas turbine. Haglind and Elmegaard (2009) published a paper detailing a second off-design point method, and it includes the manufacturer's off-design point data for the LM2500+. GasTurb 12 is used to predict off-design point performance of the LM2500+ and compared to the manufacturer's data. The GasTurb 12 predictions show good correlation. Garrett has published specification data for the GTCP85-98D. This specification data is analyzed to determine the design point and to comment on off-design trends. Arizona State University GTCP85-98D off-design experimental data is evaluated. Trends presented in the data are commented on and explained. The trends match the expected behavior demonstrated in the specification data for the same gas turbine system. It was originally intended that a model of the GTCP85-98D be constructed in GasTurb 12 and used to predict off-design performance. The prediction would be compared to collected experimental data. This is not possible because the free version of GasTurb 12 used in this research does not have a module to model a single spool turboshaft. This module needs to be purchased for this analysis.

  7. Theory and experimental technique for nondestructive evaluation of ceramic composites

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    1990-01-01

    The important ultrasonic scattering mechanisms for SiC and Si3N4 ceramic composites were identified by examining the interaction of ultrasound with individual fibers, pores, and grains. The dominant scattering mechanisms were identified as asymmetric refractive scattering due to porosity gradients in the matrix material, and symmetric diffractive scattering at the fiber-to-matrix interface and at individual pores. The effect of the ultrasonic reflection coefficient and surface roughness in the ultrasonic evaluation was highlighted. A new nonintrusive ultrasonic evaluation technique, angular power spectrum scanning (APSS), was presented that is sensitive to microstructural variations in composites. Preliminary results indicate that APSS will yield information on the composite microstructure that is not available by any other nondestructive technique.

  8. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.

  9. CMOS array design automation techniques. [metal oxide semiconductors

    NASA Technical Reports Server (NTRS)

    Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.

    1975-01-01

A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology and required performance considerations, particularly high circuit speed.

  10. Design and experimental characterization of a multifrequency flexural ultrasonic actuator.

    PubMed

    Iula, Antonio

    2009-08-01

    In this work, a multifrequency flexural ultrasonic actuator is proposed, designed, and experimentally characterized. The actuator is composed of a Langevin transducer and of a displacement amplifier. The displacement amplifier is able to transform the almost flat axial displacement provided by the Langevin transducer at its back end into a flexural deformation that produces the maximum axial displacement at the center of its front end. Design and analysis of the actuator have been performed by using finite element method software. In analogy to classical power actuators that use sectional concentrators, the design criterion that has been followed was to design the Langevin transducer and the flexural amplifier separately at the same working frequency. As opposed to sectional concentrators, the flexural amplifier has several design parameters that allow a wide flexibility in the design. The flexural amplifier has been designed to produce a very high displacement amplification. It has also been designed in such a way that the whole actuator has 2 close working frequencies (17.4 kHz and 19.2 kHz), with similar flexural deformations of the front surface. A first prototype of the actuator has been manufactured and experimentally characterized to validate the numerical analysis. PMID:19686988

  11. Application of hazard assessment techniques in the CISF design process

    SciTech Connect

    Thornton, J.R.; Henry, T.

    1997-10-29

The Department of Energy has submitted to the NRC staff for review a topical safety analysis report (TSAR) for a Centralized Interim Storage Facility (CISF). The TSAR will be used in licensing the CISF when and if a site is designated. CISF design events are identified based on thorough review of design basis events (DBEs) previously identified by dry storage system suppliers and licensees and through the application of hazard assessment techniques. A Preliminary Hazards Assessment (PHA) is performed to identify design events applicable to a Phase 1 non-site-specific CISF. A PHA is deemed necessary since the Phase 1 CISF is distinguishable from previous dry storage applications in several significant operational scope and design basis aspects. In addition to assuring all design events applicable to the Phase 1 CISF are identified, the PHA served as an integral part of the CISF design process by identifying potential important-to-safety and defense-in-depth facility design and administrative control features. This paper describes the Phase 1 CISF design event identification process and summarizes significant PHA contributions to the CISF design.

  12. A technique for optimizing the design of power semiconductor devices

    NASA Technical Reports Server (NTRS)

    Schlegel, E. S.

    1976-01-01

    A technique is described that provides a basis for predicting whether any device design change will improve or degrade the unavoidable trade-off that must be made between the conduction loss and the turn-off speed of fast-switching high-power thyristors. The technique makes use of a previously reported method by which, for a given design, this trade-off was determined for a wide range of carrier lifetimes. It is shown that by extending this technique, one can predict how other design variables affect this trade-off. The results show that for relatively slow devices the design can be changed to decrease the current gains to improve the turn-off time without significantly degrading the losses. On the other hand, for devices having fast turn-off times design changes can be made to increase the current gain to decrease the losses without a proportionate increase in the turn-off time. Physical explanations for these results are proposed.

  13. High-Throughput Computational and Experimental Techniques in Structural Genomics

    PubMed Central

    Chance, Mark R.; Fiser, Andras; Sali, Andrej; Pieper, Ursula; Eswar, Narayanan; Xu, Guiping; Fajardo, J. Eduardo; Radhakannan, Thirumuruhan; Marinkovic, Nebojsa

    2004-01-01

Structural genomics has as its goal the provision of structural information for all possible ORF sequences through a combination of experimental and computational approaches. The access to genome sequences and cloning resources from an ever-widening array of organisms is driving high-throughput structural studies by the New York Structural Genomics Research Consortium. In this report, we outline the progress of the Consortium in establishing its pipeline for structural genomics, and some of the experimental and bioinformatics efforts leading to structural annotation of proteins. The Consortium has established a pipeline for structural biology studies, automated modeling of ORF sequences using solved (template) structures, and a novel high-throughput approach (metallomics) to examining the metal binding to purified protein targets. The Consortium has so far produced 493 purified proteins from >1077 expression vectors. A total of 95 have resulted in crystal structures, and 81 are deposited in the Protein Data Bank (PDB). Comparative modeling of these structures has generated >40,000 structural models. We also initiated a high-throughput metal analysis of the purified proteins; this has determined that 10%-15% of the targets contain a stoichiometric structural or catalytic transition metal atom. The progress of the structural genomics centers in the U.S. and around the world suggests that the goal of providing useful structural information on almost all ORF domains will be realized. This projected resource will provide structural biology information important to understanding the function of most proteins of the cell. PMID:15489337

  14. Experimental validation of tilt measurement technique with a laser beacon

    NASA Astrophysics Data System (ADS)

    Belen'kii, Mikhail S.; Karis, Stephen J.; Brown, James M.; Fugate, Robert Q.

    1999-09-01

We have experimentally demonstrated for the first time a method for sensing wavefront tilt with a laser guide star (LGS). The tilt components of wavefronts were measured synchronously from the LGS using a telescope with 0.75 m effective aperture and from Polaris using a 1.5 m telescope. The Rayleigh guide star was formed at the altitude of 6 km and at a corresponding range of 10.5 km by projecting a focused beam at Polaris from the full aperture at the 1.5 m telescope. Both telescope mounts were unpowered and bolted down in place, allowing us to substantially reduce the telescope vibration. The maximum value of the measured cross-correlation coefficient between the tilt for Polaris and the LGS is 0.71. The variations of the measured cross-correlation coefficient in the range from 0.22 to 0.71 are caused by turbulence at altitudes above 6 km, which was not sampled by the laser beacon, but affected the tilt for Polaris. They are also caused by the cone effect for turbulence below 6 km, residual mount jitter of the telescopes, and variations of the S/N. The experimental results support our concept of sensing atmospheric tilt by observing a LGS with an auxiliary telescope and indicate that this method is a possible solution for the tip-tilt problem.
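The cross-correlation coefficient used here is the ordinary Pearson coefficient between the two tilt time series. A sketch with synthetic signals (a shared tilt plus independent terms standing in for unsampled high-altitude turbulence and mount jitter; all values illustrative):

```python
# Pearson cross-correlation between two synthetic tilt records that
# share a common component plus independent noise terms.
import random, math

random.seed(2)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

n = 5000
common = [random.gauss(0, 1) for _ in range(n)]     # shared tilt
lgs = [c + random.gauss(0, 0.3) for c in common]    # beacon tilt
star = [c + random.gauss(0, 1.0) for c in common]   # natural-star tilt

rho = corr(lgs, star)
# Independent noise (turbulence above the beacon, mount jitter) pulls
# the coefficient below 1, as in the reported 0.22-0.71 range.
```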

  15. Active Flow Control: Instrumentation Automation and Experimental Technique

    NASA Technical Reports Server (NTRS)

    Gimbert, N. Wes

    1995-01-01

In investigating the potential of a new actuator for use in an active flow control system, several objectives had to be accomplished, the largest of which was the experimental setup. The work was conducted at the NASA Langley 20x28 Shear Flow Control Tunnel. The actuator, named Thunder, is a high-deflection piezo device recently developed at Langley Research Center. This research involved setting up the instrumentation, the lighting, the smoke, and the recording devices. The instrumentation was automated by means of a Power Macintosh running LabVIEW, a graphical instrumentation package developed by National Instruments. Routines were written to allow the tunnel conditions to be determined at a given instant at the push of a button. This included determination of tunnel pressures, speed, density, temperature, and viscosity. Other aspects of the experimental equipment included the setup of a CCD video camera with a video frame grabber, monitor, and VCR to capture the motion. A strobe light was used to highlight the smoke that was used to visualize the flow. Additional effort was put into creating a scale drawing of another tunnel on site and a limited literature search in the area of active flow control.

  16. Experimental techniques in ultrasonics for NDE and material characterization

    NASA Astrophysics Data System (ADS)

    Tittmann, B. R.

A development status evaluation is presented for ultrasonic NDE characterization of aerospace alloys and composites in such applications as the Space Shuttle, Space Station Freedom, and hypersonic aircraft. The use of such NDE techniques extends to composite-cure monitoring, postmanufacturing quality assurance, and in-space service inspection of such materials as graphite/epoxy, Ti alloys, and Al honeycomb. Attention is here given to the spectroscopy of elastically scattered wave pulses from flaws, the acoustical imaging of flaws in honeycomb structures, and laser-based ultrasonics for the noncontact inspection of composite structures.

  17. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  18. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  19. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  20. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  1. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  2. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  3. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

Background: We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective: Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods: An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. Results: We performed the initial stages of characterizing our system

  4. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
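    The main-effects analysis on the smallest Taguchi array (an L4: three two-level factors in four runs, leaving zero degrees of freedom for interactions, as the abstract notes) can be sketched as follows; the response values are hypothetical, purely for illustration:

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors (A, B, C) in 4 runs.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# Hypothetical measured responses for the 4 runs (illustrative values only).
y = np.array([20.0, 24.0, 30.0, 26.0])

# Main effect of each factor: mean response at level 1 minus level 0.
# Orthogonality of the array makes these simple averages unbiased.
effects = np.array([
    y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean()
    for j in range(3)
])
print(effects)  # [6. 0. 4.]
```

    With all three columns assigned to factors, no runs remain to estimate interactions or error, which is exactly the limitation the abstract raises.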

  5. An experimental modal testing/identification technique for personal computers

    NASA Technical Reports Server (NTRS)

    Roemer, Michael J.; Schlonski, Steven T.; Mook, D. Joseph

    1990-01-01

    A PC-based system for mode shape identification is evaluated. A time-domain modal identification procedure is utilized to identify the mode shapes of a beam apparatus from discrete time-domain measurements. The apparatus includes a cantilevered aluminum beam, four accelerometers, four low-pass filters, and the computer. The method combines an identification algorithm, the Eigensystem Realization Algorithm (ERA), with an estimation algorithm, the Minimum Model Error (MME) method. The identification ability of this algorithm is compared with ERA alone, a frequency-response-function technique, and an Euler-Bernoulli beam model. Detection of modal parameters and mode shapes by the PC-based time-domain system is shown to be accurate in an application with an aluminum beam, while mode shapes identified by the frequency-domain technique are not as accurate as predicted. The new method is shown to be significantly less sensitive to noise and poorly excited modes than other leading methods. The results support the use of time-domain identification systems for mode shape prediction.
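    The ERA step can be sketched in a few lines of NumPy on a noise-free toy impulse response (a single decaying mode, not the beam data from the paper): build two shifted Hankel matrices of Markov parameters, truncate the SVD to the model order, and read the modal poles off the realized state matrix.

```python
import numpy as np

# Synthetic Markov parameters of one decaying mode:
# y[k] = r^k * cos(k*theta), i.e. discrete poles at r * e^{±i*theta}.
r, theta = np.exp(-0.1), 0.5
k = np.arange(1, 41)
y = r**k * np.cos(k * theta)

# Hankel matrices H0 (Markov parameters) and H1 (shifted by one sample).
m = 10
H0 = np.array([[y[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[y[i + j + 1] for j in range(m)] for i in range(m)])

# ERA: SVD of H0, truncate to order 2, realize the state matrix A.
U, s, Vt = np.linalg.svd(H0)
n = 2
S_inv_sqrt = np.diag(s[:n] ** -0.5)
A = S_inv_sqrt @ U[:, :n].T @ H1 @ Vt[:n].T @ S_inv_sqrt

eig = np.linalg.eigvals(A)
# |eig| recovers the decay rate, angle(eig) the damped frequency per sample.
```

    With noise-free data of the true order the recovery is exact; the MME estimation stage exists precisely to supply clean state estimates when the measurements are noisy.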

  6. Super-smooth surface fabrication technique and experimental research.

    PubMed

    Zhang, Linghua; Wang, Junlin; Zhang, Jian

    2012-09-20

    Wheel polishing, a new optical fabrication technique, is proposed for super-smooth surface fabrication of optical components in high-precision optical instruments. The machining mechanism and the removal function contours are investigated in detail. The elastohydrodynamic lubrication theory is adopted to analyze the deformation of the wheel head, the pressure distribution, and the fluid film thickness distribution in the narrow machining zone. The pressure and the shear stress distributions at the interface between the slurry and the sample are numerically simulated. Practical polishing experiments are arranged to analyze the relationship between the wheel-sample distance and the machining rate. It is demonstrated in this paper that the wheel-sample distance will directly influence the removal function contours. Moreover, ripples on the wheel surface will eventually induce the transverse prints on the removal function contours. The surface roughness of fused silica is reduced to less than 0.5 nm (rms) from an initial 1.267 nm (rms). The wheel polishing technique is feasible for super-smooth surface fabrication. PMID:23033032

  7. Experimental techniques for in-ring reaction experiments

    NASA Astrophysics Data System (ADS)

    Mutterer, M.; Egelhof, P.; Eremin, V.; Ilieva, S.; Kalantar-Nayestanaki, N.; Kiselev, O.; Kollmus, H.; Kröll, T.; Kuilman, M.; Chung, L. X.; Najafi, M. A.; Popp, U.; Rigollet, C.; Roy, S.; von Schmid, M.; Streicher, B.; Träger, M.; Yue, K.; Zamora, J. C.; the EXL Collaboration

    2015-11-01

    As a first step of the EXL project scheduled for the New Experimental Storage Ring at FAIR a precursor experiment (E105) was performed at the ESR at GSI. For this experiment, an innovative differential pumping concept, originally proposed for the EXL recoil detector ESPA, was successfully applied. The implementation and essential features of this novel technical concept will be discussed, as well as details on the detectors and the infrastructure around the internal gas-jet target. With 56Ni(p, p)56Ni elastic scattering at 400 MeV/u, a nuclear reaction experiment with stored radioactive beams was realized for the first time. Finally, perspectives for a next-generation EXL-type setup are briefly discussed.

  8. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515
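    The MOCU computation described above can be illustrated on a toy uncertainty class; the cost matrix and prior below are hypothetical stand-ins for the intervention costs of candidate networks, not values from the paper:

```python
import numpy as np

# Hypothetical uncertainty class: 2 candidate networks (rows) and
# 2 candidate interventions (columns); entries are intervention costs.
cost = np.array([[1.0, 3.0],
                 [4.0, 2.0]])
prior = np.array([0.6, 0.4])   # probability that each network is the true one

# Cost achievable if the true network were known (optimal intervention per row).
best_per_network = cost.min(axis=1)

# Robust intervention: minimizes the expected cost over the uncertainty class.
expected = prior @ cost
robust = int(np.argmin(expected))

# MOCU: expected cost increase incurred by acting under uncertainty.
mocu = float(prior @ (cost[:, robust] - best_per_network))
print(mocu)  # 0.8
```

    The experimental design step then scores each candidate experiment by the expected remaining MOCU after its outcome collapses part of the uncertainty class, and runs the lowest-scoring experiment first.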

  9. Thermoelastic Femoral Stress Imaging for Experimental Evaluation of Hip Prosthesis Design

    NASA Astrophysics Data System (ADS)

    Hyodo, Koji; Inomoto, Masayoshi; Ma, Wenxiao; Miyakawa, Syunpei; Tateishi, Tetsuya

    An experimental system using the thermoelastic stress analysis method and a synthetic femur was utilized to perform reliable and convenient mechanical biocompatibility evaluation of hip prosthesis design. Unlike the conventional technique, the unique advantage of the thermoelastic stress analysis method is its ability to image whole-surface stress (Δ(σ1+σ2)) distribution in specimens. The mechanical properties of synthetic femurs agreed well with those of cadaveric femurs with little variability between specimens. We applied this experimental system for stress distribution visualization of the intact femur, and the femurs implanted with an artificial joint. The surface stress distribution of the femurs sensitively reflected the prosthesis design and the contact condition between the stem and the bone. By analyzing the relationship between the stress distribution and the clinical results of the artificial joint, this technique can be used in mechanical biocompatibility evaluation and pre-clinical performance prediction of new artificial joint design.

  10. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  11. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations on the new TBCs are conducted to confirm the new TBCs’ properties. Southern University is the subcontractor on this project with a focus on the computational simulation method development. We have performed ab initio density functional theory (DFT) method and molecular dynamics simulation on screening the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  12. Uncertainty analysis on the design of thermal conductivity measurement by a guarded cut-bar technique

    NASA Astrophysics Data System (ADS)

    Xing, Changhu; Jensen, Colby; Ban, Heng; Phillips, Jeffrey

    2011-07-01

    A technique adapted from the guarded-comparative-longitudinal heat flow method was selected for the measurement of the thermal conductivity of a nuclear fuel compact over a temperature range characteristic of its usage. This technique fulfills the requirement for non-destructive measurement of the composite compact. Although numerous measurement systems have been created based on the guarded-comparative method, the comprehensive systematic (bias) and measurement (precision) uncertainties associated with this technique have not been fully analyzed. In addition to the geometric effect in the bias error, which has been analyzed previously, this paper studies the working condition, which is another potential error source. Using finite element analysis, this study showed the effect of these two types of error sources in the thermal conductivity measurement process and the limitations in the design selection of various parameters by considering their effect on the precision error. The results and conclusions provide valuable reference for designing and operating an experimental measurement system using this technique.
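    In the comparative method, the axial heat flow is inferred from reference (meter) bars of known conductivity and then used to solve for the sample conductivity. A minimal sketch with illustrative numbers (these are not values from the measurement system described):

```python
# Guarded-comparative-longitudinal heat flow: infer the axial heat flow q
# from a reference bar, then solve Fourier's law for the sample.
k_ref = 19.0        # W/(m*K), reference bar conductivity (illustrative)
area = 1.0e-4       # m^2, common cross-section of the stack
dT_ref = 5.0        # K, temperature drop across the reference gauge length
L_ref = 0.02        # m, reference gauge length
dT_s = 8.0          # K, temperature drop across the sample gauge length
L_s = 0.025         # m, sample gauge length

q = k_ref * area * dT_ref / L_ref          # axial heat flow, W
k_sample = q * L_s / (area * dT_s)         # W/(m*K)
print(round(k_sample, 3))  # 14.844
```

    The bias sources the paper analyzes enter through this same relation: radial losses make q differ between reference and sample, and geometric errors perturb the effective L and A.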

  13. Advanced Computational and Experimental Techniques for Nacelle Liner Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Jones, Michael G.; Brown, Martha C.; Nark, Douglas

    2009-01-01

    The Curved Duct Test Rig (CDTR) has been developed to investigate sound propagation through a duct of size comparable to the aft bypass duct of typical aircraft engines. The axial dimension of the bypass duct is often curved and this geometric characteristic is captured in the CDTR. The semiannular bypass duct is simulated by a rectangular test section in which the height corresponds to the circumferential dimension and the width corresponds to the radial dimension. The liner samples are perforate over honeycomb core and are installed on the side walls of the test section. The top and bottom surfaces of the test section are acoustically rigid to simulate a hard wall bifurcation or pylon. A unique feature of the CDTR is the control system that generates sound incident on the liner test section in specific modes. Uniform air flow, at ambient temperature and flow speed Mach 0.275, is introduced through the duct. Experiments to investigate configuration effects such as curvature along the flow path on the acoustic performance of a sample liner are performed in the CDTR and reported in this paper. Combinations of treated and acoustically rigid side walls are investigated. The scattering of modes of the incident wave, both by the curvature and by the asymmetry of wall treatment, is demonstrated in the experimental results. The effect that mode scattering has on total acoustic effectiveness of the liner treatment is also shown. Comparisons of measured liner attenuation with numerical results predicted by an analytic model based on the parabolic approximation to the convected Helmholtz equation are reported. The spectra of attenuation produced by the analytic model are similar to experimental results for both walls treated, straight and curved flow path, with plane wave and higher order modes incident. The numerical model is used to define the optimized resistance and reactance of a liner that significantly improves liner attenuation in the frequency range 1900-2400 Hz.

  14. Time-Dependent Reversible-Irreversible Deformation Threshold Determined Explicitly by Experimental Technique

    NASA Technical Reports Server (NTRS)

    Castelli, Michael G.; Arnold, Steven M.

    2000-01-01

    Structural materials for the design of advanced aeropropulsion components are usually subject to loading under elevated temperatures, where a material's viscosity (resistance to flow) is greatly reduced in comparison to its viscosity under low-temperature conditions. As a result, the propensity for the material to exhibit time-dependent deformation is significantly enhanced, even when loading is limited to a quasi-linear stress-strain regime as an effort to avoid permanent (irreversible) nonlinear deformation. An understanding and assessment of such time-dependent effects in the context of combined reversible and irreversible deformation is critical to the development of constitutive models that can accurately predict the general hereditary behavior of material deformation. To this end, researchers at the NASA Glenn Research Center at Lewis Field developed a unique experimental technique that identifies the existence of and explicitly determines a threshold stress k, below which the time-dependent material deformation is wholly reversible, and above which irreversible deformation is incurred. This technique is unique in the sense that it allows, for the first time, an objective, explicit, experimental measurement of k. The underlying concept for the experiment is based on the assumption that the material's time-dependent reversible response is invariable, even in the presence of irreversible deformation.

  15. A Constrained Design Approach for NLF Airfoils by Coupling Inverse Design and Optimal Techniques

    NASA Astrophysics Data System (ADS)

    Deng, L.; Gao, Y. W.; Qiao, Z. D.

    2011-09-01

    In the present paper, a design method for natural laminar flow (NLF) airfoils that maintain a substantial amount of laminar flow on both surfaces is developed by coupling an inverse design method with an optimization technique. The N-factor method is used to design the target pressure distributions ahead of the pressure recovery region to achieve the desired transition locations while maintaining the aerodynamic constraints. The pressure in the recovery region is designed according to the Stratford separation criterion to prevent laminar separation. To improve off-design performance, a multi-point inverse design is performed. An optimization technique based on response surface methodology (RSM) is used to compute the target airfoil shapes from the designed target pressure distributions. The set of design points is selected to satisfy D-optimality, and reduced quadratic polynomial RS models without the second-order cross terms are constructed to reduce the computational cost. The design cases indicate that, with the coupled method developed here, the inverse design method can be used in multi-point design to improve off-design performance; the resulting airfoils achieve the desired transition locations and satisfy the aerodynamic constraints, although the thickness constraint is difficult to meet in this design procedure.
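    The reduced quadratic response-surface model described (no second-order cross terms) amounts to an ordinary least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2. A sketch with illustrative design points and coefficients (not data from the paper):

```python
import numpy as np

# Reduced quadratic RS model: intercept, linear, and pure-quadratic terms only.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(12, 2))          # 12 design points, 2 variables

def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2])

true_beta = np.array([1.0, 0.5, -0.3, 2.0, 1.5])
y = basis(X) @ true_beta                       # noise-free responses

# Least-squares fit of the 5 coefficients from the 12 responses.
beta, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
print(np.round(beta, 6))  # recovers true_beta
```

    In the paper's setting the design points are instead chosen for D-optimality (maximizing the determinant of the information matrix) rather than at random, which tightens the coefficient estimates for the same number of CFD evaluations.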

  16. Experimental investigation of iterative reconstruction techniques for high resolution mammography

    NASA Astrophysics Data System (ADS)

    Vengrinovich, Valery L.; Zolotarev, Sergei A.; Linev, Vladimir N.

    2014-02-01

    The further development of new iterative reconstruction algorithms to improve the quality of three-dimensional breast images restored from incomplete and noisy mammograms is presented. The algebraic reconstruction method with simultaneous iterations, the Simultaneous Algebraic Reconstruction Technique (SART), and the iterative statistical method, Bayesian Iterative Reconstruction (BIR), are identified here as the preferable iterative methods for improving image quality. For faster processing we use the Graphics Processing Unit (GPU). Minimization of the Total Variation (TV) is used as a priori support to regularize the iteration process and to reduce the level of noise in the reconstructed image. Preliminary results with physical phantoms show that all examined methods are capable of reconstructing structures layer by layer and of separating layers whose images overlap in the Z direction. The traditional Shift-And-Add (SAA) tomosynthesis method was found to be inferior to the iterative SART and BIR methods in terms of suppression of the anatomical noise and of image blurring between adjacent layers. Although the measured contrast/noise ratio in the presence of low-contrast internal structures is higher for the SAA tomosynthesis method than for the SART and BIR methods, its effectiveness in the presence of a structured background is rather poor. In our opinion the optimal results can be achieved using Bayesian iterative reconstruction (BIR).
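    A minimal SART iteration can be sketched on a toy linear system standing in for the projection geometry (real tomosynthesis systems are far larger and sparse, and the TV regularization the abstract mentions would be applied as an extra step each iteration):

```python
import numpy as np

# Toy consistent system A x = b: rows play the role of rays, columns of pixels.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true

row_sums = A.sum(axis=1)          # normalization over each ray
col_sums = A.sum(axis=0)          # normalization over each pixel

x = np.zeros(2)
lam = 1.0                          # relaxation parameter, in (0, 2)
for _ in range(500):
    residual = (b - A @ x) / row_sums
    x = x + lam * (A.T @ residual) / col_sums   # SART simultaneous update

print(np.round(x, 4))  # close to x_true
```

    Every ray contributes to every update simultaneously (unlike row-by-row ART), which is what makes the scheme map well onto a GPU.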

  17. New head gradient coil design and construction techniques

    PubMed Central

    Handler, William B; Harris, Chad T; Scholl, Timothy J; Parker, Dennis L; Goodrich, K Craig; Dalrymple, Brian; Van Sass, Frank; Chronik, Blaine A

    2013-01-01

    Purpose To design and build a head insert gradient coil to use in conjunction with body gradients for superior imaging. Materials and Methods The use of the Boundary Element Method to solve for a gradient coil wire pattern on an arbitrary surface has allowed us to incorporate engineering changes into the electromagnetic design of a gradient coil directly. Improved wire pattern design has been combined with robust manufacturing techniques and novel cooling methods. Results The finished coil had an efficiency of 0.15 mT/m/A in all three axes and allowed the imaging region to extend across the entire head and upper part of the neck. Conclusion The ability to adapt your electromagnetic design to necessary changes from an engineering perspective leads to superior coil performance. PMID:24123485

  18. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.

  19. Design and Implementation of an Experimental Segway Model

    NASA Astrophysics Data System (ADS)

    Younis, Wael; Abdelati, Mohammed

    2009-03-01

    The segway is the first transportation product to stand, balance, and move in the same way we do. It is a truly 21st-century idea. The aim of this research is to study the theory behind building segway vehicles based on the stabilization of an inverted pendulum. An experimental model has been designed and implemented through this study. The model has been tested for its balance by running a Proportional Derivative (PD) algorithm on a microprocessor chip. The model has been identified in order to serve as an educational experimental platform for segways.
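    The balance problem at the core of the segway, a PD-stabilized inverted pendulum, can be sketched as a small simulation; the gains, geometry, and control form below are illustrative assumptions, not the paper's values:

```python
# Linearized inverted pendulum: theta'' = (g/l)*theta + u,
# stabilized by PD feedback u = -Kp*theta - Kd*theta'.
g, l = 9.81, 0.5
Kp, Kd = 40.0, 10.0          # Kp must exceed g/l (= 19.62) for stability

dt, T = 0.001, 5.0
theta, omega = 0.1, 0.0      # initial tilt of 0.1 rad, at rest
for _ in range(int(T / dt)):
    u = -Kp * theta - Kd * omega
    alpha = (g / l) * theta + u      # angular acceleration
    theta += dt * omega              # explicit Euler integration
    omega += dt * alpha

print(abs(theta) < 1e-3)  # True: the pendulum has been balanced
```

    On the physical model the same loop runs on the microprocessor, with theta and omega coming from sensors rather than from the integrated plant model.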

  20. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  1. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  2. New Design of Control and Experimental System of Windy Flap

    NASA Astrophysics Data System (ADS)

    Yu, Shanen; Wang, Jiajun; Chen, Zhangping; Sun, Weihua

    Experiments in control theory for automation majors are generally based on MATLAB simulation and are not well connected to physical control objects. The experimental system described here aims to meet teaching and research requirements by providing an experimental platform for learning the principles of automatic control, microcontrollers (MCUs), embedded systems, etc. The main research content comprises the design of the angle-measurement module, the control and drive module, and the PC software. An MPU6050 sensor is used for angle measurement, a PID control algorithm drives the flap to the target angle, and the PC software provides display, analysis, and processing.
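    One common way to obtain a flap angle from an MPU6050 is a complementary filter fusing the gyro rate with the accelerometer tilt; this is an assumed approach with illustrative constants and noise levels, not a description of the authors' firmware:

```python
import random

# Complementary filter: trust the integrated gyro short-term and the
# accelerometer long-term. Constants are illustrative assumptions.
random.seed(1)
dt, alpha = 0.01, 0.98
true_angle = 0.3            # rad, held constant in this toy scenario
angle = 0.0                 # filter state

for _ in range(2000):
    gyro_rate = 0.0 + random.gauss(0, 0.01)           # rad/s, true rate is 0
    accel_angle = true_angle + random.gauss(0, 0.05)  # noisy tilt from accel
    angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

print(abs(angle - true_angle) < 0.05)  # True
```

    The filter suppresses the accelerometer noise while the (1 - alpha) accelerometer term prevents the gyro integration from drifting, which is why this structure is popular on small MCUs.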

  3. A 1-Dimensional Chaotic IC Designed by SI Techniques

    NASA Astrophysics Data System (ADS)

    Eguchi, Kei; Zhu, Hongbing; Tabata, Toru; Ueno, Fumio; Inoue, Takahiro

    In this paper, a VLSI chip of a discrete-time chaos circuit realizing a tent map is reported. The VLSI chip is fabricated in the chip fabrication program of VLSI Design and Education Center (VDEC). A simple structure enables us to realize the circuit with 10 MOSFETs and 2 capacitors. Furthermore, the circuit, which is designed by switched-current (SI) techniques, can operate from a 3 V power supply. The experiment concerning the VLSI chip shows that the proposed circuit is integrable by a standard 1.2 μm CMOS technology.
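    The tent map such a circuit realizes can be iterated numerically to show its hallmark sensitivity to initial conditions. A slope slightly below 2 is used here because exact doubling in binary floating point collapses the orbit to zero, a simulation artifact, not a property of the circuit:

```python
# Tent map x -> MU * min(x, 1 - x), a 1-D chaotic map for MU near 2.
MU = 1.99

def tent(x):
    return MU * min(x, 1.0 - x)

a, b = 0.300000, 0.300001   # two nearby initial conditions
max_gap = 0.0
for _ in range(60):
    a, b = tent(a), tent(b)
    max_gap = max(max_gap, abs(a - b))
    assert 0.0 <= a <= 1.0   # the orbit stays bounded in [0, 1]

print(max_gap > 0.01)  # True: sensitive dependence on initial conditions
```

    The initial separation of 1e-6 is stretched by roughly the slope on every iterate, so the two orbits decorrelate within a few dozen steps.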

  4. Translocations of amphibians: Proven management method or experimental technique

    USGS Publications Warehouse

    Seigel, Richard A.; Dodd, C. Kenneth, Jr.

    2002-01-01

    In an otherwise excellent review of metapopulation dynamics in amphibians, Marsh and Trenham (2001) make the following provocative statements (emphasis added): "If isolation effects occur primarily in highly disturbed habitats, species translocations may be necessary to promote local and regional population persistence. Because most amphibians lack parental care, they are prime candidates for egg and larval translocations. Indeed, translocations have already proven successful for several species of amphibians. Where populations are severely isolated, translocations into extinct subpopulations may be the best strategy to promote regional population persistence." We take issue with these statements for a number of reasons. First, the authors fail to cite much of the relevant literature on species translocations in general and for amphibians in particular. Second, to those unfamiliar with current research in amphibian conservation biology, these comments might suggest that translocations are a proven management method. This is not the case, at least in most instances where translocations have been evaluated for an appropriate period of time. Finally, the authors fail to point out some of the negative aspects of species translocation as a management method. We realize that Marsh and Trenham's paper was not concerned primarily with translocations. However, because Marsh and Trenham (2001) made specific recommendations for conservation planners and managers (many of whom are not herpetologists or may not be familiar with the pertinent literature on amphibians), we believe that it is essential to point out that not all amphibian biologists are as comfortable with translocations as these authors appear to be. We especially urge caution about advocating potentially unproven techniques without a thorough review of available options.

  5. Evaluation of CFD Turbulent Heating Prediction Techniques and Comparison With Hypersonic Experimental Data

    NASA Technical Reports Server (NTRS)

    Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)

    2001-01-01

    Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid converged turbulent predictions using the wall damping formulation (original model) and local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., Tw/Tt < 0.3, with the wall damping heating and skin friction 10-30% above the local damping results. Furthermore, the local damping predictions have reasonable or good agreement with the experimental heating data for all cases. The impact of the two formulations on the van Driest damping function and the turbulent eddy viscosity distribution for a cold wall case indicate the importance of including temperature gradient effects. Grid requirements for accurate turbulent heating predictions are also studied. These results indicate that a cell Reynolds number of 1 is required for grid converged heating predictions, but coarser grids with a y+ less than 2 are adequate for design of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.
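    The van Driest damping at the heart of the two formulations can be sketched as follows; the wall and local variants differ only in which fluid properties enter the wall coordinate y+ (the constants are the standard textbook values, used here illustratively):

```python
import math

# Van Driest damping in Baldwin-Lomax-type algebraic models:
# the mixing length l = kappa * y * D is attenuated near the wall.
KAPPA, A_PLUS = 0.40, 26.0

def y_plus(y, rho, mu, u_tau):
    """Wall coordinate. The wall-damping formulation evaluates rho and mu
    at the wall; the local-damping formulation evaluates them locally."""
    return rho * u_tau * y / mu

def van_driest_damping(yp):
    return 1.0 - math.exp(-yp / A_PLUS)

def mixing_length(y, yp):
    return KAPPA * y * van_driest_damping(yp)

# For a cold wall, local density is lower and viscosity higher than at the
# wall, so the local y+ is smaller, the damping stronger, and the predicted
# eddy viscosity and heating lower, consistent with the comparison above.
print(round(van_driest_damping(26.0), 3))  # 0.632 at y+ = A+
```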

  6. Photographic-assisted prosthetic design technique for the anterior teeth.

    PubMed

    Zaccaria, Massimiliano; Squadrito, Nino

    2015-01-01

    The aim of this article is to propose a standardized protocol for treating all inesthetic anterior maxillary situations using a well-planned clinical and photographic technique. As inesthetic aspects should be treated as a pathology, instruments to make a diagnosis are necessary. The prosthetic design to resolve inesthetic aspects, with respect to function, should be considered a therapy, and, as such, instruments to make a prognosis are necessary. A prospective study was conducted to compare the involvement of patients with regard to the alterations to be made, initially with only a graphic esthetic previsualization, and later with an intraoral functional and esthetic previsualization. Significantly different results were shown for the two techniques. The instruments and steps necessary for the intraoral functional and esthetic previsualization technique are explained in detail in this article. PMID:25625127

  7. Development of a complex experimental system for controlled ecological life support technique

    NASA Astrophysics Data System (ADS)

    Guo, S.; Tang, Y.; Zhu, J.; Wang, X.; Feng, H.; Ai, W.; Qin, L.; Deng, Y.

    A complex experimental system for controlled ecological life support technique can be used as a test platform for plant-man integrated experiments and material closed-loop experiments of the controlled ecological life support system (CELSS). Based on extensive planning, investigation, and design work, the system was built through the steps of processing, installation, and joint debugging. The system encloses a volume of about 40.0 m3; its interior atmospheric parameters, such as temperature, relative humidity, oxygen concentration, carbon dioxide concentration, total pressure, lighting intensity, photoperiod, water content in the growing matrix, and ethylene concentration, are all monitored and controlled automatically and effectively. Its growing system consists of two rows of racks, one along each of its left and right sides, each of which holds two vertically stacked layers; the eight growing beds hold a total area of about 8.4 m2, and their vertical spacing can be adjusted automatically and independently. Lighting sources consist of both red and blue light-emitting diodes. Successful development of the test platform will create an essential condition for the next large-scale integrated study of controlled ecological life support technique.

  8. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  9. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter-boom interferometer that will operate in the FIR (30-90 micron) on a high-altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability, with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of the design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  10. Experimental techniques for evaluating steady-state jet engine performance in an altitude facility

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Young, C. Y.; Antl, R. J.

    1971-01-01

    Jet engine calibration tests were conducted in an altitude facility using a contoured bellmouth inlet duct, four fixed-area water-cooled exhaust nozzles, and an accurately calibrated thrust measuring system. Accurate determination of the airflow measuring station flow coefficient, the flow and thrust coefficients of the exhaust nozzles, and the experimental and theoretical terms in the nozzle gross thrust equation were some of the objectives of the tests. A primary objective was to develop a technique to determine gross thrust for the turbojet engine used in this test that could also be used for future engine and nozzle evaluation tests. The probable error in airflow measurement was found to be approximately 0.6 percent at the bellmouth throat design Mach number of 0.6. The probable error in nozzle gross thrust measurement was approximately 0.6 percent at the load cell full-scale reading.

  11. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    SciTech Connect

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coil, the shielding and cooling requirements, and the heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  12. Optimal active vibration absorber - Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1993-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  13. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  14. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of ITER design activities is presented.

  15. Wireless Body Area Network (WBAN) design techniques and performance evaluation.

    PubMed

    Khan, Jamil Yusuf; Yuce, Mehmet R; Bulger, Garrick; Harding, Benjamin

    2012-06-01

    In recent years, interest in the application of Wireless Body Area Networks (WBANs) for patient monitoring has grown significantly. A WBAN can be used to develop patient monitoring systems which offer flexibility to medical staff and mobility to patients. Patient monitoring could involve a range of activities including data collection from various body sensors for storage and diagnosis, transmitting data to remote medical databases, and controlling medical appliances. Also, WBANs could operate in an interconnected mode to enable remote patient monitoring using telehealth/e-health applications. A WBAN can also be used to monitor athletes' performance and assist them in training activities. For such applications it is very important that a WBAN collects and transmits data reliably and in a timely manner to a monitoring entity. In order to address these issues, this paper presents WBAN design techniques for medical applications. We examine the WBAN design issues with particular emphasis on the design of MAC protocols and power consumption profiles of WBAN. Some simulation results are presented to further illustrate the performance of various WBAN design techniques. PMID:20953680

  16. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236
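    The difficulty and discrimination metrics mentioned in this abstract are standard classical-test-theory quantities and are easy to compute from a scored response matrix. A minimal sketch in Python (the response data and function name are invented for illustration, not taken from BEDCI):

    ```python
    import numpy as np

    def item_statistics(responses):
        # responses: (n_students, n_items) matrix of 0/1 scores.
        responses = np.asarray(responses, dtype=float)
        total = responses.sum(axis=1)
        # Difficulty: proportion of students answering each item correctly
        # (so a HIGHER value means an EASIER item).
        difficulty = responses.mean(axis=0)
        # Discrimination: corrected item-total correlation, i.e. the
        # correlation of each item with the total score on the other items.
        discrimination = np.empty(responses.shape[1])
        for j in range(responses.shape[1]):
            rest = total - responses[:, j]
            discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
        return difficulty, discrimination

    # Invented toy data: 6 students x 3 items.
    data = [[1, 1, 0], [1, 0, 0], [1, 1, 1],
            [0, 0, 0], [1, 1, 1], [0, 1, 0]]
    difficulty, discrimination = item_statistics(data)
    ```

    The corrected item-total correlation is one common discrimination index; values above roughly 0.2 are usually considered acceptable for diagnostic instruments.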

  17. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  18. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. The experimental design of a dose-response study is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows experimental conditions (dose levels, measurement times, etc.) to be set in a way that minimizes the number of measurements and subjects required to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem has so far prevented widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete, usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal, and Weibull functions with four parameters each. The resulting designs usually require a control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters used for design determination are misspecified, and how Bayes optimal designs can improve the situation. PMID:25155192
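    The D-optimality criterion referred to here maximizes the determinant of the Fisher information matrix, which for a nonlinear regression model is assembled from the gradients of the mean function with respect to the parameters, evaluated at prior estimates. A toy sketch for a four-parameter log-logistic model, using a brute-force search over candidate four-point designs (the parameter values and candidate dose grid are invented for illustration, not taken from the paper):

    ```python
    import numpy as np
    from itertools import combinations

    # Prior guesses for the four log-logistic parameters (invented):
    # lower asymptote c, upper asymptote d, slope b, ED50 e.
    theta = np.array([0.1, 1.0, 2.0, 10.0])

    def ll4(x, p):
        c, d, b, e = p
        return c + (d - c) / (1.0 + (x / e) ** b)

    def grad(x, p, h=1e-6):
        # Central-difference gradient of the mean response w.r.t. parameters.
        g = np.empty(4)
        for k in range(4):
            dp = np.zeros(4)
            dp[k] = h
            g[k] = (ll4(x, p + dp) - ll4(x, p - dp)) / (2 * h)
        return g

    def log_det_info(doses, p):
        # Fisher information for equal allocation across the dose levels;
        # D-optimality maximizes its log-determinant.
        M = sum(np.outer(grad(x, p), grad(x, p)) for x in doses) / len(doses)
        sign, logdet = np.linalg.slogdet(M)
        return logdet if sign > 0 else -np.inf

    # Candidate doses: a near-zero control plus a log-spaced grid.
    candidates = [0.01] + list(np.logspace(-0.5, 2, 10))
    best = max(combinations(candidates, 4),
               key=lambda d: log_det_info(d, theta))
    ```

    The winning design typically places points near the control, around the steep part of the curve (near the ED50), and at high dose; the D-efficiency of a conventional layout relative to the optimum is the determinant ratio raised to the 1/4 power for four parameters.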

  19. Validation of erythromycin microbiological assay using an alternative experimental design.

    PubMed

    Lourenço, Felipe Rebello; Kaneko, Telma Mary; Pinto, Terezinha de Jesus Andreoli

    2007-01-01

    The agar diffusion method, widely used in antibiotic potency determination, relates the diameter of the inhibition zone to the dose of the substance assayed. An experimental plan is proposed that may provide better results and an indication of assay validity. The symmetric or balanced assays (2 x 2), as well as those with interpolation on a standard curve (5 x 1), are the main designs used in antibiotic assays. This study proposes an alternative experimental design for the erythromycin microbiological assay, with evaluation of the method's validation parameters of linearity, precision, and accuracy. The proposed design (3 x 1) uses 3 doses of standard and 1 dose of sample applied on a single plate, combining the characteristics of the 2 x 2 and 5 x 1 assays. The method was validated for the erythromycin microbiological assay by agar diffusion, demonstrating its adequacy with respect to linearity, precision, and accuracy standards. Likewise, the statistical methods used demonstrated their accordance with the method concerning the parameters evaluated. The 3 x 1 design proved to be adequate for the determination of erythromycin and is thus a good alternative for the erythromycin assay. PMID:17760348
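    The 3 x 1 layout described here amounts to fitting a standard curve from three standard doses and reading the sample potency off that curve. Assuming the usual linear relationship between inhibition-zone diameter and log dose, a minimal sketch (all doses and zone diameters are invented):

    ```python
    import numpy as np

    # Invented standard doses (IU/mL) and measured inhibition-zone
    # diameters (mm) for the three standard levels of a 3 x 1 plate.
    std_doses = np.array([1.0, 2.0, 4.0])
    std_zones = np.array([16.0, 18.1, 20.0])

    # Assume the usual linearity of zone diameter in log(dose) and fit
    # the standard curve by least squares.
    slope, intercept = np.polyfit(np.log(std_doses), std_zones, 1)

    # Interpolate the sample potency from its measured zone diameter.
    sample_zone = 18.5
    sample_dose = float(np.exp((sample_zone - intercept) / slope))
    ```

    In a real assay, validity checks such as linearity of the standard curve would be evaluated, as in the paper, before reporting the interpolated potency.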

  20. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697
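    The adaptive design optimization idea underlying this work can be illustrated in a much-simplified, non-hierarchical form: maintain a posterior over the model parameters on a grid and, at each trial, choose the design (stimulus) that maximizes the expected reduction in posterior entropy. A sketch for estimating the threshold of a logistic psychometric function (the grids, function, and settings are invented; this is not the authors' hierarchical Bayes method):

    ```python
    import numpy as np

    # Grid posterior over the unknown threshold t of a logistic psychometric
    # function P(correct | x, t) = 1 / (1 + exp(-(x - t))).
    thresholds = np.linspace(-3, 3, 61)
    prior = np.full(len(thresholds), 1.0 / len(thresholds))
    designs = np.linspace(-4, 4, 33)  # candidate stimulus levels

    def p_correct(x, t):
        return 1.0 / (1.0 + np.exp(-(x - t)))

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def best_design(prior):
        # Choose the stimulus maximizing the expected reduction in posterior
        # entropy (the mutual information between t and the trial outcome).
        gains = []
        for x in designs:
            p1 = p_correct(x, thresholds)        # P(y = 1 | t) over the grid
            m1 = np.sum(prior * p1)              # marginal P(y = 1)
            post1 = prior * p1 / m1              # posterior after y = 1
            post0 = prior * (1 - p1) / (1 - m1)  # posterior after y = 0
            expected = m1 * entropy(post1) + (1 - m1) * entropy(post0)
            gains.append(entropy(prior) - expected)
        return designs[int(np.argmax(gains))]

    x_next = best_design(prior)  # near 0 for this symmetric prior
    ```

    After observing the response at x_next, the prior is replaced by the matching Bayes posterior and the selection is repeated; the hierarchical extension in the paper additionally shares information across participants.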

  1. Design and experimental validation of looped-tube thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Abduljalil, Abdulrahman S.; Yu, Zhibin; Jaworski, Artur J.

    2011-10-01

    The aim of this paper is to present the design and experimental validation process for a thermoacoustic looped-tube engine. The design procedure consists of numerical modelling of the system using the DeltaEC tool (Design Environment for Low-amplitude ThermoAcoustic Energy Conversion), in particular of the effects of mean pressure and regenerator configuration on the pressure amplitude and acoustic power generated. This is followed by the construction of a practical engine system equipped with a ceramic regenerator, a substrate with fine square channels of the kind used in automotive catalytic converters. Preliminary test results are obtained and compared with the simulations in detail. The measurements agree very well with the simulations qualitatively and are reasonably close quantitatively.

  2. A Course in Heterogeneous Catalysis: Principles, Practice, and Modern Experimental Techniques.

    ERIC Educational Resources Information Center

    Wolf, Eduardo E.

    1981-01-01

    Outlines a multidisciplinary course which comprises fundamental, practical, and experimental aspects of heterogeneous catalysis. The course structure is a combination of lectures and demonstrations dealing with the use of spectroscopic techniques for surface analysis. (SK)

  3. Techniques for Conducting Effective Concept Design and Design-to-Cost Trade Studies

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2015-01-01

    Concept design plays a central role in project success as its product effectively locks the majority of system life cycle cost. Such extraordinary leverage presents a business case for conducting concept design in a credible fashion, particularly for first-of-a-kind systems that advance the state of the art and that have high design uncertainty. A key challenge, however, is to know when credible design convergence has been achieved in such systems. Using a space system example, this paper characterizes the level of convergence needed for concept design in the context of technical and programmatic resource margins available in preliminary design and highlights the importance of design and cost evaluation learning curves in determining credible convergence. It also provides techniques for selecting trade study cases that promote objective concept evaluation, help reveal unknowns, and expedite convergence within the trade space and conveys general practices for conducting effective concept design-to-cost studies.

  4. Computational design and experimental verification of a symmetric protein homodimer.

    PubMed

    Mou, Yun; Huang, Po-Ssu; Hsu, Fang-Ciao; Huang, Shing-Jong; Mayo, Stephen L

    2015-08-25

    Homodimers are the most common type of protein assembly in nature and have distinct features compared with heterodimers and higher order oligomers. Understanding homodimer interactions at the atomic level is critical both for elucidating their biological mechanisms of action and for accurate modeling of complexes of unknown structure. Computation-based design of novel protein-protein interfaces can serve as a bottom-up method to further our understanding of protein interactions. Previous studies have demonstrated that the de novo design of homodimers can be achieved to atomic-level accuracy by β-strand assembly or through metal-mediated interactions. Here, we report the design and experimental characterization of an α-helix-mediated homodimer with C2 symmetry based on a monomeric Drosophila engrailed homeodomain scaffold. A solution NMR structure shows that the homodimer exhibits parallel helical packing similar to the design model. Because the mutations leading to dimer formation resulted in poor thermostability of the system, design success was facilitated by the introduction of independent thermostabilizing mutations into the scaffold. This two-step design approach, function and stabilization, is likely to be generally applicable, especially if the desired scaffold is of low thermostability. PMID:26269568

  5. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.

  6. Study of automatic designing of line heating technique parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Jun; Guo, Pei-Jun; Deng, Yan-Ping; Ji, Zhuo-Shang; Wang, Ji; Zhou, Bo; Yang, Hong; Zhao, Pi-Dong

    2006-03-01

    Based on experimental data of line heating, the methods of vector mapping, plane projection, and coordinate conversion are presented to establish spectra of line-heating distortion, which describe the relationship between the process parameters and the distortion parameters of line heating. A back-propagation network (BP-net) is used to refine the spectra. Mathematical models for optimizing line-heating technique parameters, which include two objective functions, are constructed. To convert the multi-objective optimization into a single-objective one, variable weight coefficients are used, and the individual fitness function is then built up. Taking the number of heating lines, the distance between the borders of the heating lines (line spacing), and the shrinkage of lines as three constraints, a hierarchical genetic algorithm (HGA) code is established that makes use of the information provided by the spectra, in which the inner and outer codings adopt different genetic operators. The numerical example shows that the distortion spectra presented here can provide the accurate information required to predict the technique parameters of the line-heating process, and that the HGA-based parameter-optimization method can obtain good results for hull plates.
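    The weight-coefficient scalarization and genetic search described in this abstract can be sketched generically. The following toy real-coded genetic algorithm minimizes a weighted sum of two invented objectives standing in for deformation error and heating cost; it is a plain single-level GA, not the paper's hierarchical HGA, and all functions, bounds, and weights are assumptions:

    ```python
    import random

    random.seed(42)

    # Toy stand-ins for the two objectives (invented): deviation of the
    # predicted deformation from a target, and a heating-cost proxy, as
    # functions of line spacing x [mm] and number of passes y (continuous).
    def f1(x, y):
        return (x - 30.0) ** 2 + (y - 4.0) ** 2  # deformation error

    def f2(x, y):
        return 0.1 * x + 2.0 * y                 # heating cost

    def fitness(ind, w=(0.7, 0.3)):
        # Weight coefficients convert the two objectives into one criterion.
        x, y = ind
        return w[0] * f1(x, y) + w[1] * f2(x, y)

    def evolve(pop_size=40, gens=60):
        pop = [(random.uniform(10, 60), random.uniform(1, 10))
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]     # truncation selection (elitist)
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = tuple((ai + bi) / 2 + random.gauss(0, 0.5)
                              for ai, bi in zip(a, b))  # blend + mutation
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    best_plan = evolve()
    ```

    Varying the weight vector w traces out different compromise solutions; the hierarchical coding in the paper additionally lets a discrete outer level (e.g., the number of heating lines) and a continuous inner level evolve with different operators.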

  7. Amplified energy harvester from footsteps: design, modeling, and experimental analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Chen, Wusi; Guzman, Plinio; Zuo, Lei

    2014-04-01

    This paper presents the design, modeling and experimental analysis of an amplified footstep energy harvester. With the unique design of amplified piezoelectric stack harvester the kinetic energy generated by footsteps can be effectively captured and converted into usable DC power that could potentially be used to power many electric devices, such as smart phones, sensors, monitoring cameras, etc. This doormat-like energy harvester can be used in crowded places such as train stations, malls, concerts, airport escalator/elevator/stairs entrances, or anywhere large group of people walk. The harvested energy provides an alternative renewable green power to replace power requirement from grids, which run on highly polluting and global-warming-inducing fossil fuels. In this paper, two modeling approaches are compared to calculate power output. The first method is derived from the single degree of freedom (SDOF) constitutive equations, and then a correction factor is applied onto the resulting electromechanically coupled equations of motion. The second approach is to derive the coupled equations of motion with Hamilton's principle and the constitutive equations, and then formulate it with the finite element method (FEM). Experimental testing results are presented to validate modeling approaches. Simulation results from both approaches agree very well with experimental results where percentage errors are 2.09% for FEM and 4.31% for SDOF.
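    The SDOF modeling approach mentioned above couples a mechanical oscillator to a piezoelectric circuit through an electromechanical coupling term. A minimal time-stepping sketch of such a model (the governing equations are the standard textbook form; all parameter values are invented, not taken from the paper):

    ```python
    import numpy as np

    # Illustrative SDOF electromechanical harvester model (textbook form):
    #   m*x'' + c*x' + k*x + theta*v = F(t)   (mechanical)
    #   Cp*v'  + v/R  = theta*x'              (electrical)
    m, c, k = 0.1, 0.5, 1.0e4        # mass [kg], damping [N s/m], stiffness [N/m]
    theta, Cp, R = 1e-3, 1e-7, 1e5   # coupling [N/V], capacitance [F], load [ohm]
    w = np.sqrt(k / m)               # drive at the mechanical resonance
    dt, T = 1e-5, 0.5
    x = xd = v = 0.0
    energy = 0.0
    for i in range(int(T / dt)):
        F = 0.5 * np.sin(w * i * dt)                 # harmonic footstep proxy
        xdd = (F - c * xd - k * x - theta * v) / m   # mechanical acceleration
        vd = (theta * xd - v / R) / Cp               # voltage rate
        x += xd * dt
        xd += xdd * dt
        v += vd * dt
        energy += v * v / R * dt                     # energy into the load

    avg_power = energy / T  # mean electrical power delivered to R
    ```

    avg_power is the mean electrical power dissipated in the load resistor; the paper's first approach additionally applies a correction factor to coupled equations of this type, while its second derives them via Hamilton's principle and FEM.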

  8. Theoretical and experimental evaluation of broadband decoupling techniques for in vivo nuclear magnetic resonance spectroscopy.

    PubMed

    de Graaf, Robin A

    2005-06-01

    A theoretical and experimental evaluation of existing broadband decoupling methods with respect to their utility for in vivo (1)H-(13)C NMR spectroscopy is presented. Simulations are based on a modified product operator formalism, while an experimental evaluation is performed on in vitro samples and human leg and rat brain in vivo. The performance of broadband decoupling methods was evaluated with respect to the required peak and average RF powers, decoupling bandwidth, decoupling side bands, heteronuclear scalar coupling constant, and sensitivity toward B(2) inhomogeneity. In human applications only the WALTZ and MLEV decoupling methods provide adequate decoupling performance at RF power levels that satisfy the FDA guidelines on local tissue heating. For very low RF power levels (B(2max) < 300 Hz) one should verify empirically whether the experiment will benefit from broadband decoupling. At higher RF power levels acceptable for animal studies additional decoupling techniques become available and provide superior performance. Since the average RF power of adiabatic RF pulses is almost always significantly lower than the peak RF power, it can be stated that for average RF powers suitable for animal studies it is always possible to design an adiabatic decoupling scheme that outperforms all other schemes. B(2) inhomogeneity degrades the decoupling performance of all methods, but the decoupling bandwidths for WALTZ-16 and especially adiabatic methods are still satisfactory for useful in vivo decoupling with a surface coil. PMID:15906279

  9. Design of vibration isolation systems using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
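    Of the methods listed, the global criterion method is perhaps the simplest to illustrate: each objective is first minimized individually, and the compromise design minimizes the summed squared relative deviations from those individual optima. A toy sketch for a single-DOF base-excited isolator, trading transmitted force against relative displacement as the damping ratio varies (the objective definitions and frequency ratios are invented for illustration, not the paper's three-DOF formulation):

    ```python
    import numpy as np

    zetas = np.linspace(0.05, 1.0, 96)  # candidate damping ratios

    def transmissibility(z, r=2.0):
        # Force transmitted to the foundation at frequency ratio r
        # (in the isolation region r > sqrt(2), more damping hurts).
        return np.sqrt((1 + (2 * z * r) ** 2)
                       / ((1 - r ** 2) ** 2 + (2 * z * r) ** 2))

    def rel_disp(z, r=1.0):
        # Relative displacement amplitude ratio at resonance (r = 1),
        # where more damping helps.
        return r ** 2 / np.sqrt((1 - r ** 2) ** 2 + (2 * z * r) ** 2)

    f1 = transmissibility(zetas)  # objective 1: transmitted force (minimize)
    f2 = rel_disp(zetas)          # objective 2: rattle space (minimize)
    f1_star, f2_star = f1.min(), f2.min()

    # Global criterion: minimize the summed squared relative deviation of
    # each objective from its individually attainable optimum.
    gc = ((f1 - f1_star) / f1_star) ** 2 + ((f2 - f2_star) / f2_star) ** 2
    z_opt = zetas[np.argmin(gc)]  # compromise damping ratio
    ```

    The two objectives pull in opposite directions, so the compromise lands strictly between the single-objective optima; game theory and the other methods in the paper differ mainly in how this balance is struck.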

  10. LeRC rail accelerators - Test designs and diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Zana, L. M.; Kerslake, W. R.; Sturman, J. C.; Wang, S. Y.; Terdan, F. F.

    1984-01-01

    The feasibility of using rail accelerators for various in-space and to-space propulsion applications was investigated. A 1 meter, 24 sq mm bore accelerator was designed with the goal of demonstrating projectile velocities of 15 km/sec using a peak current of 200 kA. A second rail accelerator, 1 meter long with a 156.25 sq mm bore, was designed with clear polycarbonate sidewalls to permit visual observation of the plasma arc. A study of available diagnostic techniques and their application to the rail accelerator is presented. Specific topics of discussion include the use of interferometry and spectroscopy to examine the plasma armature as well as the use of optical sensors to measure rail displacement during acceleration. Standard diagnostics such as current and voltage measurements are also discussed. Previously announced in STAR as N83-35053

  11. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement a novel molecular dynamics method to improve the efficiency of simulations of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  12. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulations of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  13. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  14. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  15. Optimization of experimental designs by incorporating NIF facility impacts

    NASA Astrophysics Data System (ADS)

    Eder, D. C.; Whitman, P. K.; Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T.; Parham, T. G.; Koerner, J. G.; Dixit, S. N.; Suratwala, T. I.; Blue, B. E.; Hansen, J. F.; Tobin, M. T.; Robey, H. F.; Spaeth, M. L.; MacGowan, B. J.

    2006-06-01

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, fast moving fragments. We have experimentally demonstrated one shrapnel mitigation technique showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to assure that all NIF experimental campaigns meet the requirements on allowed level of debris and shrapnel generation.

  16. Design considerations for ITER (International Thermonuclear Experimental Reactor) magnet systems

    SciTech Connect

    Henning, C.D.; Miller, J.R.

    1988-10-09

    The International Thermonuclear Experimental Reactor (ITER) is now completing a definition phase as the beginning of a three-year design effort. Preliminary parameters for the superconducting magnet system have been established to guide further and more detailed design work. Radiation tolerance of the superconductors and insulators has been of prime importance, since it sets requirements for the neutron-shield dimension and sensitively influences reactor size. The major levels of mechanical stress in the structure appear in the cases of the inboard legs of the toroidal-field (TF) coils. The cases of the poloidal-field (PF) coils must be made thin or segmented to minimize eddy current heating during inductive plasma operation. As a result, the winding packs of both the TF and PF coils include significant fractions of steel. The TF winding pack provides support against in-plane separating loads but offers little support against out-of-plane loads, unless shear-bonding of the conductors can be maintained. The removal of heat due to nuclear and ac loads has not been a fundamental limit to design, but certainly has non-negligible economic consequences. We present here preliminary ITER magnet system design parameters taken from trade studies, designs, and analyses performed by the Home Teams of the four ITER participants, by the ITER Magnet Design Unit in Garching, and by other participants at workshops organized by the Magnet Design Unit. The work presented here reflects the efforts of many, but the responsibility for the opinions expressed is the authors'. 4 refs., 3 figs., 4 tabs.

  17. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2015-11-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all of these studies rest on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  18. On the proper study design applicable to experimental balneology.

    PubMed

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all of these studies rest on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed. PMID:26597677

  20. Determining the extent of coarticulation: effects of experimental design.

    PubMed

    Gelfer, C E; Bell-Berti, F; Harris, K S

    1989-12-01

    The purpose of this letter is to explore some reasons for what appear to be conflicting reports regarding the nature and extent of anticipatory coarticulation, in general, and anticipatory lip rounding, in particular. Analyses of labial electromyographic and kinematic data using a minimal-pair paradigm allowed for the differentiation of consonantal and vocalic effects, supporting a frame versus a feature-spreading model of coarticulation. It is believed that the apparent conflicts of previous studies of anticipatory coarticulation might be resolved if experimental design made more use of contrastive minimal pairs and relied less on assumptions about feature specifications of phones. PMID:2600314

  1. Design optimum frac jobs using virtual intelligence techniques

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Among other essential information required for the design process is fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and frac job schedule. Some of the parameters such as fluid and proppant types have discrete possible choices. Other parameters such as fluid and proppant volume, on the other hand, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterization and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will satisfy the specified criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These
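
    The genetic-algorithm-over-surrogate loop described in the abstract can be sketched in a few lines. The surrogate function, parameter names, and bounds below are illustrative stand-ins for the trained neural-network proxy and real frac-design variables, not the authors' model:

```python
import random

# Hypothetical surrogate for a trained neural-network proxy of the frac
# simulator: maps (fluid_volume, proppant_volume, injection_rate) to a
# predicted design score. Purely illustrative.
def surrogate(x):
    fluid, proppant, rate = x
    return -(fluid - 500) ** 2 / 1e3 - (proppant - 80) ** 2 / 10 - (rate - 30) ** 2 + 400

BOUNDS = [(100.0, 1000.0), (10.0, 150.0), (5.0, 60.0)]

def random_design():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(x, scale=0.1):
    # Gaussian perturbation, clipped back into the allowed range
    return [min(hi, max(lo, xi + random.gauss(0, scale * (hi - lo))))
            for xi, (lo, hi) in zip(x, BOUNDS)]

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent
    return [random.choice(pair) for pair in zip(a, b)]

def optimize(pop_size=40, generations=60, seed=1):
    random.seed(seed)
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate, reverse=True)
        elite = pop[: pop_size // 4]          # keep the best quarter unchanged
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=surrogate)

best = optimize()
```

    Because the surrogate is cheap to evaluate, the GA can explore thousands of candidate designs in the time a single simulator run would take, which is the point of the ensemble-of-networks approach.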

  2. Structural design and fabrication techniques of composite unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hunt, Daniel Stephen

    Popularity of unmanned aerial vehicles has grown substantially in recent years, both in the private sector and for government functions. This growth can be attributed largely to the increased performance of the technology that controls these vehicles, as well as the decreasing cost and size of this technology. What is sometimes forgotten, though, is that the research and advancement of the airframes themselves are equally as important as what is done with them. With current computer-aided design programs, the limits of design optimization can be pushed further than ever before, resulting in lighter and faster airframes that can achieve longer endurances, higher altitudes, and more complex missions. However, realization of a paper design is still limited by the physical restrictions of the real world and the structural constraints associated with it. The purpose of this paper is to not only step through current design and manufacturing processes of composite UAVs at Oklahoma State University, but to also focus on composite spars, utilizing and relating both calculated and empirical data. Most of the experience gained for this thesis was from the Cessna Longitude project. The Longitude is a 1/8-scale flying demonstrator that Oklahoma State University constructed for Cessna. For the project, Cessna required dynamic flight data for their design process in order to make their 2017 release date. Oklahoma State University was privileged to assist Cessna with the mission of supporting the validation of design of their largest business jet to date. This paper will detail the steps of the fabrication process used in construction of the Longitude, as well as several other projects, beginning with structural design, machining, molding, skin layup, and ending with final assembly. Also, attention will be paid specifically to spar design and testing in an effort to ease the design phase. This document is intended to act not only as a further development of current

  3. Experimental verification of photon angular momentum and vorticity with radio techniques

    NASA Astrophysics Data System (ADS)

    Tamburini, Fabrizio; Mari, Elettra; Thidé, Bo; Barbieri, Cesare; Romanato, Filippo

    2011-11-01

    The experimental evidence that radio techniques can be used for synthesizing and analyzing non-integer electromagnetic (EM) orbital angular momentum (OAM) of radiation is presented. The technique amounts to sampling the EM field vectors in space and time and digitally processing the data to calculate the vortex structure, the spatial phase distribution, and the OAM spectrum of the radiation. The experimental verification that OAM-carrying beams can be readily generated and exploited by using radio techniques paves the way to an entirely new paradigm of radar and radio communication protocols.
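
    The OAM-spectrum idea can be sketched with an FFT over azimuthal samples of a synthetic single-mode beam. The sampling geometry and mode choice below are illustrative, not the authors' antenna setup:

```python
import numpy as np

# Synthesize azimuthal samples of a beam carrying OAM mode l = 2, then
# recover the OAM spectrum with an FFT over the azimuthal angle.
N = 64                                     # azimuthal sampling points
phi = 2 * np.pi * np.arange(N) / N         # sample angles around the beam axis
l_true = 2
field = np.exp(1j * l_true * phi)          # ideal OAM-carrying field, e^{i l phi}

spectrum = np.abs(np.fft.fft(field)) / N   # weight of each integer OAM mode
modes = np.fft.fftfreq(N, d=1 / N)         # mode numbers 0, 1, ..., -2, -1
dominant = int(modes[np.argmax(spectrum)])
```

    For this ideal field the spectrum concentrates entirely in mode l = 2; a non-integer OAM beam would instead spread its weight over several integer modes.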

  4. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
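
    The complex-variable numerical derivative named in the abstract can be illustrated on a scalar function; the test function here is an arbitrary stand-in for a fuel-cell cost function:

```python
import math
import cmath

# Complex-step differentiation: f'(x) ~ Im(f(x + ih)) / h. Unlike finite
# differences, there is no subtraction, hence no cancellation error, so h
# can be made tiny (1e-30) and the result is accurate to machine precision.
def complex_step_derivative(f, x, h=1e-30):
    return (f(x + 1j * h)).imag / h

# Central finite difference for comparison (limited by cancellation).
def finite_difference(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: cmath.exp(x) * cmath.sin(x)        # accepts complex arguments
exact = math.exp(1.0) * (math.sin(1.0) + math.cos(1.0))

cs = complex_step_derivative(f, 1.0)
fd = finite_difference(f, 1.0).real
```

    The same trick extends to the sensitivity of a discretized PDE residual with respect to design variables, which is how it is typically used alongside adjoint methods.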

  5. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  6. Design preferences and cognitive styles: experimentation by automated website synthesis

    PubMed Central

    2012-01-01

    Background This article aims to demonstrate computational synthesis of Web-based experiments in undertaking experimentation on relationships among the participants' design preference, rationale, and cognitive test performance. The exemplified experiments were computationally synthesised, including the websites as materials, experiment protocols as methods, and cognitive tests as protocol modules. This work also exemplifies the use of a website synthesiser as an essential instrument enabling the participants to explore different possible designs, which were generated on the fly, before selection of preferred designs. Methods The participants were given interactive tree and table generators so that they could explore some different ways of presenting causality information in tables and trees as the visualisation formats. The participants gave their preference ratings for the available designs, as well as their rationale (criteria) for their design decisions. The participants were also asked to take four cognitive tests, which focus on the aspects of visualisation and analogy-making. The relationships among preference ratings, rationale, and the results of cognitive tests were analysed by conservative non-parametric statistics including the Wilcoxon test, Kruskal-Wallis test, and Kendall correlation. Results In the test, 41 of the total 64 participants preferred graphical (tree-form) to tabular presentation. Despite the popular preference for graphical presentation, the given tabular presentation was generally rated to be easier than graphical presentation to interpret, especially by those who were scored lower in the visualization and analogy-making tests. Conclusions This piece of evidence helps generate a hypothesis that design preferences are related to specific cognitive abilities. Without the use of computational synthesis, the experiment setup and scientific results would be impractical to obtain. PMID:22748000
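
    One of the named statistics, the Kendall rank correlation, is simple enough to sketch directly from its definition (the tau-a form, assuming no ties; the ratings are made-up):

```python
from itertools import combinations

# Kendall tau from concordant/discordant pair counts: a pair of observations
# is concordant if both variables move in the same direction, discordant
# if they move in opposite directions.
def kendall_tau(x, y):
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Perfectly concordant rankings give tau = 1; fully reversed give tau = -1.
ratings = [1, 2, 3, 4, 5]
tau_same = kendall_tau(ratings, [10, 20, 30, 40, 50])
tau_rev = kendall_tau(ratings, [50, 40, 30, 20, 10])
```

    Because it depends only on pair orderings, the statistic is robust to monotone rescaling of either variable, which is why it suits ordinal preference ratings.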

  7. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study for the first time demonstrates combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve carbamazepine dissolution rate. The influence of the composition of prepared SDs on carbamazepine dissolution rate was evaluated using d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and mixture experimental design well described the relationship between proportions of SD components and percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, wherein the ANN model exhibits better predictability on the test data set. Proportions of carbamazepine and poloxamer 188 exhibited the highest influence on carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account advantages and disadvantages of both techniques, their combined application should be encouraged. PMID:26065534
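
    A mixture design constrains the three component proportions to sum to one. Generating the simplex-lattice candidate set from which a d-optimal subset would be chosen (with dedicated software, not shown here) can be sketched as:

```python
from itertools import product

# {3, m} simplex-lattice candidates for a three-component mixture (e.g. drug,
# polymer 1, polymer 2 -- component roles are illustrative): all proportion
# triples in steps of 1/m that sum to exactly 1.
def simplex_lattice(n_components=3, m=4):
    points = []
    for combo in product(range(m + 1), repeat=n_components):
        if sum(combo) == m:
            points.append(tuple(c / m for c in combo))
    return points

candidates = simplex_lattice(3, 4)
```

    For three components and m = 4 this yields 15 candidate blends, including the pure components and the centroid region; a d-optimal algorithm then picks the subset that best supports the chosen polynomial model.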

  8. Experimental Vertical Stability Studies for ITER Performance and Design Guidance

    SciTech Connect

    Humphreys, D A; Casper, T A; Eidietis, N; Ferrera, M; Gates, D A; Hutchinson, I H; Jackson, G L; Kolemen, E; Leuer, J A; Lister, J; LoDestro, L L; Meyer, W H; Pearlstein, L D; Sartori, F; Walker, M L; Welander, A S; Wolfe, S M

    2008-10-13

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modeling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses some of the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be very severe. The present work describes results of multi-machine studies performed under a joint ITPA experiment on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV, and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement which can challenge the vertical control loop are assessed and analyzed.

  9. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
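
    The pair-of-models idea behind prediction deviation can be imitated with a coarse grid search; the exponential decay model, synthetic data, and fit tolerance below are illustrative, not the paper's viral-infection model:

```python
import math

# Among parameterized models that fit the observed data comparably well,
# find how far apart their predictions can be at an unobserved time.
t_obs = [0.0, 1.0, 2.0]
y_obs = [1.00, 0.61, 0.37]             # roughly exp(-t/2), synthetic data
t_new = 5.0                             # prediction target outside the data

def model(a, b, t):
    return a * math.exp(-b * t)

def sse(a, b):
    return sum((model(a, b, t) - y) ** 2 for t, y in zip(t_obs, y_obs))

# Grid over (a, b) in [0.5, 1.5] x [0.5, 1.5]
grid = [(a / 50, b / 50) for a in range(25, 76) for b in range(25, 76)]
best = min(sse(a, b) for a, b in grid)
good = [(a, b) for a, b in grid if sse(a, b) <= best + 0.01]  # acceptable fits

preds = [model(a, b, t_new) for a, b in good]
prediction_deviation = max(preds) - min(preds)
```

    A large deviation signals that the data have not yet pinned down the prediction at t_new, so an experiment measuring near that point would be the most informative next step.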

  10. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  11. Reduction of animal use: experimental design and quality of experiments.

    PubMed

    Festing, M F

    1994-07-01

    Poorly designed and analysed experiments can lead to a waste of scientific resources, and may even reach the wrong conclusions. Surveys of published papers by a number of authors have shown that many experiments are poorly analysed statistically, and one survey suggested that about a third of experiments may be unnecessarily large. Few toxicologists attempted to control variability using blocking or covariance analysis. In this study experimental design and statistical methods in 3 papers published in toxicological journals were used as case studies and were examined in detail. The first used dogs to study the effects of ethanol on blood and hepatic parameters following chronic alcohol consumption in a 2 x 4 factorial experimental design. However, the authors used mongrel dogs of both sexes and different ages with a wide range of body weights without any attempt to control the variation. They had also attempted to analyse a factorial design using Student's t-test rather than the analysis of variance. Means of 2 blood parameters presented with one decimal place had apparently been rounded to the nearest 5 units. It is suggested that this experiment could equally well have been done in 3 blocks using 24 instead of 46 dogs. The second case study was an investigation of the response of 2 strains of mice to a toxic agent causing bladder injury. The first experiment involved 40 treatment combinations (2 strains x 4 doses x 5 days) with 3-6 mice per combination. There was no explanation of how the experiment involving approximately 180 mice had actually been done, but unequal subclass numbers suggest that the experiment may have been done on an ad hoc basis rather than being properly designed. It is suggested that the experiment could have been done as 2 blocks involving 80 instead of about 180 mice. The third study again involved a factorial design with 4 dose levels of a compound and 2 sexes, with a total of 80 mice. Open field behaviour was examined. The author
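
    The blocked alternative suggested for the first case study can be sketched as an allocation: a 2 x 4 factorial run once per block across 3 blocks of homogeneous animals, 24 in all, with run order randomized within each block. Factor names are invented for illustration:

```python
import random
from itertools import product

# All 8 treatment combinations of a 2 x 4 factorial (levels are illustrative).
treatments = list(product(["control", "ethanol"], ["diet1", "diet2", "diet3", "diet4"]))

random.seed(0)
blocks = []
for block in range(3):
    order = treatments[:]       # every block contains each combination once
    random.shuffle(order)       # randomize assignment within the block
    blocks.append(order)

total_animals = sum(len(b) for b in blocks)
```

    Grouping similar animals into blocks removes block-to-block variation from the error term, which is how the design achieves the same power with 24 animals instead of 46.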

  12. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
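
    For context, the classical acceptance-sampling approximation behind such X%/Y% clearance statements (a standard result, assumed here rather than taken from the report) gives the number of clean samples required:

```python
import math

# If n randomly placed samples are all clean, one can state with confidence X
# that at least a fraction Y of the area is uncontaminated provided
# Y**n <= 1 - X, i.e. n >= ln(1 - X) / ln(Y).
def clearance_sample_size(confidence, clean_fraction):
    return math.ceil(math.log(1 - confidence) / math.log(clean_fraction))

n = clearance_sample_size(0.95, 0.99)   # samples for a 95%/99% statement
```

    The count grows quickly as Y approaches 1, which is why Bayesian designs that fold in judgmental prior information are attractive for reducing sample numbers.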

  13. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating-reactor accident, can be achieved by spraying water drops. The spray reduces the pressure and temperature levels by condensation of steam on cold water drops. The most stringent thermodynamic conditions are a pressure of 5.10{sup 5} Pa (due to steam emission) and a temperature of 413 K. Beyond its energy-dissipation function, the spray leads to the washout of fission-product particles emitted into the reactor-building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed according to fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic, and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of the criteria given previously on the design, and the necessity of being representative of the real conditions, will be described.

  14. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made the effective sludge management increasingly a critical issue for the industry due to high landfill and transportation costs, and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to the conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. The key variables that were identified in the continuous

  16. A More Rigorous Quasi-Experimental Alternative to the One-Group Pretest-Posttest Design.

    ERIC Educational Resources Information Center

    Johnson, Craig W.

    1986-01-01

    A simple quasi-experimental design is described which may have utility in a variety of applied and laboratory research settings where ordinarily the one-group pretest-posttest pre-experimental design might otherwise be the procedure of choice. The design approaches the internal validity of true experimental designs while optimizing external…

  17. Comparing simulated emission from molecular clouds using experimental design

    SciTech Connect

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik; Offner, Stella

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
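The fractional factorial approach the authors advocate over one-factor-at-a-time designs can be made concrete with a small sketch. The following is an illustrative 2^(4-1) design in coded ±1 levels, not the design used in the paper; the factor names are placeholders chosen to echo the physical properties discussed above.

```python
from itertools import product

def fractional_factorial_2_4_1():
    """2^(4-1) fractional factorial in coded +/-1 levels: vary three
    factors over all combinations and set the fourth by the design
    generator D = A*B*C -- 8 runs instead of the full factorial's 16.
    Factor names are placeholders, not those used in the study."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append({"mach": a, "temperature": b,
                     "b_field": c, "velocity_spectrum": a * b * c})
    return runs

design = fractional_factorial_2_4_1()
print(len(design))  # 8 runs

# The design is balanced: every factor sits at +1 in exactly half the
# runs, so main effects can be estimated from run-to-run contrasts.
for name in ("mach", "temperature", "b_field", "velocity_spectrum"):
    assert sum(run[name] for run in design) == 0
```

The price of halving the run count is aliasing: here each main effect is confounded with a three-factor interaction, which is usually an acceptable trade when simulations are expensive.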

  18. Technique to model and design physical database systems

    SciTech Connect

    Wise, T.E.

    1983-12-01

    Database management systems (DBMSs) allow users to define and manipulate records at a logical level of abstraction. A logical record is not stored as users see it but is mapped into a collection of physical records. Physical records are stored in file structures managed by a DBMS. Likewise, DBMS commands which appear to be directed toward one or more logical records actually correspond to a series of operations on the file structures. The structures and operations of a DBMS (i.e., its physical architecture) are not visible to users at the logical level. Traditionally, logical records and DBMS commands are mapped to physical records and operations in one step. In this report, logical records are mapped to physical records in a series of steps over several levels of abstraction. Each level of abstraction is composed of one or more intermediate record types. A hierarchy of record types results that covers the gap between logical and physical records. The first step of our technique identifies the record types and levels of abstraction that describe a DBMS. The second step maps DBMS commands to physical operations in terms of these records and levels of abstraction. The third step encapsulates each record type and its operations into a programming construct called a module. The applications of our technique include modeling existing DBMSs and designing the physical architectures of new DBMSs. To illustrate one application, we describe in detail the architecture of the commercial DBMS INQUIRE.

  19. Interplanetary mission design techniques for flagship-class missions

    NASA Astrophysics Data System (ADS)

    Kloster, Kevin W.

Trajectory design, given the current level of propulsive technology, requires knowledge of orbital mechanics, computational resources, extensive use of tools such as gravity assists and V-infinity leveraging, and insight and finesse. Designing missions that deliver a capable science package to a celestial body of interest that are robust and affordable is a difficult task. Techniques are presented here that assist the mission designer in constructing trajectories for flagship-class missions in the outer Solar System. These techniques are applied in this work to spacecraft that are currently in flight or in the planning stages. By escaping the Saturnian system, the Cassini spacecraft can reach other destinations in the Solar System while satisfying planetary quarantine. The patched-conic method was used to search for trajectories that depart Saturn via gravity assist at Titan. Trajectories were found that fly by Jupiter to reach Uranus or Neptune, capture at Jupiter or Neptune, escape the Solar System, fly by Uranus during its 2049 equinox, or encounter Centaurs. A "grand tour," which visits Jupiter, Uranus, and Neptune, departs Saturn in 2014. New tools were built to search for encounters with Centaurs, small Solar System bodies between the orbits of Jupiter and Neptune, and to minimize the DeltaV to target these encounters. Cassini could reach Chiron, the first-discovered Centaur, in 10.5 years after a 2022 Saturn departure. For a Europa Orbiter mission, the strategy for designing Jovian System tours that include Io flybys differs significantly from schemes developed for previous versions of the mission. If the closest approach distance of the incoming hyperbola at Jupiter is below the orbit of Io, then an Io gravity assist gives the greatest energy pump-down for the least decrease in perijove radius. Using Io to help capture the spacecraft can increase the savings in Jupiter orbit insertion DeltaV over a Ganymede-aided capture. 
The tour design is

  20. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

Pervious pavements are sustainable urban drainage systems, already established as rainwater infiltration techniques that reduce runoff formation and diffuse pollution in cities. The present research is focused on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to measure the levels of the stored water and its quality over time. Six different pervious surfaces are combined with four different geotextiles in order to test which material combinations best preserve the quality of the stored rainwater over time under the specific weather conditions of the north of Spain. The aim of this research was to obtain pervious pavements that simultaneously perform well as an urban surface and harvest rainwater of sufficient quality for non-potable uses. PMID:22020491

  1. Design and experimental results of coaxial circuits for gyroklystron amplifiers

    SciTech Connect

    Flaherty, M.K.E.; Lawson, W.; Cheng, J.; Calame, J.P.; Hogan, B.; Latham, P.E.; Granatstein, V.L.

    1994-12-31

At the University of Maryland, high power microwave source development for use in linear accelerator applications continues with the design and testing of coaxial circuits for gyroklystron amplifiers. This presentation will include experimental results from a coaxial gyroklystron that was tested on the current microwave test bed, and designs for second harmonic coaxial circuits for use in the next generation of the gyroklystron program. The authors present test results for a second harmonic coaxial circuit. As in previous second harmonic experiments, the input cavity resonated at 9.886 GHz and the output frequency was 19.772 GHz. The coaxial insert was positioned in the input cavity and drift region. The inner conductor consisted of a tungsten rod with copper and ceramic cylinders covering its length. Two tungsten rods that bridged the space between the inner and outer conductors supported the whole assembly. The tube produced over 20 MW of output power with 17% efficiency. Beam interception by the tungsten rods resulted in minor damage. Comparisons with previous non-coaxial circuits showed that the coaxial configuration increased the parameter space over which stable operation was possible. Future experiments will feature an upgraded modulator and beam formation system capable of producing 300 MW of beam power. The fundamental frequency of operation is 8.568 GHz. A second harmonic coaxial gyroklystron circuit was designed for use in the new system. A scattering matrix code predicts a resonant frequency of 17.136 GHz and Q of 260 for the cavity, with 95% of the outgoing microwaves in the desired TE032 mode. Efficiency studies of this second harmonic output cavity show 20% expected efficiency. Shorter second harmonic output cavity designs are also being investigated with expected efficiencies near 34%.

  2. Focusing Kinoform Lenses: Optical Design and Experimental Validation

    SciTech Connect

    Alianelli, Lucia; Sawhney, Kawal J. S.; Snigireva, Irina; Snigirev, Anatoly

    2010-06-23

X-ray focusing lenses with a kinoform profile are high brilliance optics that can produce nano-sized beams on 3rd generation synchrotron beamlines. The lenses are fabricated with sidewalls of micrometer lateral size. They are virtually non-absorbing and therefore can deliver a high flux over a good aperture. We are developing silicon and germanium lenses that will focus hard x-ray beams to less than 0.5 {mu}m size using a single refractive element. In this contribution, we present the preliminary optical design and experimental tests carried out on beamline ID06 at the ESRF: the lenses were used to image the undulator source directly, providing a beam with a FWHM of about 0.7 {mu}m.

  3. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

Experimental allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS)1,2. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset3-5. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents6-9, and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  4. Validation of Design and Analysis Techniques of Tailored Composite Structures

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.

    2004-01-01

Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed form analysis based on a theoretical model of a single cell tailored box beam, and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed form code is consistently able to predict the wing box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed form analysis. Differences between the closed form code representation and the wing box specimen caused large errors in the twist prediction. The closed form analysis prediction of twist has not been validated by this test.

  5. Simulation-based optimal Bayesian experimental design for nonlinear systems

    SciTech Connect

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
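The two-stage (nested) Monte Carlo evaluation of expected information gain described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it omits the polynomial chaos surrogate and the stochastic approximation optimizer, and the toy linear-Gaussian model and all function names are assumptions made for the example.

```python
import math
import random

def expected_information_gain(simulate, sample_prior, log_likelihood,
                              design, n_outer=200, n_inner=200, seed=0):
    """Nested (two-stage) Monte Carlo estimate of the expected
    information gain of a design: the outer loop averages
    log p(y|theta,d) - log p(y|d), and the inner loop estimates the
    evidence p(y|d) with fresh prior draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        theta = sample_prior(rng)
        y = simulate(theta, design, rng)
        # Inner Monte Carlo estimate of the evidence p(y | design).
        inner = [math.exp(log_likelihood(y, sample_prior(rng), design))
                 for _ in range(n_inner)]
        log_evidence = math.log(sum(inner) / n_inner)
        total += log_likelihood(y, theta, design) - log_evidence
    return total / n_outer

# Toy linear-Gaussian model: y = design * theta + noise, noise ~ N(0, 1).
def sample_prior(rng):
    return rng.gauss(0.0, 1.0)

def simulate(theta, design, rng):
    return design * theta + rng.gauss(0.0, 1.0)

def log_likelihood(y, theta, design):
    r = y - design * theta
    return -0.5 * r * r - 0.5 * math.log(2.0 * math.pi)

low = expected_information_gain(simulate, sample_prior, log_likelihood, 0.1)
high = expected_information_gain(simulate, sample_prior, log_likelihood, 3.0)
print(f"EIG(d=0.1) ~ {low:.3f}, EIG(d=3.0) ~ {high:.3f}")
```

For this toy model the gain has a closed form, 0.5·log(1 + d²), so the estimate for the stronger design (larger |d|) should come out larger; note the nested estimator is biased upward for finite inner sample sizes, which is one motivation for the surrogate and variance-reduction machinery in the paper.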

  6. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  7. Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Olds, John Robert; Walberg, Gerald D.

    1993-01-01

Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are
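The parametric machinery named above (a central composite design sampled through the analysis codes, with a second-order response surface fitted to the results) can be sketched generically. The code below is an illustrative example, not the dissertation's tooling; the toy response stands in for an expensive vehicle sizing analysis, and all names are assumptions.

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """Central composite design in k coded factors: 2^k factorial
    corners, 2k axial (star) points at distance alpha, plus centre
    runs. alpha defaults to the rotatable choice (2^k)^(1/4)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = np.array(list(product((-1.0, 1.0), repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

X = central_composite(2)          # 4 corners + 4 axial + 4 centre = 12 runs

def toy_response(x):
    """Stand-in for the expensive vehicle analysis (hypothetical)."""
    return 3.0 + 2.0 * x[0] - x[1] + 0.5 * x[0] * x[1] + x[0] ** 2

# Fit the full second-order response surface
#   y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
# by least squares; the surrogate is then cheap to optimize.
y = np.array([toy_response(x) for x in X])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # coefficients ~ [3, 2, -1, 0.5, 1, 0]
```

Because the toy response is itself quadratic, the 12-run design recovers its coefficients exactly; with a real analysis code the fitted surface is only a local approximation, which is why such surrogates are paired with confirmation runs.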

  8. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  9. Case study 1. Practical considerations with experimental design and interpretation.

    PubMed

    Barr, John T; Flora, Darcy R; Iwuchukwu, Otito F

    2014-01-01

    At some point, anyone with knowledge of drug metabolism and enzyme kinetics started out knowing little about these topics. This chapter was specifically written with the novice in mind. Regardless of the enzyme one is working with or the goal of the experiment itself, there are fundamental components and concepts of every experiment using drug metabolism enzymes. The following case studies provide practical tips, techniques, and answers to questions that may arise in the course of conducting such experiments. Issues ranging from assay design and development to data interpretation are addressed. The goal of this section is to act as a starting point to provide the reader with key questions and guidance while attempting his/her own work. PMID:24523122

  10. Experimental design considerations in microbiota/inflammation studies.

    PubMed

    Moore, Robert J; Stanley, Dragana

    2016-07-01

There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes has been studied in both human and animal (mainly rodent) models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065