Science.gov

Sample records for experimental design techniques

  1. Chemometric experimental design based optimization techniques in capillary electrophoresis: a critical review of modern applications.

    PubMed

    Hanrahan, Grady; Montes, Ruthy; Gomez, Frank A

    2008-01-01

    A critical review of recent developments in the use of chemometric experimental design based optimization techniques in capillary electrophoresis applications is presented. Current advances have led to enhanced separation capabilities of a wide range of analytes in such areas as biological, environmental, food technology, pharmaceutical, and medical analysis. Significant developments in design, detection methodology and applications from the last 5 years (2002-2007) are reported. Furthermore, future perspectives in the use of chemometric methodology in capillary electrophoresis are considered.

  2. An Artificial Intelligence Technique to Generate Self-Optimizing Experimental Designs.

    DTIC Science & Technology

    1983-02-01

    pattern or a binary chopping technique in the space of decision variables while carrying out a sequence of controlled experiments on the strategy ... (AD-A127 764, "An Artificial Intelligence Technique to Generate Self-Optimizing Experimental Designs", Arizona State Univ., Tempe, Group for Computer Studies; Nicholas V. ...)

  3. Taking evolutionary circuit design from experimentation to implementation: some useful techniques and a silicon demonstration

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Guo, X.; Keymeulen, D.; Ferguson, M. I.; Duong, V.

    2004-01-01

    Current techniques in evolutionary synthesis of analogue and digital circuits designed at transistor level have focused on achieving the desired functional response, without paying sufficient attention to issues needed for a practical implementation of the resulting solution. No silicon fabrication of circuits with topologies designed by evolution has been done before, leaving open questions on the feasibility of the evolutionary circuit design approach, as well as on how high-performance, robust, or portable such designs could be when implemented in hardware. It is argued that moving from evolutionary 'design-for-experimentation' to 'design-for-implementation' requires, beyond inclusion in the fitness function of measures indicative of circuit evaluation factors such as power consumption and robustness to temperature variations, the addition of certain evaluation techniques that are not common in conventional design. Several such techniques that were found to be useful in evolving designs for implementation are presented; some are general, and some are particular to the problem domain of transistor-level logic design, used here as a target application. The example used here is a multifunction NAND/NOR logic gate circuit, for which evolution obtained a creative circuit topology more compact than what has been achieved by multiplexing a NAND and a NOR gate. The circuit was fabricated in a 0.5 µm CMOS technology and silicon tests showed good correspondence with the simulations.

  4. Synthesis of designed materials by laser-based direct metal deposition technique: Experimental and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Qi, Huan

    Direct metal deposition (DMD), a laser-cladding based solid freeform fabrication technique, is capable of depositing multiple materials at desired composition, which makes this technique a flexible method to fabricate heterogeneous components or functionally-graded structures. The inherently rapid cooling rate associated with the laser cladding process enables extended solid solubility in nonequilibrium phases, offering the possibility of tailoring new materials with advanced properties. This technical advantage opens the area of synthesizing a new class of materials designed by the topology optimization method which have performance-based material properties. For better understanding of the fundamental phenomena occurring in multi-material laser cladding with coaxial powder injection, a self-consistent 3-D transient model was developed. Physical phenomena including laser-powder interaction, heat transfer, melting, solidification, mass addition, liquid metal flow, and species transportation were modeled and solved with a control-volume finite difference method. A level-set method was used to track the evolution of the liquid free surface. The distribution of species concentration in the cladding layer was obtained using a nonequilibrium partition coefficient model. Simulation results were compared with experimental observations and found to match reasonably well. Multi-phase material microstructures which have negative coefficients of thermal expansion were studied for their DMD manufacturability. The pixel-based topology-optimal designs are boundary-smoothed by Bezier functions to facilitate toolpath design. It is found that the inevitable diffusion interface between different material-phases degrades the negative thermal expansion property of the whole microstructure. A new design method is proposed for DMD manufacturing. Experimental approaches include identification of laser beam characteristics during different laser-powder-substrate interaction conditions, an ...

  5. Optimization and enhancement of soil bioremediation by composting using the experimental design technique.

    PubMed

    Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni

    2010-06-01

    The objective of this study was to apply the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting. A low-cost material, compost from the Organic Fraction of Municipal Solid Waste, was used as amendment, with pyrene as the model pollutant. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) Organic Matter h(-1)). Stable compost permitted almost complete degradation of pyrene in a short time (10 days). Results indicated that compost stability is a key parameter for optimizing PAH biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) Organic Matter h(-1)) and soil:compost mixing ratio, respectively.

  6. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  7. Development of a fast, lean and agile direct pelletization process using experimental design techniques.

    PubMed

    Politis, Stavros N; Rekkas, Dimitrios M

    2017-04-01

    A novel hot melt direct pelletization method was developed, characterized and optimized, using statistical thinking and experimental design tools. Mixtures of carnauba wax (CW) and HPMC K100M were spheronized using melted Gelucire 50/13 as a binding material (BM). Experimentation was performed sequentially; a fractional factorial design was set up initially to screen the factors affecting the process, namely spray rate, quantity of BM, rotor speed, type of rotor disk, lubricant-glidant presence, additional spheronization time, powder feeding rate and quantity. From the eight factors assessed, three were further studied during process optimization (spray rate, quantity of BM and powder feeding rate), at different ratios of the solid mixture of CW and HPMC K100M. The study demonstrated that the novel hot melt process is fast, efficient, reproducible and predictable. Therefore, it can be adopted in a lean and agile manufacturing setting for the production of flexible pellet dosage forms with various release rates easily customized between immediate and modified delivery.
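
    The screening step described above, eight candidate factors assessed in a fractional factorial design before three are carried into optimization, can be sketched in a few lines of Python. The factor labels, the 16-run size and the particular column generators are illustrative assumptions, not the design actually used in the paper.

```python
# Sketch: a 16-run two-level fractional factorial (2^(8-4)) screening design of the
# kind described above. Factor names and generators are illustrative assumptions.
from itertools import product

base = ["A", "B", "C", "D"]                                    # full 2^4 factorial in the base factors
generators = {"E": "BCD", "F": "ACD", "G": "ABC", "H": "ABD"}  # one common resolution-IV choice

runs = []
for combo in product([-1, 1], repeat=len(base)):
    run = dict(zip(base, combo))
    for gen, parents in generators.items():
        level = 1
        for p in parents:
            level *= run[p]          # generated column = product of its parent columns
        run[gen] = level
    runs.append(run)

for i, run in enumerate(runs, 1):
    print(i, [run[f] for f in "ABCDEFGH"])
```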

  8. Comparison of neuropsychological rehabilitation techniques for unilateral neglect: an ABACADAEAF single-case experimental design.

    PubMed

    Tunnard, Catherine; Wilson, Barbara A

    2014-01-01

    Unilateral neglect is a debilitating attentional disorder whereby patients fail to report, respond or orient to information presented on one side of space. Previous studies have demonstrated improvements in neglect symptoms using rehabilitation techniques, such as anchoring or limb activation. We investigated the effectiveness of five interventions in reducing the unilateral neglect observed in patient F.P. A single-case ABACADAEAF design was used to investigate the effectiveness of musical stimulation (B), anchoring (C), vibratory stimulation (D), limb activation (E), and anchoring and vibratory stimulation combined (F), compared to baseline (A). Severity of neglect was measured using star cancellation, line crossing and line bisection tests. Tau-U statistical analyses were used to investigate significant differences between conditions. All interventions resulted in improvements in F.P.'s neglect. Anchoring (C), vibratory stimulation (D) and the combination of these two techniques (F) led to greatest improvements on all three tests of neglect. Musical stimulation led to improvements on the line bisection task only. Anchoring and vibratory stimulation were the most effective techniques for reducing neglect for this patient. Further research is needed to investigate whether the observed gains can be sustained on a longer-term basis, generalised to other tasks, and replicated in larger samples.

  9. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Technical Reports Server (NTRS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-01-01

    Creation of an antenna system that could autonomously adapt contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing, offset, cassegrain antenna system was retrofit with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  10. Effect of an experimental design for evaluating the nonlinear optimal formulation of theophylline tablets using a bootstrap resampling technique.

    PubMed

    Arai, Hiroaki; Suzuki, Tatsuya; Kaseda, Chosei; Takayama, Kozo

    2009-06-01

    The optimal solutions of theophylline tablet formulations based on datasets from four experimental designs (Box-Behnken design, central composite design, D-optimal design, and full factorial design) were calculated by the response surface method incorporating multivariate spline interpolation (RSM(S)). The reliability of these solutions was evaluated by a bootstrap (BS) resampling technique. The optimal solutions derived from the Box-Behnken, D-optimal, and full factorial design datasets were similar, and the distributions of the BS optimal solutions calculated for these datasets were symmetrical. Thus, the accuracy and reproducibility of the optimal solutions could be evaluated quantitatively from the deviations of these distributions. However, the distribution of the BS optimal solutions calculated for the central composite design dataset was asymmetrical, and basic statistical analysis of this distribution could not be carried out. The reason for this problem was considered to be the mixing of global and local optima. Therefore, self-organizing map (SOM) clustering was applied to identify the global optimal solutions. The BS optimal solutions were divided into four clusters by SOM clustering, the accuracy and reproducibility of the optimal solutions in each cluster were quantitatively evaluated, and the cluster containing the global optima was identified. SOM clustering was therefore considered to reinforce the BS resampling method for evaluating the reliability of optimal solutions irrespective of the dataset style.
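
    The bootstrap (BS) step described above, refitting the response model to resampled data and examining how the computed optimum scatters, can be illustrated with a toy one-factor example. The quadratic model and the synthetic data are assumptions made only for this sketch; the paper itself applies RSM with multivariate spline interpolation to real formulation data.

```python
# Sketch: bootstrap resampling to judge the reliability of an estimated optimum.
# The one-factor quadratic response surface and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 15)                                 # formulation factor (toy example)
y = -(x - 0.6) ** 2 + 0.05 * rng.normal(size=x.size)          # response with a true optimum near 0.6

boot_optima = []
for _ in range(2000):
    idx = rng.integers(0, x.size, x.size)                     # resample observations with replacement
    c = np.polyfit(x[idx], y[idx], 2)                         # refit the quadratic response surface
    if c[0] < 0:                                              # keep replicates with a genuine maximum
        boot_optima.append(-c[1] / (2 * c[0]))                # location of the fitted optimum

boot_optima = np.array(boot_optima)
print("bootstrap optimum: mean %.3f, sd %.3f" % (boot_optima.mean(), boot_optima.std()))
```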

  11. Axisymmetric and non-axisymmetric exhaust jet induced effects on a V/STOL vehicle design. Part 3: Experimental technique

    NASA Technical Reports Server (NTRS)

    Schnell, W. C.

    1982-01-01

    The jet induced effects of several exhaust nozzle configurations (axisymmetric and vectoring/modulating variants) on the aeropropulsive performance of a twin-engine V/STOL fighter design were determined. A 1/8 scale model was tested in an 11 ft transonic tunnel at static conditions and over a range of Mach numbers from 0.4 to 1.4. The experimental aspects of the static and wind-on programs are discussed. Jet effects test techniques in general, flow-through balance calibrations and tare force corrections, ASME nozzle thrust and mass flow calibrations, and test problems and solutions are emphasized.

  12. Modern Experimental Techniques in Turbine Engine Testing

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Bruckner, R. J.; Bencic, T. J.; Braunscheidel, E. P.

    1996-01-01

    The paper describes application of two modern experimental techniques, thin-film thermocouples and pressure sensitive paint, to measurement in turbine engine components. A growing trend of using computational codes in turbomachinery design and development requires experimental techniques to refocus from overall performance testing to acquisition of detailed data on flow and heat transfer physics to validate these codes for design applications. The discussed experimental techniques satisfy this shift in focus. Both techniques are nonintrusive in practical terms. The thin-film thermocouple technique improves accuracy of surface temperature and heat transfer measurements. The pressure sensitive paint technique supplies areal surface pressure data rather than discrete point values only. The paper summarizes our experience with these techniques and suggests improvements to ease the application of these techniques for future turbomachinery research and code verifications.

  13. Experimental design and husbandry.

    PubMed

    Festing, M F

    1997-01-01

    Rodent gerontology experiments should be carefully designed and correctly analyzed so as to provide the maximum amount of information for the minimum amount of work. There are five criteria for a "good" experimental design. These are applicable both to in vivo and in vitro experiments: (1) The experiment should be unbiased so that it is possible to make a true comparison between treatment groups in the knowledge that no one group has a more favorable "environment." (2) The experiment should have high precision so that if there is a true treatment effect there will be a good chance of detecting it. This is obtained by selecting uniform material such as isogenic strains, which are free of pathogenic microorganisms, and by using randomized block experimental designs. It can also be increased by increasing the number of observations. However, increasing the size of the experiment beyond a certain point will only marginally increase precision. (3) The experiment should have a wide range of applicability so it should be designed to explore the sensitivity of the observed experimental treatment effect to other variables such as the strain, sex, diet, husbandry, and age of the animals. With in vitro data, variables such as media composition and incubation times may also be important. The importance of such variables can often be evaluated efficiently using "factorial" experimental designs, without any substantial increase in the overall number of animals. (4) The experiment should be simple so that there is little chance of groups becoming muddled. Generally, formal experimental designs that are planned before the work starts should be used. (5) The experiment should provide the ability to calculate uncertainty. In other words, it should be capable of being statistically analyzed so that the level of confidence in the results can be quantified.
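
    As a minimal illustration of criterion (2), the sketch below makes a randomized block allocation of the kind the abstract recommends for increasing precision: every block receives each treatment once, in a random order. Treatment and block names are invented for the example.

```python
# Sketch: randomized block allocation (each block receives every treatment once, order randomized).
# Treatment and block labels are illustrative assumptions.
import random

treatments = ["control", "low dose", "high dose", "vehicle"]
blocks = ["block-%d" % i for i in range(1, 7)]        # e.g. 6 racks or 6 experimental days

random.seed(1)
for block in blocks:
    order = treatments[:]
    random.shuffle(order)                             # randomize treatment order within the block
    print(block, order)
```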

  14. Analysis of a PEMFC durability test under low humidity conditions and stack behaviour modelling using experimental design techniques

    NASA Astrophysics Data System (ADS)

    Wahdame, Bouchra; Candusso, Denis; Harel, Fabien; François, Xavier; Péra, Marie-Cécile; Hissel, Daniel; Kauffmann, Jean-Marie

    A polymer electrolyte membrane fuel cell (PEMFC) stack was operated under low humidity conditions for 1000 h. The fuel cell characterisation is based on polarisation curves and electrochemical impedance spectra recorded for various stoichiometry rates, performed regularly throughout the ageing process. Design of experiment (DoE) techniques, in particular the response surface methodology (RSM), are employed to analyse the results of the ageing test and to propose numerical/statistical laws for modelling the stack performance degradation. These mathematical relations are used to optimise the fuel cell operating conditions versus ageing time and to gain a deeper understanding of the ageing mechanisms. The test results are compared with those obtained from another stack operated in a stationary regime at roughly nominal conditions for 1000 h (reference test). The final objective is to ensure proper operating conditions for the next fuel cell systems, leading to extended lifetimes.

  15. Model for vaccine design by prediction of B-epitopes of IEDB given perturbations in peptide sequence, in vivo process, experimental techniques, and source or host organisms.

    PubMed

    González-Díaz, Humberto; Pérez-Montoto, Lázaro G; Ubeira, Florencio M

    2014-01-01

    Perturbation methods add variation terms to a known experimental solution of one problem to approach a solution for a related problem without a known exact solution. One problem of this type in immunology is the prediction of the possible epitope action of one peptide after a perturbation or variation in the structure of a known peptide and/or in other boundary conditions (host organism, biological process, and experimental assay). However, to the best of our knowledge, there are no reports of general-purpose perturbation models to solve this problem. In a recent work, we introduced a new quantitative structure-property relationship theory for the study of perturbations in complex biomolecular systems. In this work, we developed the first model able to classify more than 200,000 cases of perturbations with accuracy, sensitivity, and specificity >90% both in training and validation series. The perturbations include structural changes in >50,000 peptides determined in experimental assays with boundary conditions involving >500 source organisms, >50 host organisms, >10 biological processes, and >30 experimental techniques. The model may be useful for the prediction of new epitopes or the optimization of known peptides towards computational vaccine design.

  16. Design and experimental demonstration of low-power CMOS magnetic cell manipulation platform using charge recycling technique

    NASA Astrophysics Data System (ADS)

    Niitsu, Kiichi; Yoshida, Kohei; Nakazato, Kazuo

    2016-03-01

    We present the world’s first charge-recycling-based low-power technique for complementary metal-oxide-semiconductor (CMOS) magnetic cell manipulation. CMOS magnetic cell manipulation using magnetic beads is a promising tool for on-chip biomedical-analysis applications such as drug screening because CMOS can integrate control electronics and electrochemical sensors. However, conventional CMOS cell manipulation requires considerable power consumption. In this work, by concatenating multiple unit circuits and recycling electric charge among them, power consumption is reduced by a factor equal to the number of concatenated unit circuits (1/N). To verify the effectiveness, a test chip was fabricated in a 0.6-µm CMOS process. The chip successfully manipulates magnetic microbeads while achieving a 49% power reduction (from 51 to 26.2 mW). Even considering the additional series resistance of the concatenated inductors, a nearly theoretical power reduction effect is confirmed.
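
    The numbers quoted above are consistent with the stated 1/N scaling. The short check below reproduces the arithmetic, assuming N = 2 concatenated unit circuits; the value of N is an assumption inferred from the reported figures, not stated here.

```python
# Back-of-the-envelope check of the 1/N charge-recycling scaling described above.
# P0 and P_meas are taken from the abstract; N = 2 is an assumption.
P0 = 51.0        # mW, conventional drive
P_meas = 26.2    # mW, measured with charge recycling
N = 2            # assumed number of concatenated unit circuits

P_ideal = P0 / N
print("ideal   : %.1f mW" % P_ideal)                                      # 25.5 mW for N = 2
print("measured: %.1f mW (%.0f%% reduction)" % (P_meas, 100 * (1 - P_meas / P0)))
print("overhead attributable to series resistance: %.1f mW" % (P_meas - P_ideal))
```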

  17. Digital Filter Design Techniques.

    DTIC Science & Technology

    1988-03-01

    McClellan, and the Minimum p-Error IIR Filter Design Method of Deczky. ... NFILT -- filter length; JTYPE -- type of filter (1 = multiple passband/stopband filter, 2 = differentiator, 3 = Hilbert transform filter); NBANDS -- number of ...

  18. True Experimental Design.

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    1991-01-01

    This poem, with stanzas in limerick form, refers humorously to the many threats to validity posed by problems in research design, including problems of sample selection, data collection, and data analysis. (SLD)

  19. Experimental Techniques for Thermodynamic Measurements of Ceramics

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra

    1999-01-01

    Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed: gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.

  20. New experimental techniques for solar cells

    NASA Technical Reports Server (NTRS)

    Lenk, R.

    1993-01-01

    Solar cell capacitance has special importance for an array controlled by shunting. Experimental measurements of solar cell capacitance in the past have shown disagreements of orders of magnitude. Correct measurement technique depends on maintaining the excitation voltage less than the thermal voltage. Two different experimental methods are shown to match theory well, and two effective capacitances are defined for quantifying the effect of the solar cell capacitance on the shunting system.

  1. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
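
    A minimal sketch of the underlying idea, ranking candidate designs by how much they are expected to reduce uncertainty about a parameter, is given below for a conjugate Gaussian model. The model, noise variances and design labels are assumptions made for illustration and are far simpler than the peakshape problem treated in the text.

```python
# Sketch: Bayesian design comparison by expected information gain (conjugate Gaussian case).
# Prior variance, observation counts and noise variances are illustrative assumptions.
import math

prior_var = 1.0   # prior variance of the parameter of interest

def expected_info_gain(n_obs, noise_var):
    """Expected Shannon information gain (prior -> posterior) for a Gaussian model."""
    post_var = 1.0 / (1.0 / prior_var + n_obs / noise_var)
    return 0.5 * math.log(prior_var / post_var)

candidates = {"design A: 4 observations, broad peak": (4, 2.0),
              "design B: 2 observations, sharp peak": (2, 0.5)}
for name, (n, v) in candidates.items():
    print("%s -> expected gain %.3f nats" % (name, expected_info_gain(n, v)))
```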

  2. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  3. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  4. Statistical problems in design technique validation

    SciTech Connect

    Cohen, J.S.

    1980-04-01

    This work is concerned with the statistical validation process for measuring the accuracy of design techniques for solar energy systems. This includes a discussion of the statistical variability inherent in the design and measurement processes and the way in which this variability can dictate the choice of experimental design, choice of data, accuracy of the results, and choice of questions that can be reliably answered in such a study. The approach here is primarily concerned with design procedure validation in the context of the realistic process of system design, where the discrepancy between measured and predicted results is due to limitations in the mathematical models employed by the procedures and the inaccuracies of input data. A set of guidelines for successful validation methodologies is discussed, and a simplified validation methodology for domestic hot water heaters is presented.

  5. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment.

  6. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  7. Aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Potential theory was examined in detail to meet this objective. Numerical pilot codes were developed for relatively simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter Mδ approaching one.

  8. Minimisation of instrumental noise in the acquisition of FT-NIR spectra of bread wheat using experimental design and signal processing techniques.

    PubMed

    Foca, G; Ferrari, C; Sinelli, N; Mariotti, M; Lucisano, M; Caramanico, R; Ulrici, A

    2011-02-01

    Spectral resolution (R) and number of repeated scans (S) have a significant effect on the S/N ratio of Fourier transform-near infrared (FT-NIR) spectra, but the optimal values of these two parameters have to be determined empirically for a specific problem, considering separately both the nature of the analysed matrix and the specific instrumental setup. To achieve this aim, the instrumental noise of replicated FT-NIR spectra of wheat samples was modelled as a function of R and S by means of the Doehlert design. The noise amounts in correspondence to different experimental conditions were estimated by analysing the variance signals derived from replicate measurements with two different signal processing tools, Savitzky-Golay (SG) filtering and fast wavelet transform (FWT), in order to separate the "pure" instrumental noise from other variability sources, which are essentially connected to sample inhomogeneity. Results confirmed that R and S values leading to minimum instrumental noise can vary considerably depending on the type of analysed food matrix and on the different instrumental setups, and helped in the selection of the optimal measuring conditions for the subsequent acquisition of a wide spectral dataset.
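
    One of the two signal-processing tools named above, Savitzky-Golay filtering, can be sketched as follows: the smooth spectral structure is removed and the residual is taken as an estimate of the instrumental noise. The synthetic spectrum, filter window and noise level are illustrative assumptions, and the sketch simplifies the replicate-variance analysis described in the abstract to a single spectrum.

```python
# Sketch: estimate instrumental noise as the residual after Savitzky-Golay smoothing.
# Spectrum shape, noise level and filter settings are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(42)
wavenumber = np.linspace(4000, 10000, 1500)
spectrum = np.exp(-((wavenumber - 6800) / 400.0) ** 2)        # smooth absorption band
noisy = spectrum + 0.01 * rng.normal(size=wavenumber.size)    # add instrumental noise

smooth = savgol_filter(noisy, window_length=31, polyorder=3)
noise_estimate = np.std(noisy - smooth)                       # residual ~ instrumental noise
print("estimated noise level: %.4f (true value 0.01)" % noise_estimate)
```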

  9. Design for reliability of BEoL and 3-D TSV structures – A joint effort of FEA and innovative experimental techniques

    SciTech Connect

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-19

    Copper TSVs for 3D IC integration generate novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria for combined loading including residual stress, interface delamination, cracking and fatigue issues. The thermal expansion mismatch between copper and silicon leads to a stress situation in the silicon surrounding the TSVs which influences the electron mobility and, as a result, the transient behavior of transistors. Furthermore, pumping and protrusion of copper is a challenge for Back-end of Line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend highly on the temperature dependent elastic-plastic behavior of the TSV copper and the residual stresses determined by the electrodeposition chemistry and annealing conditions. That is why the authors pursued combined simulative/experimental approaches to extract the Young's modulus, initial yield stress and hardening coefficients of copper TSVs from nanoindentation experiments, as well as the temperature dependent initial yield stress and hardening coefficients from bow measurements of electroplated thin copper films on silicon under thermal cycling conditions. A FIB trench technique combined with digital image correlation is furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used accordingly to investigate the pumping and protrusion of copper TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.

  10. Optimisation of supercritical fluid extraction of indole alkaloids from Catharanthus roseus using experimental design methodology--comparison with other extraction techniques.

    PubMed

    Verma, Arvind; Hartonen, Kari; Riekkola, Marja-Liisa

    2008-01-01

    Response surface modelling, using MODDE 6 software for Design of Experiments and Optimisation, was applied to optimise supercritical fluid extraction (SFE) conditions for the extraction of indole alkaloids from the dried leaves of Catharanthus roseus. The effects of pressure (200-400 bar), temperature (40-80 degrees C), modifier concentration (2.2-6.6 vol%) and dynamic extraction time (20-60 min) on the yield of alkaloids were evaluated. The extracts were analysed by high-performance liquid chromatography and the analytes were identified using ion trap-electrospray ionisation-mass spectrometry. The method was linear for alkaloid concentration in the range 0.18-31 microg/mL. The limits of detection and quantification for catharanthine, vindoline, vinblastine and vincristine were 0.2, 0.15, 0.1 and 0.08 microg/mL and 2.7, 2.0, 1.3 and 1.1 microg/g, respectively. The dry weight content of major alkaloids in the plants was compared using different extraction methods, i.e. SFE, Soxhlet extraction, solid-liquid extraction with sonication and hot water extraction at various temperatures. The extraction techniques were also compared in terms of reproducibility, selectivity and analyte recoveries. Relative standard deviations for the major alkaloids varied from 4.1 to 17.5% in different extraction methods. The best recoveries (100%) for catharanthine were obtained by SFE at 250 bar and 80 degrees C using 6.6 vol% methanol as modifier for 40 min, for vindoline by Soxhlet extraction using dichloromethane under reflux for 16 h, and for 3',4'-anhydrovinblastine by solid-liquid extraction using a solution of 0.5 M sulphuric acid and methanol (3:1 v/v) in an ultrasonic bath for 3 h.

  11. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  12. Sequential experimental design based generalised ANOVA

    SciTech Connect

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-15

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
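
    The last step described above, turning the first two statistical moments of a limit-state response into a failure-probability estimate, can be sketched with a first-order second-moment (Gaussian) approximation. The moment values below are placeholders, and the estimator in the paper, built on the generalised ANOVA surrogate, may differ in detail.

```python
# Sketch: failure probability from the first two moments of a limit state g (failure when g < 0),
# using the first-order second-moment approximation Pf ~ Phi(-mu/sigma). Values are placeholders.
from scipy.stats import norm

mu_g, sigma_g = 2.4, 0.9      # surrogate-predicted mean and standard deviation of g
beta = mu_g / sigma_g         # reliability index
p_fail = norm.cdf(-beta)      # Gaussian approximation of P(g < 0)
print("reliability index beta = %.2f, estimated Pf = %.3e" % (beta, p_fail))
```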

  13. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  14. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
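
    Of the designs listed above, the full factorial is the simplest to write down: every combination of factor levels becomes one run. The sketch below enumerates such a design; the factors and levels are invented for illustration.

```python
# Sketch: enumerate a full factorial design (every combination of factor levels).
# Factor names and levels are illustrative assumptions.
from itertools import product

factors = {
    "temperature_C": [30, 37, 44],
    "pH":            [5.5, 7.0],
    "agitation_rpm": [100, 200],
}
design = list(product(*factors.values()))         # 3 x 2 x 2 = 12 runs
for run_id, levels in enumerate(design, 1):
    print(run_id, dict(zip(factors, levels)))
```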

  15. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  16. CMOS array design automation techniques

    NASA Technical Reports Server (NTRS)

    Lombardi, T.; Feller, A.

    1976-01-01

    The design considerations and the circuit development for a 4096-bit CMOS SOS ROM chip, the ATL078, are described. Organization of the ATL078 is 512 words by 8 bits. The ROM was designed to be programmable either at the metal mask level or by a directed laser beam after processing. The development of a 4K CMOS SOS ROM fills a void left by available ROM chip types, and makes the design of a major all-CMOS high-speed system more realizable.

  17. Winglet design using multidisciplinary design optimization techniques

    NASA Astrophysics Data System (ADS)

    Elham, Ali; van Tooren, Michel J. L.

    2014-10-01

    A quasi-three-dimensional aerodynamic solver is integrated with a semi-analytical structural weight estimation method inside a multidisciplinary design optimization framework to design and optimize a winglet for a passenger aircraft. The winglet is optimized for minimum drag and minimum structural weight. The Pareto front between those two objective functions is found applying a genetic algorithm. The aircraft minimum take-off weight and the aircraft minimum direct operating cost are used to select the best winglets among those on the Pareto front.
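
    The Pareto front mentioned above is the set of candidate designs for which no other candidate is better in both drag and structural weight at once. The sketch below filters a population down to that set; the objective values are random placeholders rather than winglet data, and a genetic algorithm would normally generate the candidates.

```python
# Sketch: extract the Pareto front (non-dominated set) for two minimization objectives.
# The candidate drag and weight values are random placeholders.
import numpy as np

rng = np.random.default_rng(7)
drag = rng.uniform(90, 110, 200)      # objective 1 (drag counts), to be minimized
weight = rng.uniform(200, 400, 200)   # objective 2 (kg), to be minimized

def dominated(i):
    """True if some other candidate is no worse in both objectives and strictly better in one."""
    no_worse = (drag <= drag[i]) & (weight <= weight[i])
    strictly_better = (drag < drag[i]) | (weight < weight[i])
    return np.any(no_worse & strictly_better)

pareto = [i for i in range(drag.size) if not dominated(i)]
print("Pareto-optimal candidates:", pareto)
```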

  18. Experimental Investigation of Centrifugal Compressor Stabilization Techniques

    NASA Technical Reports Server (NTRS)

    Skoch, Gary J.

    2003-01-01

    Results from a series of experiments to investigate techniques for extending the stable flow range of a centrifugal compressor are reported. The research was conducted in a high-speed centrifugal compressor at the NASA Glenn Research Center. The stabilizing effect of steadily flowing air-streams injected into the vaneless region of a vane-island diffuser through the shroud surface is described. Parametric variations of injection angle, injection flow rate, number of injectors, injector spacing, and injection versus bleed were investigated for a range of impeller speeds and tip clearances. Both the compressor discharge and an external source were used for the injection air supply. The stabilizing effect of flow obstructions created by tubes that were inserted into the diffuser vaneless space through the shroud was also investigated. Tube immersion into the vaneless space was varied in the flow obstruction experiments. Results from testing done at impeller design speed and tip clearance are presented. Surge margin improved by 1.7 points using injection air that was supplied from within the compressor. Externally supplied injection air was used to return the compressor to stable operation after being throttled into surge. The tubes, which were capped to prevent mass flux, provided 9.3 points of additional surge margin over the baseline surge margin of 11.7 points.

  19. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
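
    The strategy described above, searching over the amplitude coefficients of a truncated series of basis functions with simulated annealing rather than over local geometric parameters, is sketched below. The stand-in modal model, the target partials and the annealing schedule are illustrative assumptions; a real implementation would call a finite-element eigenvalue analysis where the placeholder function appears.

```python
# Sketch: simulated annealing over the amplitude coefficients of a truncated series,
# minimizing the mismatch between computed and target modal frequencies.
# modal_frequencies() is a placeholder for a real modal (e.g. finite-element) analysis.
import math, random

TARGET = [440.0, 1760.0, 3960.0]          # desired partials (1:4:9 ratio, marimba-like)

def modal_frequencies(coeffs):
    # Placeholder model standing in for the eigenvalue analysis of the bar whose
    # undercut profile is described by the series coefficients `coeffs`.
    base = 430.0 * (1.0 + coeffs[0])
    return [base * (k + 1) ** 2 * (1.0 + 0.1 * c) for k, c in enumerate(coeffs)]

def cost(coeffs):
    return sum((f - t) ** 2 for f, t in zip(modal_frequencies(coeffs), TARGET))

random.seed(0)
current = [0.0, 0.0, 0.0]                 # amplitude coefficients of the truncated series
best, best_cost = current[:], cost(current)
temperature = 500.0
for step in range(4000):
    candidate = [c + random.gauss(0.0, 0.02) for c in current]
    delta = cost(candidate) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = candidate               # accept all downhill moves and some uphill moves
    if cost(current) < best_cost:
        best, best_cost = current[:], cost(current)
    temperature *= 0.998                  # geometric cooling schedule
print("best coefficients:", [round(c, 3) for c in best], "residual cost:", round(best_cost, 2))
```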

  20. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  1. Telecommunications Systems Design Techniques Handbook

    NASA Technical Reports Server (NTRS)

    Edelson, R. E. (Editor)

    1972-01-01

    The Deep Space Network (DSN) increasingly supports deep space missions sponsored and managed by organizations without long experience in DSN design and operation. The document is intended as a textbook for those DSN users inexperienced in the design and specification of a DSN-compatible spacecraft telecommunications system. For experienced DSN users, the document provides a reference source of telecommunication information which summarizes knowledge previously available only in a multitude of sources. Extensive references are quoted for those who wish to explore specific areas more deeply.

  2. Experimental Design for the Evaluation of Detection Techniques of Hidden Corrosion Beneath the Thermal Protective System of the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Kemmerer, Catherine C.; Jacoby, Joseph A.; Lomness, Janice K.; Hintze, Paul E.; Russell, Richard W.

    2007-01-01

    The detection of corrosion beneath the Space Shuttle Orbiter thermal protective system is traditionally accomplished by removing the Reusable Surface Insulation tiles and performing a visual inspection of the aluminum substrate and corrosion protection system. This process is time consuming and has the potential to damage high-cost tiles. To evaluate non-intrusive NDE methods, a Proof of Concept (PoC) experiment was designed and test panels were manufactured. The objective of the test plan was threefold: to establish the ability to detect corrosion hidden from view by tiles, to determine the key factor affecting detectability, and to roughly quantify the detection threshold. The plan consisted of artificially inducing dimensionally controlled corrosion spots in two panels and rebonding tile over the spots to model the thermal protective system of the orbiter. The corrosion spot diameters ranged from 0.100" to 0.600" and the depths ranged from 0.003" to 0.020". One panel consisted of a complete factorial array of corrosion spots with and without tile coverage. The second panel consisted of randomized factorial points replicated and hidden by tile. Conventional methods such as ultrasonic, infrared, eddy current and microwave techniques have shortcomings: ultrasonics and IR cannot sufficiently penetrate the tiles, while eddy current and microwaves have inadequate resolution. As such, the panels were interrogated using backscatter radiography and terahertz imaging. The terahertz system successfully detected artificially induced corrosion spots under orbiter tile, and functional testing is in work in preparation for implementation.

  3. Light Experimental Supercruiser Conceptual Design

    DTIC Science & Technology

    1976-07-01

    Report excerpt (list of figures and point-design data): Landing Performance; Global Persistence (985-213); Specific Excess Power - 1 g (985-213); Specific Excess ... Model 985-213: 19.7 feet; 9.3 feet. Point design weights -- design mission 13,600 pounds; overload mission 16,780 pounds; operating weight ...

  4. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10^8 in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G^2) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J2, and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  5. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  6. Experimental Design: Review and Comment.

    DTIC Science & Technology

    1984-02-01

    and early work in the subject was done by Wald (1943), Hotelling (1944), and Elfving (1952). The major contributions to the area, however, were made by...Kiefer (1958, 1959) and Kiefer and Wolfowitz (1959, 1960), who synthesized and greatly extended the previous work. Although the ideas of optimal...design theory is the general equivalence theorem (Kiefer and Wolfowitz 1960), which links D- and G-optimality. The theorem is phrased in terms of

  7. Experimental Design For Photoresist Characterization

    NASA Astrophysics Data System (ADS)

    Luckock, Larry

    1987-04-01

    In processing a semiconductor product (from discrete devices up to the most complex products produced) we find more photolithographic steps in wafer fabrication than any other kind of process step. Thus, the success of a semiconductor manufacturer hinges on the optimization of their photolithographic processes. Yet, we find few companies that have taken the time to properly characterize this critical operation; they are sitting in the "passenger's seat", waiting to see what will come out, hoping that the yields will improve someday. There is no "black magic" involved in setting up a process at its optimum conditions (i.e. minimum sensitivity to all variables at the same time). This paper gives an example of a real world situation for optimizing a photolithographic process by the use of a properly designed experiment, followed by adequate multidimensional analysis of the data. Basic SPC practices like plotting control charts will not, by themselves, improve yields; the control charts are, however, among the necessary tools used in the determination of the process capability and in the formulation of the problems to be addressed. The example we shall consider is the twofold objective of shifting the process average, while tightening the variance, of polysilicon line widths. This goal was identified from a Pareto analysis of yield-limiting mechanisms, plus inspection of the control charts. A key issue in a characterization of this type of process is the number of interactions between variables; this example rules out two-level full factorial and three-level fractional factorial designs (which cannot detect all of the interactions). We arrive at an experiment with five factors at five levels each. A full factorial design for five factors at five levels would require 3125 wafers. Instead, we will use a design that allows us to run this experiment with only 25 wafers, for a significant reduction in time, materials and manufacturing interruption in order to complete the
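
    The run-count argument above can be made concrete: five factors at five levels give 5**5 = 3125 full-factorial runs, while a 25-run orthogonal array still covers all five factors with every pair of factor levels balanced. The sketch below builds such an array with modular arithmetic over the five levels; mapping its columns to the actual photolithographic factors would be an assumption, since the paper's specific 25-wafer design is not given here.

```python
# Sketch: a 25-run orthogonal array for five factors at five levels (levels coded 0..4),
# built from arithmetic modulo 5. The full factorial would need 5**5 runs.
print("full factorial runs:", 5 ** 5)     # 3125

runs = []
for a in range(5):
    for b in range(5):
        row = [a, b] + [(a + k * b) % 5 for k in (1, 2, 3)]   # five mutually orthogonal columns
        runs.append(row)

print("orthogonal-array runs:", len(runs))                    # 25
for row in runs:
    print(row)                            # one run: the level (0..4) of each of the five factors
```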

  8. Experimental Techniques Applicable to Turbulent Flows.

    DTIC Science & Technology

    1977-01-01

    ... the incident laser and Stokes radiation respectively ..., e the electron charge, R the load resistance, E the energy of the scattered radiation ... measurements of methane using the spontaneous Raman effect ("Scattering of a Laser Beam", AIAA J. 9, 1971; PIBAL Rep. No. 69-46, Nov.) ... recently developed Laser Raman and Laser Doppler techniques may be ideally ... several species of interest in a flame, their individual temperatures as well as ...

  9. GAS CURTAIN EXPERIMENTAL TECHNIQUE AND ANALYSIS METHODOLOGIES

    SciTech Connect

    J. R. KAMM; ET AL

    2001-01-01

    The qualitative and quantitative relationship of numerical simulation to the physical phenomena being modeled is of paramount importance in computational physics. If the phenomena are dominated by irregular (i.e., nonsmooth or disordered) behavior, then pointwise comparisons cannot be made and statistical measures are required. The problem we consider is the gas curtain Richtmyer-Meshkov (RM) instability experiments of Rightley et al. (13), which exhibit complicated, disordered motion. We examine four spectral analysis methods for quantifying the experimental data and computed results: Fourier analysis, structure functions, fractal analysis, and continuous wavelet transforms. We investigate the applicability of these methods for quantifying the details of fluid mixing.

  10. GAS CURTAIN EXPERIMENTAL TECHNIQUE AND ANALYSIS METHODOLOGIES.

    SciTech Connect

    Kamm, J. R.; Rider, William; Rightley, P. M.; Prestridge, K. P.; Benjamin, R. F.; Vorobieff, P. V.

    2001-01-01

    The qualitative and quantitative relationship of numerical simulation to the physical phenomena being modeled is of paramount importance in computational physics. If the phenomena are dominated by irregular (i.e., nonsmooth or disordered) behavior, then pointwise comparisons cannot be made and statistical measures are required. The problem we consider is the gas curtain Richtmyer-Meshkov (RM) instability experiments of Rightley et al. [13], which exhibit complicated, disordered motion. We examine four spectral analysis methods for quantifying the experimental data and computed results: Fourier analysis, structure functions, fractal analysis, and continuous wavelet transforms. We investigate the applicability of these methods for quantifying the details of fluid mixing.

  11. Presentation and Impact of Experimental Techniques in Chemistry

    ERIC Educational Resources Information Center

    Sojka, Zbigniew; Che, Michel

    2008-01-01

    Laboratory and practical courses, where students become familiar with experimental techniques and learn to interpret data and relate them to appropriate theory, play a vital role in chemical education. In the large panoply of currently available techniques, it is difficult to find a rational and easy way to classify the techniques in relation to…

  12. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
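
    As a rough illustration of the idea (not the authors' sampling-based algorithm), the sketch below searches candidate retention intervals for the three-point design that maximizes the squared disagreement between a power-law and an exponential forgetting model; the model forms and parameter values are invented for the example.

```python
# Simplified stand-in for design optimization in model discrimination:
# pick the retention intervals at which two forgetting models disagree most.
import itertools
import numpy as np

def power_model(t, a=0.9, b=0.4):
    return a * (t + 1.0) ** (-b)      # predicted recall probability

def exp_model(t, a=0.9, b=0.1):
    return a * np.exp(-b * t)

candidate_times = np.arange(1, 41)    # possible retention intervals (arbitrary units)

def discrepancy(times):
    t = np.asarray(times, dtype=float)
    return np.sum((power_model(t) - exp_model(t)) ** 2)

best = max(itertools.combinations(candidate_times, 3), key=discrepancy)
print("most discriminating 3-point design:", best)
```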

  13. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism.

  14. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  15. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  16. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    Multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.
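
    As a small worked example of one of the listed methods, the sketch below applies the global criterion method to a toy two-objective, one-variable beam problem; the objective expressions and bounds are invented for the demonstration and scipy is assumed.

```python
# Toy illustration of the global criterion method: minimize the summed relative
# deviations of each objective from its individual optimum over x in [0.5, 2.0].
import numpy as np
from scipy.optimize import minimize_scalar

def mass(x):
    return x                      # mass grows with thickness (arbitrary units)

def deflection(x):
    return 1.0 / x ** 3           # stiffness grows as x^3, so deflection falls

x_lo, x_hi = 0.5, 2.0
mass_star = mass(x_lo)            # individual minimum of each objective
defl_star = deflection(x_hi)

def global_criterion(x, p=2):
    return ((mass(x) - mass_star) / mass_star) ** p + \
           ((deflection(x) - defl_star) / defl_star) ** p

res = minimize_scalar(global_criterion, bounds=(x_lo, x_hi), method="bounded")
print("compromise thickness:", round(res.x, 3))
```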

  17. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities.
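
    The sketch below gives simplified stand-ins for two of the selection criteria described (representative samples near the survey centroid, and diversity maximization by farthest-point selection) applied to a synthetic abundance matrix; it illustrates the idea only and is not the microPITA implementation.

```python
# Simplified two-stage sample selection on a samples-by-taxa abundance matrix.
import numpy as np

rng = np.random.default_rng(0)
abundance = rng.dirichlet(np.ones(50), size=100)   # 100 samples, 50 taxa (synthetic)

def most_representative(x, k):
    """Samples closest to the survey centroid."""
    dist = np.linalg.norm(x - x.mean(axis=0), axis=1)
    return np.argsort(dist)[:k]

def most_diverse(x, k):
    """Greedy max-min (farthest-point) selection."""
    chosen = [int(np.argmax(np.linalg.norm(x - x.mean(axis=0), axis=1)))]
    while len(chosen) < k:
        d = np.min(np.linalg.norm(x[:, None, :] - x[chosen][None, :, :], axis=2), axis=1)
        chosen.append(int(np.argmax(d)))
    return np.array(chosen)

print("representative:", most_representative(abundance, 5))
print("diverse:", most_diverse(abundance, 5))
```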

  18. Optimal experimental design strategies for detecting hormesis.

    PubMed

    Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee

    2011-12-01

    Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations.

  19. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages in terms of reduced experimental effort and of increased quality of information that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].
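
    The advantage over one-variable-at-a-time (OVAT) experimentation can be seen in a minimal two-level factorial sketch like the one below, where eight runs recover all main effects plus an interaction that OVAT would miss; the response function is invented for the demonstration.

```python
# Minimal 2^3 full factorial: estimate main effects and one interaction by contrasts.
import itertools
import numpy as np

def response(temp, ph, conc):
    # hypothetical process yield with a temp x pH interaction
    return 60 + 5 * temp + 3 * ph - 2 * conc + 4 * temp * ph

levels = [-1, 1]
runs = np.array(list(itertools.product(levels, repeat=3)))   # 2^3 = 8 runs
y = np.array([response(*run) for run in runs])

def effect(column):
    return y[column == 1].mean() - y[column == -1].mean()

print("main effects:", [round(effect(runs[:, i]), 2) for i in range(3)])
print("temp x pH interaction:", round(effect(runs[:, 0] * runs[:, 1]), 2))
```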

  20. Plant proteomics update (2007-2008): Second-generation proteomic techniques, an appropriate experimental design, and data analysis to fulfill MIAPE standards, increase plant proteome coverage and expand biological knowledge.

    PubMed

    Jorrín-Novo, Jesús V; Maldonado, Ana M; Echevarría-Zomeño, Sira; Valledor, Luis; Castillejo, Mari A; Curto, Miguel; Valero, José; Sghaier, Besma; Donoso, Gabriel; Redondo, Inmaculada

    2009-04-13

    This review is the continuation of three previously published articles [Jorrin JV, Maldonado AM, Castillejo MA. Plant proteome analysis: a 2006 update. Proteomics 2007; 7: 2947-2962; Rossignol M, Peltier JB, Mock HP, Matros A, Maldonado AM, Jorrin JV. Plant proteome analysis: a 2004-2006 update. Proteomics 2006; 6: 5529-5548; Canovas FM, Dumas-Gaudot E, Recorbet G, Jorrin J, Mock HP, Rossignol M. Plant proteome analysis. Proteomics 2004; 4: 285-298] and aims to update the contribution of Proteomics to plant research between 2007 and September 2008 by reviewing most of the papers, which number approximately 250, that appeared in the Plant Proteomics field during that period. Most of the papers published deal with the proteome of Arabidopsis thaliana and rice (Oryza sativa), and focus on profiling organs, tissues, cells or subcellular proteomes, and studying developmental processes and responses to biotic and abiotic stresses using a differential expression strategy. Although the platform based on 2-DE is still the most commonly used, the use of gel-free and second-generation Quantitative Proteomic techniques has increased. Proteomic data are beginning to be validated using complementary -omics or classical biochemical or cellular biology techniques. In addition, appropriate experimental design and statistical analysis are being carried out in accordance with the required Minimal Information about a Proteomic Experiment (MIAPE) standards. As a result, the coverage of the plant cell proteome and the plant biology knowledge is increasing. Compared to human and yeast systems, however, plant biology research has yet to exploit fully the potential of proteomics, in particular its applications to PTMs and Interactomics.

  1. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, Layton Carter

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High-Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  2. Machine learning techniques and drug design.

    PubMed

    Gertrudes, J C; Maltarollo, V G; Silva, R A; Oliveira, P R; Honório, K M; da Silva, A B F

    2012-01-01

    Interest in the application of machine learning techniques (MLT) as drug design tools has been growing over the last decades. The reason for this is related to the fact that drug design is very complex and requires the use of hybrid techniques. A brief review of some MLT such as self-organizing maps, multilayer perceptrons, Bayesian neural networks, counter-propagation neural networks and support vector machines is described in this paper. A comparison between the performance of the described methods and some classical statistical methods (such as partial least squares and multiple linear regression) shows that MLT have significant advantages. Nowadays, the number of studies in medicinal chemistry that employ these techniques has considerably increased, in particular the use of support vector machines. The state of the art and the future trends of MLT applications encompass the use of these techniques to construct more reliable QSAR models. The models obtained from MLT can be used in virtual screening studies as well as filters to develop/discover new chemicals. An important challenge in the drug design field is the prediction of pharmacokinetic and toxicity properties, which can avoid failures in the clinical phases. Therefore, this review provides a critical point of view on the main MLT and shows their potential ability as a valuable tool in drug design.
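
    A minimal sketch of the kind of workflow the review surveys, assuming scikit-learn and synthetic descriptor data, is a cross-validated support vector machine QSAR classifier:

```python
# Hedged sketch: SVM QSAR classifier on synthetic molecular descriptors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))            # 200 compounds x 12 descriptors (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=5)
print("5-fold CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```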

  3. FPGAs in Space Environment and Design Techniques

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Day, John H. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of Field Programmable Gate Arrays (FPGA) in the space environment and design techniques. Details are given on the effects of the space radiation environment, total radiation dose, single event upset, single event latchup, single event transient, antifuse technology and gate rupture, proton upsets and sensitivity, and loss of functionality.

  4. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
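
    A stripped-down version of the selection step described above can be sketched as follows: for each candidate measurement, predict outcomes under a set of probable model parameters and pick the measurement whose predicted-outcome distribution has maximum Shannon entropy. The toy model and parameter values below are invented for the illustration.

```python
# Maximum-entropy experiment selection under a toy model y = a*sin(b*x).
import numpy as np

rng = np.random.default_rng(2)
# stand-in posterior samples of model parameters (a, b)
samples = np.column_stack([rng.normal(1.0, 0.3, 500), rng.normal(2.0, 0.5, 500)])
candidate_x = np.linspace(0.0, 3.0, 61)

def outcome_entropy(x, bins=20):
    preds = samples[:, 0] * np.sin(samples[:, 1] * x)
    hist, _ = np.histogram(preds, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

best_x = max(candidate_x, key=outcome_entropy)
print("most informative measurement location:", round(float(best_x), 2))
```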

  5. Systematic design assessment techniques for solar buildings

    NASA Astrophysics Data System (ADS)

    Page, J. K.; Rodgers, G. G.; Souster, C. G.

    1980-02-01

    The paper describes the various approaches developed for the detailed modelling of the relevant climatic input variables for systematic design assessments for solar housing techniques. A report is made of the techniques developed to generate systematic short wave radiation data for vertical and inclined surfaces for different types of weather. The analysis is based on different types of days, such as sunny, average and overcast. Work on the accurate estimation of the magnitude of the associated weather variables affecting heat transfer in the external environment is also reported, covering air temperature, wind speed and long wave radiation exchanges.

  6. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  7. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  8. Multi-Variable Analysis and Design Techniques.

    DTIC Science & Technology

    1981-09-01

    by A.G.J.MacFarlane 2 MULTIVARIABLE DESIGN TECHNIQUES BASED ON SINGULAR VALUE GENERALIZATIONS OF CLASSICAL CONTROL by J.C. Doyle 3 LIMITATIONS ON...prototypes to complex mathematical representations. All of these assemblages of information or information generators can loosely be termed "models...non linearities (e.g., control saturation) I neglect of high frequency dynamics. T hese approximations are well understood and in general their impact

  9. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns. That is, we see that the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all. Another outcome
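
    One criterion-algorithm pairing of the kind compared in this study can be sketched as below: a D-optimality criterion with a greedy construction algorithm for a linearized ellipse-fitting problem. The model parameterization, candidate set, and design size are invented for the illustration, and a small ridge term keeps the criterion finite while the design is still rank-deficient.

```python
# Greedy construction of a D-optimal design for y(theta) = c0 + c1*cos(2t) + c2*sin(2t).
import numpy as np

def design_matrix(thetas):
    t = np.asarray(thetas, dtype=float)
    return np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])

def d_criterion(thetas, ridge=1e-6):
    # log-determinant of the (lightly regularized) normal matrix
    a = design_matrix(thetas)
    _, logdet = np.linalg.slogdet(a.T @ a + ridge * np.eye(3))
    return logdet

candidates = np.linspace(0.0, np.pi, 181)

def greedy_design(n_obs):
    chosen = []
    for _ in range(n_obs):
        chosen.append(max(candidates, key=lambda th: d_criterion(chosen + [th])))
    return chosen

greedy = greedy_design(6)
even = list(np.linspace(0.0, np.pi, 7)[:-1])
print("greedy design (deg):", np.degrees(greedy).round(1))
print("D-value greedy: %.3f   even spacing: %.3f" % (d_criterion(greedy), d_criterion(even)))
```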

  10. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  11. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still many unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV Linac. This rodent model is focused on the research of the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used to evaluate the accuracy of the calculated dose using a commercial planning system. Challenges in this murine model are discussed.

  12. Irradiation Design for an Experimental Murine Model

    NASA Astrophysics Data System (ADS)

    Ballesteros-Zebadúa, P.; Lárraga-Gutierrez, J. M.; García-Garduño, O. A.; Rubio-Osornio, M. C.; Custodio-Ramírez, V.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Paz, C.; Celis, M. A.

    2010-12-01

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still many unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV Linac. This rodent model is focused on the research of the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used to evaluate the accuracy of the calculated dose using a commercial planning system. Challenges in this murine model are discussed.

  13. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    Some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response are unified. The techniques have the potential for reducing the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.

  14. Human Factors Experimental Design and Analysis Reference

    DTIC Science & Technology

    2007-07-01

    Excerpt (garbled in the source record): a linear regression model that includes all investigated predictors may not be the best model; candidate equations are compared using criteria such as the adjusted coefficient of determination R2Adj, the PRESS statistic, and Mallows' C(p), and the equations with the highest R2 are evaluated using R2Adj, PRESS, and C(p). Exhaustive evaluation becomes cumbersome as the number of predictors increases; with 10 predictors there are (2^10 - 1) = 1,023 candidate regression equations.
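
    The statistics named in the excerpt can be computed for every predictor subset as in the sketch below (synthetic data, numpy assumed):

```python
# All-subsets regression with adjusted R^2, PRESS, and Mallows' C(p).
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, k = 60, 4
X = rng.normal(size=(n, k))
y = 2.0 + 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=1.0, size=n)

def fit_stats(cols):
    Xc = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    p = Xc.shape[1]
    hat = Xc @ np.linalg.pinv(Xc)                  # hat matrix H = X (X'X)^-1 X'
    resid = y - hat @ y
    sse = float(resid @ resid)
    sst = float(((y - y.mean()) ** 2).sum())
    r2_adj = 1 - (sse / (n - p)) / (sst / (n - 1))
    press = float(((resid / (1 - np.diag(hat))) ** 2).sum())
    return sse, p, r2_adj, press

# full-model error variance estimate used by Mallows' C(p)
sse_full, p_full, _, _ = fit_stats(tuple(range(k)))
s2_full = sse_full / (n - p_full)

for r in range(1, k + 1):
    for cols in itertools.combinations(range(k), r):
        sse, p, r2_adj, press = fit_stats(cols)
        cp = sse / s2_full - (n - 2 * p)
        print(cols, "R2adj=%.3f PRESS=%.1f Cp=%.2f" % (r2_adj, press, cp))
```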

  15. Teaching Experimental Design Using an Exercise in Protein Fractionation

    NASA Astrophysics Data System (ADS)

    Loke, J. P.; Hancock, D.; Johnston, J. M.; Dimauro, J.; Denyer, G. S.

    2001-11-01

    This experiment, suitable for introductory biochemistry courses, presents the techniques of protein purification as a problem-solving exercise. Students must identify and purify three proteins from an unknown mixture using the techniques of gel filtration, ion exchange chromatography, UV and visible spectrophotometry, and gel electrophoresis. To aid construction of a strategy, they are given some information about each of the possible proteins: source, function, molecular weight, pI, and UV and visible spectra. From this they must design their own purification protocols and carry out the experimental work. To develop students' computer skills, the experimental results and the logic used in the identification are presented as a short computer-generated report.

  16. Aeroshell Design Techniques for Aerocapture Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Dyke, R. Eric; Hrinda, Glenn A.

    2004-01-01

    A major goal of NASA's In-Space Propulsion Program is to shorten trip times for scientific planetary missions. To meet this challenge, arrival speeds will increase, requiring significant braking for orbit insertion, and thus increased deceleration propellant mass that may exceed launch lift capabilities. A technology called aerocapture has been developed to expand the mission potential of exploratory probes destined for planets with suitable atmospheres. Aerocapture inserts a probe into planetary orbit via a single pass through the atmosphere using the probe's aeroshell drag to reduce velocity. The benefit of an aerocapture maneuver is a large reduction in propellant mass that may result in smaller, less costly missions and reduced mission cruise times. The methodology used to design rigid aerocapture aeroshells will be presented with an emphasis on a new systems tool under development. Current methods for fast, efficient evaluations of structural systems for exploratory vehicles to planets and moons within our solar system have been under development within NASA with limited success. Many of the systems tools attempted have applied structural mass estimation techniques based on historical data and curve fitting, which are difficult and cumbersome to apply to new vehicle concepts and missions. The resulting vehicle aeroshell mass may be incorrectly estimated or have high margins included to account for uncertainty. This new tool will reduce the guesswork previously found in conceptual aeroshell mass estimations.

  17. Techniques for Reducing Gun Blast Noise Levels: An Experimental Study

    DTIC Science & Technology

    1981-04-01

    Excerpt (garbled in the source record): techniques for reducing gun muzzle blast noise level were investigated experimentally to determine their potential effectiveness and utility for existing major-caliber guns, under the constraint that the impact on training and testing operations be minimized. Most of the noise reduction techniques that were investigated involve the use of some type of… The shock noise level at the earth's surface varies according to a complicated dependence upon projectile trajectory and projectile speed along the trajectory.

  18. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
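
    Objective (5) above refers to orthogonal arrays such as the Taguchi L9, which studies four three-level factors in nine runs instead of the 81 runs of a full factorial; the sketch below lists the standard L9 array with placeholder factor names (not the actual slip-casting parameters) and checks its balance.

```python
# Standard L9(3^4) orthogonal array (levels coded 0, 1, 2) with balance checks.
import itertools
import numpy as np

L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])
factors = ["slip_solids", "sinter_temp", "hold_time", "binder_content"]

# Each level of each factor appears three times, and every pair of factors
# sees each of the nine level combinations exactly once.
for col in range(4):
    assert np.bincount(L9[:, col]).tolist() == [3, 3, 3]
for a, b in itertools.combinations(range(4), 2):
    assert len({(i, j) for i, j in zip(L9[:, a], L9[:, b])}) == 9

for run, row in enumerate(L9, start=1):
    print(run, dict(zip(factors, row.tolist())))
```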

  19. Automatic Molecular Design using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    Molecular nanotechnology is the precise, three-dimensional control of materials and devices at the atomic scale. An important part of nanotechnology is the design of molecules for specific purposes. This paper describes early results using genetic software techniques to automatically design molecules under the control of a fitness function. The fitness function must be capable of determining which of two arbitrary molecules is better for a specific task. The software begins by generating a population of random molecules. The population is then evolved towards greater fitness by randomly combining parts of the better individuals to create new molecules. These new molecules then replace some of the worst molecules in the population. The unique aspect of our approach is that we apply genetic crossover to molecules represented by graphs, i.e., sets of atoms and the bonds that connect them. We present evidence suggesting that crossover alone, operating on graphs, can evolve any possible molecule given an appropriate fitness function and a population containing both rings and chains. Prior work evolved strings or trees that were subsequently processed to generate molecular graphs. In principle, genetic graph software should be able to evolve other graph representable systems such as circuits, transportation networks, metabolic pathways, computer networks, etc.

  20. Experimental techniques for the investigation of coupled phenomena in geomaterials

    NASA Astrophysics Data System (ADS)

    Romero, E.

    2010-06-01

    The paper describes different experimental setups and techniques used to investigate coupled stress, fluid (water and air) and temperature effects on geomaterials. Two temperature controlled cells are described: a) a constant volume cell in which thermal pulses can be performed under controlled hydraulic conditions to induce pore pressure build-up during quasi-undrained heating and later dissipation; and b) an axisymmetric triaxial cell with controlled suction and temperature to perform drained heating and cooling paths under partially saturated states. The paper also presents an experimental setup to perform controlled flow-rate gas injection experiments on argillaceous rocks using a high-pressure triaxial cell. This cell is used to study gas migration phenomena and the conditions under which gas breakthrough processes occur. Selected test results are presented, which show the capabilities of the different experimental setups described to capture main behavioural features.

  1. Natural Stream Channel Design Techniques and Review

    EPA Pesticide Factsheets

    Need for a Review Checklist: Stream restoration problems include design complexity, many different design methodologies, inconsistency in design deliverables, communication difficulties, and many failed projects.

  2. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2012-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on acquisition of suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for source and receivers and optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique to design an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our algorithm is based on a genetic algorithm which has proved to be an efficient technique to examine a wide range of possible surveys and select the one that gives superior resolution. Each configuration is associated with one value of the objective function that characterizes the quality of this particular design. Here, we describe the method used to optimize an experimental design. Then, we validate this new technique and explore the different issues of experimental design by simulating a CSEM survey with a realistic 1D layered model.
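
    A toy version of the genetic-algorithm loop described above is sketched below: each individual is a set of receiver offsets, its fitness is an entirely invented resolution surrogate (a real implementation would evaluate the CSEM forward model), and selection, crossover, and mutation iterate toward better designs.

```python
# Toy genetic algorithm for choosing receiver offsets in a 1D survey.
import numpy as np

rng = np.random.default_rng(4)
offsets = np.linspace(500.0, 8000.0, 76)   # candidate source-receiver offsets (m)
n_receivers, pop_size, n_gen = 6, 40, 60

def fitness(design):
    # invented stand-in: reward mid-range offsets and spatial spread
    d = np.sort(design)
    coverage = np.log(np.diff(d)).sum()
    sensitivity = np.exp(-((d - 4000.0) / 2500.0) ** 2).sum()
    return coverage + 2.0 * sensitivity

def random_design():
    return rng.choice(offsets, size=n_receivers, replace=False)

population = [random_design() for _ in range(pop_size)]
for _ in range(n_gen):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: pop_size // 2]
    children = []
    while len(children) < pop_size - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        mix = np.where(rng.random(n_receivers) < 0.5, parents[a], parents[b])
        if rng.random() < 0.3:                       # mutation: perturb one receiver
            mix[rng.integers(n_receivers)] = rng.choice(offsets)
        # keep only designs with distinct offsets
        children.append(np.unique(mix) if len(np.unique(mix)) == n_receivers else random_design())
    population = parents + children

best = max(population, key=fitness)
print("best receiver offsets (m):", np.sort(best))
```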

  3. Nonlinear potential analysis techniques for supersonic-hypersonic aerodynamic design

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Clever, W. C.

    1984-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to conceptual configuration design level of effort. Second order small disturbance and full potential theory was utilized to meet this objective. Numerical codes were developed for relatively general three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes.

  4. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2) consists of four members, E1, E2, E3, and E4 that connect the load to the support points is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by deriving inductively selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution

  5. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of one given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique to design an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.

  6. Formulation of aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1979-01-01

    An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to preliminary configuration design level of effort. Supersonic second order potential theory was examined in detail to meet this objective. Shock layer integral techniques were considered as an alternative means of predicting gross aerodynamic characteristics. Several numerical pilot codes were developed for simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the second order computations indicated good agreement with higher order solutions and experimental results for a variety of wing like shapes and values of the hypersonic similarity parameter M delta approaching one.

  7. Experimental Design for Combinatorial and High Throughput Materials Development

    NASA Astrophysics Data System (ADS)

    Cawse, James N.

    2002-12-01

    In the past decade, combinatorial and high throughput experimental methods have revolutionized the pharmaceutical industry, allowing researchers to conduct more experiments in a week than was previously possible in a year. Now high throughput experimentation is rapidly spreading from its origins in the pharmaceutical world to larger industrial research establishments such as GE and DuPont, and even to smaller companies and universities. Consequently, researchers need to know the kinds of problems, desired outcomes, and appropriate patterns for these new strategies. Editor James Cawse's far-reaching study identifies and applies, with specific examples, these important new principles and techniques. Experimental Design for Combinatorial and High Throughput Materials Development progresses from methods that are now standard, such as gradient arrays, to mathematical developments that are breaking new ground. The former will be particularly useful to researchers entering the field, while the latter should inspire and challenge advanced practitioners. The book's contents are contributed by leading researchers in their respective fields. Chapters include: -High Throughput Synthetic Approaches for the Investigation of Inorganic Phase Space -Combinatorial Mapping of Polymer Blends Phase Behavior -Split-Plot Designs -Artificial Neural Networks in Catalyst Development -The Monte Carlo Approach to Library Design and Redesign This book also contains over 200 useful charts and drawings. Industrial chemists, chemical engineers, materials scientists, and physicists working in combinatorial and high throughput chemistry will find James Cawse's study to be an invaluable resource.

  8. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  9. Plant metabolomics: from experimental design to knowledge extraction.

    PubMed

    Rai, Amit; Umashankar, Shivshankar; Swarup, Sanjay

    2013-01-01

    Metabolomics is one of the most recent additions to the functional genomics approaches. It involves the use of analytical chemistry techniques to provide high-density data of metabolic profiles. Data is then analyzed using advanced statistics and databases to extract biological information, thus providing the metabolic phenotype of an organism. Large variety of metabolites produced by plants through the complex metabolic networks and their dynamic changes in response to various perturbations can be studied using metabolomics. Here, we describe the basic features of plant metabolic diversity and analytical methods to describe this diversity, which includes experimental workflows starting from experimental design, sample preparation, hardware and software choices, combined with knowledge extraction methods. Finally, we describe a scenario for using these workflows to identify differential metabolites and their pathways from complex biological samples.
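
    A minimal sketch of the knowledge-extraction step, assuming scipy and synthetic intensity data, is a fold-change plus t-test screen for differential metabolites (a real workflow would also include normalization and multiple-testing correction):

```python
# Volcano-style screen: flag metabolites with large fold change and small p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_metabolites, n_per_group = 200, 8
control = rng.lognormal(mean=5.0, sigma=0.3, size=(n_metabolites, n_per_group))
treated = control * rng.lognormal(mean=0.0, sigma=0.3, size=(n_metabolites, n_per_group))
treated[:10] *= 3.0                       # spike in 10 truly changed metabolites

log2fc = np.log2(treated.mean(axis=1) / control.mean(axis=1))
pvals = stats.ttest_ind(np.log2(treated), np.log2(control), axis=1).pvalue
hits = np.where((np.abs(log2fc) > 1.0) & (pvals < 0.01))[0]
print("candidate differential metabolites:", hits)
```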

  10. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  11. Implementation of high throughput experimentation techniques for kinetic reaction testing.

    PubMed

    Nagy, Anton J

    2012-02-01

    Successful implementation of high throughput experimentation (HTE) tools has resulted in their increased acceptance as essential tools in chemical, petrochemical and polymer R&D laboratories. This article provides a number of concrete examples of HTE systems which have been designed and successfully implemented in studies that focus on deriving reaction kinetic data. The implementation of HTE tools for performing kinetic studies of both catalytic and non-catalytic systems results in a significantly faster acquisition of the high-quality kinetic modeling data required to quantitatively predict the behavior of complex, multistep reactions.

  12. A Short Guide to Experimental Design and Analysis for Engineers

    DTIC Science & Technology

    2014-04-01

    Excerpt (garbled in the source record): the guide covers experimental designs including the simple experiment, matched-pairs, repeated-measures, and single-group designs. The relevant statistical techniques are also discussed to help identify key quantitative methods for data analysis, so that authors gain a basic understanding of design, measurement and statistical analysis to support military experiments.
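
    As a small example of one of the designs listed (matched pairs), the sketch below applies a paired t-test to invented reaction-time data, assuming scipy:

```python
# Matched-pairs analysis: the same participants measured under two conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
baseline = rng.normal(loc=520.0, scale=40.0, size=12)        # reaction times (ms)
new_display = baseline - rng.normal(loc=25.0, scale=15.0, size=12)

t_stat, p_value = stats.ttest_rel(baseline, new_display)
print("paired t = %.2f, p = %.4f" % (t_stat, p_value))
```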

  13. Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory

    2007-01-01

    NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.

  14. Comparison of the experimental aerodynamic characteristics of theoretically and experimentally designed supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1974-01-01

    A lifting airfoil theoretically designed for shockless supercritical flow utilizing a complex hodograph method has been evaluated in the Langley 8-foot transonic pressure tunnel at design and off-design conditions. The experimental results are presented and compared with those of an experimentally designed supercritical airfoil which were obtained in the same tunnel.

  15. Circular machine design techniques and tools

    SciTech Connect

    Servranckx, R.V.; Brown, K.L.

    1986-04-01

    Some of the basic optics principles involved in the design of circular accelerators such as Alternating Gradient Synchrotrons, Storage and Collision Rings, and Pulse Stretcher Rings are outlined. Typical problems facing a designer are defined, and the main references and computational tools are reviewed that are presently available. Two particular classes of problems that occur typically in accelerator design are listed - global value problems, which affect the control of parameters which are characteristic of the complete closed circular machine, and local value problems. Basic mathematical formulae are given that are considered useful for a first draft of a design. The basic optics building blocks that can be used to formulate an initial machine design are introduced, giving only the elementary properties and transfer matrices only in one transverse plane. Solutions are presented for some first-order and second-order design problems. (LEW)

  16. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.

  17. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  18. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Levine, William S.; Barlow, Jewel

    1993-01-01

    This report summarizes the work that was done on the project from 1 Apr. 1992 to 31 Mar. 1993. The main goal of this research is to develop a practical tool for rotorcraft control system design based on interactive optimization tools (CONSOL-OPTCAD) and classical rotorcraft design considerations (ADOCS). This approach enables the designer to combine engineering intuition and experience with parametric optimization. The combination should make it possible to produce a better design faster than would be possible using either pure optimization or pure intuition and experience. We emphasize that the goal of this project is not to develop an algorithm. It is to develop a tool. We want to keep the human designer in the design process to take advantage of his or her experience and creativity. The role of the computer is to perform the calculation necessary to improve and to display the performance of the nominal design. Briefly, during the first year we have connected CONSOL-OPTCAD, an existing software package for optimizing parameters with respect to multiple performance criteria, to a simplified nonlinear simulation of the UH-60 rotorcraft. We have also created mathematical approximations to the Mil-specs for rotorcraft handling qualities and input them into CONSOL-OPTCAD. Finally, we have developed the additional software necessary to use CONSOL-OPTCAD for the design of rotorcraft controllers.

  19. A Novel Experimental Technique to Simulate Pillar Burst in Laboratory

    NASA Astrophysics Data System (ADS)

    He, M. C.; Zhao, F.; Cai, M.; Du, S.

    2015-09-01

    Pillar burst is one type of rockburst that occurs in underground mines. Simulating the stress change and obtaining insight into the pillar burst phenomenon under laboratory conditions are essential for studying rock behavior during pillar burst in situ. To study the failure mechanism, a novel experimental technique was proposed and a series of tests was conducted on granite specimens using a true-triaxial strainburst test system. Acoustic emission (AE) sensors were used to monitor the rock fracturing process. The damage evolution process was investigated using techniques such as observation of macro- and micro-fracture characteristics, AE energy evolution, b-value analysis, and fractal dimension analysis of cracks on fragments. The results indicate that stepped loading and unloading simulated the pillar burst phenomenon well. Four deformation stages are identified: initial stress state, unloading step I, unloading step II, and final burst. It is observed that AE energy has a sharp increase at the initial stress state, accumulates slowly during unloading steps I and II, and increases dramatically at peak stress. Meanwhile, the mean b values fluctuate around 3.50 for the first three deformation stages and then decrease to 2.86 at the final stage, indicating the generation of a large number of macro fractures. Before the test, the fractal dimension values are scattered and mainly vary between 1.10 and 1.25, whereas after failure the values concentrate around 1.25-1.35.
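
    For readers unfamiliar with the b-value analysis mentioned above, one common way to track a b value from acoustic emission data is the Aki maximum-likelihood estimator applied to amplitude-derived magnitudes in a sliding window. The sketch below is a generic illustration only, not the authors' procedure; the amplitude-to-magnitude scaling and the completeness magnitude are assumptions.

```python
# Hedged illustration of a sliding-window AE b-value estimate (Aki, 1965).
import numpy as np

def ae_magnitudes(amplitudes_db):
    """AE magnitude is often taken as peak amplitude (dB) / 20 (assumed scaling)."""
    return np.asarray(amplitudes_db) / 20.0

def b_value(magnitudes, m_c):
    """Maximum-likelihood b value for events above the completeness magnitude m_c."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    if m.size < 2 or m.mean() <= m_c:
        return np.nan
    return np.log10(np.e) / (m.mean() - m_c)

def sliding_b_values(magnitudes, m_c, window=500, step=500):
    m = np.asarray(magnitudes)
    return [b_value(m[i:i + window], m_c)
            for i in range(0, max(1, m.size - window + 1), step)]

# Example with synthetic AE amplitudes (dB); real data would come from the AE system.
rng = np.random.default_rng(1)
amps = rng.exponential(scale=10.0, size=2000) + 40.0
print(sliding_b_values(ae_magnitudes(amps), m_c=2.0))
```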

  20. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems, producing better results faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  1. CMOS-array design-automation techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.; Lombardt, T.

    1979-01-01

    This thirty-four-page report discusses the design of a 4,096-bit complementary metal oxide semiconductor (CMOS) read-only memory (ROM). The CMOS ROM is either mask- or laser-programmable. The report is divided into six sections: section one describes the background of ROM chips; section two presents design goals for the chip; section three discusses chip implementation and chip statistics; conclusions and recommendations are given in sections four through six.

  2. Nonlinear potential analysis techniques for supersonic-hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    Clever, W. C.; Shankar, V.

    1983-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Second-order small-disturbance and full potential theory were utilized to meet this objective. Numerical pilot codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher-order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one. Computational times of about a minute per case were achieved for practical aircraft arrangements.

  3. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Yudilevitch, Gil; Levine, William S.

    1994-01-01

    Over the last two and a half years we have been demonstrating a new methodology for the design of rotorcraft flight control systems (FCS) to meet handling qualities requirements. This method is based on multicriterion optimization as implemented in the optimization package CONSOL-OPTCAD (C-O). This package has been developed at the Institute for Systems Research (ISR) at the University of Maryland at College Park. This design methodology has been applied to the design of an FCS for the UH-60A helicopter in hover, using the ADOCS control structure. The controller parameters have been optimized to meet the ADS-33C specifications. Furthermore, using this approach, an optimal (minimum control energy) controller has been obtained and trade-off studies have been performed.

  4. Evolutionary Technique for Designing Optimized Arrays

    NASA Astrophysics Data System (ADS)

    Villazón, J.; Ibañez, A.

    2011-06-01

    Many ultrasonic inspection applications in industry could benefit from the use of phased array distributions specifically designed for them. Some common design requirements are: to adapt the shape of the array to that of the part to be inspected, to use large apertures for increasing lateral resolution, to find a layout of elements that avoids artifacts produced by lateral and/or grating lobes, and to keep the total number of independent elements (and the number of control channels) as low as possible to reduce the complexity and cost of the inspection system. Recent advances in transducer technology have made it possible to design and build arrays with non-regular layouts of elements. In this paper we propose to use Evolutionary Algorithms to find layouts of ultrasonic arrays (whether 1D or 2D) that approach a set of specified beampattern characteristics using a low number of elements.
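
    The sketch below illustrates the general flavor of such an evolutionary layout search (it is not the authors' algorithm): each individual is a set of element positions inside a fixed 1-D aperture, and fitness rewards a low peak sidelobe level of the broadside array factor. The aperture size, element count, and GA settings are assumptions for illustration.

```python
# Toy evolutionary search for a sparse 1-D ultrasonic array layout.
import numpy as np

rng = np.random.default_rng(0)
N_ELEM, APERTURE = 16, 20.0                      # element count, aperture in wavelengths

def peak_sidelobe_db(positions):
    """Peak sidelobe level of the broadside array factor, relative to the main lobe."""
    u = np.linspace(-1.0, 1.0, 1001)             # u = sin(theta)
    af = np.abs(np.exp(2j * np.pi * np.outer(u, positions)).sum(axis=1))
    af_db = 20.0 * np.log10(af / af.max() + 1e-12)
    main_lobe = np.abs(u) < 2.0 / APERTURE       # crude main-lobe exclusion
    return af_db[~main_lobe].max()

def fitness(positions):
    return -peak_sidelobe_db(positions)          # lower sidelobes = fitter layout

def evolve(pop_size=40, generations=60, mutation=0.3):
    pop = [np.sort(rng.uniform(0.0, APERTURE, N_ELEM)) for _ in range(pop_size)]
    for _ in range(generations):
        order = np.argsort([fitness(p) for p in pop])[::-1]
        parents = [pop[i] for i in order[:pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(N_ELEM) < 0.5                      # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            child = child + rng.normal(0.0, mutation, N_ELEM)    # Gaussian mutation
            children.append(np.sort(np.clip(child, 0.0, APERTURE)))
        pop = parents + children
    best = max(pop, key=fitness)
    return best, peak_sidelobe_db(best)

layout, psl = evolve()
print("peak sidelobe level: %.1f dB" % psl)
```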

  5. Experimental measurements of the thermal conductivity of ash deposits: Part 1. Measurement technique

    SciTech Connect

    A. L. Robinson; S. G. Buckley; N. Yang; L. L. Baxter

    2000-04-01

    This paper describes a technique developed to make in situ, time-resolved measurements of the effective thermal conductivity of ash deposits formed under conditions that closely replicate those found in the convective pass of a commercial boiler. Since ash deposit thermal conductivity is thought to be strongly dependent on deposit microstructure, the technique is designed to minimize the disturbance of the natural deposit microstructure. Traditional techniques for measuring deposit thermal conductivity generally do not preserve the sample microstructure. Experiments are described that demonstrate the technique, quantify experimental uncertainty, and determine the thermal conductivity of highly porous, unsintered deposits. The average measured conductivity of loose, unsintered deposits is 0.14 ± 0.03 W/(m K), approximately midway between rational theoretical limits for deposit thermal conductivity.

  6. Verification of Experimental Techniques for Flow Surface Determination

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.; Lerch, Bradley A.; Ellis, John R.; Robinson, David N.

    1996-01-01

    The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory. However, at elevated temperatures, material response can be highly time-dependent, which is beyond the realm of classical plasticity. Viscoplastic theories have been developed for just such conditions. In viscoplastic theories, the flow law is given in terms of inelastic strain rate rather than the inelastic strain increment used in time-independent plasticity. Thus, surfaces of constant inelastic strain rate or flow surfaces are to viscoplastic theories what yield surfaces are to classical plasticity. The purpose of the work reported herein was to validate experimental procedures for determining flow surfaces at elevated temperatures. Since experimental procedures for determining yield surfaces in axial/torsional stress space are well established, they were employed, except that inelastic strain rates were used rather than total inelastic strains. In yield-surface determinations, the use of small-offset definitions of yield minimizes the change of material state and allows multiple loadings to be applied to a single specimen. The key to the experiments reported here was precise, decoupled measurement of axial and torsional strain. With this requirement in mind, the performance of a high-temperature multi-axial extensometer was evaluated by comparing its results with strain gauge results at room temperature. Both the extensometer and strain gauges gave nearly identical yield surfaces (both initial and subsequent) for type 316 stainless steel (316 SS). The extensometer also successfully determined flow surfaces for 316 SS at 650 C. Furthermore, to judge the applicability of the technique for composite materials, yield surfaces were determined for unidirectional tungsten/Kanthal (Fe-Cr-Al).

  7. Techniques for Molecular Imaging Probe Design

    PubMed Central

    Reynolds, Fred; Kelly, Kimberly A.

    2011-01-01

    Molecular imaging allows clinicians to visualize disease specific molecules, thereby providing relevant information in the diagnosis and treatment of patients. With advances in genomics and proteomics and underlying mechanisms of disease pathology, the number of targets identified has significantly outpaced the number of developed molecular imaging probes. There has been a concerted effort to bridge this gap with multidisciplinary efforts in chemistry, proteomics, physics, material science, and biology; all essential to progress in molecular imaging probe development. In this review, we will discuss target selection, screening techniques and probe optimization with the aim of developing clinically relevant molecularly targeted imaging agents. PMID:22201532

  8. Techniques for molecular imaging probe design.

    PubMed

    Reynolds, Fred; Kelly, Kimberly A

    2011-12-01

    Molecular imaging allows clinicians to visualize disease-specific molecules, thereby providing relevant information in the diagnosis and treatment of patients. With advances in genomics and proteomics and underlying mechanisms of disease pathology, the number of targets identified has significantly outpaced the number of developed molecular imaging probes. There has been a concerted effort to bridge this gap with multidisciplinary efforts in chemistry, proteomics, physics, material science, and biology--all essential to progress in molecular imaging probe development. In this review, we discuss target selection, screening techniques, and probe optimization with the aim of developing clinically relevant molecularly targeted imaging agents.

  9. A new acceleration technique for the design of fibre gratings.

    PubMed

    Carvalho, J C C; Sousa, M J; Sales Júnior, C S; Costa, J C W A; Francês, C R L; Segatto, M E V

    2006-10-30

    In this paper we propose a novel acceleration technique for the design of fibre gratings based on a Genetic Algorithm (GA). It is shown that with an appropriate reformulation of the wavelength sampling scheme it is possible to design high-quality optical filters with low computational effort. Our results show that the proposed technique can significantly reduce the GA's processing time.

  10. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  11. Engineering Design Handbook: Maintenance Engineering Techniques

    DTIC Science & Technology

    1975-06-30

    housings so that the center of gravity is as close as possible to the mounting surface. b. Avoid any large, flat housing wall that acts as a diaphragm ...and amplifies vibration. Reinforce housing walls by using internal or external fins or ribs, or by adding mounting points, to minimize vibration...not be bent or unbent sharply when connected or disconnected. b. Design cables or lines that must be routed through walls or bulkheads for easy in

  12. Overview of Passive Solar Design Techniques.

    DTIC Science & Technology

    1982-09-01

    the "market acceptance" of the passive solar designs. In mast cases, a passive system is integrated into the architecture of a building, which...increases discomfort by decreasing the rate of moisture evaporation from the skin. The Bioclimatic Chart developed by V. Olgyay provides a convenient way...outdoors and, therefore, not previously cir- culated through the system. passive solar system: An assembly of natural and architectural components

  13. Advanced Computational Techniques for Power Tube Design.

    DTIC Science & Technology

    1986-07-01

    fixturing applications, in addition to the existing computer-aided engineering capabilities. o Helix TWT Manufacturing has implemented a tooling and fixturing...illustrates the major features of this computer network. The backbone of our system is a Sytek Broadband Network (LAN) which interconnects terminals and...automatic network analyzer (FANA) which electrically characterizes the slow-wave helices of traveling-wave tubes (TWTs), both for engineering design

  14. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  15. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    ERIC Educational Resources Information Center

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  16. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  17. Teaching Experimental Design to Elementary School Pupils in Greece

    ERIC Educational Resources Information Center

    Karampelas, Konstantinos

    2016-01-01

    This research is a study of the possibility of promoting experimental design skills in elementary school pupils. Experimental design and the experiment process are foundational elements in current approaches to Science Teaching, as they provide learners with profound understanding of knowledge construction and science inquiry. The research was…

  18. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of the best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy are illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655
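
    To make the notion of "optimal sampling times" concrete, the sketch below greedily selects time points that maximize the determinant of a Fisher information matrix built from the model's parameter sensitivities (a standard D-optimality criterion). This is an illustrative stand-in, not the authors' algorithm, and the exponential-decay observation model is an assumption.

```python
# Greedy D-optimal selection of sampling times for a toy observation model.
import numpy as np

def sensitivities(t, a=2.0, k=0.3):
    """dy/d(a, k) for the toy model y(t) = a * exp(-k t)."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])

def d_optimal_times(candidate_times, n_select):
    chosen = []
    for _ in range(n_select):
        best_t, best_det = None, -np.inf
        for t in candidate_times:
            if t in chosen:
                continue
            S = sensitivities(chosen + [t])
            det = np.linalg.det(S.T @ S)         # D-optimality: det of the information matrix
            if det > best_det:
                best_t, best_det = t, det
        chosen.append(best_t)
    return sorted(chosen)

print(d_optimal_times(list(np.linspace(0.0, 20.0, 81)), n_select=5))
```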

  19. Classical controller design techniques for fractional order case.

    PubMed

    Yeroglu, Celaleddin; Tan, Nusret

    2011-07-01

    This paper presents some classical controller design techniques for the fractional order case. New robust lag, lag-lead, and PI controller design methods for control systems with a fractional order interval transfer function (FOITF) are proposed using classical design methods with the Bode envelopes of the FOITF. These controllers satisfy the robust performance specifications of the fractional order interval plant. In order to design a classical PID controller, an optimization technique based on a fractional order reference model is used. PID controller parameters are obtained using the least squares optimization method. Different PID controller parameters that satisfy stability have been obtained for the same plant.

  20. Application of experimental and numerical simulation techniques to microscale devices

    NASA Astrophysics Data System (ADS)

    Somashekar, Vishwanath

    Two areas that have become relevant recently are mixing in micro-scale devices and the manufacturing of functional nanoparticles. MicroPIV experiments were performed on two different mixers: a wide microchannel with surface grooves, in the laminar regime, and a confined impinging jets reactor, in the laminar and turbulent regimes. In the wide microchannel with surface grooves, microPIV data were collected at the interface and the midplane at Reynolds numbers of 0.08, 0.8, and 8. The experiments were performed for three internal angles of the chevrons, namely 135°, 90°, and 45°. The normalized transverse velocity generated in the midplane due to the presence of the grooves is strongest for the internal angle of 135°, and in that case the normalized transverse velocity is maximum at Reynolds numbers of 0.08 and 0.8. MicroPIV experiments were also performed in a confined impinging jets reactor at Reynolds numbers of 200, 1000, and 1500. The data were collected in the midplane, and turbulent statistics were further computed. The high-velocity jets impinge along the centerline of the reactor. Upon impinging, part of the fluid turns towards the top wall and the majority of it turns towards the outlet. This high-velocity impingement creates an unstable region called the impingement zone, which moves about the centerline, causing the jets to flap back and forth. Spatial correlations were computed to estimate the size of the coherent structures. Large eddy simulation was performed on the CIJR for Reynolds numbers of 1000 and 1500, using OpenFOAM. The Reynolds number is based on the inlet jet hydraulic diameter. Excellent agreement was found between the experimental and simulation data. Turbulent reactive mixing in a rectangular microscale confined impinging-jets reactor (CIJR) was investigated in this study using the pH indicator phenolphthalein for three different jet Reynolds numbers of 25, 1000 and 1500. Laminar

  1. Hierarchical aggregation for information visualization: overview, techniques, and design guidelines.

    PubMed

    Elmqvist, Niklas; Fekete, Jean-Daniel

    2010-01-01

    We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.

  2. An investigation of radiometer design using digital processing techniques

    NASA Technical Reports Server (NTRS)

    Lawrence, R. W.

    1981-01-01

    The use of digital signal processing techniques in Dicke-switching radiometer design was investigated. The general approach was to develop an analytical model of the existing analog radiometer and identify factors which adversely affect its performance. A digital processor was then proposed to verify the feasibility of using digital techniques to minimize these adverse effects and improve the radiometer performance. Analyses and preliminary test results comparing the digital and analog processing approaches in radiometer design are presented.

  3. Experimental investigation of design parameters on dry powder inhaler performance.

    PubMed

    Ngoc, Nguyen Thi Quynh; Chang, Lusi; Jia, Xinli; Lau, Raymond

    2013-11-30

    The study aims to investigate the impact of various design parameters of a dry powder inhaler on the turbulence intensities generated and on the performance of the inhaler. The flow fields and turbulence intensities in the dry powder inhaler are measured using particle image velocimetry (PIV) techniques. In vitro aerosolization and deposition of a blend of budesonide and lactose are measured using an Andersen Cascade Impactor. Design parameters such as inhaler grid hole diameter, grid voidage, and chamber length are considered. The experimental results reveal that the hole diameter of the grid has negligible impact on the turbulence intensity generated in the chamber. On the other hand, hole diameters smaller than a critical size can lead to performance degradation due to excessive particle-grid collisions. An increase in grid voidage can improve the inhaler performance, but the effect diminishes at high grid voidage. An increase in the chamber length can enhance the turbulence intensity generated but also increases powder adhesion on the inhaler wall.

  4. Intracanal placement of calcium hydroxide: a comparison of specially designed paste carrier technique with other techniques

    PubMed Central

    2013-01-01

    Background This study compared the effectiveness of a Specially Designed Paste Carrier technique with the Syringe-Spreader technique and the Syringe-Lentulo spiral technique in the intracanal placement of calcium hydroxide. Methods Three groups, each containing 15 single-rooted human anterior teeth, were prepared using standardized Mtwo rotary instruments to a master apical file size 40 with 0.04 taper. Each group was filled with calcium hydroxide paste using: Syringe and #25 finger spreader (Group 1); Syringe and #4 rotary Lentulo spiral (Group 2); Specially Designed Paste Carrier (Group 3). Using pre-filling and post-filling radiographs in the buccolingual and mesiodistal planes, the radiodensities at 1 mm, 3 mm, 5 mm, and 7 mm from the apical foramen were analyzed by ANOVA and Bonferroni post hoc tests. Results Overall, the Specially Designed Paste Carrier technique showed a statistically significantly higher mean radiodensity than the two other techniques. No significant difference was detected between the Syringe-Lentulo spiral and the Syringe-Spreader techniques. Conclusion The Specially Designed Paste Carrier technique was more effective than the Syringe-Spreader technique and the Syringe-Lentulo spiral technique in the intracanal placement of calcium hydroxide. PMID:24098931

  5. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using the spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relatively fast convergence, and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase, hence reducing the time response. The results demonstrate that the proposed problem-solving approach, blended with the use of the spiral optimization technique, produced filters which fulfill the desired characteristics and are of practical use.
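
    The core of the spiral optimization metaheuristic is simple to sketch: every search point is rotated and contracted about the current best point, so the population spirals inward while still sweeping the search space. The toy example below minimizes a placeholder objective; applying it to filter design would replace that objective with the filter error measure, and the rotation angle and contraction rate shown are assumptions.

```python
# Minimal 2-D spiral optimization (spiral dynamics) sketch.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                       # placeholder objective to minimize
    return (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 2.0) ** 2

def spiral_optimize(n_points=30, n_iter=200, r=0.95, theta=np.pi / 4, bound=10.0):
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    step = r * rot                                   # contraction times rotation
    pts = rng.uniform(-bound, bound, size=(n_points, 2))
    best = min(pts, key=objective)
    for _ in range(n_iter):
        # x_i <- S x_i - (S - I) x*   (spiral every point about the current best)
        pts = pts @ step.T - best @ (step - np.eye(2)).T
        candidate = min(pts, key=objective)
        if objective(candidate) < objective(best):
            best = candidate.copy()
    return best, objective(best)

x_star, f_star = spiral_optimize()
print(f"minimum near {x_star} with value {f_star:.3e}")
```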

  6. Extended mapping and characteristics techniques for inverse aerodynamic design

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Qian, Y. J.

    1991-01-01

    Some ideas for using hodograph theory, mapping techniques and methods of characteristics to formulate typical aerodynamic design boundary value problems are developed. The inverse method of characteristics is shown to be a fast tool for design of transonic flow elements as well as supersonic flows with given shock waves.

  7. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

    Memo No. 88-442, March 1988. "Process Model Construction and Optimization Using Statistical Experimental Design" by Emanuel Sachs and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic…

  8. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  9. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from the experimental study, with details on the implementation of Microsoft Mathematics in Calculus, students' achievement, and the effects of the use of Microsoft…

  10. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
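
    A common performance function in Bayesian experimental design is the expected information gain of a candidate design, which can be estimated by nested Monte Carlo. The sketch below is a generic, hedged illustration of that estimator; the Gaussian measurement model and the scalar "design setting" are toy assumptions, not the SHM model from the paper.

```python
# Expected information gain (EIG) of a design, estimated by nested Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)

def forward(theta, design):
    """Toy forward model: sensor response depends on an unknown damage parameter theta."""
    return design * np.sin(theta)

def log_likelihood(y, theta, design, noise=0.1):
    mu = forward(theta, design)
    return -0.5 * ((y - mu) / noise) ** 2 - np.log(noise * np.sqrt(2.0 * np.pi))

def expected_information_gain(design, n_outer=500, n_inner=500, noise=0.1):
    theta_outer = rng.uniform(0.0, np.pi, n_outer)      # prior samples
    theta_inner = rng.uniform(0.0, np.pi, n_inner)      # samples for the evidence term
    eig = 0.0
    for th in theta_outer:
        y = forward(th, design) + noise * rng.standard_normal()
        log_lik = log_likelihood(y, th, design, noise)
        log_evid = np.log(np.mean(np.exp(log_likelihood(y, theta_inner, design, noise))))
        eig += (log_lik - log_evid) / n_outer           # Monte Carlo average of the KL gain
    return eig

designs = np.linspace(0.1, 2.0, 8)                      # candidate design settings
scores = [expected_information_gain(d) for d in designs]
print("best design setting:", designs[int(np.argmax(scores))])
```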

  11. Designing modulators of monoamine transporters using virtual screening techniques

    PubMed Central

    Mortensen, Ole V.; Kortagere, Sandhya

    2015-01-01

    The plasma-membrane monoamine transporters (MATs), including the serotonin (SERT), norepinephrine (NET) and dopamine (DAT) transporters, serve a pivotal role in limiting monoamine-mediated neurotransmission through the reuptake of their respective monoamine neurotransmitters. The transporters are the main target of clinically used psychostimulants and antidepressants. Despite the availability of several potent and selective MAT substrates and inhibitors, the continuing need for therapeutic drugs to treat brain disorders involving aberrant monoamine signaling provides a compelling reason to identify novel ways of targeting and modulating the MATs. Designing novel modulators of MAT function has been limited by the lack of three-dimensional structure information for the individual MATs. However, crystal structures of LeuT, a bacterial homolog of MATs, in a substrate-bound occluded, substrate-free outward-open, and an apo inward-open state and also with competitive and non-competitive inhibitors have been determined. In addition, several structures of the Drosophila DAT have also been resolved. Together with computational modeling and experimental data gathered over the past decade, these structures have dramatically advanced our understanding of several aspects of SERT, NET, and DAT transporter function, including some of the molecular determinants of ligand interaction at orthosteric substrate and inhibitor binding pockets. In addition, progress has been made in understanding how allosteric modulation of MAT function can be achieved. Here we review the efforts made to date through computational approaches employing structural models of MATs to design small-molecule modulators for the orthosteric and allosteric sites using virtual screening techniques. PMID:26483692

  12. Using an Animal Group Vigilance Practical Session to Give Learners a "Heads-Up" to Problems in Experimental Design

    ERIC Educational Resources Information Center

    Rands, Sean A.

    2011-01-01

    The design of experimental ecological fieldwork is difficult to teach to classes, particularly when protocols for data collection are normally carefully controlled by the class organiser. Normally, reinforcement of some problems of experimental design, such as the avoidance of pseudoreplication and appropriate sampling techniques, does not occur…

  13. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  14. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  15. Experimental comparison of manufacturing techniques of toughened and nanoreinforced polyamides

    NASA Astrophysics Data System (ADS)

    Siengchin, S.; Bergmann, C.; Dangtungee, R.

    2011-11-01

    Composites consisting of polyamide-6 (PA-6), nitrile rubber (NBR), and sodium fluorohectorite (FH) or alumina silicate (Sungloss; SG) were produced by different techniques with latex precompounding. Their tensile and thermomechanical properties were determined by using tensile tests and dynamic-mechanical analysis performed at various temperatures. The PA-6/NBR composite systems produced by direct melt compounding outperformed those obtained using the masterbatch technique with respect to strength and ductility, but the latter had a higher storage modulus.

  16. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus

  17. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate the concentration of each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
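
    The multiplex idea can be illustrated numerically: each titration mixes several samples, the measured equivalence volume is modeled as the sum of the unknown per-sample contributions, and ordinary least squares (the role LINEST plays in the spreadsheet) recovers the individual concentrations. The design matrix, concentrations, and volumes below are made-up illustrative numbers, not data from the article.

```python
# Least-squares recovery of individual sample concentrations from multiplexed titrations.
import numpy as np

# Rows = titrations, columns = which vinegar samples were included (one aliquot each).
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)

true_conc = np.array([0.83, 0.76, 0.91])       # mol/L acetic acid (assumed)
aliquot_ml, titrant_conc = 10.0, 0.5           # mL per aliquot, mol/L NaOH (assumed)

# Simulated equivalence volume of titrant (mL) for each multiplexed titration, plus noise.
rng = np.random.default_rng(0)
v_eq = X @ (true_conc * aliquot_ml / titrant_conc) + rng.normal(0, 0.05, X.shape[0])

# Least-squares fit of the multiplex model, then convert back to concentrations.
coef, *_ = np.linalg.lstsq(X, v_eq, rcond=None)
est_conc = coef * titrant_conc / aliquot_ml
print(np.round(est_conc, 3))                   # ~ [0.83, 0.76, 0.91]
```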

  18. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  19. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) is developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained, multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
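
    To make the K-S aggregation step concrete, the sketch below folds two weighted toy objectives into a single smooth envelope function and minimizes it with BFGS. The two objectives, the weights, and the draw-down factor rho are assumptions for illustration; they stand in for the aerodynamic and sonic-boom objectives, not reproduce them.

```python
# Kreisselmeier-Steinhauser (K-S) aggregation of multiple objectives, minimized with BFGS.
import numpy as np
from scipy.optimize import minimize

def objectives(x):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2          # stand-in for one design objective
    f2 = x[0] ** 2 + (x[1] - 2.0) ** 2          # stand-in for a competing objective
    return np.array([f1, f2])

def ks_function(x, weights=(1.0, 1.0), rho=50.0):
    g = np.asarray(weights) * objectives(x)
    g_max = g.max()
    # K-S envelope: a smooth, conservative approximation of max_i g_i
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

result = minimize(ks_function, x0=np.zeros(2), method="BFGS")
print(result.x, objectives(result.x))           # a compromise between the two objectives
```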

  20. Guided Inquiry in a Biochemistry Laboratory Course Improves Experimental Design Ability

    ERIC Educational Resources Information Center

    Goodey, Nina M.; Talgar, Cigdem P.

    2016-01-01

    Many biochemistry laboratory courses expose students to laboratory techniques through pre-determined experiments in which students follow stepwise protocols provided by the instructor. This approach fails to provide students with sufficient opportunities to practice experimental design and critical thinking. Ten inquiry modules were created for a…

  1. Experimental Validation of Simulations Using Full-field Measurement Techniques

    SciTech Connect

    Hack, Erwin

    2010-05-28

    The calibration by reference materials of dynamic full-field measurement systems is discussed, together with their use to validate numerical simulations in structural mechanics. The discussion addresses three challenges that are faced in these processes: (i) how to calibrate a measuring instrument that provides full-field data, (ii) how to calibrate an instrument that is dynamic, and (iii) how to compare data from simulation and experiment.

  2. Experimental techniques for cross-section measurements. [for electron impacts

    NASA Technical Reports Server (NTRS)

    Trajmar, S.; Register, D. F.

    1984-01-01

    Attention is given to electron collision phenomena which can be studied under single-collision conditions at low and intermediate electron impact energies, ranging from threshold to a few hundred eV, using gas phase molecular targets. Several of the experimental methods discussed were first developed and applied to atoms, but are equally applicable to molecules with minor modifications in the interpretation of the data, due to the greater complexity of molecular systems.

  3. Online and offline experimental techniques for polycyclic aromatic hydrocarbons recovery and measurement.

    PubMed

    Comandini, A; Malewicki, T; Brezinsky, K

    2012-03-01

    The implementation of techniques aimed at improving engine performance and reducing particulate matter (PM) pollutant emissions is strongly influenced by the limited understanding of the polycyclic aromatic hydrocarbon (PAH) formation chemistry in combustion devices that produces the PM emissions. New experimental results which examine the formation of multi-ring compounds are required. The present investigation focuses on two techniques for such an experimental examination by recovery of PAH compounds from a typical combustion oriented experimental apparatus. The online technique discussed constitutes an optimal, but not always feasible, approach. Nevertheless, a detailed description of a new online sampling system is provided which can serve as reference for future applications to different experimental set-ups. In comparison, an offline technique, which is sometimes more experimentally feasible but not necessarily optimal, has been studied in detail for the recovery of a variety of compounds with different properties, including naphthalene, biphenyl, and iodobenzene. The recovery results from both techniques were excellent, with an error in the total carbon balance of around 10% for the online technique and an uncertainty in the measurement of the single species of around 7% for the offline technique. Although both techniques proved to be suitable for measurement of large PAH compounds, the online technique represents the optimal solution in view of the simplicity of the corresponding experimental procedure. On the other hand, the offline technique represents a valuable solution in those cases where the online technique cannot be implemented.

  4. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
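
    The exterior penalty approach mentioned above can be sketched in a few lines: the constrained weight-minimization problem is replaced by a sequence of unconstrained problems in which constraint violations are penalized with an increasing multiplier. The two-variable objective and constraints below are toy stand-ins for the stiffened-cylinder sizing problem, not the original formulation.

```python
# Exterior penalty function approach to a toy constrained minimum-weight problem.
import numpy as np
from scipy.optimize import minimize

def weight(x):                       # objective, e.g. structural mass
    return x[0] + 2.0 * x[1]

def constraints(x):                  # g(x) <= 0 is feasible (e.g. capacity, minimum gauge)
    return np.array([1.0 - x[0] * x[1],
                     0.1 - x[0],
                     0.1 - x[1]])

def penalized(x, r):
    violation = np.maximum(constraints(x), 0.0)
    return weight(x) + r * np.sum(violation ** 2)

x = np.array([1.0, 1.0])
for r in [10.0, 100.0, 1e3, 1e4]:    # exterior penalty: approach feasibility from outside
    x = minimize(lambda z: penalized(z, r), x, method="BFGS").x
print(np.round(x, 3), "weight =", round(weight(x), 3))
```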

  5. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full-scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4-pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision-processed subscenes. Full-scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.

  6. Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique

    NASA Astrophysics Data System (ADS)

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2014-01-01

    The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.

  7. Inverse boundary-layer technique for airfoil design

    NASA Technical Reports Server (NTRS)

    Henderson, M. L.

    1979-01-01

    A description is presented of a technique for the optimization of airfoil pressure distributions using an interactive inverse boundary-layer program. This program allows the user to determine quickly a near-optimum subsonic pressure distribution which meets his requirements for lift, drag, and pitching moment at the desired flow conditions. The method employs an inverse turbulent boundary-layer scheme for definition of the turbulent recovery portion of the pressure distribution. Two levels of pressure-distribution architecture are used: a simple rooftop for preliminary studies and a more complex four-region architecture for a more refined design. A technique is employed to avoid the specification of pressure distributions which result in unrealistic airfoils, that is, those with negative thickness. The program allows rapid evaluation of a designed pressure distribution off-design in Reynolds number, transition location, and angle of attack, and will compute an airfoil contour for the designed pressure distribution using linear theory.

  8. The design of aircraft using the decision support problem technique

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Marinopoulos, Stergios; Jackson, David M.; Shupe, Jon A.

    1988-01-01

    The Decision Support Problem Technique for unified design, manufacturing and maintenance is being developed at the Systems Design Laboratory at the University of Houston. This involves the development of a domain-independent method (and the associated software) that can be used to process domain-dependent information and thereby provide support for human judgment. In a computer assisted environment, this support is provided in the form of optimal solutions to Decision Support Problems.

  9. The Photoshop Smile Design technique (part 1): digital dental photography.

    PubMed

    McLaren, Edward A; Garber, David A; Figueira, Johan

    2013-01-01

    The proliferation of digital photography and imaging devices is enhancing clinicians' ability to visually document patients' intraoral conditions. By understanding the elements of esthetics and learning how to incorporate technology applications into clinical dentistry, clinicians can predictably plan smile design and communicate anticipated results to patients and ceramists alike. This article discusses camera, lens, and flash selection and setup, and how to execute specific types of images using the Adobe Photoshop Smile Design (PSD) technique.

  10. Enzyme-Free Scalable DNA Digital Design Techniques: A Review.

    PubMed

    Konampurath George, Aby; Singh, Harpreet

    2016-12-02

    With the recent developments in DNA nanotechnology, DNA has been used as the basic building block for the design of nanostructures, autonomous molecular motors, various devices, and circuits. DNA is considered as a possible candidate for replacing silicon for designing digital circuits in a near future, especially in implantable medical devices, because of its parallelism, computational powers, small size, light weight, and compatibility with bio-signals. The research in DNA digital design is in early stages of development, and electrical and computer engineers are not much attracted towards this field. In this paper, we give a brief review of the existing enzyme-free scalable DNA digital design techniques which are recently developed. With the developments in DNA circuits, it would be possible to design synthetic molecular systems, therapeutic molecular devices, and other molecular scale devices and instruments. The ultimate aim will be to build complex digital designs using DNA strands which may even be placed inside a human body.

  11. Enzyme-Free Scalable DNA Digital Design Techniques: A Review.

    PubMed

    George, Aby K; Singh, Harpreet

    2016-12-01

    With the recent developments in DNA nanotechnology, DNA has been used as the basic building block for the design of nanostructures, autonomous molecular motors, various devices, and circuits. DNA is considered as a possible candidate for replacing silicon for designing digital circuits in a near future, especially in implantable medical devices, because of its parallelism, computational powers, small size, light weight, and compatibility with bio-signals. The research in DNA digital design is in early stages of development, and electrical and computer engineers are not much attracted towards this field. In this paper, we give a brief review of the existing enzyme-free scalable DNA digital design techniques which are recently developed. With the developments in DNA circuits, it would be possible to design synthetic molecular systems, therapeutic molecular devices, and other molecular scale devices and instruments. The ultimate aim will be to build complex digital designs using DNA strands which may even be placed inside a human body.

  12. Active Flow Control: Instrumentation Automation and Experimental Technique

    NASA Technical Reports Server (NTRS)

    Gimbert, N. Wes

    1995-01-01

    In investigating the potential of a new actuator for use in an active flow control system, several objectives had to be accomplished, the largest of which was the experimental setup. The work was conducted at the NASA Langley 20x28 Shear Flow Control Tunnel. The actuator, named Thunder, is a high-deflection piezo device recently developed at Langley Research Center. This research involved setting up the instrumentation, the lighting, the smoke, and the recording devices. The instrumentation was automated by means of a Power Macintosh running LabVIEW, a graphical instrumentation package developed by National Instruments. Routines were written to allow the tunnel conditions to be determined at a given instant at the push of a button. This included determination of tunnel pressures, speed, density, temperature, and viscosity. Other aspects of the experimental equipment included the setup of a CCD video camera with a video frame grabber, monitor, and VCR to capture the motion. A strobe light was used to highlight the smoke that was used to visualize the flow. Additional effort was put into creating a scale drawing of another tunnel on site and a limited literature search in the area of active flow control.

  13. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

    The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from rare mention of the methods in the 1970s to an increased effort in the early 1980s. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.

  14. Reliability of single sample experimental designs: comfortable effort level.

    PubMed

    Brown, W S; Morris, R J; DeGroot, T; Murry, T

    1998-12-01

    This study was designed to ascertain the intrasubject variability across multiple recording sessions, which is most often disregarded in reporting group mean data or unavailable because of single-sample experimental designs. Intrasubject variability was assessed within and across several experimental sessions from measures of speaking fundamental frequency, vocal intensity, and reading rate. Three age groups of men and women (young, middle-aged, and elderly) repeated the vowel /a/, read a standard passage, and spoke extemporaneously during each experimental session. Statistical analyses were performed to assess each speaker's variability from his or her own mean, and that which consistently varied for any one speaking sample type, both within and across days. Results indicated that intrasubject variability was minimal, with approximately 4% of the data exhibiting significant variation across experimental sessions.

  15. An experimental modal testing/identification technique for personal computers

    NASA Technical Reports Server (NTRS)

    Roemer, Michael J.; Schlonski, Steven T.; Mook, D. Joseph

    1990-01-01

    A PC-based system for mode shape identification is evaluated. A time-domain modal identification procedure is utilized to identify the mode shapes of a beam apparatus from discrete time-domain measurements. The apparatus includes a cantilevered aluminum beam, four accelerometers, four low-pass filters, and the computer. The method combines an identification algorithm, the Eigensystem Realization Algorithm (ERA), with an estimation algorithm called Minimum Model Error (MME). The identification ability of this combined algorithm is compared with ERA alone, a frequency-response-function technique, and an Euler-Bernoulli beam model. Detection of modal parameters and mode shapes by the PC-based time-domain system is shown to be accurate in an application with an aluminum beam, while mode shapes identified by the frequency-domain technique are not as accurate as predicted. The new method is shown to be significantly less sensitive to noise and poorly excited modes than other leading methods. The results support the use of time-domain identification systems for mode shape prediction.
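
    For orientation, the ERA step of such a chain can be sketched as follows: Hankel matrices are built from free-decay samples, an SVD yields a reduced-order realization, and the eigenvalues of the identified state matrix give modal frequencies and damping. The synthetic two-mode signal and model order below are assumptions, and the MME smoothing stage described in the abstract is not reproduced here.

```python
# Minimal Eigensystem Realization Algorithm (ERA) sketch for a scalar free-decay record.
import numpy as np

def era_modes(y, dt, order, rows=60):
    """Identify modal frequencies (Hz) and damping ratios from free-decay data y."""
    cols = len(y) - rows - 1
    H0 = np.array([y[i:i + cols] for i in range(rows)])          # Hankel matrix at step k
    H1 = np.array([y[i + 1:i + 1 + cols] for i in range(rows)])  # shifted Hankel matrix
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Ur, Vr = U[:, :order], Vt[:order, :]
    s_inv = np.diag(1.0 / np.sqrt(s[:order]))
    A = s_inv @ Ur.T @ H1 @ Vr.T @ s_inv          # identified discrete-time state matrix
    lam = np.log(np.linalg.eigvals(A)) / dt       # map to continuous-time poles
    lam = lam[lam.imag > 0]                       # keep one pole of each conjugate pair
    freq = lam.imag / (2 * np.pi)
    damping = -lam.real / np.abs(lam)
    return np.round(freq, 2), np.round(damping, 4)

# Synthetic free decay of a two-mode "beam" response sampled at 1 kHz.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
y = (np.exp(-0.5 * t) * np.sin(2 * np.pi * 12.0 * t)
     + 0.4 * np.exp(-1.5 * t) * np.sin(2 * np.pi * 47.0 * t))
print(era_modes(y, dt, order=4))    # expect roughly 12 Hz and 47 Hz with small damping
```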

  16. Low Cost Gas Turbine Off-Design Prediction Technique

    NASA Astrophysics Data System (ADS)

    Martinjako, Jeremy

    This thesis seeks to further explore off-design point operation of gas turbines and to examine the capabilities of GasTurb 12 as a tool for off-design analysis. It is a continuation of previous thesis work which initially explored the capabilities of GasTurb 12. The research is conducted in order to: 1) validate GasTurb 12 and 2) predict off-design performance of the Garrett GTCP85-98D located at the Arizona State University Tempe campus. GasTurb 12 is validated as an off-design point tool by using the program to predict performance of an LM2500+ marine gas turbine. Haglind and Elmegaard (2009) published a paper detailing a second off-design point method, and it includes the manufacturer's off-design point data for the LM2500+. GasTurb 12 is used to predict off-design point performance of the LM2500+ and compared to the manufacturer's data. The GasTurb 12 predictions show good correlation. Garrett has published specification data for the GTCP85-98D. This specification data is analyzed to determine the design point and to comment on off-design trends. Arizona State University GTCP85-98D off-design experimental data is evaluated. Trends presented in the data are commented on and explained. The trends match the expected behavior demonstrated in the specification data for the same gas turbine system. It was originally intended that a model of the GTCP85-98D be constructed in GasTurb 12 and used to predict off-design performance. The prediction would be compared to collected experimental data. This is not possible because the free version of GasTurb 12 used in this research does not have a module to model a single spool turboshaft. This module needs to be purchased for this analysis.

  17. Experimental techniques for in-ring reaction experiments

    NASA Astrophysics Data System (ADS)

    Mutterer, M.; Egelhof, P.; Eremin, V.; Ilieva, S.; Kalantar-Nayestanaki, N.; Kiselev, O.; Kollmus, H.; Kröll, T.; Kuilman, M.; Chung, L. X.; Najafi, M. A.; Popp, U.; Rigollet, C.; Roy, S.; von Schmid, M.; Streicher, B.; Träger, M.; Yue, K.; Zamora, J. C.; the EXL Collaboration

    2015-11-01

    As a first step of the EXL project scheduled for the New Experimental Storage Ring at FAIR a precursor experiment (E105) was performed at the ESR at GSI. For this experiment, an innovative differential pumping concept, originally proposed for the EXL recoil detector ESPA, was successfully applied. The implementation and essential features of this novel technical concept will be discussed, as well as details on the detectors and the infrastructure around the internal gas-jet target. With 56Ni(p, p)56Ni elastic scattering at 400 MeV u-1, a nuclear reaction experiment with stored radioactive beams was realized for the first time. Finally, perspectives for a next-generation EXL-type setup are briefly discussed.

  18. Dynamic Measurement of the J Integral in Ductile Metals: Comparison of Experimental and Numerical Techniques

    DTIC Science & Technology

    1988-08-01

    proven experimental techniques for measuring J under static loading, few proven experimental techniques exist for measurement of the time history of J...Freund [9], who estimate Jd by measuring the transient load displacement records and by using the quasi-static formula for deeply notched round bars...HY-100 steel, loaded by a projectile, are compared to experimental measurements performed by means of the interferometric strain-displacement gauge

  19. CMOS array design automation techniques. [metal oxide semiconductors

    NASA Technical Reports Server (NTRS)

    Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.

    1975-01-01

    A low-cost, quick-turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested, and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology, and required performance considerations, particularly high circuit speed.

  20. Experimental and Computational Techniques in Soft Condensed Matter Physics

    NASA Astrophysics Data System (ADS)

    Olafsen, Jeffrey

    2010-09-01

    1. Microscopy of soft materials Eric R. Weeks; 2. Computational methods to study jammed systems Carl F. Schrek and Corey S. O'Hern; 3. Soft random solids: particulate gels, compressed emulsions and hybrid materials Anthony D. Dinsmore; 4. Langmuir monolayers Michael Dennin; 5. Computer modeling of granular rheology Leonardo E. Silbert; 6. Rheological and microrheological measurements of soft condensed matter John R. de Bruyn and Felix K. Oppong; 7. Particle-based measurement techniques for soft matter Nicholas T. Ouellette; 8. Cellular automata models of granular flow G. William Baxter; 9. Photoelastic materials Brian Utter; 10. Image acquisition and analysis in soft condensed matter Jeffrey S. Olafsen; 11. Structure and patterns in bacterial colonies Nicholas C. Darnton.

  1. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiments and the wide dynamic range of transcription they measure make RNA-seq an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity to experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER.
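
    As a rough, simulation-based illustration of the power assessment idea (a simplified stand-in written in Python rather than the R package PROPER named above), the sketch below simulates negative-binomial counts for a single gene in two groups and estimates the power to detect a given fold change. The sample sizes, mean count, dispersion, fold change, and test are all hypothetical choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def nb_params(mean, dispersion):
    """Convert (mean, dispersion) to numpy's (n, p) negative-binomial form."""
    n = 1.0 / dispersion
    return n, n / (n + mean)

def simulate_power(n_per_group=5, mean=100.0, dispersion=0.2,
                   fold_change=2.0, alpha=0.05, n_sim=2000):
    """Empirical power of a two-sample t-test on log counts for one gene."""
    rejections = 0
    for _ in range(n_sim):
        n1, p1 = nb_params(mean, dispersion)
        n2, p2 = nb_params(mean * fold_change, dispersion)
        ctrl = rng.negative_binomial(n1, p1, size=n_per_group)
        trt = rng.negative_binomial(n2, p2, size=n_per_group)
        # Simple test on log-transformed counts (real pipelines use NB GLMs).
        _, pval = stats.ttest_ind(np.log1p(ctrl), np.log1p(trt))
        rejections += pval < alpha
    return rejections / n_sim

for n in (3, 5, 10):
    print(f"n = {n} per group -> estimated power {simulate_power(n_per_group=n):.2f}")
```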

  2. Advanced Computational and Experimental Techniques for Nacelle Liner Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Jones, Michael G.; Brown, Martha C.; Nark, Douglas

    2009-01-01

    The Curved Duct Test Rig (CDTR) has been developed to investigate sound propagation through a duct of size comparable to the aft bypass duct of typical aircraft engines. The axial dimension of the bypass duct is often curved and this geometric characteristic is captured in the CDTR. The semiannular bypass duct is simulated by a rectangular test section in which the height corresponds to the circumferential dimension and the width corresponds to the radial dimension. The liner samples are perforate over honeycomb core and are installed on the side walls of the test section. The top and bottom surfaces of the test section are acoustically rigid to simulate a hard wall bifurcation or pylon. A unique feature of the CDTR is the control system that generates sound incident on the liner test section in specific modes. Uniform air flow, at ambient temperature and flow speed Mach 0.275, is introduced through the duct. Experiments to investigate configuration effects such as curvature along the flow path on the acoustic performance of a sample liner are performed in the CDTR and reported in this paper. Combinations of treated and acoustically rigid side walls are investigated. The scattering of modes of the incident wave, both by the curvature and by the asymmetry of wall treatment, is demonstrated in the experimental results. The effect that mode scattering has on total acoustic effectiveness of the liner treatment is also shown. Comparisons of measured liner attenuation with numerical results predicted by an analytic model based on the parabolic approximation to the convected Helmholtz equation are reported. The spectra of attenuation produced by the analytic model are similar to experimental results for both walls treated, straight and curved flow path, with plane wave and higher order modes incident. The numerical model is used to define the optimized resistance and reactance of a liner that significantly improves liner attenuation in the frequency range 1900-2400 Hz. A

  3. Thermal-hydraulic design issues and analysis for the ITER (International Thermonuclear Experimental Reactor) divertor

    SciTech Connect

    Koski, J.A.; Watson, R.D. ); Hassanien, A.M. ); Goranson, P.L. . Fusion Engineering Design Center); Salmonson, J.C. . Special Projects)

    1990-01-01

    Critical Heat Flux (CHF), also called burnout, is one of the major design limits for water-cooled divertors in tokamaks. Another important design issue is the correct thermal modeling of the divertor plate geometry, where heat is applied to only one side of the plate and highly subcooled flow boiling in internal passages is used for heat removal. This paper discusses analytical techniques developed to address these design issues and the experimental evidence gathered in support of the approach. Typical water-cooled divertor designs for the International Thermonuclear Experimental Reactor (ITER) are analyzed, and design margins are estimated. Peaking of the heat flux at the tube-water boundary is shown to be an important issue, and design concerns that could lead to imposing large design safety margins are identified. The use of flow enhancement techniques such as internal twisted tapes and fins is discussed, and some estimates of the gains in design margin are presented. Finally, unresolved issues and concerns regarding the hydraulic design of divertors are summarized, and some experiments that could help the ITER final design process are identified. 23 refs., 10 figs.

  4. Design of high speed proprotors using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Mccarthy, Thomas R.; Chattopadhyay, Aditi

    1993-01-01

    A multidisciplinary optimization procedure is developed for the design of high speed proprotors. The objectives are to simultaneously maximize the propulsive efficiency in high speed cruise without sacrificing the rotor figure of merit in hover. Since the problem involves multiple design objectives, multiobjective function formulation techniques are used. A detailed two-celled isotropic box beam is used to model the load carrying member within the rotor blade. Constraints are imposed on rotor blade aeroelastic stability in cruise, the first natural frequency in hover, and total blade weight. Both aerodynamic and structural design variables are used. The results obtained using both techniques are compared to the reference rotor and show significant aerodynamic performance improvements without sacrificing dynamic and aeroelastic stability requirements.

  5. The experimental technique of the G^0 measurement.

    NASA Astrophysics Data System (ADS)

    Roche, Julie

    2001-10-01

    The G^0 experiment(JLab experiment E00-006, D.H. Beck, spokesperson.) will measure the parity-violating asymmetries in elastic electron-nucleon scattering. The experiment will be performed in Hall C at Jefferson Lab using a dedicated apparatus. In order to achieve the required statistical accuracy of the measurements, a super-conducting toroidal spectrometer, with azimuthally symmetric angular acceptance, and an associated cryogenic target have been constructed. The Focal Plane Detectors are arranged in 8 arrays of 16 arc-shaped scintillator pairs providing a fast signal compatible with the high rates of this counting experiment. For the forward angle measurement, custom built electronics will provide a time-of-flight measurement discriminating elastic recoil protons from pions and inelastic protons. For the backward angle measurements, additional scintillators and Cerenkov counters will provide separation of the elastic electrons from inelastic electrons and pions. An overview of the experimental apparatus and method for the G^0 measurement will be presented.

  6. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites, and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations, and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  7. Stem cell clonality -- theoretical concepts, experimental techniques, and clinical challenges.

    PubMed

    Glauche, Ingmar; Bystrykh, Leonid; Eaves, Connie; Roeder, Ingo

    2013-04-01

    Here we report highlights of discussions and results presented at an International Workshop on Concepts and Models of Stem Cell Organization held on July 16th and 17th, 2012 in Dresden, Germany. The goal of the workshop was to undertake a systematic survey of state-of-the-art methods and results of clonality studies of tissue regeneration and maintenance with a particular emphasis on the hematopoietic system. The meeting was the 6th in a series of similar conceptual workshops, termed StemCellMathLab,(2) all of which have had the general objective of using an interdisciplinary approach to discuss specific aspects of stem cell biology. The StemCellMathLab 2012, which was jointly organized by the Institute for Medical Informatics and Biometry, Medical Faculty Carl Gustav Carus, Dresden University of Technology and the Institute for Medical Informatics, Statistics and Epidemiology, Medical Faculty, University of Leipzig, brought together 32 scientists from 8 countries, with scientific backgrounds in medicine, cell biology, virology, physics, computer sciences, bioinformatics and mathematics. The workshop focused on the following questions: (1) How heterogeneous are stem cells and their progeny? and (2) What are the characteristic differences in the clonal dynamics between physiological and pathophysiological situations? In discussing these questions, particular emphasis was placed on (a) the methods for quantifying clones and their dynamics in experimental and clinical settings and (b) general concepts and models for their description. In this workshop summary we start with an introduction to the current state of clonality research and a proposal for clearly defined terminology. Major topics of discussion include clonal heterogeneity in unperturbed tissues, clonal dynamics due to physiological and pathophysiological pressures and conceptual and technical issues of clone quantification. We conclude that an interactive cross-disciplinary approach to research in this

  8. Translocations of amphibians: Proven management method or experimental technique

    USGS Publications Warehouse

    Seigel, Richard A.; Dodd, C. Kenneth

    2002-01-01

    In an otherwise excellent review of metapopulation dynamics in amphibians, Marsh and Trenham (2001) make the following provocative statements (emphasis added): If isolation effects occur primarily in highly disturbed habitats, species translocations may be necessary to promote local and regional population persistence. Because most amphibians lack parental care, they are prime candidates for egg and larval translocations. Indeed, translocations have already proven successful for several species of amphibians. Where populations are severely isolated, translocations into extinct subpopulations may be the best strategy to promote regional population persistence. We take issue with these statements for a number of reasons. First, the authors fail to cite much of the relevant literature on species translocations in general and for amphibians in particular. Second, to those unfamiliar with current research in amphibian conservation biology, these comments might suggest that translocations are a proven management method. This is not the case, at least in most instances where translocations have been evaluated for an appropriate period of time. Finally, the authors fail to point out some of the negative aspects of species translocation as a management method. We realize that Marsh and Trenham's paper was not concerned primarily with translocations. However, because Marsh and Trenham (2001) made specific recommendations for conservation planners and managers (many of whom are not herpetologists or may not be familiar with the pertinent literature on amphibians), we believe that it is essential to point out that not all amphibian biologists are as comfortable with translocations as these authors appear to be. We especially urge caution about advocating potentially unproven techniques without a thorough review of available options.

  9. Development of a complex experimental system for controlled ecological life support technique

    NASA Astrophysics Data System (ADS)

    Guo, S.; Tang, Y.; Zhu, J.; Wang, X.; Feng, H.; Ai, W.; Qin, L.; Deng, Y.

    A complex experimental system for controlled ecological life support technique can be used as a test platform for plant-man integrated experiments and material closed-loop experiments of the controlled ecological life support system (CELSS). Based on extensive planning, investigation, scheme design, and drawing design, the system was built through the steps of processing, installation, and joint debugging. The system has a volume of about 40.0 m3; its interior atmospheric parameters, such as temperature, relative humidity, oxygen concentration, carbon dioxide concentration, total pressure, lighting intensity, photoperiod, water content in the growing matrix, and ethylene concentration, are all monitored and controlled automatically and effectively. Its growing system consists of two rows of racks along its left and right sides, each of which holds two up-and-down layers; eight growing beds provide a total area of about 8.4 m2, and their vertical spacing can be adjusted automatically and independently; lighting sources consist of both red and blue light-emitting diodes. Successful development of the test platform will create an essential condition for the next large-scale integrated study of controlled ecological life support technique.

  10. Normalization and experimental design for ChIP-chip data

    PubMed Central

    Peng, Shouyong; Alekseyenko, Artyom A; Larschan, Erica; Kuroda, Mitzi I; Park, Peter J

    2007-01-01

    Background Chromatin immunoprecipitation on tiling arrays (ChIP-chip) has been widely used to investigate the DNA binding sites for a variety of proteins on a genome-wide scale. However, several issues in the processing and analysis of ChIP-chip data have not been resolved fully, including the effect of background (mock control) subtraction and normalization within and across arrays. Results The binding profiles of Drosophila male-specific lethal (MSL) complex on a tiling array provide a unique opportunity for investigating these topics, as it is known to bind on the X chromosome but not on the autosomes. These large bound and control regions on the same array allow clear evaluation of analytical methods. We introduce a novel normalization scheme specifically designed for ChIP-chip data from dual-channel arrays and demonstrate that this step is critical for correcting systematic dye-bias that may exist in the data. Subtraction of the mock (non-specific antibody or no antibody) control data is generally needed to eliminate the bias, but appropriate normalization obviates the need for mock experiments and increases the correlation among replicates. The idea underlying the normalization can be used subsequently to estimate the background noise level in each array for normalization across arrays. We demonstrate the effectiveness of the methods with the MSL complex binding data and other publicly available data. Conclusion Proper normalization is essential for ChIP-chip experiments. The proposed normalization technique can correct systematic errors and compensate for the lack of mock control data, thus reducing the experimental cost and producing more accurate results. PMID:17592629
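
    As a generic illustration of within-array normalization for dual-channel data (not the specific scheme proposed in the paper), the sketch below applies a lowess fit in MA space to remove an intensity-dependent dye bias; the simulated intensities and the lowess span are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical two-channel probe intensities with an intensity-dependent dye bias.
true_signal = rng.lognormal(mean=7.0, sigma=1.0, size=5000)
ch1 = true_signal * rng.lognormal(0.0, 0.2, 5000)                        # IP channel
ch2 = true_signal * rng.lognormal(0.0, 0.2, 5000) * true_signal ** 0.05  # biased control

M = np.log2(ch1 / ch2)               # log-ratio
A = 0.5 * np.log2(ch1 * ch2)         # average log-intensity

# A lowess fit of M on A captures the systematic intensity-dependent bias.
trend = sm.nonparametric.lowess(M, A, frac=0.3, return_sorted=False)
M_normalized = M - trend

print("raw mean |M|:       ", np.mean(np.abs(M)))
print("normalized mean |M|:", np.mean(np.abs(M_normalized)))
```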

  11. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  12. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  13. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  14. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  15. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is to use Monte Carlo simulation to compare several propensity score methods for approximating a factorial experimental design and to identify the best approaches for reducing bias and mean square error in parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  16. Model Selection in Systems Biology Depends on Experimental Design

    PubMed Central

    Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.

    2014-01-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483

  17. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…
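
    As a small, generic illustration of the two-level factorial analysis such an experiment teaches (the factor names and responses below are invented, not the article's data), this sketch builds a full 2^3 design matrix and estimates main effects and two-factor interactions by contrast averaging.

```python
import itertools
import numpy as np

factors = ["luminol_conc", "oxidant_conc", "catalyst_conc"]  # hypothetical factors
levels = [-1, +1]

# Full 2^3 factorial design matrix in coded levels.
design = np.array(list(itertools.product(levels, repeat=len(factors))))

# Hypothetical measured responses (e.g., emission duration in seconds) for the 8 runs.
response = np.array([12.0, 15.5, 13.1, 18.2, 20.4, 25.0, 22.8, 30.1])

def effect(contrast):
    """Average response at the +1 setting minus average response at the -1 setting."""
    return response[contrast == 1].mean() - response[contrast == -1].mean()

# Main effects.
for name, col in zip(factors, design.T):
    print(f"main effect {name}: {effect(col):+.2f}")

# Two-factor interactions (contrast = elementwise product of the two factor columns).
for (i, a), (j, b) in itertools.combinations(enumerate(factors), 2):
    print(f"interaction {a} x {b}: {effect(design[:, i] * design[:, j]):+.2f}")
```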

  18. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  19. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    ...computational tools have been inadequate. Our goal has been to develop new mathematical formulations, estimation approaches, and approximation strategies...previous suboptimal approaches. Subject terms: computational mathematics; optimal experimental design; uncertainty quantification; Bayesian inference. (Sponsoring agency: AFOSR/RTA; Program Officer: Jean-Luc Cambier, Computational Mathematics.)

  20. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. Results We performed the initial stages of characterizing our system

  1. Experimental techniques for evaluating steady-state jet engine performance in an altitude facility

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Young, C. Y.; Antl, R. J.

    1971-01-01

    Jet engine calibration tests were conducted in an altitude facility using a contoured bellmouth inlet duct, four fixed-area water-cooled exhaust nozzles, and an accurately calibrated thrust measuring system. Accurate determination of the airflow measuring station flow coefficient, the flow and thrust coefficients of the exhaust nozzles, and the experimental and theoretical terms in the nozzle gross thrust equation were among the objectives of the tests. A primary objective was to develop a technique to determine gross thrust for the turbojet engine used in this test that could also be used for future engine and nozzle evaluation tests. The probable error in airflow measurement was found to be approximately 0.6 percent at the bellmouth throat design Mach number of 0.6. The probable error in nozzle gross thrust measurement was approximately 0.6 percent at the load cell full-scale reading.

  2. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
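
    A classroom-scale sketch of the approach is shown below (the factors, responses, and level assignments are hypothetical): it uses the standard L4(2^3) orthogonal array, computes a larger-is-better signal-to-noise ratio for replicated runs, and averages it by factor level to rank factor influence.

```python
import numpy as np

# Standard Taguchi L4(2^3) orthogonal array (levels coded 1 and 2).
L4 = np.array([
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
])
factors = ["A", "B", "C"]  # hypothetical process factors

# Hypothetical replicated responses for each of the 4 runs (larger is better).
y = np.array([
    [21.0, 22.5],
    [26.3, 25.1],
    [24.0, 23.2],
    [30.5, 31.4],
])

# Larger-the-better signal-to-noise ratio: SN = -10*log10(mean(1/y^2)).
sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

# Average S/N ratio at each level of each factor; a bigger spread means a stronger factor.
for j, name in enumerate(factors):
    level_means = [sn[L4[:, j] == lvl].mean() for lvl in (1, 2)]
    print(f"factor {name}: level means {np.round(level_means, 2)}, "
          f"delta {abs(level_means[1] - level_means[0]):.2f}")
```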

  3. Development of Experimental Setup of Metal Rapid Prototyping Machine using Selective Laser Sintering Technique

    NASA Astrophysics Data System (ADS)

    Patil, S. N.; Mulay, A. V.; Ahuja, B. B.

    2016-08-01

    Unlike traditional manufacturing processes, additive manufacturing, as rapid prototyping, allows designers to produce parts that were previously considered too complex to make economically. The shift is taking place from plastic prototypes to fully functional metallic parts produced by direct deposition of metallic powders, as the produced parts can be used directly for the desired purpose. This work is directed towards the development of an experimental setup of a metal rapid prototyping machine using selective laser sintering and studies the various parameters which play an important role in metal rapid prototyping using the SLS technique. The machine structure is mainly divided into three main categories, namely (1) Z-movement of bed and table, (2) X-Y movement arrangement for LASER movements, and (3) the feeder mechanism. Z-movement of the bed is controlled by using a lead screw, bevel gear pair, and stepper motor, which will maintain the accuracy of the layer thickness. X-Y movements are controlled using timing belts and stepper motors for precise movements of the LASER source. A feeder mechanism is then developed to control the uniformity of the metal powder layer thickness. Simultaneously, a study is carried out for the selection of material. Various types of metal powders can be used for metal RP, such as a single metal powder, a mixture of two metal powders, or a combination of metal and polymer powders. The study concludes that a mixture of two metal powders should be used to minimize problems such as balling and porosity. The developed system can be validated by conducting various experiments on manufactured parts to check mechanical and metallurgical properties. After studying the results of these experiments, various process parameters such as LASER properties (power, speed, etc.) and material properties (grain size, structure, etc.) will be optimized. This work is mainly focused on the design and development of a cost-effective experimental setup for metal rapid prototyping using the SLS technique, which will give the feel of

  4. EXPERIMENTAL DESIGN OF A FLUID-CONTROLLED HOT GAS VALVE

    DTIC Science & Technology

    Effort is described toward development of a hot gas jet reaction valve utilizing boundary layer techniques to control a high-pressure, high-temperature gas stream. The result has been the successful design of a hot gas valve in a reaction control system utilizing fluid-controlled bi-stable

  5. New head gradient coil design and construction techniques

    PubMed Central

    Handler, William B; Harris, Chad T; Scholl, Timothy J; Parker, Dennis L; Goodrich, K Craig; Dalrymple, Brian; Van Sass, Frank; Chronik, Blaine A

    2013-01-01

    Purpose To design and build a head insert gradient coil to use in conjunction with body gradients for superior imaging. Materials and Methods The use of the Boundary Element Method to solve for a gradient coil wire pattern on an arbitrary surface has allowed us to incorporate engineering changes into the electromagnetic design of a gradient coil directly. Improved wire pattern design has been combined with robust manufacturing techniques and novel cooling methods. Results The finished coil had an efficiency of 0.15 mT/m/A in all three axes and allowed the imaging region to extend across the entire head and upper part of the neck. Conclusion The ability to adapt your electromagnetic design to necessary changes from an engineering perspective leads to superior coil performance. PMID:24123485

  6. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  7. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the computational simulation method development. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations to screen the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  8. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.
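
    For readers unfamiliar with the linear-quadratic machinery involved, the following minimal sketch shows a deterministic LQ regulator design for a generic low-order linear plant via the continuous algebraic Riccati equation; it is not the stochastic inlet controller of the paper, and the state-space matrices and weights are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state linear plant (a crude stand-in for an inlet shock-position model).
A = np.array([[0.0, 1.0],
              [-4.0, -0.8]])
B = np.array([[0.0],
              [1.0]])

# Quadratic weights on state deviation and control effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.5]])

# Solve the continuous-time algebraic Riccati equation and form the LQ state-feedback gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("LQ gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```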

  9. Alveolar Ridge Split Technique Using Piezosurgery with Specially Designed Tips

    PubMed Central

    Moro, Alessandro; Foresta, Enrico; Falchi, Marco; De Angelis, Paolo; D'Amato, Giuseppe; Pelo, Sandro

    2017-01-01

    The treatment of patients with atrophic ridge who need prosthetic rehabilitation is a common problem in oral and maxillofacial surgery. Among the various techniques introduced for the expansion of alveolar ridges with a horizontal bone deficit is the alveolar ridge split technique. The aim of this article is to give a description of some new tips that have been specifically designed for the treatment of atrophic ridges with transversal bone deficit. A two-step piezosurgical split technique is also described, based on specific osteotomies of the vestibular cortex and the use of a mandibular ramus graft as interpositional graft. A total of 15 patients were treated with the proposed new tips by our department. All the expanded areas were successful in providing an adequate width and height to insert implants according to the prosthetic plan and the proposed tips allowed obtaining the most from the alveolar ridge split technique and piezosurgery. These tips have made alveolar ridge split technique simple, safe, and effective for the treatment of horizontal and vertical bone defects. Furthermore the proposed piezosurgical split technique allows obtaining horizontal and vertical bone augmentation. PMID:28246596

  10. Design, data analysis and sampling techniques for clinical research.

    PubMed

    Suresh, Karthik; Thomas, Sanjeev V; Suresh, Geetha

    2011-10-01

    Statistical analysis is an essential technique that enables a medical research practitioner to draw meaningful inferences from data. Improper application of study design and data analysis may yield insufficient and improper results and conclusions. Converting a medical problem into a statistical hypothesis with appropriate methodological and logical design, and then back-translating the statistical results into relevant medical knowledge, is a real challenge. This article explains various sampling methods that can be appropriately used in medical research under different scenarios and challenges.
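
    As one concrete example of a sampling method of the kind the article covers, the sketch below compares simple random sampling with proportionally allocated stratified random sampling on a hypothetical patient list (the strata, sizes, and sample size are invented).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sampling frame: 1000 patients in three clinical strata.
strata = np.repeat(["mild", "moderate", "severe"], [600, 300, 100])
patient_ids = np.arange(strata.size)
n_sample = 50

# Simple random sample: every patient has the same chance of selection.
srs = rng.choice(patient_ids, size=n_sample, replace=False)

# Stratified sample with proportional allocation: sample separately within each stratum.
stratified = []
for s in np.unique(strata):
    ids = patient_ids[strata == s]
    k = round(n_sample * ids.size / patient_ids.size)
    stratified.extend(rng.choice(ids, size=k, replace=False))
stratified = np.array(stratified)

for name, sample in [("simple random", srs), ("stratified", stratified)]:
    counts = {s: int(np.sum(strata[sample] == s)) for s in np.unique(strata)}
    print(name, counts)
```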

  11. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  12. A Course in Heterogeneous Catalysis: Principles, Practice, and Modern Experimental Techniques.

    ERIC Educational Resources Information Center

    Wolf, Eduardo E.

    1981-01-01

    Outlines a multidisciplinary course which comprises fundamental, practical, and experimental aspects of heterogeneous catalysis. The course structure is a combination of lectures and demonstrations dealing with the use of spectroscopic techniques for surface analysis. (SK)

  13. New Design of Control and Experimental System of Windy Flap

    NASA Astrophysics Data System (ADS)

    Yu, Shanen; Wang, Jiajun; Chen, Zhangping; Sun, Weihua

    Experiments associated with control principles for automation majors are generally based on MATLAB simulation and are not well integrated with physical control objects. The experimental system aims to meet teaching and studying requirements and to provide an experimental platform for learning the principles of automatic control, MCU programming, embedded systems, etc. The main research contents comprise the design of the angle-measurement module, the control and drive module, and the PC software. An MPU6050 sensor was used for angle measurement, a PID control algorithm was used to drive the flap to the target angle, and the PC software was used for display, analysis, and processing.
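
    To illustrate the kind of control loop such a platform exercises, here is a minimal discrete PID position loop driving a simple first-order flap model to a target angle; the plant model, gains, and sample time are hypothetical, not those of the actual rig.

```python
# Hypothetical first-order flap model: angle_dot = (K_plant * u - angle) / tau
K_plant, tau, dt = 2.0, 0.5, 0.01
Kp, Ki, Kd = 4.0, 2.0, 0.1           # hypothetical PID gains
target = 30.0                         # target flap angle [deg]

angle, integral, prev_error = 0.0, 0.0, target
history = []
for step in range(int(3.0 / dt)):     # simulate 3 seconds
    error = target - angle
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative   # PID control signal
    prev_error = error
    # Integrate the plant one step (forward Euler).
    angle += dt * (K_plant * u - angle) / tau
    history.append(angle)

print("final flap angle:", round(history[-1], 2), "deg")
```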

  14. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  15. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  16. Contact angle hysteresis on polymer substrates established with various experimental techniques, its interpretation, and quantitative characterization.

    PubMed

    Bormashenko, Edward; Bormashenko, Yelena; Whyman, Gene; Pogreb, Roman; Musin, Albina; Jager, Rachel; Barkay, Zahava

    2008-04-15

    The effect of contact angle hysteresis (CAH) was studied on various polymer substrates with traditional and new experimental techniques. The new experimental technique presented in the article is based on slow deformation of the droplet; thus CAH is studied at constant drop volume, in contrast to existing techniques in which the volume of the drop is changed during the measurement. The energy of hysteresis was calculated in the framework of the improved Extrand approach. The advancing contact angle established with the new technique is in good agreement with that measured with the needle-syringe method. The receding angles measured with three experimental techniques demonstrated a very significant discrepancy. The force pinning the triple line, which is responsible for hysteresis, was calculated.

  17. Improved Design Techniques for Switched-Capacitor Ladder Filters.

    NASA Astrophysics Data System (ADS)

    Hsu, Teng-Hsien

    Using new developments in MOS technology, switched-capacitor filters, which consist of operational amplifiers, capacitors, and switches in monolithic form, were widely investigated and put into practical forms. Switched-capacitor ladder filters are derived from doubly terminated reactance two-ports. The main part of this dissertation is aimed at improving the efficiency and eliminating some shortcomings of the bilinear design technique. Two novel input stages which incorporate the necessary sample-and-hold function into the bilinear ladder filters are presented. The circuits are insensitive to parasitic capacitances. Some techniques to reduce the number of operational amplifiers for bilinear switched-capacitor ladder filters are given. The number of top-plate parasitic-sensitive capacitors is less than in any of the existing design techniques. The clock feedthrough effects of pseudo-N-path switched-capacitor filters using lowpass filters as path filters are eliminated by an improved technique that doubles the number of operational amplifiers. Two-phase pseudo-N-path switched-capacitor filters can be obtained by tripling the number of operational amplifiers. A design technique for extending bilinear lowpass switched-capacitor ladder filters from odd orders to even orders is presented. One of the factors limiting the speed of bilinear switched-capacitor ladder filters is delay-free loops. Techniques for breaking delay-free loops of low-order switched-capacitor filters are introduced. Digital ladder filters can be obtained from those switched-capacitor filters without delay-free loops. Numerical examples are given to compare the following digital filters: general cascade realization, wave digital filter, and the digital filters derived from switched-capacitor filters (cascade and ladder). An improved high-speed switched-capacitor linear interpolator and nonlinear interpolators are described. The circuits are completely parasitic-insensitive. Two
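
    As a conceptual illustration of the bilinear mapping that underlies these designs (a generic continuous-to-discrete filter example, not the dissertation's switched-capacitor circuits), the sketch below converts an analog low-pass prototype to a discrete-time filter with the bilinear transform; the cutoff frequency, order, and sampling rate are hypothetical.

```python
import numpy as np
from scipy import signal

fs = 8000.0          # sampling rate [Hz], hypothetical
fc = 1000.0          # analog prototype cutoff [Hz], hypothetical

# Third-order analog Butterworth low-pass prototype (cutoff in rad/s).
b_analog, a_analog = signal.butter(3, 2 * np.pi * fc, btype="low", analog=True)

# Bilinear transform maps the s-plane prototype to the z-plane.
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs=fs)

# Check the discrete response near the cutoff (frequency warping shifts it slightly).
w, h = signal.freqz(b_digital, a_digital, worN=1024, fs=fs)
idx = np.argmin(np.abs(w - fc))
print("gain near fc:", 20 * np.log10(abs(h[idx])), "dB")
```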

  18. Wireless Body Area Network (WBAN) design techniques and performance evaluation.

    PubMed

    Khan, Jamil Yusuf; Yuce, Mehmet R; Bulger, Garrick; Harding, Benjamin

    2012-06-01

    In recent years, interest in the application of Wireless Body Area Networks (WBANs) for patient monitoring applications has grown significantly. A WBAN can be used to develop patient monitoring systems which offer flexibility to medical staff and mobility to patients. Patient monitoring could involve a range of activities including data collection from various body sensors for storage and diagnosis, transmission of data to remote medical databases, and control of medical appliances. Also, WBANs could operate in an interconnected mode to enable remote patient monitoring using telehealth/e-health applications. A WBAN can also be used to monitor athletes' performance and assist them in training activities. For such applications it is very important that a WBAN collects and transmits data reliably and in a timely manner to a monitoring entity. In order to address these issues, this paper presents WBAN design techniques for medical applications. We examine the WBAN design issues with particular emphasis on the design of MAC protocols and power consumption profiles of WBANs. Some simulation results are presented to further illustrate the performance of various WBAN design techniques.

  19. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter boom interferometer to operate in the FIR (30-90 micron) on a high altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  20. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.
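
    For context on how absorber parameters can be tuned, here is a sketch of the classical Den Hartog tuning rules for a passive dynamic absorber attached to a single dominant mode; it is not the active, acceleration-feedback absorber described above, and the mass and frequency values are hypothetical.

```python
import numpy as np

# Hypothetical primary structure: one dominant mode.
m_primary = 100.0        # modal mass [kg]
f_primary = 2.0          # modal frequency [Hz]
m_absorber = 5.0         # absorber mass [kg]

mu = m_absorber / m_primary                              # mass ratio
f_ratio = 1.0 / (1.0 + mu)                               # Den Hartog optimal tuning ratio
zeta_opt = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal absorber damping ratio

f_absorber = f_ratio * f_primary                          # absorber natural frequency [Hz]
k_absorber = m_absorber * (2 * np.pi * f_absorber) ** 2   # absorber stiffness [N/m]
c_absorber = 2 * zeta_opt * m_absorber * (2 * np.pi * f_absorber)  # damping [N*s/m]

print(f"tuning ratio: {f_ratio:.3f}, damping ratio: {zeta_opt:.3f}")
print(f"k = {k_absorber:.1f} N/m, c = {c_absorber:.1f} N*s/m")
```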

  1. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    SciTech Connect

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coil, the shielding, cooling requirements, and heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  2. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  3. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of required measurements and subjects to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem so far prevents widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation.
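
    To make the D-optimality idea concrete, the sketch below evaluates the determinant of a (normalized) Fisher information matrix for a four-parameter log-logistic model over candidate dose sets and picks the best subset by exhaustive search; the assumed parameter values, dose grid, equal allocation, and numerical derivatives are illustrative simplifications, not the paper's method.

```python
import itertools
import numpy as np

def loglogistic4(x, theta):
    """Four-parameter log-logistic: lower + (upper - lower) / (1 + (x/ec50)**slope)."""
    lower, upper, ec50, slope = theta
    return lower + (upper - lower) / (1.0 + (x / ec50) ** slope)

def gradient(x, theta, eps=1e-6):
    """Numerical gradient of the model response with respect to the parameters."""
    g = np.zeros(len(theta))
    for i in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += eps
        tm[i] -= eps
        g[i] = (loglogistic4(x, tp) - loglogistic4(x, tm)) / (2 * eps)
    return g

def log_det_information(doses, theta):
    """log det of the Fisher information for equal allocation across the given doses."""
    M = sum(np.outer(gradient(x, theta), gradient(x, theta)) for x in doses)
    sign, logdet = np.linalg.slogdet(M / len(doses))
    return logdet if sign > 0 else -np.inf

# Hypothetical prior guess of the dose-response parameters (lower, upper, EC50, slope).
theta_guess = (0.1, 1.0, 10.0, 2.0)
candidate_doses = [0.5, 1, 2, 5, 10, 20, 50, 100]

# D-optimal search: pick the 4-dose subset of the candidate grid with the largest log det.
best = max(itertools.combinations(candidate_doses, 4),
           key=lambda d: log_det_information(d, theta_guess))
print("best 4-dose design:", best)
```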

  4. Design and Experimental Results for the S414 Airfoil

    DTIC Science & Technology

    2010-08-01

    of most current general-aviation aircraft, including business jets, as well as unmanned aerial vehicles and all sailplanes. It does, however... (RDECOM TR 10-D-112, U.S. Army Research, Development and Engineering Command. Title: Design and Experimental Results for the S414 Airfoil. Authors: Dan M. Somers and Mark D. Maughmer, Airfoils, Incorporated, 122 Rose Drive, Port Matilda, PA 16870-7535. Date: August 2010, final report.)

  5. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career, also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  6. Techniques for Conducting Effective Concept Design and Design-to-Cost Trade Studies

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2015-01-01

    Concept design plays a central role in project success as its product effectively locks the majority of system life cycle cost. Such extraordinary leverage presents a business case for conducting concept design in a credible fashion, particularly for first-of-a-kind systems that advance the state of the art and that have high design uncertainty. A key challenge, however, is to know when credible design convergence has been achieved in such systems. Using a space system example, this paper characterizes the level of convergence needed for concept design in the context of technical and programmatic resource margins available in preliminary design and highlights the importance of design and cost evaluation learning curves in determining credible convergence. It also provides techniques for selecting trade study cases that promote objective concept evaluation, help reveal unknowns, and expedite convergence within the trade space and conveys general practices for conducting effective concept design-to-cost studies.

  7. Design and experimental study of a novel giant magnetostrictive actuator

    NASA Astrophysics Data System (ADS)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Dongwei; Huang, Yingjie; Xie, Wenqiang

    2016-12-01

    Giant magnetostrictive actuators have been widely used in precision driving applications because of their excellent performance. However, in driving a switching valve, especially the ball valve in an electronically controlled injector, the actuator cannot exhibit its full performance because of limits in output displacement and response speed. A novel giant magnetostrictive actuator, which can reach its maximum displacement without an applied bias magnetic field, is designed in this paper. At the same time, elongation of the giant magnetostrictive material is converted into shortening of the actuator's axial dimension with the help of a T-shaped output rod. Furthermore, to reduce the response time, a driving voltage with a high opening level and a low holding level is designed. Response time and output displacement are studied experimentally with the help of a measuring system. The measured results show that the designed driving voltage improves the response speed of the actuator displacement quite effectively, and that the giant magnetostrictive actuator can output various steady-state displacements to achieve a range of driving effects.

  8. Design of vibration isolation systems using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
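
    The record above lists several scalarization strategies; the sketch below illustrates just one of them, the global criterion method, on a deliberately simplified single-degree-of-freedom isolator excited by an exponentially decaying base displacement. The mass, disturbance parameters, and design bounds are illustrative assumptions, and the paper's three-degree-of-freedom system is reduced to one degree of freedom purely to keep the example short.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

MASS, Y0, DECAY, T_END = 1.0, 0.01, 5.0, 5.0   # kg, m, 1/s, s (assumed values)

def objectives(design):
    """Integrated squared transmitted force and squared relative displacement."""
    k, c = design
    base = lambda t: Y0 * np.exp(-DECAY * t)
    base_dot = lambda t: -DECAY * Y0 * np.exp(-DECAY * t)

    def rhs(t, state):
        x, v = state
        force = k * (x - base(t)) + c * (v - base_dot(t))
        return [v, -force / MASS]

    sol = solve_ivp(rhs, (0.0, T_END), [0.0, 0.0], dense_output=True, max_step=0.01)
    t = np.linspace(0.0, T_END, 2001)
    x, v = sol.sol(t)
    rel = x - base(t)
    force = k * rel + c * (v - base_dot(t))
    dt = t[1] - t[0]
    return np.sum(force ** 2) * dt, np.sum(rel ** 2) * dt

bounds = [(50.0, 5000.0), (1.0, 200.0)]        # stiffness [N/m], damping [N s/m]
x0 = [500.0, 20.0]

# Step 1: individual minima of the two performance indices.
f1_star = minimize(lambda d: objectives(d)[0], x0, method="Powell", bounds=bounds).fun
f2_star = minimize(lambda d: objectives(d)[1], x0, method="Powell", bounds=bounds).fun

# Step 2: global criterion -- minimize the summed squared relative deviations
# of each index from its own optimum.
def global_criterion(d):
    j1, j2 = objectives(d)
    return ((j1 - f1_star) / f1_star) ** 2 + ((j2 - f2_star) / f2_star) ** 2

best = minimize(global_criterion, x0, method="Powell", bounds=bounds)
print("compromise design (stiffness, damping):", np.round(best.x, 1))
print("performance indices at compromise:", [f"{v:.3e}" for v in objectives(best.x)])
```

    The same two simulated indices could equally be fed to the other scalarizations the paper compares (utility function, goal programming, game theory, and so on) by swapping out the final objective.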

  9. Structural design and fabrication techniques of composite unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hunt, Daniel Stephen

    The popularity of unmanned aerial vehicles has grown substantially in recent years, both in the private sector and for government functions. This growth can be attributed largely to the increased performance of the technology that controls these vehicles, as well as the decreasing cost and size of this technology. What is sometimes forgotten, though, is that research into and advancement of the airframes themselves are equally as important as what is done with them. With current computer-aided design programs, the limits of design optimization can be pushed further than ever before, resulting in lighter and faster airframes that can achieve longer endurances, higher altitudes, and more complex missions. However, realization of a paper design is still limited by the physical restrictions of the real world and the structural constraints associated with it. The purpose of this paper is not only to step through current design and manufacturing processes of composite UAVs at Oklahoma State University, but also to focus on composite spars, utilizing and relating both calculated and empirical data. Most of the experience gained for this thesis was from the Cessna Longitude project. The Longitude is a 1/8-scale flying demonstrator that Oklahoma State University constructed for Cessna. For the project, Cessna required dynamic flight data for their design process in order to make their 2017 release date. Oklahoma State University was privileged to assist Cessna with the mission of supporting the validation of the design of their largest business jet to date. This paper will detail the steps of the fabrication process used in construction of the Longitude, as well as several other projects, beginning with structural design, machining, molding, and skin layup, and ending with final assembly. Attention will also be paid specifically to spar design and testing in an effort to ease the design phase. This document is intended to act not only as a further development of current

  10. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  11. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  12. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  13. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  14. Technological issues and experimental design of gene association studies.

    PubMed

    Distefano, Johanna K; Taverna, Darin M

    2011-01-01

    Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.

  15. Finite element analysis and experimental verification of multilayered tissue characterization using the thermal technique.

    PubMed

    Kharalkar, Nachiket M; Valvano, Jonathan W

    2006-01-01

    The objective of this research is to develop noninvasive techniques to determine thermal properties of layered biologic structures based on measurements from the surface. The self-heated thermistor technique is evaluated both numerically and experimentally. The finite element analyses, which confirm the experimental results, are used to study the temperature profiles occurring in the thermistor-tissue system. An in vitro tissue model was constructed by placing Teflon of varying thickness between the biologic tissue and the self-heated thermistor. The experiments were performed using two different-sized thermistors on six tissue samples. A self-heated thermistor was used to determine the thermal conductivity of tissue covered by a thin layer of Teflon. The results from the experimental data clearly indicate that this technique can penetrate below thin layers of Teflon and thus is sensitive to the thermal properties of the underlying tissue. The factors which may introduce error in the experimental data are (i) poor thermal/physical contact between the thermistor probe and tissue sample, and (ii) water loss from tissue during the course of experimentation. The finite element analysis was used to simulate the experimental conditions and to calculate the transient temperature profile generated by the thermistor bead. The results of the finite element analysis are in accordance with the experimental data.

  16. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of an experimental design multivariate strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximization of the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data of the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug anaesthetic effect with respect to the starting reference liposomal formulation, thus demonstrating its better therapeutic effectiveness.
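
    The second, response-surface step of the strategy above can be sketched with ordinary least squares: fit a full quadratic model in the two quantitative factors (ethanol percentage and hydration volume, in coded units) and locate the predicted optimum on a grid. The factor settings follow the usual two-factor Doehlert layout, but the response values below are synthetic placeholders, not the measurements from this study.

```python
import numpy as np

# Coded Doehlert settings for two factors (x1 = % ethanol, x2 = hydration volume)
# and a hypothetical permeation response used purely for illustration.
x1 = np.array([0.0, 1.0, 0.5, -0.5, -1.0, -0.5, 0.5])
x2 = np.array([0.0, 0.0, 0.866, 0.866, 0.0, -0.866, -0.866])
y = np.array([62.0, 55.0, 58.0, 66.0, 60.0, 52.0, 49.0])

# Full second-order model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
design_matrix = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(design_matrix, y, rcond=None)

# Evaluate the fitted surface on a grid and report the predicted best setting.
g1, g2 = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
grid = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                        (g1 * g2).ravel(), (g1**2).ravel(), (g2**2).ravel()])
pred = grid @ coef
best = np.argmax(pred)
print("predicted optimum (coded x1, x2):",
      round(float(g1.ravel()[best]), 2), round(float(g2.ravel()[best]), 2))
print("predicted response:", round(float(pred[best]), 1))
```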

  17. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm-thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDSs. However, DDSs cannot stop large, fast-moving fragments. We have experimentally demonstrated one shrapnel mitigation technique, showing that it is possible to direct fast-moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDSs. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  18. An Experimental Evaluation of the Effectiveness of Selected Techniques and Resources on Instruction in Vocational Agriculture.

    ERIC Educational Resources Information Center

    Kahler, Alan A.

    The study was designed to test new instructional techniques in vocational agriculture, determine their effectiveness on student achievement, and compare individual and group instructional techniques. Forty-eight randomly chosen Iowa high school vocational agriculture programs with enrollments of 35 students or more were selected for testing the…

  19. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rely on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  20. Considerations in Writing About Single-Case Experimental Design Studies.

    PubMed

    Skolasky, Richard L

    2016-12-01

    Single-case experimental design (SCED) studies are particularly useful for examining the processes and outcomes of psychological and behavioral studies. Accurate reporting of SCED studies is critical in explaining the study to the reader and allowing replication. This paper outlines important elements that authors should cover when reporting the results of a SCED study. Authors should provide details on the participant, independent and dependent variables under examination, materials and procedures, and data analysis. Particular emphasis should be placed on justifying the assumptions made and explaining how violations of these assumptions may alter the results of the SCED study.

  1. Designing artificial enzymes from scratch: Experimental study and mesoscale simulation

    NASA Astrophysics Data System (ADS)

    Komarov, Pavel V.; Zaborina, Olga E.; Klimova, Tamara P.; Lozinsky, Vladimir I.; Khalatur, Pavel G.; Khokhlov, Alexey R.

    2016-09-01

    We present a new concept for designing biomimetic analogs of enzymatic proteins; these analogs are based on the synthetic protein-like copolymers. α-Chymotrypsin is used as a prototype of the artificial catalyst. Our experimental study shows that in the course of free radical copolymerization of hydrophobic and hydrophilic monomers the target globular nanostructures of a "core-shell" morphology appear in a selective solvent. Using a mesoscale computer simulation, we show that the protein-like globules can have a large number of catalytic centers located at the hydrophobic core/hydrophilic shell interface.

  2. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates, for the first time, the combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using a d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and by the mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, with the ANN model exhibiting better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged.
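
    As a rough illustration of the kind of modeling pipeline the abstract describes, the sketch below fits a small multilayer perceptron to ternary mixture proportions and predicts a dissolution response on held-out points. The mixture grid, the network size, and the data are illustrative assumptions; in particular the response values are synthetic stand-ins for measured Q10 release, not results from the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Candidate ternary mixtures (drug, polymeric carrier, surfactant) on a simplex grid,
# with each component constrained to at least 10% of the mixture.
grid = [(a, b, 1.0 - a - b)
        for a in np.linspace(0.1, 0.6, 11)
        for b in np.linspace(0.1, 0.8, 15)
        if 1.0 - a - b >= 0.1]
X = np.array(grid)

# Hypothetical response: release after 10 min, higher with less drug and more
# surfactant, plus measurement noise -- a stand-in for experimental Q10 values.
q10 = 20.0 + 60.0 * X[:, 2] - 30.0 * X[:, 0] + rng.normal(0.0, 2.0, len(X))

X_train, X_test, y_train, y_test = train_test_split(X, q10, test_size=0.3,
                                                    random_state=0)

model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

print("test R^2:", round(model.score(X_test, y_test), 3))
best = X[np.argmax(model.predict(X))]
print("mixture with highest predicted release (drug, carrier, surfactant):",
      np.round(best, 2))
```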

  3. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and the capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  4. Design of vibration compensation interferometer for Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Li, G. S.; Liu, H. Q.; Jie, Y. X.; Ding, W. X.; Brower, D. L.; Zhu, X.; Wang, Z. X.; Zeng, L.; Zou, Z. Y.; Wei, X. C.; Lan, T.

    2014-11-01

    A vibration compensation interferometer (wavelength at 0.532 μm) has been designed and tested for the Experimental Advanced Superconducting Tokamak (EAST). It is designed as a sub-system for the EAST far-infrared (wavelength at 432.5 μm) polarimeter/interferometer system. Two acousto-optic modulators have been applied to produce the 1 MHz intermediate frequency. The path length drift of the system is lower than 2 wavelengths within a 10 min test, showing the system stability. The system sensitivity has been tested by applying a periodic vibration source to one mirror in the system. The vibration is measured and the result matches the source period. The system is expected to be installed on EAST by the end of 2014.

  5. Laser induced deflection technique for absolute thin film absorption measurement: optimized concepts and experimental results

    SciTech Connect

    Muehlig, Christian; Kufert, Siegfried; Bublitz, Simon; Speck, Uwe

    2011-03-20

    Using experimental results and numerical simulations, two measuring concepts of the laser induced deflection (LID) technique are introduced and optimized for absolute thin film absorption measurements from deep ultraviolet to IR wavelengths. For transparent optical coatings, a particular probe beam deflection direction allows the absorption measurement with virtually no influence of the substrate absorption, yielding improved accuracy compared to the common techniques of separating bulk and coating absorption. For high-reflection coatings, where substrate absorption contributions are negligible, a different probe beam deflection is chosen to achieve a better signal-to-noise ratio. Various experimental results for the two different measurement concepts are presented.

  6. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τpulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using Design of Experiments (DOE) methods: the Taguchi methodology and a response surface methodology (RSM). A design is first created using the MINITAB program, and then the laser marking process is performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed and the RSM model is developed and verified for predicting the process output for the given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  7. Logical Experimental Design and Execution in the Biomedical Sciences.

    PubMed

    Holder, Daniel J; Marino, Michael J

    2017-03-17

    Lack of reproducibility has been highlighted as a significant problem in biomedical research. The present unit is devoted to describing ways to help ensure that research findings can be replicated by others, with a focus on the design and execution of laboratory experiments. Essential components for this include clearly defining the question being asked, using available information or information from pilot studies to aid in the design of the experiment, and choosing manipulations under a logical framework based on Mill's "methods of knowing" to build confidence in putative causal links. Final experimental design requires systematic attention to detail, including the choice of controls, sample selection, blinding to avoid bias, and the use of power analysis to determine the sample size. Execution of the experiment is done with care to ensure that the independent variables are controlled and the measurements of the dependent variables are accurate. While there are always differences among laboratories with respect to technical expertise, equipment, and suppliers, execution of the steps itemized in this unit will ensure well-designed and well-executed experiments to answer any question in biomedical research. © 2017 by John Wiley & Sons, Inc.
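
    The unit above names power analysis as the standard way to justify a sample size; a minimal sketch of that single step for a two-group comparison, using statsmodels, is shown below. The effect size, significance level, and target power are placeholder values that would be replaced with estimates from pilot data or the literature.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs: a medium standardized effect size (Cohen's d = 0.5),
# a 5% two-sided significance level, and 80% desired power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative="two-sided")
print(f"required sample size per group: {n_per_group:.1f}")  # roughly 64
```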

  8. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X. A.

    2011-12-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for sources and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has been proved to be an efficient technique for examining a wide range of possible surveys and selecting the one that gives superior resolution. Each particular design needs to be quantified. Different quantities have been used to estimate the "goodness" of a model, most of them being sensitive to the eigenvalues of the corresponding inversion problem. Here we show a comparison of results obtained using different objective functions. Then, we simulate a CSEM survey with a realistic 1D structure and discuss the optimum recording parameters determined by our method.

  9. Silicone Rubber Superstrate Loaded Patch Antenna Design Using Slotting Technique

    NASA Astrophysics Data System (ADS)

    Kaur, Bhupinder; Saini, Garima; Saini, Ashish

    2016-09-01

    For the protection of an antenna from external environmental conditions, the antenna should be covered with a stable, non-reactive, highly durable and weather-resistant material which is insensitive to the changing external environment. Hence, in this paper silicone rubber is proposed as a protective superstrate layer for a patch antenna. The electrical properties of the silicone rubber sealant are found experimentally, and the effect of using it as a superstrate on a coaxially fed microstrip patch antenna designed with the transmission line model is observed. The overall performance is degraded slightly after the use of the superstrate. To further improve the performance of the superstrate-loaded antenna, patch slots and ground defects are proposed. The proposed design achieves a wide band of 790 MHz (13.59 %), a gain of 7.12 dB, a VSWR of 1.12 and an efficiency of 83.02 %.

  10. Analysis techniques for the design of thermoplastic bumpers

    SciTech Connect

    Nimmer, R.P.; Bailey, O.A.; Paro, T.W.

    1987-01-01

    Increasingly, thermoplastic resins are being applied to automotive components which require structural performance. The work reported in this paper summarizes an ongoing effort to develop efficient mechanical technology for application in the design of thermoplastic bumpers. The technology development has included identification of material properties, investigation of basic component behavior, and finally the development of an automated system of analysis. A basic question often posed with regard to the analysis of structural components made of thermoplastics is whether the appropriate material properties are available and whether available analysis procedures can be applied accurately. This question was addressed through a program of fundamental material characterization, followed by structural component analysis. The analysis was then compared to test results from a parallel experimental program.

  11. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating reactor accident, can be achieved by spraying water drops. The spray reduces the pressure and temperature levels by condensation of steam on cold water drops. The most stringent thermodynamic conditions are a pressure of 5 × 10⁵ Pa (due to steam emission) and a temperature of 413 K. In addition to its energy dissipation function, the spray leads to the washout of fission product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments in order to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed with fundamental criteria in mind: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of the criteria given previously on the design, and the necessity of being representative of the real conditions, will be described.

  12. Experimental analysis of viscous and material damping in microstructures through the interferometric microscopy technique with climatic chamber

    NASA Astrophysics Data System (ADS)

    De Pasquale, Giorgio

    2013-09-01

    This study describes an experimental analysis of energy dissipation due to damping sources in microstructures and micro-electromechanical systems (MEMS) components using interferometric microscopy techniques. Viscous damping caused by the surrounding air (squeeze film damping) and material damping are measured using variable geometrical parameters of samples and under different environmental conditions. The equipment included a self-made climatic chamber which was used to modify the surrounding air pressure. Results show the relationship between damping coefficients and sample geometry caused by variation in airflow resistance and the relationship between quality factor and air pressure. The experimental results will provide a useful data source for validating analytic models and calibrating simulations. A thorough discussion about interferometry applied to experimental mechanics of MEMS will also contribute to the reduction of the knowledge gap between specialists in optical methods and microsystem designers.

  13. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
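
    The prediction-deviation idea described above can be sketched in a few lines for a toy model: find two parameter sets that both fit the observed data almost as well as the best fit while disagreeing as much as possible about an unobserved condition. The exponential-decay model, the synthetic data, and the 5% fit tolerance below are illustrative assumptions, not the viral-infection model analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def model(t, p):
    """Toy dynamical response: amplitude * exp(-rate * t)."""
    amplitude, rate = p
    return amplitude * np.exp(-rate * t)

t_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = model(t_obs, (2.0, 0.7)) + rng.normal(0.0, 0.05, t_obs.size)  # synthetic data
t_new = 6.0                                    # unobserved condition of interest

def sse(p):
    return float(np.sum((model(t_obs, p) - y_obs) ** 2))

best = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
tolerance = 1.05 * best.fun                    # both models must fit within 5% of optimum

def neg_deviation(q):
    """Negative absolute disagreement between the two candidate models at t_new."""
    return -abs(model(t_new, q[:2]) - model(t_new, q[2:]))

constraints = [                                # both halves of q must remain good fits
    {"type": "ineq", "fun": lambda q: tolerance - sse(q[:2])},
    {"type": "ineq", "fun": lambda q: tolerance - sse(q[2:])},
]
start = np.concatenate([best.x, best.x * 1.01])
res = minimize(neg_deviation, start, method="SLSQP", constraints=constraints)

print("best-fit SSE:", round(best.fun, 4))
print("prediction deviation at t = 6:", round(-res.fun, 4))
```

    A large deviation at t_new signals that the existing data do not constrain that prediction, which is exactly the situation in which the paper's method would recommend measuring there next.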

  14. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.

  15. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  16. Improved Titanium Billet Inspection Sensitivity through Optimized Phased Array Design, Part I: Design Technique, Modeling and Simulation

    SciTech Connect

    Lupien, Vincent; Hassan, Waled

    2006-03-06

    Reductions in the beam diameter and pulse duration of focused ultrasound for titanium inspections are believed to result in a signal-to-noise ratio improvement for embedded defect detection. It has been inferred from this result that detection limits could be extended to smaller defects through a larger diameter, higher frequency transducer resulting in a reduced beamwidth and pulse duration. Using Continuum Probe Designer™ (Pat. Pending), a transducer array was developed for full coverage inspection of 8 inch titanium billets. The main challenge in realizing a large aperture phased array transducer for billet inspection is ensuring that the number of elements remains within the budget allotted by the driving electronics. The optimization technique implemented by Continuum Probe Designer™ yields an array with twice the aperture but the same number of elements as existing phased arrays for the same application. The unequal area element design was successfully manufactured and validated both numerically and experimentally. Part I of this two-part series presents the design, simulation and modeling steps, while Part II presents the experimental validation and comparative study to multizone.

  17. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
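
    One common way to make the quoted type of clearance statement precise is the standard binomial argument: if n randomly placed samples are all clean, then with confidence X one may assert that at least a fraction Y of locations is uncontaminated provided n ≥ ln(1 − X) / ln(Y). The sketch below simply evaluates that textbook rule; it is a generic approximation for illustration, not the specific statistical machinery used in the INL test.

```python
import math

def clean_samples_needed(confidence, clean_fraction):
    """Number of all-clean random samples needed to state, with the given
    confidence, that at least `clean_fraction` of locations are uncontaminated."""
    return math.ceil(math.log(1.0 - confidence) / math.log(clean_fraction))

for conf, frac in [(0.95, 0.99), (0.95, 0.95), (0.99, 0.99)]:
    print(f"{conf:.0%} confidence that >= {frac:.0%} is clean: "
          f"{clean_samples_needed(conf, frac)} samples")
```

    For example, 95% confidence that at least 99% of a floor is uncontaminated requires roughly 299 clean samples under this rule, which illustrates why judgmental and Bayesian strategies that exploit prior information are attractive.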

  18. Experimental design in phylogenetics: testing predictions from expected information.

    PubMed

    San Mauro, Diego; Gower, David J; Cotton, James A; Zardoya, Rafael; Wilkinson, Mark; Massingham, Tim

    2012-07-01

    Taxon and character sampling are central to phylogenetic experimental design; yet, we lack general rules. Goldman introduced a method to construct efficient sampling designs in phylogenetics, based on the calculation of expected Fisher information given a probabilistic model of sequence evolution. The considerable potential of this approach remains largely unexplored. In an earlier study, we applied Goldman's method to a problem in the phylogenetics of caecilian amphibians and made an a priori evaluation and testable predictions of which taxon additions would increase information about a particular weakly supported branch of the caecilian phylogeny by the greatest amount. We have now gathered mitogenomic and rag1 sequences (some newly determined for this study) from additional caecilian species and studied how information (both expected and observed) and bootstrap support vary as each new taxon is individually added to our previous data set. This provides the first empirical test of specific predictions made using Goldman's method for phylogenetic experimental design. Our results empirically validate the top 3 (more intuitive) taxon addition predictions made in our previous study, but only information results validate unambiguously the 4th (less intuitive) prediction. This highlights a complex relationship between information and support, reflecting that each measures different things: Information is related to the ability to estimate branch length accurately and support to the ability to estimate the tree topology accurately. Thus, an increase in information may be correlated with but does not necessitate an increase in support. Our results also provide the first empirical validation of the widely held intuition that additional taxa that join the tree proximal to poorly supported internal branches are more informative and enhance support more than additional taxa that join the tree more distally. Our work supports the view that adding more data for a single (well

  19. Interplanetary mission design techniques for flagship-class missions

    NASA Astrophysics Data System (ADS)

    Kloster, Kevin W.

    Trajectory design, given the current level of propulsive technology, requires knowledge of orbital mechanics, computational resources, extensive use of tools such as gravity-assist and V infinity leveraging, as well as insight and finesse. Designing missions that deliver a capable science package to a celestial body of interest that are robust and affordable is a difficult task. Techniques are presented here that assist the mission designer in constructing trajectories for flagship-class missions in the outer Solar System. These techniques are applied in this work to spacecraft that are currently in flight or in the planning stages. By escaping the Saturnian system, the Cassini spacecraft can reach other destinations in the Solar System while satisfying planetary quarantine. The patched-conic method was used to search for trajectories that depart Saturn via gravity assist at Titan. Trajectories were found that fly by Jupiter to reach Uranus or Neptune, capture at Jupiter or Neptune, escape the Solar System, fly by Uranus during its 2049 equinox, or encounter Centaurs. A "grand tour," which visits Jupiter, Uranus, and Neptune, departs Saturn in 2014. New tools were built to search for encounters with Centaurs, small Solar System bodies between the orbits of Jupiter and Neptune, and to minimize the DeltaV to target these encounters. Cassini could reach Chiron, the first-discovered Centaur, in 10.5 years after a 2022 Saturn departure. For a Europa Orbiter mission, the strategy for designing Jovian System tours that include Io flybys differs significantly from schemes developed for previous versions of the mission. Assuming that the closest approach distance of the incoming hyperbola at Jupiter is below the orbit of Io, then an Io gravity assist gives the greatest energy pump-down for the least decrease in perijove radius. Using Io to help capture the spacecraft can increase the savings in Jupiter orbit insertion DeltaV over a Ganymede-aided capture. The tour design is

  20. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems already known as rainwater infiltration techniques which reduce runoff formation and diffuse pollution in cities. The present research is focused on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to allow measurement of the level and quality of the stored water over time. Six different pervious surfaces are combined with four different geotextiles in order to test which combinations of materials best preserve the quality of the stored rainwater over time under the specific weather conditions of the north of Spain. The aim of this research was to obtain pervious pavements that perform well, simultaneously offering a positive urban service and helping to harvest rainwater of good quality for non-potable uses.

  1. Design of OFDM radar pulses using genetic algorithm based techniques

    NASA Astrophysics Data System (ADS)

    Lellouch, Gabriel; Mishra, Amit Kumar; Inggs, Michael

    2016-08-01

    The merit of evolutionary algorithms (EA) for solving convex optimization problems is widely acknowledged. In this paper, a genetic algorithm (GA) optimization based waveform design framework is used to improve the features of radar pulses relying on the orthogonal frequency division multiplexing (OFDM) structure. Our optimization techniques focus on finding optimal phase code sequences for the OFDM signal. Several optimality criteria are used, since we consider two different radar processing solutions which call for either single- or multiple-objective optimizations. When minimization of the so-called peak-to-mean envelope power ratio (PMEPR) single objective is tackled, we compare our findings with existing methods and emphasize the merit of our approach. In the scope of the two-objective optimization, we first address PMEPR and the peak-to-sidelobe level ratio (PSLR) and show that our approach based on the non-dominated sorting genetic algorithm-II (NSGA-II) provides design solutions with noticeable improvements as opposed to random sets of phase codes. We then look at another case of interest where the objective functions are two measures of the sidelobe level, namely PSLR and the integrated-sidelobe level ratio (ISLR), and propose to modify the NSGA-II to include a constraint on the PMEPR instead. In the last part, we illustrate via a case study how our encoding solution makes it possible to minimize the single-objective PMEPR while enabling a target detection enhancement strategy, when the SNR metric is chosen for the detection framework.
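
    To make the single-objective PMEPR case concrete, the sketch below evaluates the peak-to-mean envelope power ratio of an OFDM pulse built from a discrete phase code and runs a very small genetic algorithm over that code. The subcarrier count, phase alphabet, and plain GA loop (truncation selection, one-point crossover, random mutation) are illustrative assumptions; the paper's framework, including its NSGA-II multi-objective variants, is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(2)
N_SUB, OVERSAMPLE, PHASES = 16, 8, 4       # subcarriers, time oversampling, phase alphabet

def pmepr(code):
    """Peak-to-mean envelope power ratio of an OFDM pulse with the given phase code."""
    phases = 2.0 * np.pi * code / PHASES
    n = np.arange(N_SUB)
    t = np.arange(N_SUB * OVERSAMPLE) / (N_SUB * OVERSAMPLE)
    signal = np.exp(1j * phases) @ np.exp(2j * np.pi * np.outer(n, t))
    power = np.abs(signal) ** 2
    return power.max() / power.mean()

def evolve(pop_size=40, generations=200, p_mut=0.1):
    pop = rng.integers(0, PHASES, size=(pop_size, N_SUB))
    for _ in range(generations):
        fitness = np.array([pmepr(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # keep the best half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_SUB)                       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mutate = rng.random(N_SUB) < p_mut                 # random phase mutation
            child[mutate] = rng.integers(0, PHASES, mutate.sum())
            children.append(child)
        pop = np.array(children)
    best = min(pop, key=pmepr)
    return best, pmepr(best)

code, value = evolve()
print("best phase code:", code)
print("PMEPR:", round(float(value), 3))
```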

  2. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made effective sludge management an increasingly critical issue for the industry, due to high landfill and transportation costs and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to an economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. The key variables that were identified in the continuous

  3. Experimental design schemes for learning Boolean network models

    PubMed Central

    Atias, Nir; Gershenzon, Michal; Labazin, Katia; Sharan, Roded

    2014-01-01

    Motivation: A holy grail of biological research is a working model of the cell. Current modeling frameworks, especially in the protein–protein interaction domain, are mostly topological in nature, calling for stronger and more expressive network models. One promising alternative is logic-based or Boolean network modeling, which was successfully applied to model signaling regulatory circuits in human. Learning such models requires observing the system under a sufficient number of different conditions. To date, the amount of measured data is the main bottleneck in learning informative Boolean models, underscoring the need for efficient experimental design strategies. Results: We developed novel design approaches that greedily select an experiment to be performed so as to maximize the difference or the entropy in the results it induces with respect to current best-fit models. Unique to our maximum difference approach is the ability to account for all (possibly exponentially many) Boolean models displaying high fit to the available data. We applied both approaches to simulated and real data from the EGFR and IL1 signaling systems in human. We demonstrate the utility of the developed strategies in substantially improving on a random selection approach. Our design schemes highlight the redundancy in these datasets, leading to up to 11-fold savings in the number of experiments to be performed. Availability and implementation: Source code will be made available upon acceptance of the manuscript. Contact: roded@post.tau.ac.il PMID:25161232
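
    The greedy, entropy-driven selection idea can be illustrated at a tiny scale: given a handful of candidate Boolean models that all fit the data gathered so far, choose the next input condition whose predicted outputs are most spread out (highest entropy) across the surviving models. The three toy models, the single output node, and the exhaustive enumeration of conditions below are illustrative assumptions; the paper works with far larger logic models and with two scoring variants (maximum difference and maximum entropy).

```python
from collections import Counter
from itertools import product
from math import log2

# Three rival hypotheses for how two stimuli drive one readout.
candidate_models = {
    "AND": lambda a, b: a and b,
    "OR":  lambda a, b: a or b,
    "XOR": lambda a, b: a != b,
}

performed = {(1, 1): True}      # experiments done so far: inputs -> observed output

# Keep only models consistent with the data gathered so far.
consistent = {name: f for name, f in candidate_models.items()
              if all(f(*inp) == out for inp, out in performed.items())}

def outcome_entropy(inputs):
    """Entropy of the predicted output across the currently consistent models."""
    counts = Counter(f(*inputs) for f in consistent.values())
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

candidates = [inp for inp in product([0, 1], repeat=2) if inp not in performed]
next_experiment = max(candidates, key=outcome_entropy)

print("models still consistent:", sorted(consistent))
print("most informative next experiment (stimulus levels):", next_experiment)
```

    With only the (1, 1) measurement, the XOR hypothesis is eliminated, and the script picks an input on which the remaining AND and OR models disagree, which is exactly the behavior a maximum-entropy design scheme is meant to produce.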

  4. Protein design algorithms predict viable resistance to an experimental antifolate.

    PubMed

    Reeve, Stephanie M; Gainza, Pablo; Frey, Kathleen M; Georgiev, Ivelin; Donald, Bruce R; Anderson, Amy C

    2015-01-20

    Methods to accurately predict potential drug target mutations in response to early-stage leads could drive the design of more resilient first generation drug candidates. In this study, a structure-based protein design algorithm (K* in the OSPREY suite) was used to prospectively identify single-nucleotide polymorphisms that confer resistance to an experimental inhibitor effective against dihydrofolate reductase (DHFR) from Staphylococcus aureus. Four of the top-ranked mutations in DHFR were found to be catalytically competent and resistant to the inhibitor. Selection of resistant bacteria in vitro reveals that two of the predicted mutations arise in the background of a compensatory mutation. Using enzyme kinetics, microbiology, and crystal structures of the complexes, we determined the fitness of the mutant enzymes and strains, the structural basis of resistance, and the compensatory relationship of the mutations. To our knowledge, this work illustrates the first application of protein design algorithms to prospectively predict viable resistance mutations that arise in bacteria under antibiotic pressure.

  5. Validation of Design and Analysis Techniques of Tailored Composite Structures

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.

    2004-01-01

    Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed-form analysis based on a theoretical model of a single-cell tailored box beam and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed-form code is consistently able to predict the wing box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed-form analysis. Differences between the closed-form code representation and the wing box specimen caused large errors in the twist prediction. The closed-form analysis prediction of twist has not been validated by this test.

  6. Recent Progress in χ(3)-Related Optical Process Experimental Technique. Raman Lasing

    NASA Technical Reports Server (NTRS)

    Matsko, A. B.; Savchenkov, Anatoliy A.; Strekalov, Dmitry; Maleki, Lute

    2006-01-01

    We describe theoretically and verify experimentally a simple technique for analyzing the conversion efficiency and threshold of all-resonant intracavity Raman lasers. The method is based on the dependence of the ring-down time of the pump cavity mode on the energy accumulated in the cavity.

  7. The Use of Techniques of Sensory Evaluation as a Framework for Teaching Experimental Methods.

    ERIC Educational Resources Information Center

    Bennett, R.; Hamilton, M.

    1981-01-01

    Describes sensory assessment techniques and conditions for their satisfactory performance, including how they can provide open-ended exercises and advantages as relatively inexpensive and simple methods of teaching experimentation. Experiments described focus on diffusion of salt into potatoes after being cooked in boiled salted water. (Author/JN)

  8. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  9. Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Olds, John Robert; Walberg, Gerald D.

    1993-01-01

    Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are
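
    A hedged sketch of the parametric, design-of-experiments idea referred to above: sample a face-centered central composite design in two coded design variables and fit a quadratic response surface by ordinary least squares. The vehicle_dry_weight() function is a made-up placeholder standing in for the coupled disciplinary analyses, not the thesis codes.

```python
import numpy as np

def vehicle_dry_weight(x1, x2):
    # Placeholder analysis; in practice this would call aero/propulsion/structures codes.
    return 100.0 + 5.0 * x1 - 3.0 * x2 + 2.0 * x1 * x2 + 4.0 * x1 ** 2 + 1.5 * x2 ** 2

# Face-centered CCD points in coded units: factorial, axial, and center runs.
ccd = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([vehicle_dry_weight(*row) for row in ccd])

# Quadratic response surface model: 1, x1, x2, x1*x2, x1^2, x2^2.
X = np.column_stack([np.ones(len(ccd)), ccd[:, 0], ccd[:, 1],
                     ccd[:, 0] * ccd[:, 1], ccd[:, 0] ** 2, ccd[:, 1] ** 2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # recovers the coefficients of the placeholder response exactly
```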

  10. A method of fast, sequential experimental design for linearized geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Coles, Darrell A.; Morgan, Frank Dale

    2009-07-01

    An algorithm for linear(ized) experimental design is developed for a determinant-based design objective function. This objective function is common in design theory and is used to design experiments that minimize the model entropy, a measure of posterior model uncertainty. Of primary significance in design problems is computational expediency. Several earlier papers have focused attention on posing design objective functions and opted to use global search methods for finding the critical points of these functions, but these algorithms are too slow to be practical. The proposed technique is distinguished primarily for its computational efficiency, which derives partly from a greedy optimization approach, termed sequential design. Computational efficiency is further enhanced through formulae for updating determinants and matrix inverses without need for direct calculation. The design approach is orders of magnitude faster than a genetic algorithm applied to the same design problem. However, greedy optimization often trades global optimality for increased computational speed; the ramifications of this tradeoff are discussed. The design methodology is demonstrated on a simple, single-borehole DC electrical resistivity problem. Designed surveys are compared with random and standard surveys, both with and without prior information. All surveys were compared with respect to a 'relative quality' measure, the post-inversion model per cent rms error. The issue of design for inherently ill-posed inverse problems is considered and an approach for circumventing such problems is proposed. The design algorithm is also applied in an adaptive manner, with excellent results suggesting that smart, compact experiments can be designed in real time.
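
    A minimal sketch of the greedy, determinant-based sequential design idea, assuming a linearized problem whose candidate observations are rows of a Jacobian: each step adds the row that most increases det(GᵀG), using the matrix determinant lemma to score candidates and the Sherman–Morrison identity to update the inverse without recomputation. The random candidate pool is a stand-in for real sensitivities, not the paper's resistivity problem.

```python
import numpy as np

def sequential_design(candidates, n_select, ridge=1e-6):
    n_params = candidates.shape[1]
    A_inv = np.eye(n_params) / ridge          # inverse of the starting matrix ridge*I
    chosen = []
    available = set(range(len(candidates)))
    for _ in range(n_select):
        # det(A + g g^T) = det(A) * (1 + g^T A^{-1} g); pick the row with the largest gain.
        best = max(available, key=lambda i: candidates[i] @ A_inv @ candidates[i])
        g = candidates[best]
        Ag = A_inv @ g
        A_inv -= np.outer(Ag, Ag) / (1.0 + g @ Ag)   # Sherman–Morrison inverse update
        chosen.append(best)
        available.remove(best)
    return chosen

rng = np.random.default_rng(0)
pool = rng.normal(size=(200, 5))              # 200 candidate measurements, 5 model parameters
print(sequential_design(pool, n_select=10))   # indices of the greedily selected observations
```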

  11. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

    Experimental allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS) [1, 2]. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset [3-5]. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents [6-9], and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  12. Effect and interaction study of acetamiprid photodegradation using experimental design.

    PubMed

    Tassalit, Djilali; Chekir, Nadia; Benhabiles, Ouassila; Mouzaoui, Oussama; Mahidine, Sarah; Merzouk, Nachida Kasbadji; Bentahar, Fatiha; Khalil, Abbas

    2016-10-01

    The methodology of experimental research was carried out using the MODDE 6.0 software to study acetamiprid photodegradation as a function of the operating parameters, such as the initial concentration of acetamiprid, the concentration and type of catalyst used, and the initial pH of the medium. The results showed the importance of the pollutant concentration effect on the acetamiprid degradation rate. On the other hand, the amount and type of catalyst used have a considerable influence on the elimination kinetics of this pollutant. The degradation of acetamiprid as an environmental pesticide pollutant via UV irradiation in the presence of titanium dioxide was assessed and optimized using response surface methodology with a D-optimal design. The acetamiprid degradation ratio was found to be sensitive to the different factors studied. The maximum value of discoloration under the optimum operating conditions was determined to be 99% after 300 min of UV irradiation.

  13. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulations of novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  14. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  15. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  16. Design and experimental evaluation of flexible manipulator control algorithms

    SciTech Connect

    Kwon, D.S.; Hwang, D.H.; Babcock, S.M.; Kress, R.L.; Lew, J.Y.; Evans, M.S.

    1995-04-01

    Within the Environmental Restoration and Waste Management Program of the US Department of Energy, the remediation of single-shell radioactive waste storage tanks is one of the areas that challenge state-of-the-art equipment and methods. The use of long-reach manipulators is being seriously considered for this task. Because of high payload capacity and high length-to-cross-section ratio requirements, these long-reach manipulator systems are expected to use hydraulic actuators and to exhibit significant structural flexibility. The controller has been designed to compensate for the hydraulic actuator dynamics by using a load-compensated velocity feedforward loop and to increase the bandwidth by using an inner pressure feedback loop. Shaping filter techniques have been applied as feedforward controllers to avoid structural vibrations during operation. Various types of shaping filter methods have been investigated. Among them, a new approach, referred to as a "feedforward simulation filter" that uses embedded simulation, has been presented.

  17. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high strain composite wing structures (ε_ult = 6000 micro-in/in) for future Navy aircraft. These issues were concerned primarily with the structural integrity and durability of the innovative design concepts and manufacturing techniques which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system) as well as damage tolerance and survivability requirements. An extensive test effort consisting of a progressive series of coupon and major element tests was an integral part of this development effort, and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer/emerging composite materials and new or previously developed high strain wing design to maximize structural efficiency and reduce fabrication costs was the subject of subsequent preliminary design and experimental evaluation effort.

  18. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations give the first definitive look describing the charging behavior of Orion UltraFlex arrays in both the Low Earth Orbital (LEO) and geosynchronous (GEO) environments. Note the LEO charging environment also applies to the International Space Station (ISS). The GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior to answer some of the basic performance and survivability issues to ascertain if a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front-side area of indium-tin-oxide-coated array designs successfully passed the arc frequency tests.

  19. Experimental design considerations in microbiota/inflammation studies

    PubMed Central

    Moore, Robert J; Stanley, Dragana

    2016-01-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes has been studied both in human and in animal (mainly rodent) models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065

  20. Numerical and experimental design of coaxial shallow geothermal energy systems

    NASA Astrophysics Data System (ADS)

    Raghavan, Niranjan

    Geothermal energy has emerged as one of the front runners in the energy race because of its performance efficiency, abundance and production competitiveness. Today, geothermal energy is used in many regions of the world as a sustainable solution for decreasing dependence on fossil fuels and reducing health hazards. However, projects related to geothermal energy have not received their deserved recognition due to a lack of computational tools associated with them and economic misconceptions related to their installation and functioning. This research focuses on numerical and experimental system design analysis of vertical shallow geothermal energy systems. The driving force is the temperature difference between a finite depth beneath the earth and its surface, which stimulates a continuous exchange of thermal energy from the sub-surface to the surface (a geothermal gradient is set up). This heat gradient is captured by the circulating refrigerant, thus tapping the geothermal energy from shallow depths. Traditionally, U-bend systems, which consist of two one-inch pipes with a U-bend connector at the bottom, have been widely used in geothermal applications. Alternative systems include coaxial pipes (pipe-in-pipe) that are the main focus of this research. Studies have shown that coaxial pipes have significantly higher thermal performance characteristics than U-bend pipes, with comparable production and installation costs. This makes them a viable design upgrade to the traditional piping systems. Analytical and numerical heat transfer analysis of the coaxial system is carried out with the help of ABAQUS software. The system is examined by varying independent parameters such as materials, soil conditions and thermal contact conductance, and evaluating their effect on heat transfer characteristics. With the above information, this research aims at formulating a preliminary theoretical design setup for an experimental study to quantify and compare the heat transfer characteristics of U-bend and coaxial

  1. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several different throat size thrusters operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse around 1500 s. The heat loading on the walls of the calculated thrusters was also estimated, and it is comparable to the heat loading in conventional chemical rockets. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 s due to the finite chemical reaction rate.

  2. Game Design Narrative for Learning: Appropriating Adventure Game Design Narrative Devices and Techniques for the Design of Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2006-01-01

    The purpose of this conceptual analysis is to investigate how contemporary video and computer games might inform instructional design by looking at how narrative devices and techniques support problem solving within complex, multimodal environments. Specifically, this analysis presents a brief overview of game genres and the role of narrative in…

  3. A New Tour Design Technique to Enable an Enceladus Orbiter

    NASA Astrophysics Data System (ADS)

    Strange, N.; Campagnola, S.; Russell, R.

    2009-12-01

    As a result of discoveries made by the Cassini spacecraft, Saturn's moon Enceladus has emerged as a high science-value target for a future orbiter mission. [1] However, past studies of an Enceladus orbiter mission [2] found that entering Enceladus orbit either requires a prohibitively large orbit insertion ΔV (> 3.5 km/s) or a prohibitively long flight time. In order to reach Enceladus with a reasonable flight time and ΔV budget, a new tour design method has been developed that uses gravity-assists of the low-mass moons Rhea, Dione, and Tethys combined with v-infinity leveraging maneuvers. This new method can achieve Enceladus orbit with a combined leveraging and insertion ΔV of ~1 km/s and a 2.5 year Saturn tour. Among many challenges in designing a trajectory for an Enceladus mission, the two most prominent arise because Enceladus is a low-mass moon (its GM is only ~7 km^2/s^2), deep within Saturn's gravity well (its orbit is at 4 Saturn radii). Designing a ΔV-efficient rendezvous with Enceladus is the first challenge, while the second involves finding a stable orbit which can achieve the desired science measurements. A paper by Russell and Lara [3] has recently addressed the second problem, and a paper this past August by Strange, Campagnola, and Russell [4] has addressed the first. The method developed to solve the first problem, the leveraging tour, and the science possibilities of this trajectory will be the subject of this presentation. Using the new methods in [4], a leveraging tour with Titan, Rhea, Dione, and Tethys can reach Enceladus orbit with less than half of the ΔV of a direct Titan-Enceladus transfer. Starting from the TSSM Saturn arrival conditions [5], with a chemical bi-prop system, this new tour design technique could place into Enceladus orbit ~2800 kg compared to ~1100 kg from a direct Titan-Enceladus transfer. Moreover, the 2.5 year leveraging tour provides many low-speed and high science value flybys of Rhea, Dione, and Tethys. This exciting

  4. Experimental demonstration of a damage detection technique for nonlinear hysteretic structures

    NASA Astrophysics Data System (ADS)

    Yang, Jann N.; Xia, Ye; Loh, Chin-Hsiung

    2011-04-01

    Many civil and mechanical engineering structures exhibit nonlinear hysteretic behavior when subject to dynamic loads, such as earthquakes. The modeling and identification of non-linear hysteretic systems with stiffness and strength degradations is a practical but challenging problem encountered in the engineering field. A recently developed technique, referred to as the adaptive quadratic sum-square error with unknown inputs (AQSSE-UI), is capable of identifying time-dependent parameters of nonlinear hysteretic structures. In this paper, the AQSSE-UI technique is applied to the parametric identification of nonlinear hysteretic reinforced concrete structures with stiffness and strength degradations, and the performance of the AQSSE technique is demonstrated by the experimental test data. A 1/3-scale 2-story RC frame has been tested experimentally on the shake table at NCREE, Taiwan. This 2-story RC frame was subject to different levels of ground excitations back to back. The structure is first considered as an equivalent linear model with time-varying stiffness parameters, and the tracking of the degradation of the stiffness parameters is carried out using the AQSSE-UI technique. Then the same RC frame is considered as a nonlinear hysteretic model with inelastic hinges following the generalized Bouc-Wen model, and the time-varying nonlinear parameters are identified again using the AQSSE-UI technique. Experimental results demonstrate that the AQSSE technique is quite effective for the tracking of: (i) the stiffness degradation of linear structures, and (ii) the non-linear hysteretic parameters with stiffness and strength degradations.

  5. Experimental generation of longitudinally-modulated electron beams using an emittance exchange technique

    SciTech Connect

    Sun, Y.-E; Piot, P.; Johnson, A.; Lumpkin, A.; Maxwell, T.; Ruan, J.; Thurman-Keup, R.; /FERMILAB

    2010-08-01

    We report our experimental demonstration of longitudinal phase space modulation using a transverse-to-longitudinal emittance exchange technique. The experiment is carried out at the A0 photoinjector at Fermi National Accelerator Lab. A vertical multi-slit plate is inserted into the beamline prior to the emittance exchange, thus introducing beam horizontal profile modulation. After the emittance exchange, the longitudinal phase space coordinates (energy and time structures) of the beam are modulated accordingly. This is a clear demonstration of the transverse-to-longitudinal phase space exchange. In this paper, we present our experimental results on the measurement of energy profile as well as numerical simulations of the experiment.

  6. Experimental verification of pulse-probing technique for improving phase coherence grating lobe suppression.

    PubMed

    Torbatian, Zahra; Adamson, Rob; Brown, Jeremy A

    2013-07-01

    Fabrication of high-frequency phased-array ultrasound transducers is challenging because of the small element-to-element pitch required to avoid large grating lobes appearing in the field-of-view. Phase coherence imaging (PCI) was recently proposed as a highly effective technique to suppress grating lobes in large-pitch arrays for synthetic aperture beamforming. Our previous work proposed and theoretically validated a technique called pulse probing for improving grating lobe suppression when transmit beamforming is used with PCI. The present work reports the experimental verification of the proposed technique, in which the data were collected using a high-frequency ultrasound system and the processing was done offline. The data were collected with a 50-MHz, 256-element, 1.26 λ-pitch linear array, for which only the central 64 elements were used as the full aperture while the beam was steered to various angles. By sending a defocused pulse, the PCI weighting factors could be calculated, and were subsequently applied to the conventional transmit-receive beamforming. The experimental two-way radiation patterns showed that the grating lobe level was suppressed approximately 40 dB using the proposed technique, consistent with the theory. The suppression of overlapping grating lobes in reconstructed phased array images from multiple wire-phantoms in a water bath and tissue phantoms further validated the effectiveness of the proposed technique. The application of pulse probing along with PCI should simplify the fabrication of large-pitch phased arrays at high frequencies.

  7. Analytical and experimental evaluation of techniques for the fabrication of thermoplastic hologram storage devices

    NASA Technical Reports Server (NTRS)

    Rogers, J. W.

    1975-01-01

    The results of an experimental investigation on recording information on thermoplastic are given. A description was given of a typical fabrication configuration, the recording sequence, and the samples which were examined. There are basically three configurations which can be used for the recording of information on thermoplastic. The most popular technique uses corona which furnishes free charge. The necessary energy for deformation is derived from a charge layer atop the thermoplastic. The other two techniques simply use a dc potential in place of the corona for deformation energy.

  8. Solar Ion Sputter Deposition in the Lunar Regolith: Experimental Simulation Using Focused-Ion Beam Techniques

    NASA Technical Reports Server (NTRS)

    Christoffersen, R.; Rahman, Z.; Keller, L. P.

    2012-01-01

    As regions of the lunar regolith undergo space weathering, their component grains develop compositionally and microstructurally complex outer coatings or "rims" ranging in thickness from a few tens to a few hundreds of nm. Rims on grains in the finest size fractions (e.g., <20 μm) of mature lunar regoliths contain optically-active concentrations of nm-size metallic Fe spherules, or "nanophase Fe⁰", that redden and attenuate optical reflectance spectral features important in lunar remote sensing. Understanding the mechanisms for rim formation is therefore a key part of connecting the drivers of mineralogical and chemical changes in the lunar regolith with how lunar terrains are observed to become space weathered from a remotely-sensed point of view. As interpreted based on analytical transmission electron microscope (TEM) studies, rims are produced from varying relative contributions from: 1) direct solar ion irradiation effects that amorphize or otherwise modify the outer surface of the original host grain, and 2) nanoscale, layer-like deposition of extrinsic material processed from the surrounding soil. This extrinsic/deposited material is the dominant physical host for nanophase Fe⁰ in the rims. An important lingering uncertainty is whether this deposited material condensed from regolith components locally vaporized in micrometeorite or larger impacts, or whether it formed as solar wind ions sputtered exposed soil and re-deposited the sputtered ions on less exposed areas. Deciding which of these mechanisms is dominant, or possibly exclusive, has been hampered because there is an insufficient library of chemical and microstructural "fingerprints" to distinguish deposits produced by the two processes. Experimental sputter deposition / characterization studies relevant to rim formation have particularly lagged since the early post-Apollo experiments of Hapke and others, especially with regard to application of TEM-based characterization techniques. Here

  9. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, are being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment
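
    A hedged sketch of the propagate-then-assimilate step described above, under toy assumptions: a prior covariance on basic data, a sensitivity matrix of an experiment's responses to those data, and measurement noise give a posterior covariance via a generalized-least-squares update, and a design-response sensitivity vector shows how much the experiment would reduce the response uncertainty. All matrices and numbers are illustrative stand-ins, not reactor data.

```python
import numpy as np

def assimilate(C, S, V):
    """GLS/Bayesian update: posterior covariance of the basic data after assimilating
    an experiment with sensitivities S and measurement covariance V."""
    K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)   # gain
    return C - K @ S @ C

rng = np.random.default_rng(1)
C = np.diag([0.04, 0.09, 0.01])          # prior (relative) variances of 3 nuclear-data parameters
S = rng.normal(size=(2, 3))              # sensitivities of 2 measured experiment responses
V = 0.001 * np.eye(2)                    # experimental measurement variances
r = np.array([[1.0, 0.5, 2.0]])          # sensitivity of a reactor design response to the data

C_post = assimilate(C, S, V)
prior_sd = np.sqrt(r @ C @ r.T).item()
post_sd = np.sqrt(r @ C_post @ r.T).item()
print(f"design-response uncertainty: {prior_sd:.4f} -> {post_sd:.4f}")
```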

  10. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  11. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and the Brazilian National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for wavelength scanning near the Fe II 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point to establish a strong research program on Solar Physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  12. Columbus meteoroid/debris protection study - Experimental simulation techniques and results

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Kitta, K.; Stilp, A.; Lambert, M.; Reimerdes, H. G.

    1992-08-01

    The methods and measurement techniques used in experimental simulations of micrometeoroid and space debris impacts with the ESA's laboratory module Columbus are described. Experiments were carried out at the two-stage light gas gun acceleration facilities of the Ernst-Mach Institute. Results are presented on simulations of normal impacts on bumper systems, oblique impacts on dual bumper systems, impacts into cooled targets, impacts into pressurized targets, and planar impacts of low-density projectiles.

  13. An Investigation of Experimental Techniques for Obtaining Particulate Behavior in Metallized Solid Propellant Combustion

    DTIC Science & Technology

    1982-07-01

    experimental techniques have been used: a. High-speed cinematography for the observation of strand burners within a combustion bomb and a 2-D slab ... can help improve the latter. Other disadvantages of high-speed cinematography include the image resolution due to the optics employed and the film ... Translation of the detector was done manually by means of a micrometer. The glass beads were suspended in distilled water contained in a home

  14. Which is better for optimizing the biosorption process of lead - central composite design or the Taguchi technique?

    PubMed

    Azari, Ali; Mesdaghinia, Alireza; Ghanizadeh, Ghader; Masoumbeigi, Hossein; Pirsaheb, Meghdad; Ghafari, Hamid Reza; Khosravi, Touba; Sharafi, Kiomars

    2016-09-01

    The aim of this study is to evaluate central composite design (CCD) and the Taguchi technique in the adsorption process. Contact time, initial concentration, and pH were selected as the variables, and the removal efficiency of Pb was chosen as the designated response. In addition, face-centered CCD and the L9 orthogonal array were used for the experimental design. The results indicated that, at optimum conditions, the removal efficiency of Pb was 80%. The R² value was greater than 0.95 for both the CCD and Taguchi techniques, which revealed that both techniques were suitable and in agreement with each other. Moreover, the results of analysis of variance, with Prob > F values below 0.05, showed the appropriate fit of the designated model to the experimental results. The ability to rank the contributing variables by their percentage contribution to the response quantity (Pb removal) made the Taguchi model an appropriate method for examining the effectiveness of different factors. pH was evaluated as the best input factor as it contributed 66.2% of Pb removal. The Taguchi technique was additionally confirmed by three-dimensional contour plots of CCD. Consequently, the Taguchi method, with nine experimental runs and easy interaction plots, is an appropriate substitute for CCD for several chemical engineering applications.
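
    A minimal sketch of the Taguchi-style analysis mentioned above: an L9 orthogonal array for three 3-level factors (pH, contact time, initial concentration), with the percentage contribution of each factor computed from level-mean sums of squares. The removal efficiencies below are made-up placeholders, not the paper's data.

```python
import numpy as np

# L9 orthogonal array: level index (0-2) of each factor in each of the nine runs.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])
removal = np.array([52., 60., 71., 55., 68., 74., 58., 70., 80.])  # placeholder responses (%)

grand_mean = removal.mean()
ss = []
for col in range(L9.shape[1]):
    level_means = [removal[L9[:, col] == lvl].mean() for lvl in range(3)]
    ss.append(3 * sum((m - grand_mean) ** 2 for m in level_means))   # each level occurs 3 times
contrib = 100 * np.array(ss) / sum(ss)
for name, pct in zip(["pH", "contact time", "initial concentration"], contrib):
    print(f"{name}: {pct:.1f}% contribution")
```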

  15. Experimental study of liquid level gauge for liquid hydrogen using Helmholtz resonance technique

    NASA Astrophysics Data System (ADS)

    Nakano, Akihiro; Nishizu, Takahisa

    2016-07-01

    The Helmholtz resonance technique was applied to a liquid level gauge for liquid hydrogen to confirm the applicability of the technique in the cryogenic industrial field. A specially designed liquid level gauge that has a Helmholtz resonator with a small loudspeaker was installed in a glass cryostat. A swept frequency signal was supplied to the loudspeaker, and the acoustic response was detected by measuring the electrical impedance of the loudspeaker's voice coil. The penetration depth obtained from the Helmholtz resonance frequency was compared with the true value, which was read from a scale. In principle, the Helmholtz resonance technique can be used with liquid hydrogen; however, there are certain problems with regard to practical applications. The applicability of the Helmholtz resonance technique to liquid hydrogen is discussed in this study.
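
    A hedged sketch of the gauge principle described above, assuming the textbook Helmholtz relation f = (c / 2π) √(A_neck / (V · L_eff)): as liquid penetrates the resonator the gas volume V shrinks and the resonance frequency rises, so a measured frequency can be inverted to a volume and hence a penetration depth. The geometry, sound speed, and measured frequency below are illustrative values, not those of the experiment.

```python
import math

c = 355.0          # assumed speed of sound in cold hydrogen gas, m/s
A_neck = 5.0e-5    # neck cross-section, m^2
L_eff = 0.02       # effective neck length, m
A_cavity = 2.0e-3  # cavity cross-section, m^2
V_empty = 2.0e-4   # gas volume of the empty resonator, m^3

def gas_volume(f_measured):
    # Invert f = (c / 2*pi) * sqrt(A_neck / (V * L_eff)) for the remaining gas volume V.
    return A_neck * (c / (2.0 * math.pi * f_measured)) ** 2 / L_eff

def liquid_depth(f_measured):
    return (V_empty - gas_volume(f_measured)) / A_cavity

f = 230.0          # measured resonance frequency, Hz (placeholder)
print(f"inferred penetration depth: {liquid_depth(f) * 1000:.1f} mm")
```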

  16. Plackett-Burman experimental design to facilitate syntactic foam development

    SciTech Connect

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    An eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
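
    A minimal sketch of an eight-run, two-level screening design of the kind described above (six real factors plus one dummy column), built here from a Sylvester Hadamard matrix, which yields an orthogonal Plackett–Burman-type array. Factor names and responses are illustrative placeholders, not the study's data.

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)          # 8x8 Hadamard matrix
design = H8[:, 1:]                          # drop the all-ones column -> 7 orthogonal +/-1 columns

factors = ["mix speed", "mix time", "accelerator", "microsphere density",
           "microsphere loading", "grade blend", "dummy"]
gel_time = np.array([35., 42., 38., 55., 33., 47., 40., 52.])   # placeholder response (minutes)

# Main effect of each column: average response at +1 minus average at -1.
effects = design.T @ gel_time / 4.0
for name, eff in sorted(zip(factors, effects), key=lambda t: -abs(t[1])):
    print(f"{name:22s} effect = {eff:+.1f}")
# The dummy column's effect gives a rough yardstick for noise when ranking real factors.
```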

  17. Experimental Designs for Testing Differences in Survival Among Salmonid Populations.

    SciTech Connect

    Hoffman, Annette; Busack, Craig; Knudsen, Craig

    1994-11-01

    The Yakima Fisheries Project (YFP) is a supplementation plan for enhancing salmon runs in the Yakima River basin. It is presumed that inadequate spawning and rearing habitat are limiting factors to population abundance of spring chinook salmon (Oncorhynchus tshawytscha). Therefore, the supplementation effort for spring chinook salmon is focused on introducing hatchery-raised smolts into the basin to compensate for the lack of spawning habitat. However, based on empirical evidence in the Yakima basin, hatchery-reared salmon have survived poorly compared to wild salmon. Therefore, the YFP has proposed to alter the optimal conventional treatment (OCT), which is the state-of-the-art hatchery rearing method, to a new innovative treatment (NIT). The NIT is intended to produce hatchery fish that mimic wild fish and thereby to enhance their survival over that of OCT fish. A limited application of the NIT (LNIT) has also been proposed to reduce the cost of applying the new treatment, yet retain the benefits of increased survival. This research was conducted to test whether the uncertainty associated with the experimental design was within the limits specified by the Planning Status Report (PSR).

  18. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGES

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; ...

    2015-09-14

    An eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.

  19. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  20. Optimal experimental design with the sigma point method.

    PubMed

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

    Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by minimising the parameter variances iteratively. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit on OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model.
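
    A hedged sketch of the sigma point (unscented transform) idea referred to above: propagate a parameter mean and covariance through a nonlinear model and recover the mean and variance of the model output without linearization. The exponential-decay model and the numbers are illustrative, not those of the paper.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard sigma points: the mean plus/minus the columns of the scaled Cholesky factor."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def model(theta, t=2.0):
    # Simple model that is nonlinear in its parameters: an exponential decay observed at time t.
    A, k = theta
    return A * np.exp(-k * t)

theta_mean = np.array([1.0, 0.5])
theta_cov = np.diag([0.01, 0.04])

pts, w = sigma_points(theta_mean, theta_cov)
y = np.array([model(p) for p in pts])
y_mean = w @ y
y_var = w @ (y - y_mean) ** 2
print(f"output mean ~ {y_mean:.4f}, output std ~ {np.sqrt(y_var):.4f}")
```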

  1. Validation of a buffet meal design in an experimental restaurant.

    PubMed

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B. They skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by weighing foods and by video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found a good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79) and for meal mechanics parameters. Total energy, lipid and carbohydrate intake were higher in FAST than in A and B. CEI were found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes.

  2. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins of Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxin production, whereas FeSO₄ and MnSO₄ showed the opposite effect. The developed model, based on Bayesian techniques, can automatically learn models emerging from the data to serve in the prediction of delta-endotoxin concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with the Bayesian network method could be used to identify the variables affecting delta-endotoxin variation.

  3. Experimental comparison between speckle and grating-based imaging technique using synchrotron radiation X-rays.

    PubMed

    Kashyap, Yogesh; Wang, Hongchang; Sawhney, Kawal

    2016-08-08

    X-ray phase contrast and dark-field imaging techniques provide important and complementary information that is inaccessible to conventional absorption contrast imaging. Both grating-based imaging (GBI) and speckle-based imaging (SBI) are able to retrieve multi-modal images using synchrotron as well as lab-based sources. However, no systematic comparison has been made between the two techniques so far. We present an experimental comparison between the GBI and SBI techniques with a synchrotron radiation X-ray source. Apart from the simple experimental setup, we find SBI does not suffer from the issue of phase unwrapping, which can often be problematic for GBI. In addition, SBI is also superior to GBI since two orthogonal differential phase gradients can be simultaneously extracted by a one-dimensional scan. GBI has less stringent requirements for detector pixel size and transverse coherence length when a second or third grating can be used. This study provides a reference for choosing the most suitable technique for diverse imaging applications at a synchrotron facility.

  4. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines the use of linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of one given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on our target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics or its potential for reservoir monitoring.
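
    A hedged, much-simplified sketch of the design-evaluation idea above: a candidate survey (a subset of receiver positions) is scored with linearized inverse theory (posterior parameter variances from a Jacobian) using two objectives at once, overall model uncertainty and uncertainty of a target layer. The full NSGA-II search is replaced here by random candidate generation plus a Pareto filter, and the Jacobian is a random stand-in for real CSEM sensitivities.

```python
import numpy as np

rng = np.random.default_rng(2)
n_positions, n_layers, target_layer = 40, 6, 3
J_full = rng.normal(size=(n_positions, n_layers))      # sensitivity of each datum to each layer

def objectives(design, noise_var=0.01):
    """Two objectives to minimize: total posterior variance and target-layer variance."""
    J = J_full[list(design)]
    cov = np.linalg.inv(J.T @ J / noise_var + np.eye(n_layers))   # damped posterior covariance
    return np.trace(cov), cov[target_layer, target_layer]

def pareto_front(scored):
    """Keep only designs not dominated by any other design in both objectives."""
    front = []
    for d, f in scored:
        dominated = any(all(g <= fi for g, fi in zip(f2, f)) and f2 != f for _, f2 in scored)
        if not dominated:
            front.append((d, f))
    return front

candidates = [tuple(sorted(map(int, rng.choice(n_positions, size=10, replace=False))))
              for _ in range(200)]
scored = [(d, objectives(d)) for d in candidates]
for d, f in pareto_front(scored):
    print(f"total var {f[0]:.3f}, target-layer var {f[1]:.4f}: stations {d}")
```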

  5. Experimental design for the evaluation of struvite sedimentation obtained from an ammonium concentrated wastewater.

    PubMed

    Castro, Samuel Rodrigues; Araújo, Mahira Adna Cota; Lange, Liséte Celina

    2013-01-01

    Chemical precipitation of struvite as a technique for ammonium nitrogen (NH₄-N) removal from concentrated wastewater has been shown to be an attractive alternative due to its high effectiveness, reaction rate, simplicity, environmental sustainability and, especially, the application potential of the generated solids for the fertilizer industry. The technique of experimental design has been used in order to identify and evaluate the optimum conditions of the chemical precipitation reaction applied in a struvite sedimentation study. The preliminary tests were performed using synthetic effluent with a concentration equal to 500.0 mg N L⁻¹. The stoichiometric ratio Mg:NH₄:PO₄ equal to 1.5:1.0:1.25 and pH equal to 8.5 were taken to be the optimum conditions, where a NH₄-N removal equal to 98.6% was achieved with only 10-min reaction time. This condition has been used to evaluate the struvite sedimentation from synthetic wastewaters, intending to check the optimum conditions achieved by the experimental design at different initial concentrations, 1,000 and 2,000 mg N L⁻¹. The results were typical of good zonal sedimentation and can be used in scaling up the system.

  6. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The results of the performance comparison show that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.
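
    A hedged illustration of the linear-modeling step: quantification theory type I is essentially multiple regression on dummy-coded categorical form elements, so each product sample is described by the levels of its form elements and the averaged image score from a Kansei-style survey is regressed on those dummies. The form elements, levels, and scores below are invented for illustration, not the study's data.

```python
import numpy as np

samples = [  # (body shape, button layout) level indices, image score (e.g. for "simple")
    ((0, 0), 3.2), ((0, 1), 3.8), ((1, 0), 2.5),
    ((1, 1), 3.0), ((2, 0), 4.1), ((2, 1), 4.6),
]
n_levels = (3, 2)   # number of levels of each form element

def dummy_code(levels):
    row = [1.0]                                   # intercept
    for elem, lvl in enumerate(levels):
        one_hot = [0.0] * (n_levels[elem] - 1)    # drop one level per element as the reference
        if lvl > 0:
            one_hot[lvl - 1] = 1.0
        row.extend(one_hot)
    return row

X = np.array([dummy_code(lv) for lv, _ in samples])
y = np.array([score for _, score in samples])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # category scores: contribution of each non-reference level to the image score
```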

  7. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  8. Simulation verification techniques study: Simulation self test hardware design and techniques report

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The final results of the hardware verification task are presented. The basic objectives of the various subtasks are reviewed along with the ground rules under which the overall task was conducted and which impacted the approach taken in deriving techniques for hardware self test. The results of the first subtask and the definition of simulation hardware are presented. The hardware definition is based primarily on a brief review of the simulator configurations anticipated for the shuttle training program. The results of the survey of current self test techniques are presented. The data sources that were considered in the search for current techniques are reviewed, and results of the survey are presented in terms of the specific types of tests that are of interest for training simulator applications. Specifically, these types of tests are readiness tests, fault isolation tests and incipient fault detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.

  9. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, especially because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental designs, is described together with a lesser known approach which takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models for sediment concentration in seawater and sediment accretion on salt marshes, of different complexity, served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.
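
    The two building blocks named here, the weighted least squares estimator and a local (D-type) optimality criterion for choosing measurements, can be sketched in a few lines. The model, noise variances and candidate measurement times below are hypothetical, and the MATLAB toolbox itself is not reproduced; this is only an illustration of the underlying calculations.

```python
import numpy as np
from itertools import combinations

def wls(J, y, sigma2):
    """Weighted least squares for a linear(ized) model y = J @ theta + noise:
    theta_hat = (J^T W J)^-1 J^T W y, with W = diag(1/sigma2)."""
    A = (J / sigma2[:, None]).T @ J          # Fisher information matrix
    b = (J / sigma2[:, None]).T @ y
    return np.linalg.solve(A, b), A

def log_det_fisher(J_rows, sigma2_rows):
    """Local D-criterion of a candidate design (log det of its Fisher information)."""
    A = (J_rows / sigma2_rows[:, None]).T @ J_rows
    return np.linalg.slogdet(A)[1]

# Hypothetical linearized model y(t) = theta1 * t + theta2 * (1 - exp(-t)):
# 8 candidate measurement times, of which only 4 may be afforded.
t = np.linspace(0.5, 4.0, 8)
J = np.column_stack([t, 1.0 - np.exp(-t)])   # sensitivities dy/dtheta at each time
sigma2 = np.full(len(t), 0.05)               # measurement variances

best = max(combinations(range(len(t)), 4),
           key=lambda idx: log_det_fisher(J[list(idx)], sigma2[list(idx)]))
print("D-optimal subset of measurement times:", t[list(best)])

# Weighted least squares fit on simulated data from the chosen design.
rng = np.random.default_rng(0)
idx = list(best)
y = J[idx] @ np.array([1.0, 2.0]) + rng.normal(0, np.sqrt(sigma2[idx]))
theta_hat, _ = wls(J[idx], y, sigma2[idx])
print("estimated parameters:", np.round(theta_hat, 2))
```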

  10. Experimental Design on Laminated Veneer Lumber Fiber Composite: Surface Enhancement

    NASA Astrophysics Data System (ADS)

    Meekum, U.; Mingmongkol, Y.

    2010-06-01

    Thick laminated veneer lumber (LVL) fibre-reinforced composites were constructed from alternated, perpendicularly arrayed layers of peeled rubber wood. Glass woven fabric was laid between the layers. Native golden teak veneers were used as faces. An in-house formulated epoxy was employed as the wood adhesive. The hand lay-up laminate was cured at 150 °C for 45 min. The cut specimens were post-cured at 80 °C for at least 5 hours. A 2^k factorial design of experiments (DOE) was used to verify the parameters. Three parameters, namely the silane content in the epoxy formulation (A), smoke treatment of the rubber wood surface (B) and anti-termite application on the wood surface (C), were analysed. Both low and high levels were further subcategorised into 2 sub-levels. Flexural properties were the main response obtained. ANOVA with a Pareto chart was employed, and the main effect plot was also examined. The results showed that the interaction between silane quantity and termite treatment has a negative effect at the high level (AC+). Conversely, the interaction between silane and smoke treatment has a significant positive effect at the high level (AB+). According to this research work, the optimal settings to improve surface adhesion, and hence enhance flexural properties, were a high level of silane quantity (15% by weight), a high level of smoked wood layers (8 out of 14 layers) and wood without anti-termite treatment. Further tests also revealed that the LVL composite had properties superior to the solid woods but slightly inferior flexibility. The screw withdrawal strength of the LVL was higher than that of solid wood, and the composite also showed better resistance to moisture and termite attack than the rubber wood.
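
    The analysis of such a 2^k design reduces to simple contrasts on the coded design matrix. The sketch below computes main effects and interactions for a 2^3 full factorial; the response values are placeholders, not the flexural data of the study.

```python
import numpy as np
from itertools import product

# Coded 2^3 full factorial design: factors A (silane), B (smoke), C (anti-termite),
# each at the low (-1) and high (+1) level.
runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)   # 8 runs, columns A, B, C
y = np.array([52.0, 58.0, 54.0, 63.0, 50.0, 61.0, 53.0, 57.0])   # placeholder flexural responses

labels = ["A", "B", "C", "AB", "AC", "BC", "ABC"]
columns = [runs[:, 0], runs[:, 1], runs[:, 2],
           runs[:, 0] * runs[:, 1], runs[:, 0] * runs[:, 2],
           runs[:, 1] * runs[:, 2], runs[:, 0] * runs[:, 1] * runs[:, 2]]

# Effect of each term = (mean response at +1) - (mean response at -1),
# i.e. the contrast divided by half the number of runs.
for name, col in zip(labels, columns):
    effect = float(col @ y) / (len(y) / 2)
    print(f"{name:>3}: effect = {effect:+.2f}")
```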

  11. Design of a digital compression technique for shuttle television

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Fultz, G.

    1976-01-01

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
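
    As a rough illustration of the DPCM principle behind the return-link scheme, the sketch below implements a one-dimensional previous-sample predictor with uniform quantization of the prediction error. The flight design used a nonadaptive two-dimensional predictor whose exact parameters are not given in the abstract, so the step size and sample values here are invented.

```python
import numpy as np

def dpcm_encode(samples, step=4):
    """Previous-sample DPCM: quantize the error between each sample and the
    locally decoded previous sample, so encoder and decoder stay in sync."""
    codes, recon = [], []
    prev = 0.0
    for s in samples:
        q = int(round((s - prev) / step))   # quantized prediction error
        codes.append(q)
        prev = prev + q * step              # local reconstruction
        recon.append(prev)
    return codes, np.array(recon)

def dpcm_decode(codes, step=4):
    out, prev = [], 0.0
    for q in codes:
        prev = prev + q * step
        out.append(prev)
    return np.array(out)

line = np.array([10, 12, 15, 20, 28, 40, 41, 41, 39, 35], dtype=float)  # one made-up scan line
codes, _ = dpcm_encode(line)
print("codes :", codes)
print("decode:", dpcm_decode(codes))
```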

  12. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned with causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  13. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    NASA Astrophysics Data System (ADS)

    Baldwin, Kyle A.; Butler, Samuel L.; Hill, Richard J. A.

    2015-01-01

    Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or `dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax `artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation.

  14. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    approaches are invariably saddled with overheads in terms of area or speed and, more importantly, power. Thus, the cost of protecting combinational logic through the use of power-hungry mitigation approaches can disrupt the power budget significantly. Therefore there is a strong need to develop techniques that can provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.

  15. Experimental research of the synthetic jet generator designs based on actuation of diaphragm with piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Rimasauskiene, R.; Matejka, M.; Ostachowicz, W.; Kurowski, M.; Malinowski, P.; Wandowski, T.; Rimasauskas, M.

    2015-01-01

    Experimental analyses of four in-house developed synthetic jet generator designs are presented in this paper. The main task of this work was to find the most appropriate design of the synthetic jet generator. Dynamic characteristics of the synthetic jet generator's diaphragm with piezoelectric material were measured using a non-contact Polytec® PSV 400 laser vibrometer. Temperatures of the piezoelectric diaphragms operating at the resonance frequency were measured with a Fiber Bragg Grating (FBG) sensor. Experimental analysis of the synthetic jet generator amplitude-frequency characteristics was performed using CTA (hot-wire anemometer) measuring techniques. A piezoelectric diaphragm 27 mm in diameter, fixed tightly inside the chamber of the synthetic jet generator, was excited by a sinusoidal voltage signal. The number of synthetic jet generator orifices (1 or 3) and the volume of the cavity (cavity height varying from 0.5 mm to 1.5 mm) were changed. The highest synthetic jet velocity, 25 m/s, was obtained with the generator having a 0.5 mm cavity and 1 orifice (resonance frequency of the piezoelectric diaphragm 2.8 kHz). It can be concluded that this type of design is preferred in order to obtain the peak velocity of the synthetic jet.

  16. Aberration Theory - A Spectrum Of Design Techniques For The Perplexed

    NASA Astrophysics Data System (ADS)

    Shafer, David

    1986-10-01

    The early medieval scholar Maimonides wrote a famous book called "Guide for the Perplexed", which explained various thorny philosophical and religious questions for the benefit of the puzzled novice. I wish I had had such a person to guide me when I first started a career in lens design. There the novice is often struck by how much of an "art" this endeavor is. The best bet, for a beginner with no experience, should be to turn to optical aberration theory - which, in principle, should explain much of what goes into designing an optical system. Unfortunately, this subject is usually presented in the form of proofs and derivations, with little time spent on the practical implications of aberration theory. Furthermore, a new generation of lens designers, who grew up with the computer, often consider aberration theory as an unnecessary relic from the past. My career, by contrast, is based on the conviction that using the results of aberration theory is the only intelligent way to design optical systems. Computers are an invaluable aid, but we must, ultimately, bite the bullet and think. Along these lines, I have given several papers over the last few years which deal directly with the philosophy of lens design; the kind of guides for the perplexed that I wished I had had from the start. These papers include: "Lens design on a desert island - A simple method of optical design", "A modular method of optical design", "Optical design with air lenses", "Optical design with 'phantom' aspherics", "Optical design methods: your head as a personal computer", "Aberration theory and the meaning of life", and a paper at Innsbruck - "Some interesting correspondences in aberration theory". In all cases, the emphasis is on using your head to think, and the computer to help you out with the numerical work and the "fine-tuning" of a design. To hope that the computer will do the thinking for you is folly. Solutions gained by this route rarely equal the results of an experienced and

  17. Utilizing Project Management Techniques in the Design of Instructional Materials.

    ERIC Educational Resources Information Center

    Murphy, Charles

    1994-01-01

    Discussion of instructional design in large organizations highlights a project management approach. Topics addressed include the role of the instructional designer; project team selection; role of the team members; role of the project manager; focusing on what employees need to know; types of project teams; and monitoring time and responsibility.…

  18. A new strategy in drug design of Chinese medicine: theory, method and techniques.

    PubMed

    Yang, Hong-Jun; Shen, Dan; Xu, Hai-Yu; Lu, Peng

    2012-11-01

    The research and development (R&D) process of Chinese medicine, with the notable feature of being based on clinical application, is significantly different from that of chemical and biological medicines, from laboratory research to the clinic. In addition, the use of compound prescriptions is another characteristic. Therefore, according to the different R&D theories of Chinese and Western medicine, we put forward a new strategy in drug design of Chinese medicine, which focuses on the "combination-activity relationship (CAR)", taking prescription discovery, component identification and formula optimization as three key points to identify drugs of high efficacy and low toxicity. The method of drug design of Chinese medicine includes: new prescription discovery based on clinical data and literature information, component identification based on computational and experimental research, and formula optimization based on system modeling. This paper puts forward the concept, research framework and techniques of drug design of Chinese medicine, which embody the R&D model of Chinese medicine, hoping to support the drug design of Chinese medicine theoretically and technologically.

  19. Experimental verification of a computational technique for determining ground reactions in human bipedal stance.

    PubMed

    Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J

    2007-01-01

    We have developed a three-dimensional (3D) biomechanical model of human standing that enables us to study the mechanisms of posture and balance simultaneously in various directions in space. Since the two feet are on the ground, the system defines a kinematically closed-chain which has redundancy problems that cannot be resolved using the laws of mechanics alone. We have developed a computational (optimization) technique that avoids the problems with the closed-chain formulation thus giving users of such models the ability to make predictions of joint moments, and potentially, muscle activations using more sophisticated musculoskeletal models. This paper describes the experimental verification of the computational technique that is used to estimate the ground reaction vector acting on an unconstrained foot while the other foot is attached to the ground, thus allowing human bipedal standing to be analyzed as an open-chain system. The computational approach was verified in terms of its ability to predict lower extremity joint moments derived from inverse dynamic simulations performed on data acquired from four able-bodied volunteers standing in various postures on force platforms. Sensitivity analyses performed with model simulations indicated which ground reaction force (GRF) and center of pressure (COP) components were most critical for providing better estimates of the joint moments. Overall, the joint moments predicted by the optimization approach are strongly correlated with the joint moments computed using the experimentally measured GRF and COP (0.78 ≤ r(2) ≤ 0.99, median 0.96) with a best-fit that was not statistically different from a straight line with unity slope (experimental = computational results) for postures of the four subjects examined. These results indicate that this model-based technique can be relied upon to predict reasonable and consistent estimates of the joint moments using the predicted GRF and COP for most standing postures.

  20. Experimental Technique and Assessment for Measuring the Convective Heat Transfer Coefficient from Natural Ice Accretions

    NASA Technical Reports Server (NTRS)

    Masiulaniec, K. Cyril; Vanfossen, G. James, Jr.; Dewitt, Kenneth J.; Dukhan, Nihad

    1995-01-01

    A technique was developed to cast frozen ice shapes that had been grown on a metal surface. This technique was applied to a series of ice shapes that were grown in the NASA Lewis Icing Research Tunnel on flat plates. Nine flat plates, 18 inches square, were obtained from which aluminum castings were made that gave good ice shape characterizations. Test strips taken from these plates were outfitted with heat flux gages such that, when placed in a dry wind tunnel, they can be used to experimentally map out the convective heat transfer coefficient in the direction of flow from the roughened surfaces. The effects on the heat transfer coefficient of both parallel and accelerating flow will be studied. The smooth plate model verification baseline data as well as one ice roughened test case are presented.

  1. New experimental method for lidar overlap factor using a CCD side-scatter technique.

    PubMed

    Wang, Zhenzhu; Tao, Zongming; Liu, Dong; Wu, Decheng; Xie, Chenbo; Wang, Yingjian

    2015-04-15

    In theory, the lidar overlap factor can be derived from the difference between the particle backscatter coefficient retrieved from the lidar elastic signal without overlap correction and the actual particle backscatter coefficient, which can be obtained by other measurement techniques. The side-scatter technique using a CCD camera has been shown to be a powerful tool to detect the particle backscatter coefficient in the near-ground layer during nighttime. A new experimental approach to determine the overlap factor for a vertically pointing lidar is presented in this study, which can be applied to Mie lidars. The effect of the overlap factor on the Mie lidar is corrected by an iteration algorithm combining the particle backscatter coefficient retrieved using the CCD side-scatter method and the Fernald method. This method has been successfully applied to Mie lidar measurements during a routine campaign, and the comparison of experimental results under different atmospheric conditions demonstrates that the method works well in practice.

  2. Experimental Evaluation of the IP Address Space Randomisation (IASR) Technique and Its Disruption to Selected Network Services

    DTIC Science & Technology

    2014-11-01

    Experimental evaluation of the IP address space randomisation (IASR) technique and its disruption to selected network services. Maxwell Dondo, DRDC... security approach. MTD is a set of network defence techniques such as randomisation, deception, etc., that significantly increases the attacker's work... effort. One randomisation technique, called internet protocol (IP) address space randomisation (IASR), periodically or aperiodically makes random

  3. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture, however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $ 120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi
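
    The multi-objective screening described here amounts to keeping the tracer mixtures that are not dominated in (estimation quality, cost) space. In the sketch below the mixture names, quality scores (standing in for the D-criterion) and costs are invented placeholders; only the Pareto-dominance bookkeeping is illustrated.

```python
# Each candidate tracer mixture is scored by a quality criterion (higher is
# better, e.g. the D-criterion) and a cost in dollars (lower is better).
candidates = {
    "mix-1": (7.2, 310.0),   # placeholder (quality, cost) pairs
    "mix-2": (7.1, 190.0),
    "mix-3": (5.4, 120.0),
    "mix-4": (6.9, 260.0),
    "mix-5": (5.2, 150.0),
}

def dominates(a, b):
    """a dominates b if it is at least as good in both objectives and better in one."""
    (qa, ca), (qb, cb) = a, b
    return qa >= qb and ca <= cb and (qa > qb or ca < cb)

pareto = [name for name, score in candidates.items()
          if not any(dominates(other, score)
                     for oname, other in candidates.items() if oname != name)]
print("non-dominated compromise mixtures:", pareto)
```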

  4. A new experimental device to evaluate eye ulcers using a multispectral electrical impedance technique

    NASA Astrophysics Data System (ADS)

    Bellotti, Mariela I.; Bast, Walter; Berra, Alejandro; Bonetto, Fabián J.

    2011-07-01

    We present a novel experimental technique to detect eye ulcers in animals using a spectral electrical impedance technique. We expect that this technique will be useful in dry eye syndrome. We used a sensor that is basically a platinum (Pt) microelectrode electrically insulated by glass from a cylindrical stainless steel counter-electrode. This sensor was applied to the naked eye of New Zealand rabbits (2.0-3.5 kg in weight). Half of the eyes were normal (control), while to the remainder we applied a few drops of 20% (v/v) alcohol to produce an ulcer in the eye. Using a multispectral electrical impedance system we measured ulcerated and control eyes and observed a significant difference between normal and pathological samples. We also investigated the effects of different applied pressures and the natural degradation of initially normal eyes as a function of time. We believe that this technique could be sufficiently sensitive and repeatable to help diagnose ocular surface diseases such as dry eye syndrome.

  5. Simulated and experimental technique optimization of dual-energy radiography: abdominal imaging applications

    NASA Astrophysics Data System (ADS)

    Sabol, John M.; Wheeldon, Samuel J.; Jabri, Kadri N.

    2006-03-01

    With growing clinical acceptance of dual-energy chest radiography, there is increased interest in the application of dual-energy techniques to other clinical areas. This paper describes the creation and experimental validation of a poly-energetic signal-propagation model for technique optimization of new dual-energy clinical applications. The model is verified using phantom experiments simulating typical abdominal radiographic applications such as Intravenous Urography (IVU) and the detection of pelvic and sacral bone lesions or kidney stones in the presence of bowel gas. The model is composed of a spectral signal propagation component and an image-processing component. The spectral propagation component accepts detector specifications, X-ray spectra, phantom and imaging geometry as inputs, and outputs the detected signal and estimated noise. The image-processing module performs dual-energy logarithmic subtraction and returns figures-of-merit such as contrast and contrast-to-noise ratio (CNR), which are evaluated in conjunction with Monte Carlo calculations of dose. Phantoms assembled from acrylic, aluminum, and iodinated contrast-agent filled tubes were imaged using a range of kVp's and dose levels. Simulated and experimental results were compared by dose, clinical suitability, and system limitations in order to yield technique recommendations that optimize one or more figures-of-merit. The model accurately describes phantom images obtained in a low scatter environment. For the visualization of iodinated vessels in the abdomen and the detection of pelvic bone lesions, both simulated and experimental results indicate that dual-energy techniques recommended by the model yield significant improvements in CNR without significant increases in patient dose as compared to conventional techniques. For example the CNR of iodinated vessels can be doubled using two-thirds of the dose of a standard exam. Alternatively, in addition to a standard dose image, the clinician can
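
    The image-processing step described here, weighted logarithmic subtraction followed by a contrast-to-noise figure-of-merit, can be sketched directly. The pixel intensities, weighting factor and region masks below are synthetic placeholders rather than outputs of the authors' signal-propagation model.

```python
import numpy as np

def dual_energy_subtract(I_low, I_high, w):
    """Weighted logarithmic subtraction: S = ln(I_high) - w * ln(I_low).
    The weight w is chosen to cancel the unwanted material (e.g. soft tissue)."""
    return np.log(I_high) - w * np.log(I_low)

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between a signal region and a background region."""
    sig = image[signal_mask].mean()
    bg = image[background_mask].mean()
    return abs(sig - bg) / image[background_mask].std()

# Synthetic 64x64 example: an "iodine vessel" disk on a uniform background.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
vessel = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
I_low = np.where(vessel, 0.55, 0.70) + rng.normal(0, 0.01, (64, 64))
I_high = np.where(vessel, 0.62, 0.72) + rng.normal(0, 0.01, (64, 64))

S = dual_energy_subtract(I_low, I_high, w=0.5)
print("CNR of the vessel in the subtracted image:", round(cnr(S, vessel, ~vessel), 2))
```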

  6. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be the largest in the Arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by the metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine if biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effect of biochar amendment and a 3 °C warming on the Arctic meadow. Ten circular

  7. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01

    [List-of-figures fragment: the ported SAR system in Ptolemy II; SAR simulation results in Ptolemy II and the Autocoding Toolset; DIF-to-C software synthesis framework.] ... LabVIEW from National Instruments, and Ptolemy II from U.C. Berkeley, to name a few. In model-based design methodologies, design representations in terms

  8. Analytical and experimental studies of the helical magnetohydrodynamic thruster design

    SciTech Connect

    Gilbert, J.B. II; Lin, T.F.

    1994-12-31

    This paper describes the results of analytical and experimental studies of a helical magnetohydrodynamic (MHD) seawater thruster using a 8-Tesla (T) solenoid magnet. The application of this work is in marine vehicle propulsion. Analytical models are developed to predict the performance of the helical MHD thruster in a closed-loop condition. The analytical results are compared with experimental data and good agreement is obtained.

  9. SSSFD manipulator engineering using statistical experiment design techniques

    NASA Technical Reports Server (NTRS)

    Barnes, John

    1991-01-01

    The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6 degree of freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost efficient tools were combined to perform a study of robot manipulator design parameters. These tools are graphical computer simulations and Taguchi Design of Experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics are determined with minimal cost.
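
    A Taguchi-style screening of this kind typically reduces to computing a signal-to-noise ratio per run of an orthogonal array and averaging it by factor level. The L4 array, factor names and dexterity scores below are hypothetical stand-ins for the manipulator kinematic parameters of the study.

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors (e.g. link-length ratio, wrist offset,
# joint-limit setting) covered in 4 runs.  Entries are the factor levels 1 and 2.
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])
# Placeholder "dexterity" responses per run (larger is better), two repeats each.
y = np.array([[0.61, 0.64], [0.72, 0.69], [0.58, 0.55], [0.66, 0.70]])

def sn_larger_is_better(rep):
    """Taguchi larger-is-better S/N ratio: -10 log10( mean(1 / y^2) )."""
    return -10.0 * np.log10(np.mean(1.0 / rep ** 2))

sn = np.array([sn_larger_is_better(r) for r in y])
for factor in range(L4.shape[1]):
    means = [sn[L4[:, factor] == level].mean() for level in (1, 2)]
    print(f"factor {factor + 1}: mean S/N at level 1 = {means[0]:.2f}, at level 2 = {means[1]:.2f}")
```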

  10. Laboratory prototype of cochlear implant: design and techniques.

    PubMed

    Ali, Hussnain; Ahmad, Talha J; Ajaz, Asim; Khan, Shoab A

    2009-01-01

    This paper presents a design overview of a low-cost prototype of a cochlear implant developed from commercial off-the-shelf components. The design scope includes a speech processing module implemented on a commercial digital signal processor, a transcutaneous data and power transceiver developed from a single pair of inductive coils and, finally, stimulator circuitry for cochlear stimulation. Different speech processing strategies such as CIS, SMSP and F0/F1 have been implemented and tested using a novel, indigenously developed speech processing research module which evaluates the performance of speech processing strategies in software, hardware and practical scenarios. A design overview, simulations and practical results of an optimized inductive link using a Class E power amplifier are presented. The link was designed at a carrier frequency of 2.5 MHz for 100 mW output power. The receiver logic design and stimulator circuitry were implemented using a PIC microcontroller and off-the-shelf electronic components. Results indicate 40% link efficiency with a 128 kbps data transfer rate. This low-cost prototype can be used for undertaking cochlear implant research in laboratories.

  11. Experimental and Imaging Techniques for Examining Fibrin Clot Structures in Normal and Diseased States

    PubMed Central

    Fan, Natalie K.; Keegan, Philip M.; Platt, Manu O.; Averett, Rodney D.

    2015-01-01

    Fibrin is an extracellular matrix protein that is responsible for maintaining the structural integrity of blood clots. Much research has been done on fibrin in past years, including investigation of the synthesis, structure-function, and lysis of clots. However, much is still unknown about the morphological and structural features of clots that ensue from patients with disease. In this research study, experimental techniques are presented that allow for the examination of morphological differences of abnormal clot structures due to diseased states such as diabetes and sickle cell anemia. Our study focuses on the preparation and evaluation of fibrin clots in order to assess morphological differences using various experimental assays and confocal microscopy. In addition, a method is also described that allows for continuous, real-time calculation of lysis rates in fibrin clots. The techniques described herein are important for researchers and clinicians seeking to elucidate comorbid thrombotic pathologies such as myocardial infarctions, ischemic heart disease, and strokes in patients with diabetes or sickle cell disease. PMID:25867016

  12. Application of Monte Carlo technique to time-resolved transillumination: a comparison with experimental data

    NASA Astrophysics Data System (ADS)

    Scampoli, Paola; Curto, C. A.; Guida, Giovanni; Roberti, Giuseppe

    1998-01-01

    The growing number of laser applications in medicine and biology has led to renewed interest in the study of light transport in turbid media such as biological tissues. One of the most powerful methods used to describe this kind of process is the Monte Carlo technique. We have developed a FORTRAN90 code, running on an Alpha Vax AXP DEC 2100, to simulate the transport of a photon beam with a Gaussian temporal and spatial profile through a multilayered sample. The code provides the sample transmittance and reflectance (both time and space resolved), which can be compared to experimental data. Monte Carlo calculations have been performed to simulate time-resolved transillumination through latex-water and Intralipid-water solutions with optical properties similar to those of biological tissues. The comparison of Monte Carlo results with experimental data and with analytical solutions of the diffusion equation shows good agreement, suggesting that Monte Carlo techniques are indeed a powerful tool for predictions of light transport in turbid media.
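
    A heavily simplified version of such a Monte Carlo photon-transport calculation is sketched below: photon packets take exponentially distributed steps through a homogeneous slab, scatter isotropically, and lose weight according to the single-scattering albedo until they leave through the front (reflectance) or back (transmittance) face. The optical properties and slab thickness are placeholders, and the Gaussian beam, layering and time resolution of the original FORTRAN90 code are omitted.

```python
import numpy as np

def mc_slab(n_photons, mu_a, mu_s, thickness, rng):
    """Steady-state Monte Carlo estimate of diffuse transmittance and reflectance
    of a homogeneous slab with isotropic scattering (no refractive-index mismatch)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    T = R = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0                    # start at the surface, heading into the slab
        while w > 1e-4:
            z += uz * rng.exponential(1.0 / mu_t)   # free path to the next interaction
            if z >= thickness:
                T += w
                break
            if z <= 0.0:
                R += w
                break
            w *= albedo                             # absorb part of the packet weight
            uz = rng.uniform(-1.0, 1.0)             # isotropic scattering: new polar direction cosine
    return T / n_photons, R / n_photons

rng = np.random.default_rng(2)
# Placeholder optical properties, loosely Intralipid-like (mm^-1), 10 mm slab.
T, R = mc_slab(20000, mu_a=0.01, mu_s=1.0, thickness=10.0, rng=rng)
print(f"transmittance ~ {T:.3f}, reflectance ~ {R:.3f}")
```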

  13. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  14. Robust control design techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay; Bachmann, Glen R.

    1994-01-01

    In this paper, an active flutter suppression problem is studied for a thin airfoil in unsteady aerodynamics. The mathematical model of this system is infinite dimensional because Theodorsen's function, which appears in it, is irrational. Several second-order approximations of Theodorsen's function are compared. A finite dimensional model is obtained from such an approximation. We use H-infinity control techniques to find a robustly stabilizing controller for active flutter suppression.

  15. Joint Tactics, Techniques, and Procedures for Laser Designation Operations

    DTIC Science & Technology

    2007-11-02

    [Standard Form 298 report documentation boilerplate omitted; report date 28 May 1999.] Preface. 1. Scope: This publication provides joint tactics, techniques

  16. Advanced study techniques: tools for HVDC systems design

    SciTech Connect

    Degeneff, R.C.

    1984-01-01

    High voltage direct current (HVDC) transmission systems, which offer functional as well as environmental and economic advantages, could see a 15% growth rate over the next decade. Design studies of HVDC system components are complicated by the need to cover 11 major elements: power system, insulation coordination, filter design, subsynchronous torsional interaction, circuit breaker requirements, power line carrier and radio interference, electric fields and audible noise, protective relaying, availability and reliability, efficiency, equipment specification, and HVDC simulator and Transient Network Analyzers. The author summarizes and illustrates each element. 6 figures, 1 table.

  17. Respiratory protective device design using control system techniques

    NASA Technical Reports Server (NTRS)

    Burgess, W. A.; Yankovich, D.

    1972-01-01

    The feasibility of a control system analysis approach to provide a design base for respiratory protective devices is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notations describe the operation of the components as closely as possible. The individual component mathematical descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical notation by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.

  18. Design Techniques for Radiation Hardened Phase-Locked Loops

    DTIC Science & Technology

    2005-08-23

    Nemmani, M. Vandepas, K. Okk, K. Mayaram, and U. Moon, "Radiation hard PLL design tolerant to noise and process variations," in CDADIC report, July... 2004. [8] M. Vandepas, K. Ok, A. N. Nemmani, M. Brownlee, K. Mayaram, and U.-K. Moon, "Characterization of 1.2 GHz phase locked loops and voltage... controlled oscillators in a total dose radiation environment," in Proceedings of 2005 MAPLD International Conference, Sept. 2005. [9] M. Vandepas, "Design of

  19. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  20. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These are to provide data sets of soil parameters, processes and functions which will be incorporated into the mathematical models. The field sites are 1) the BigLink field station which is located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and is established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic which is representative of productive soils managed for intensive forestry, 3) the Fuchsenbigl Field Station in Austria which is an agricultural research site that is representative of productive soils managed as arable land and 4) the Koiliaris Catchment in Crete, Greece which represents degraded Mediterranean region soils, heavily impacted by centuries of intensive grazing and farming, under severe risk of desertification.

  1. Investigation on experimental techniques to detect, locate and quantify gear noise in helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Flanagan, P. M.; Atherton, W. J.

    1985-01-01

    A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit and system software. A commercial desktop computer, a spectrum analyzer and a two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot and acoustic intensity data are collected, processed, and stored.
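
    The two-microphone intensity estimate that RAIMS automates can be written compactly in the time domain: the pressure at the probe centre is taken as the average of the two microphone signals, the particle velocity as the time-integrated finite-difference pressure gradient, and the intensity as the time average of their product. The signals below are a synthetic travelling tone, and the microphone spacing and air density are nominal values, not those of the grant hardware.

```python
import numpy as np

def pp_intensity(p1, p2, dt, spacing, rho=1.21):
    """Time-domain two-microphone (p-p) estimate of the acoustic intensity
    component along the probe axis (W/m^2)."""
    p = 0.5 * (p1 + p2)                                   # pressure at the probe centre
    grad = (p2 - p1) / spacing                            # finite-difference pressure gradient
    u = -np.cumsum(grad) * dt / rho                       # Euler equation integrated in time
    return float(np.mean(p * u))                          # time-averaged intensity

# Synthetic test: a 500 Hz plane wave travelling from microphone 1 towards microphone 2.
fs, f, c, d = 48000, 500.0, 343.0, 0.012                  # sample rate, tone, sound speed, 12 mm spacer
t = np.arange(0, 0.2, 1.0 / fs)
amp = 1.0                                                  # Pa
p1 = amp * np.sin(2 * np.pi * f * t)
p2 = amp * np.sin(2 * np.pi * f * (t - d / c))             # delayed copy at the second microphone
I = pp_intensity(p1, p2, 1.0 / fs, d)
print(f"estimated intensity {I:.4f} W/m^2 (plane-wave reference {amp**2 / (2 * 1.21 * c):.4f})")
```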

  2. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    ERIC Educational Resources Information Center

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  3. Findings in Experimental Psychology as Functioning Principles of Theatrical Design.

    ERIC Educational Resources Information Center

    Caldwell, George

    A gestalt approach to theatrical design seems to provide some ready and stable explanations for a number of issues in the scenic arts. Gestalt serves as the theoretical base for a number of experiments in psychology whose findings appear to delineate the principles of art to be used in scene design. The fundamental notion of gestalt theory…

  4. New experimental technique for the measurement of the velocity field in thin films falling over obstacles

    NASA Astrophysics Data System (ADS)

    Landel, Julien R.; Daglis, Ana; McEvoy, Harry; Dalziel, Stuart B.

    2014-11-01

    We present a new experimental technique to measure the surface velocity of a thin falling film. Thin falling films are important in various processes such as cooling in heat exchangers or cleaning. For instance, in a household dishwasher, cleaning depends on the ability of a thin draining film to remove material from a substrate. We are interested in the impact of obstacles attached to a substrate on the velocity field of a thin film flowing over them. Measuring the velocity field of thin falling films is a challenging experimental problem due to the small depth of the flow and the large velocity gradient across its depth. We propose a new technique based on PIV to measure the in-plane components of the velocity at the surface of the film over an arbitrarily large area and at an arbitrarily high resolution, depending mostly on the image acquisition technique. We perform experiments with thin films of water flowing on a flat inclined surface made of glass or stainless steel. The typical Reynolds number of the film is of the order of 100 to 1000, computed using the surface velocity, the film thickness and the kinematic viscosity of the film. We measure the modification of the flow field, from a viscous-gravity regime, caused by small solid obstacles, such as three-dimensional hemispherical obstacles and two-dimensional steps. We compare our results with past theoretical and numerical studies. This material is based upon work supported by the Defense Threat Reduction Agency under Contract No. HDTRA1-12-D-0003-0001.
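
    At its core, the PIV processing behind such a surface-velocity measurement finds the displacement of the particle pattern between two interrogation windows from the peak of their cross-correlation. The sketch below does this with FFT-based correlation on a synthetic image pair; the window size, seeding and imposed shift are made up, and refinements such as sub-pixel peak fitting and window deformation are omitted.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a from the peak of
    their FFT-based cross-correlation."""
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.fftshift(np.real(np.fft.ifft2(fa.conj() * fb)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre            # (dy, dx) in pixels

# Synthetic 32x32 interrogation window: random particles, shifted copy for frame B.
rng = np.random.default_rng(3)
frame_a = np.zeros((32, 32))
ys, xs = rng.integers(4, 28, 25), rng.integers(4, 28, 25)
frame_a[ys, xs] = 1.0
shift = (2, 5)                                 # imposed displacement (dy, dx), made up
frame_b = np.roll(frame_a, shift, axis=(0, 1))
print("recovered displacement (dy, dx):", window_displacement(frame_a, frame_b))
```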

  5. Preliminary Experimental Results on the Technique of Artificial River Replenishment to Mitigate Sediment Loss Downstream Dams

    NASA Astrophysics Data System (ADS)

    Franca, M. J.; Battisacco, E.; Schleiss, A. J.

    2014-12-01

    The transport of sediments by water throughout river basins, from the steep slopes of the upstream regions down to sea level, is recognized as important for maintaining the natural condition of rivers and plays a role in their ecological processes. Over the last decades, a reduction in the supply of sand and gravel has been observed downstream of dams in several alpine rivers. Many studies highlight that the presence of a dam strongly modifies the river behavior in the downstream reach, in terms of morphology and hydrodynamics, with consequences for local ecology. Sediment deficit, bed armoring, river incision and bank instability are the main effects, which negatively affect aquatic habitats and water quality. One of the proposed techniques to solve the problem of sediment deficit downstream of dams, already adopted in a few Japanese and German rivers although in an unsatisfactory fashion, is the artificial replenishment of sediments. Generally, it was found that the erosion of the replenishments was not satisfactory and the transport rate was not sufficient to move the sediments far enough downstream. In order to improve this and to provide an engineering answer that makes the technique more applicable, a series of laboratory tests is run as a preparatory study to understand the hydrodynamics of the river flow when the replenishment technique is applied. Erodible volumes, with different lengths and submergence conditions, reproducing sediment replenishment volumes, are positioned along a channel bank. Different geometrical combinations of erodible sediment volumes are also tested in the experimental flume. The first results of the experimental research, concerning the time evolution of erosion, the influence of discharge and the distance travelled by the eroded sediments, will be presented and discussed.

  6. Parameter Space Techniques for Robust Control System Design.

    DTIC Science & Technology

    1980-07-01

    been further investigated by Cruz [2] and Desoer and Wang [3]. In frequency design methods the concept to compensate the loop, such that high gains... of Feedback Systems, McGraw-Hill, New York, 1972. 3. C. A. Desoer and Y. T. Wang, "Foundations of Feedback Theory for Nonlinear Dynamical Systems

  7. Teaching by Design: Tools and Techniques to Improve Instruction

    ERIC Educational Resources Information Center

    Burke, Jim

    2015-01-01

    The Common Core State Standards (CCSS) and other state standards have challenged teachers to rethink how they plan units and design their assignments within constraints of time and increasingly diverse classrooms. This article describes the author's efforts to create a coherent, useable set of tools to make his teaching at the unit and daily…

  8. Experimental investigation of contamination prevention techniques to cryogenic surfaces on board orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Hetrick, M. A.; Rantanen, R. O.; Ress, E. B.; Froechtenigt, J. F.

    1978-01-01

    Within the simulation limitations of on-orbit conditions, it was demonstrated that a helium purge system could be an effective method for reducing the incoming flux of contaminant species. Although a generalized purge system was employed in conjunction with basic telescope components, the simulation provided data that could be used for further modeling and design of a specific helium injection system. Experimental telescope pressures required for 90% attenuation appeared to be slightly higher (factor of 2 to 5). Cooling the helium purge gas and telescope components from 300 to 140 K had no measurable effect on stopping efficiency of a given mass flow of helium from the diffuse injector.

  9. An experimental technique to study impulse-wave propagation in materials

    NASA Astrophysics Data System (ADS)

    Yazdani-Ardakani, S.; Kesavan, S. K.; Chu, M. L.

    1986-01-01

    The dynamic characteristics of materials are studied with a technique which uses a mechanical shaker to subject vertically mounted specimens to impulsive forces. The mounting of the specimens over the mechanical shaker with a suspension system is examined. The components of the coupling assembly are described. The electric circuitries for the adjustment of shaker-platform height and suspending-wire tension, and for the acceleration-response measurement of impacted specimens are diagrammatically presented. The procedures for studying impulse-wave propagation in materials are discussed; accelerometer response is utilized to determine the velocity of the impulse-wave propagation in the test specimens. The design and function of the suspension and coupling system are evaluated. The data reveal that the technique is applicable for analyzing the impulse-wave propagation in cylindrical specimens, biomechanical measurements, and modal analysis.

  10. An Experimental Study of Turbulent Skin Friction Reduction in Supersonic Flow Using a Microblowing Technique

    NASA Technical Reports Server (NTRS)

    Hwang, Danny P.

    1999-01-01

    A new turbulent skin friction reduction technology, called the microblowing technique, has been tested in supersonic flow (Mach number of 1.9) on specially designed porous plates with microholes. The skin friction was measured directly by a force balance and the boundary layer development was measured by a total pressure rake at the trailing edge of a test plate. The free stream Reynolds number was 1.0 × 10^6 per meter. The turbulent skin friction coefficient ratios (C(sub f)/C(sub f0)) of seven porous plates are given in this report. Test results showed that the microblowing technique could reduce the turbulent skin friction in supersonic flow (up to 90 percent below the solid flat plate value, which was even greater than in subsonic flow).

  11. Plant micro- and nanomechanics: experimental techniques for plant cell-wall analysis.

    PubMed

    Burgert, Ingo; Keplinger, Tobias

    2013-11-01

    In the last few decades, micro- and nanomechanical methods have become increasingly important analytical techniques to gain deeper insight into the nanostructure and mechanical design of plant cell walls. The objective of this article is to review the most common micro- and nanomechanical approaches that are utilized to study primary and secondary cell walls from a biomechanics perspective. In light of their quite disparate functions, the common and opposing structural features of primary and secondary cell walls are reviewed briefly. A significant part of the article is devoted to an overview of the methodological aspects of the mechanical characterization techniques with a particular focus on new developments and advancements in the field of nanomechanics. This is followed and complemented by a review of numerous studies on the mechanical role of cellulose fibrils and the various matrix components as well as the polymer interactions in the context of primary and secondary cell-wall function.

  12. Experimental Comparison of the Hemodynamic Effects of Bifurcating Coronary Stent Implantation Techniques

    NASA Astrophysics Data System (ADS)

    Brindise, Melissa; Vlachos, Pavlos; AETheR Lab Team

    2015-11-01

    Stent implantation in coronary bifurcations imposes unique effects on blood flow patterns, and currently there is no universally accepted stent deployment approach. Despite the fact that stent-induced changes can greatly alter clinical outcomes, no concrete understanding exists regarding the hemodynamic effects of each implantation method. This work presents an experimental evaluation of the hemodynamic differences between implantation techniques. We used four common stent implantation methods including the currently preferred one-stent provisional side branch (PSB) technique and the crush (CRU), Culotte (CUL), and T-stenting (T-PR) two-stent techniques, all deployed by a cardiologist in coronary models. Particle image velocimetry was used to obtain velocity and pressure fields. Wall shear stress (WSS), oscillatory shear index, residence times, and drag and compliance metrics were evaluated and compared against an un-stented case. The results of this study demonstrate that while PSB is preferred, both it and T-PR yielded detrimental hemodynamic effects such as low WSS values. CRU provided polarizing and unbalanced results. CUL demonstrated a symmetric flow field, balanced WSS distribution, and ultimately the most favorable hemodynamic environment.

  13. Experimental techniques for ballistic pressure measurements and recent development in means of calibration

    NASA Astrophysics Data System (ADS)

    Elkarous, L.; Coghe, F.; Pirlot, M.; Golinval, J. C.

    2013-09-01

    This paper presents a study carried out with the commonly used experimental techniques of ballistic pressure measurement. The comparison criteria were the peak chamber pressure and its standard deviation for specific weapon/ammunition system configurations. It is impossible to determine exactly how precise the crusher, direct or conformal transducer methods are, as there is no way to know exactly what the actual pressure is; nevertheless, the combined use of these measuring techniques could improve accuracy. Furthermore, particular attention has been devoted to the problem of calibration. Calibration of crusher gauges and piezoelectric transducers is paramount and an essential task for a correct determination of the pressure inside a weapon. This topic has not yet been completely addressed and still requires further investigation. In this work, state-of-the-art calibration methods are presented together with their specific aspects. Many solutions have been developed to satisfy this demand; nevertheless, current systems do not cover the whole range of needs, calling for further development effort. Research being carried out for the development of suitable practical calibration methods is presented. In particular, the behavior of copper crushers at different high strain rates is investigated using the split Hopkinson pressure bar (SHPB) technique. The Johnson-Cook model was employed as a suitable model for the numerical study using FEM code
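
    For context, the classical split Hopkinson pressure bar reduction that accompanies such high-strain-rate crusher tests is sketched below: specimen strain rate from the reflected pulse, strain by time integration, and stress from the transmitted pulse. The bar and specimen properties and the pulse shapes are placeholder values, not those used in the study.

```python
import numpy as np

def shpb_reduction(eps_r, eps_t, dt, E_bar, c0, A_bar, A_s, L_s):
    """Classical one-wave SHPB analysis.
    eps_r, eps_t : reflected and transmitted strain pulses measured on the bars.
    Returns specimen strain rate, strain and stress histories."""
    strain_rate = -2.0 * c0 * eps_r / L_s                # from the reflected pulse
    strain = np.cumsum(strain_rate) * dt                 # time integration
    stress = E_bar * (A_bar / A_s) * eps_t               # from the transmitted pulse
    return strain_rate, strain, stress

# Placeholder steel bars and a copper-crusher-sized specimen.
E_bar, c0 = 200e9, 5000.0                    # bar modulus (Pa) and wave speed (m/s)
A_bar, A_s, L_s = 3.14e-4, 1.0e-4, 5.0e-3    # bar area, specimen area (m^2), specimen length (m)
dt = 1e-7
t = np.arange(0, 200e-6, dt)
eps_r = -4e-4 * np.exp(-((t - 60e-6) / 30e-6) ** 2)      # made-up reflected pulse
eps_t = 3e-4 * np.exp(-((t - 60e-6) / 30e-6) ** 2)       # made-up transmitted pulse

rate, strain, stress = shpb_reduction(eps_r, eps_t, dt, E_bar, c0, A_bar, A_s, L_s)
print(f"peak strain rate ~ {rate.max():.0f} 1/s, peak stress ~ {stress.max()/1e6:.0f} MPa")
```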

  14. Experimental Study of Active Techniques for Blade/Vortex Interaction Noise Reduction

    NASA Astrophysics Data System (ADS)

    Kobiki, Noboru; Murashige, Atsushi; Tsuchihashi, Akihiko; Yamakawa, Eiichi

    This paper presents experimental results on the effect of Higher Harmonic Control (HHC) and an Active Flap on Blade/Vortex Interaction (BVI) noise. Wind tunnel tests were performed with a 1-bladed rotor system to evaluate the simplified BVI phenomenon while avoiding the complicated aerodynamic interference which is characteristically and inevitably caused by a multi-bladed rotor. Another merit of this 1-bladed rotor system is that the several active techniques under study can be evaluated under the same conditions when installed in the same rotor system. The effects of the active techniques on BVI noise reduction were evaluated comprehensively by the sound pressure, the blade/vortex miss distance obtained by Laser Light Sheet (LLS), the blade surface pressure distribution, and the tip vortex structure obtained by Particle Image Velocimetry (PIV). A good correlation among these quantities is obtained, describing the effect of the active techniques on the BVI conditions. The experiments show that the blade/vortex miss distance is more dominant for BVI noise than the other two BVI governing factors, namely blade lift and vortex strength at the moment of BVI.

  15. Experimental investigation of a bubbly flow by means of an image analysis technique

    SciTech Connect

    Schmidl, W.D.; Hassan, Y.A.; Ortiz-Villafuerte, J.

    1996-12-31

    Particle Image Velocimetry (PIV) is a non-intrusive measurement technique which can be used to study the structure of various fluid flows. PIV is used to measure the time-varying full-field velocity data of a particle-seeded flow field within either a two-dimensional plane or a three-dimensional volume. PIV is a very efficient measurement technique since it can obtain both qualitative and quantitative spatial information about the flow field being studied. These data can be further processed to obtain derived quantities such as vorticity and pathlines. Other flow measurement techniques (Laser Doppler Velocimetry, Hot Wire Anemometry, etc.) only provide quantitative information at a single point. PIV can be used to study turbulence structures if a sufficient amount of data can be acquired and analyzed, and it can also be extended to study two-phase flows if both phases can be distinguished. In this study, the flow structure around a bubble rising in a pipe filled with water was studied in three dimensions. The velocity of the rising bubble and the velocity field of the surrounding water were measured. The turbulence intensities and Reynolds stresses were then calculated from the experimental data.
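
    The core step of PIV processing is the cross-correlation of interrogation windows between two consecutive frames. The sketch below is a minimal, generic FFT-based correlation example on a synthetic image pair, not the specific algorithm used in the study.

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the mean particle displacement (dy, dx) between two
    interrogation windows using an FFT-based cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)                      # zero lag at the center
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center                    # displacement in pixels

# Synthetic check: shift a random "particle image" by (3, -2) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))             # ~[ 3 -2]
```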

  16. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, Louis J.

    1989-01-01

    The NASA Propagation Effects Handbook for Satellite Systems Design provides a systematic compilation of the major propagation effects experienced on space-Earth paths in the 10 to 100 GHz frequency band region. It provides both a detailed description of the propagation phenomena and a summary of the impact of the effects on communications system design and performance. Chapters 2 through 5 describe the propagation effects, prediction models, and available experimental data bases. In Chapter 6, design techniques and prediction methods available for evaluating propagation effects on space-Earth communication systems are presented. Chapter 7 addresses the system design process, how propagation effects should be considered in system design and performance, and how they can be mitigated. Examples of operational and planned Ku-, Ka-, and EHF-band satellite communications systems are given.
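
    As a toy illustration of how a propagation impairment enters a link budget, the sketch below combines free-space path loss with a rain attenuation term. The EIRP, G/T, and rain attenuation numbers are placeholders for illustration only; in practice the rain margin would come from a prediction model of the kind described in the handbook.

```python
import math

def free_space_path_loss_db(distance_km, freq_ghz):
    """Free-space path loss in dB for distance in km and frequency in GHz."""
    return 92.45 + 20.0 * math.log10(distance_km) + 20.0 * math.log10(freq_ghz)

# Illustrative Ka-band GEO downlink numbers (placeholders, not handbook values).
eirp_dbw = 52.0            # satellite EIRP
gt_db_per_k = 20.0         # ground terminal G/T
rain_attenuation_db = 8.0  # would come from a propagation prediction model
boltzmann_dbw = -228.6     # 10*log10(k), in dBW/K/Hz

fspl = free_space_path_loss_db(38_000.0, 20.0)
cn0_clear = eirp_dbw + gt_db_per_k - fspl - boltzmann_dbw
cn0_rain = cn0_clear - rain_attenuation_db
print(f"FSPL = {fspl:.1f} dB, C/N0 clear sky = {cn0_clear:.1f} dB-Hz, "
      f"in rain = {cn0_rain:.1f} dB-Hz")
```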

  17. Experimental Guidelines for Studies Designed to Investigate the Impact of Antioxidant Supplementation on Exercise Performance

    PubMed Central

    Powers, Scott K.; Smuder, Ashley J.; Kavazis, Andreas N.; Hudson, Matthew B.

    2010-01-01

    Research interest in the effects of antioxidants on exercise-induced oxidative stress and human performance continues to grow as new scientists enter this field. Consequently, there is a need to establish an acceptable set of criteria for monitoring antioxidant capacity and oxidative damage in tissues. Numerous reports have described a wide range of assays to detect both antioxidant capacity and oxidative damage to biomolecules, but many techniques are not appropriate in all experimental conditions. Here, the authors present guidelines for selecting and interpreting methods that can be used by scientists to investigate the impact of antioxidants on both exercise performance and the redox status of tissues. Moreover, these guidelines will be useful for reviewers who are assigned the task of evaluating studies on this topic. The set of guidelines contained in this report is not designed to be a strict set of rules, because often the appropriate procedures depend on the question being addressed and the experimental model. Furthermore, because no individual assay is guaranteed to be the most appropriate in every experimental situation, the authors strongly recommend using multiple assays to verify a change in biomarkers of oxidative stress or redox balance. PMID:20190346

  18. Music and video iconicity: theory and experimental design.

    PubMed

    Kendall, Roger A

    2005-01-01

    Experimental studies on the relationship between quasi-musical patterns and visual movement have largely focused on either referential, associative aspects or syntactical, accent-oriented alignments. Both of these are very important; however, between the referential and the areferential lies a domain where visual pattern perceptually connects to musical pattern; this is iconicity. The temporal syntax of accent structures in iconicity is hypothesized to be important. Beyond that, a multidimensional visual space connects to musical patterning through the mapping of visual time/space to musical time/magnitudes. Experimental visual and musical correlates are presented and comparisons to previous research are provided.

  19. Design Techniques for Uniform-DFT, Linear Phase Filter Banks

    NASA Technical Reports Server (NTRS)

    Sun, Honglin; DeLeon, Phillip

    1999-01-01

    Uniform-DFT filter banks are an important class of filter banks and their theory is well known. One notable characteristic is their very efficient implementation when using polyphase filters and the FFT. Separately, linear phase filter banks, i.e., filter banks in which the analysis filters have linear phase, are also an important class of filter banks and are desired in many applications. Unfortunately, it has been proved that one cannot design critically sampled, uniform-DFT, linear phase filter banks that achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and in addition prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
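
    A minimal sketch of what a uniform-DFT analysis bank computes is given below in its direct (DFT-modulated) form: every channel filter is the lowpass prototype modulated to a different center frequency, followed by decimation. The polyphase/FFT structure mentioned above computes the same subbands far more efficiently. The windowed-sinc prototype here is a simple placeholder, not the least-squares prototype designed in the paper.

```python
import numpy as np

def uniform_dft_analysis(x, prototype, M):
    """Uniform-DFT analysis bank (direct form): channel m filters the input
    with the prototype modulated by exp(j*2*pi*m*n/M), then decimates by M."""
    n = np.arange(len(prototype))
    subbands = []
    for m in range(M):
        h_m = prototype * np.exp(2j * np.pi * m * n / M)   # modulated filter
        y = np.convolve(x, h_m)[::M]                       # filter, decimate
        subbands.append(y)
    return np.array(subbands)

# Example: 8 channels with a naive length-64 windowed-sinc lowpass prototype.
M = 8
n = np.arange(64)
prototype = np.sinc((n - 31.5) / M) * np.hamming(64) / M
x = np.random.randn(2048)
print(uniform_dft_analysis(x, prototype, M).shape)  # (M, n_decimated_samples)
```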

  20. Better Than a Petaflop: The Power of Efficient Experimental Design

    DTIC Science & Technology

    2011-12-01

    2004. “Data farming: Discovering surprise”. In Proceedings of the 2004 Winter Simulation Conference, edited by R. G. Ingalls, M. D. Rossetti , J. S...efficient experimental design”. In Proceedings of the 2009 Winter Simulation Conference, edited by M. D. Rossetti , R. R. Hill, B. Johansson, A. Dunkin

  1. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  2. Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques

    DTIC Science & Technology

    2013-03-01

    simply consists of an NMOS transistor (Q) and a memristor. When the input Vin is low, the transistor Q is turned off. Thus, the output Vout is...connected to ground through the memristor. Conversely, when Vin is high, turning Q on, the memristance M and the equivalent transistor resistance (RQ...synapse design was dependent on the equivalent resistance (effectively, the size) of the Q transistor (RQ). A larger Q would offer a wider range of Vout

  3. Utilizing numerical techniques in turbofan inlet acoustic suppressor design

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1982-01-01

    Numerical theories in conjunction with previously published analytical results are used to augment current analytical theories in the acoustic design of a turbofan inlet nacelle. In particular, a finite element-integral theory is used to study the effect of the inlet lip radius on the far-field radiation pattern and to determine the optimum impedance in an actual engine environment. For some single-mode JT15D data, the numerical theory and experiment are found to be in good agreement.

  4. Fungal mediated silver nanoparticle synthesis using robust experimental design and its application in cotton fabric

    NASA Astrophysics Data System (ADS)

    Velhal, Sulbha Girish; Kulkarni, S. D.; Latpate, R. V.

    2016-09-01

    Among the different methods employed for the synthesis of nanoparticles, the biological method is most favorable and quite well established. Among microorganisms, the use of fungi in the biosynthesis of silver nanoparticles has a greater advantage over other microbial mediators. In this study, intracellular synthesis of silver nanoparticles from Aspergillus terreus (Thom) MTCC632 was carried out. We observed that the synthesis of silver nanoparticles depended on factors such as temperature, amount of biomass, and concentration of silver ions in the reaction mixture. Hence, optimization of the biosynthesis over these parameters was carried out using the statistical tool of robust experimental design. The size and morphology of the synthesized nanoparticles were determined using the X-ray diffraction technique, field emission scanning electron microscopy, energy-dispersive spectroscopy, and transmission electron microscopy. Nano-embedded cotton fabric was further prepared and studied for its antibacterial properties.
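
    As a hedged illustration of the kind of robust (Taguchi-style) analysis the abstract refers to, the fragment below computes larger-the-better signal-to-noise ratios for a few hypothetical runs. The factor levels and yield values are invented placeholders, not the study's data, and only a subset of a full orthogonal array is shown.

```python
import numpy as np

# Hypothetical runs: three factors (temperature, biomass, Ag+ concentration),
# each row one experimental setting with replicated responses (placeholders).
runs = [
    {"temp": 25, "biomass": 5,  "agno3": 0.5, "yields": [0.42, 0.45]},
    {"temp": 25, "biomass": 10, "agno3": 1.0, "yields": [0.55, 0.52]},
    {"temp": 35, "biomass": 5,  "agno3": 1.0, "yields": [0.61, 0.64]},
    {"temp": 35, "biomass": 10, "agno3": 0.5, "yields": [0.50, 0.48]},
]

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Higher S/N identifies settings that give a high, consistent response.
for run in runs:
    print(run["temp"], run["biomass"], run["agno3"],
          round(sn_larger_is_better(run["yields"]), 2))
```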

  5. Computer graphics techniques for aircraft EMC analysis and design

    NASA Astrophysics Data System (ADS)

    Kubina, S. J.; Bhartia, P.

    1983-10-01

    A comprehensive computer-aided system for the prediction of the potential interaction between avionics systems, with special emphasis on antenna-to-antenna coupling, is described. The methodology is applicable throughout the life cycle of an avionic/weapon system, including system upgrades and retrofits. As soon as aircraft geometry and preliminary systems information become available, the computer codes can be used to selectively display proposed antenna locations, emitter/receptor response characteristics, electromagnetic interference (EMI) margins, and the actual ray-optical paths of maximum antenna-to-antenna coupling for each potentially interacting antenna set. Antennas can be interactively relocated by trackball (or joystick) and the analysis repeated at will for optimization or installation design study purposes. The codes can significantly simplify the task of the designer/analyst in effectively identifying critical interactions among an overwhelmingly large set of potential ones. In addition, the system is an excellent design, development, and analysis tool which identifies, both numerically and pictorially, the EMI interdependencies among subsystems.

  6. Integrating RFID technique to design mobile handheld inventory management system

    NASA Astrophysics Data System (ADS)

    Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung

    2008-04-01

    An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.

  7. 78 FR 79622 - Endangered and Threatened Species: Designation of a Nonessential Experimental Population of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-31

    ... Threatened Species: Designation of a Nonessential Experimental Population of Central Valley Spring-Run...), designate a nonessential experimental population of Central Valley spring-run Chinook salmon (Oncorhynchus... Valley spring-run Chinook salmon (hereafter, CV spring-run Chinook salmon) to the San Joaquin...

  8. Web-Based Learning Support for Experimental Design in Molecular Biology: A Top-Down Approach

    ERIC Educational Resources Information Center

    Aegerter-Wilmsen, Tinri; Hartog, Rob; Bisseling, Ton

    2003-01-01

    An important learning goal of a molecular biology curriculum is the attainment of a certain competence level in experimental design. Currently, undergraduate students are confronted with experimental approaches in textbooks, lectures and laboratory courses. However, most students do not reach a satisfactory level of competence in the designing of…

  9. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…
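
    A minimal sketch of the propensity score idea described above follows: estimate the probability of group membership from covariates, then match treated units to controls with similar scores before comparing outcomes. The data are synthetic, the covariates and effect size are hypothetical, and scikit-learn's LogisticRegression is used purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nearest_neighbor_match(ps_treated, ps_control):
    """Match each treated unit to the control with the closest propensity
    score (with replacement); returns indices into the control group."""
    return np.array([np.argmin(np.abs(ps_control - p)) for p in ps_treated])

# Hypothetical data: X = covariates, z = non-random group indicator, y = outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
z = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # selection depends on X
y = 2.0 * z + X[:, 0] + rng.normal(size=500)           # true group effect = 2

ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]  # propensity scores
treated, control = np.where(z == 1)[0], np.where(z == 0)[0]
matches = control[nearest_neighbor_match(ps[treated], ps[control])]
att = np.mean(y[treated] - y[matches])                  # matched estimate
print(f"naive difference: {y[z == 1].mean() - y[z == 0].mean():.2f}, "
      f"matched estimate: {att:.2f}")
```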

  10. Mission-based Scenario Research: Experimental Design And Analysis

    DTIC Science & Technology

    2012-01-01

    loudspeakers. One of the pre-recorded voices was a simulated tactical operating commander ( TOC ) who provided the mission directives a Commander would...expect on a patrol mission. One of the experimenters also operated a soundboard with controls to activate pre-recorded TOC responses, facilitating...simulated interactions between the TOC and Commander; for example, one button allowed the TOC to respond “Roger” when the Commander called in mission

  11. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers

    PubMed Central

    Eriksson, Tobias J. R.; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N.; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single shot signal-to-noise ratio (SNR)≃15 dB in transmit–receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  12. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    DOE PAGES

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  13. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    SciTech Connect

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  14. Comparison of visibility measurement techniques for forklift truck design factors.

    PubMed

    Choi, Chin-Bong; Park, Peom; Kim, Young-Ho; Susan Hallbeck, M; Jung, Myung-Chul

    2009-03-01

    This study applied the light bulb shadow test, a manikin vision assessment test, and an individual test to a forklift truck to identify forklift truck design factors influencing visibility. The light bulb shadow test followed the ISO/DIS 13564-1 standard for traveling and maneuvering tests with four test paths (Test Nos. 1, 3, 4, and 6). Digital human and forklift truck models were developed for the manikin vision assessment test with CATIA V5R13 human modeling solutions. Six participants performed the individual tests. Both of the latter tests employed parameters similar to those of the light bulb shadow test. The individual test showed better visibility, with fewer shadowed grids and a wider distribution of them than the other two tests, owing to eye movement and anthropometric differences. The design factors of load backrest extension, lift chain, hose, dashboard, and steering wheel should be the first factors considered to improve visibility, especially when a forklift truck mainly performs forward traveling tasks in an open area.

  15. Photon spectra calculation for an Elekta linac beam using experimental scatter measurements and Monte Carlo techniques.

    PubMed

    Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G

    2008-01-01

    The present work is centered on reconstructing the primary beam photon spectrum of a linear accelerator by means of a scatter analysis method. This technique is based on irradiating the isocenter of a rectangular methacrylate block placed at a 100 cm distance from the surface and measuring the scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams and to register the scatter signal after interaction with the attenuator. The measured ionization values allow the spectrum to be calculated as a sum of individual mono-energetic energy bins using the Schiff bremsstrahlung model. The measurements were made on an Elekta Precise linac using a 6 MeV photon beam. Relative depth and profile dose curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%.
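
    The unfolding step described above can be cast as a linear inverse problem: measured scatter signals equal a response matrix (one column per mono-energetic bin, obtained from the Monte Carlo runs) times the unknown bin weights. The sketch below solves such a system with non-negative least squares on entirely synthetic stand-in numbers; it is not the Elekta data or the exact solver used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-in: response[i, j] = signal at detector/angle i per unit
# fluence in energy bin j (in practice these would come from the MCNP5 runs).
rng = np.random.default_rng(2)
n_detectors, n_bins = 12, 6
response = rng.random((n_detectors, n_bins))

true_spectrum = np.array([0.05, 0.3, 0.4, 0.15, 0.08, 0.02])   # bin weights
measured = response @ true_spectrum
measured += rng.normal(scale=0.01 * measured.max(), size=n_detectors)  # noise

weights, residual = nnls(response, measured)   # non-negative bin weights
weights /= weights.sum()                       # normalize to a unit spectrum
print(np.round(weights, 3), round(residual, 4))
```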

  16. Experimental studies of injection-stream turbulence on film cooling using a short-duration technique

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Tsou, Fu-Kang

    1986-12-01

    A short-duration technique was developed to study experimentally the effect of injection-stream turbulence on film cooling downstream of a two-dimensional slot, utilizing the main flow produced behind the expansion waves in an 11-m-long wind tunnel. Hot-wire measurements of temperatures, velocities, and turbulence characteristics were conducted to demonstrate the mixing process between the main and injection streams. Heat fluxes with constant wall temperature boundary conditions were measured. Film cooling effectiveness and heat transfer coefficients were then determined using the superposition method for constant-property flows. A three-region model was used to correlate the heat transfer data. Results show that increases in the injection-turbulence intensities not only reduce the film cooling effectiveness but also increase the heat transfer coefficients in the potential-core and main regions.

  17. Experimental analysis of mechanical response of stabilized occipitocervical junction by 3D mark tracking technique

    NASA Astrophysics Data System (ADS)

    Germaneau, A.; Doumalin, P.; Dupré, J. C.; Brèque, C.; Brémand, F.; D'Houtaud, S.; Rigoard, P.

    2010-06-01

    This study presents a biomechanical comparison of several stabilization solutions for the occipitocervical junction. Four kinds of occipito-cervical fixation are analysed in this work: lateral plates fixed by two kinds of screws, lateral plates fixed by hooks, and a median plate. To study the mechanical rigidity of each one, tests were performed on human skulls by applying loads and studying the mechanical response of the fixations and the bone. For this experimental analysis, a specific setup has been developed to impose a load corresponding to the flexion-extension physiological movements. A 3D mark tracking technique is employed to measure 3D displacement fields on the bone and on the fixations. Observations of the displacement evolution on the bone for each fixation show the different rigidities provided by each solution.

  18. Experimental observation of silver and gold penetration into dental ceramic by means of a radiotracer technique

    SciTech Connect

    Moya, F.; Payan, J.; Bernardini, J.; Moya, E.G.

    1987-12-01

    A radiotracer technique was used to study silver and gold diffusion into dental porcelain under experimental conditions close to the real conditions in prosthetic laboratories for porcelain bakes. It was clearly shown that these non-oxidizable elements were able to diffuse into the ceramic as well as oxidizable ones. The penetration depth varied widely according to the element. The ratio DAg/DAu was about 10^3 around 850 degrees C. In contrast to gold, the silver diffusion rate was high enough to allow silver, from the metallic alloy, to be present at the external ceramic surface after diffusion into the ceramic. Hence, the greening of dental porcelains baked on silver-rich alloys could be explained mainly by a solid-state diffusion mechanism.

  19. Characterization of Hardening by Design Techniques on Commercial, Small Feature Sized Field-Programmable Gate Arrays

    DTIC Science & Technology

    2009-03-01

    AFIT/GE/ENG/09-43 CHARACTERIZATION OF HARDENING BY DESIGN TECHNIQUES ON COMMERCIAL, SMALL FEATURE SIZED FIELD-PROGRAMMABLE GATE ARRAYS THESIS...The purpose of which is to determine the radiation effects and characterize the improvements of various hardening by design techniques. The...Distributed RAM memory elements that are loaded both with ECC and non-error corrected data. The circuit is designed to check for errors in memory data, stuck

  20. Development of experimental verification techniques for non-linear deformation and fracture.

    SciTech Connect

    Moody, Neville Reid; Bahr, David F.

    2003-12-01

    This project covers three distinct features of thin film fracture and deformation in which the current experimental technique of nanoindentation demonstrates limitations. The first feature is film fracture, which can be generated either by nanoindentation or by bulge testing of thin films. Examples of both tests will be shown, in particular for oxide films on metallic or semiconductor substrates. Nanoindentations were made into oxide films on aluminum and titanium substrates for two cases: one where the metal was a bulk (effectively single crystal) material and the other where the metal was a 1 μm thick film grown on a silica or silicon substrate. In both cases indentation was used to produce discontinuous loading curves, which indicate film fracture after plastic deformation of the metal. For the oxides on bulk metals, fracture occurred at reproducible loads, and the tensile stresses in the films at fracture were approximately 10 and 15 GPa for the aluminum and titanium oxides, respectively. Similarly, bulge tests of piezoelectric oxide films have been carried out and demonstrate film fracture at stresses of only hundreds of MPa, suggesting the importance of defects and film thickness in evaluating film strength. The second feature of concern is film adhesion. Several qualitative and quantitative tests exist today that measure the adhesion properties of thin films. A relatively new technique that uses stressed overlayers to measure adhesion has been proposed and extensively studied. Delamination of thin films manifests itself in the form of either telephone-cord or straight buckles. The buckles are used to calculate the interfacial fracture toughness of the film-substrate system. Nanoindentation can be utilized if more energy is needed to initiate buckling of the film system. Finally, deformation in metallic systems can lead to non-linear deformation due to 'bursts' of dislocation activity during nanoindentation. An experimental study to examine the structure of dislocations around
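
    For the stressed-overlayer buckling approach mentioned above, a minimal sketch of the kind of post-buckling analysis commonly applied to straight-sided buckles follows. It assumes the standard Hutchinson-Suo-type relations for a clamped, straight-sided buckle; the stress, thickness, and modulus values are placeholders, not the project's measurements.

```python
import math

def buckle_adhesion_energy(sigma, h, b, E, nu):
    """Straight-sided buckle analysis (assumed Hutchinson-Suo form).

    sigma : biaxial compressive film stress (Pa)
    h     : film thickness (m)
    b     : buckle half-width (m)
    E, nu : film Young's modulus (Pa) and Poisson ratio
    Returns (critical buckling stress, energy release rate in J/m^2).
    """
    sigma_c = (math.pi**2 / 12.0) * (E / (1.0 - nu**2)) * (h / b) ** 2
    G = (1.0 - nu**2) * h / (2.0 * E) * (sigma - sigma_c) * (sigma + 3.0 * sigma_c)
    return sigma_c, G

# Illustrative numbers only: 1 um film, 20 um buckle half-width, 1 GPa stress.
print(buckle_adhesion_energy(sigma=1.0e9, h=1.0e-6, b=20.0e-6, E=150e9, nu=0.25))
```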

  1. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The paper summarizes the results obtained in an exploratory evaluation of ceramics for automobile thermal reactors. Candidate ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting 1100 hours in engine dynamometer tests and more than 38,600 kilometers (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  2. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The results obtained in an exploratory evaluation of ceramics for automobile thermal reactors are summarized. Candidate ceramic materials were evaluated in several reactor designs by using both engine-dynamometer and vehicle road tests. Silicon carbide contained in a corrugated-metal support structure exhibited the best performance, lasting 1100 hr in engine-dynamometer tests and more than 38,600 km (24000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as those containing silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  3. New Materials Design Through Friction Stir Processing Techniques

    SciTech Connect

    Buffa, G.; Fratini, L.; Shivpuri, R.

    2007-04-07

    Friction Stir Welding (FSW) has attracted large interest in the scientific community and, in recent years, also in the industrial environment, due to the advantages of this solid-state welding process with respect to classic ones. The complex material flow occurring during the process plays a fundamental role, since it determines dramatic changes in the material microstructure of the so-called weld nugget, which affects the effectiveness of the joints. What is more, Friction Stir Processing (FSP) is mainly being considered for producing high-strain-rate-superplastic (HSRS) microstructures in commercial aluminum alloys. The aim of the present research is the development of a locally composite material through the Friction Stir Processing (FSP) of two AA7075-T6 blanks and an insert of a different material. The results of a preliminary experimental campaign, carried out by varying the additional material placed at the sheet interface under different conditions, are presented. Micro- and macro-observations of the joints thus obtained permitted investigation of the effects of the process on the overall joint performance.

  4. Design on intelligent gateway technique in home network

    NASA Astrophysics Data System (ADS)

    Hu, Zhonggong; Feng, Xiancheng

    2008-12-01

    Home networks, built on digitization, multimedia, mobility, broadband, and real-time interaction, are receiving more and more attention from the market because they can provide diverse and personalized integrated services in information, communication, entertainment, education, and health care. Home network product development has therefore become a focus of the related industries. In this paper, the concept of the home network and its overall reference model are introduced first, followed by the core techniques and the communication standards related to the home network. A key analysis is made of the functions of the home gateway, the software function modules, the key technologies of the client-side software architecture, and the development trends of home network audio-visual entertainment services. The present state of home gateway products, future development trends, and application solutions for digital home services are then introduced. Finally, the development of home network products and the resulting digital home network industry are discussed; this development drives growth in related software industries such as communications, consumer electronics, computing, and gaming, as well as in the real-estate industry.

  5. Development of experimental techniques to study protein and nucleic acid structures

    SciTech Connect

    Trewhella, J.; Bradbury, E.M.; Gupta, G.; Imai, B.; Martinez, R.; Unkefer, C.

    1996-04-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This research project sought to develop experimental tools for structural biology, specifically those applicable to three-dimensional, biomolecular-structure analysis. Most biological systems function in solution environments, and the ability to study proteins and polynucleotides under physiologically relevant conditions is of paramount importance. The authors have therefore adopted a three-pronged approach which involves crystallographic and nuclear magnetic resonance (NMR) spectroscopic methods to study protein and DNA structures at high (atomic) resolution as well as neutron and x-ray scattering techniques to study the complexes they form in solution. Both the NMR and neutron methods benefit from isotope labeling strategies, and all provide experimental data that benefit from the computational and theoretical tools being developed. The authors have focused on studies of protein-nucleic acid complexes and DNA hairpin structures important for understanding the regulation of gene expression, as well as the fundamental interactions that allow these complexes to form.

  6. Combustion behavior of single coal-water slurry droplets, Part 1: Experimental techniques

    SciTech Connect

    Levendis, Y.A.; Metghalchi, M.; Wise, D.

    1991-12-31

    Techniques to produce single droplets of coal-water slurries have been developed in order to study the combustion behavior of the slurries. All stages of slurry combustion are of interest to the present study; however, emphasis will be given to the combustion of the solid agglomerate char which remains after the water evaporation and devolatilization periods. An experimental facility is under construction where combustion of coal-water slurries will be monitored at a variety of furnace temperatures and in various oxidizing atmospheres. The effects of the initial size of the slurry droplet and the solids loading (coal-to-water ratio) will be investigated. A drop-tube, laminar-flow furnace coupled to a near-infrared ratio pyrometer will be used to monitor temperature-time histories of single particles from ignition to extinction. This paper describes the experimental facility built to date and presents results obtained by numerical analysis that help in understanding the convective and radiative environment in the furnace.
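
    The ratio (two-color) pyrometer mentioned above infers particle temperature from the ratio of intensities at two wavelengths. A minimal sketch under the Wien approximation and a gray-body assumption (equal emissivity at both wavelengths) is given below; the wavelengths are placeholders, not the instrument's actual bands.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def two_color_temperature(i1, i2, lam1, lam2):
    """Temperature (K) from two spectral intensities, assuming the Wien
    approximation I(lam, T) ~ eps * lam**-5 * exp(-C2/(lam*T)) and eps1 = eps2."""
    numerator = C2 * (1.0 / lam2 - 1.0 / lam1)
    denominator = math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1)
    return numerator / denominator

# Round-trip check with a synthetic 2000 K gray body at 0.8 and 1.0 um.
lam1, lam2, T = 0.8e-6, 1.0e-6, 2000.0
intensity = lambda lam: lam**-5 * math.exp(-C2 / (lam * T))
print(two_color_temperature(intensity(lam1), intensity(lam2), lam1, lam2))  # ~2000
```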

  7. Advanced Techniques for Seismic Protection of Historical Buildings: Experimental and Numerical Approach

    SciTech Connect

    Mazzolani, Federico M.

    2008-07-08

    The seismic protection of historical and monumental buildings, namely those dating from the ancient age up to the 20th century, is attracting greater and greater interest, above all in the Euro-Mediterranean area, whose cultural heritage is strongly susceptible to severe damage or even collapse due to earthquakes. The cultural importance of historical and monumental constructions limits, in many cases, the possibility of upgrading them from the seismic point of view, due to the fear of using intervention techniques which could have detrimental effects on their cultural value. Consequently, great interest is growing in the development of sustainable methodologies for the use of Reversible Mixed Technologies (RMTs) in the seismic protection of existing constructions. RMTs, in fact, are conceived to exploit the peculiarities of innovative materials and special devices, and they allow ease of removal when necessary. This paper deals with the experimental and numerical studies, framed within the EC PROHITECH research project, on the application of RMTs to historical and monumental constructions mainly belonging to the cultural heritage of the Euro-Mediterranean area. The experimental tests and the numerical analyses are carried out at five different levels, namely full-scale models, large-scale models, sub-systems, devices, and materials and elements.

  8. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed…

  9. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
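
    To make the cost-constrained allocation problem concrete, the sketch below uses the standard two-level (cluster-randomized) result that the variance-optimal number of subjects per cluster depends on the cluster/subject cost ratio and the intraclass correlation. This is a hedged illustration of the same kind of trade-off, not the authors' general model; the budget, costs, and ICC are placeholders.

```python
import math

def optimal_allocation(budget, cost_cluster, cost_subject, icc):
    """Two-level design: choose subjects per cluster n and number of clusters J
    under budget = J * (cost_cluster + n * cost_subject).
    Standard result: n_opt = sqrt((cost_cluster / cost_subject) * (1 - icc) / icc).
    """
    n_opt = math.sqrt((cost_cluster / cost_subject) * (1.0 - icc) / icc)
    n_opt = max(1, round(n_opt))
    clusters = int(budget // (cost_cluster + n_opt * cost_subject))
    return clusters, n_opt

# Placeholder costs: 300 per cluster (e.g., a school), 10 per subject, ICC = 0.10.
print(optimal_allocation(budget=30_000, cost_cluster=300, cost_subject=10, icc=0.10))
```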

  10. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  11. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    ERIC Educational Resources Information Center

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  12. A review of experimental techniques to produce a nacre-like structure.

    PubMed

    Corni, I; Harvey, T J; Wharton, J A; Stokes, K R; Walsh, F C; Wood, R J K

    2012-09-01

    The performance of man-made materials can be improved by exploring new structures inspired by the architecture of biological materials. Natural materials, such as nacre (mother-of-pearl), can have outstanding mechanical properties due to their complicated architecture and hierarchical structure at the nano-, micro- and meso-levels which have evolved over millions of years. This review describes the numerous experimental methods explored to date to produce composites with structures and mechanical properties similar to those of natural nacre. The materials produced have sizes ranging from nanometres to centimetres, processing times varying from a few minutes to several months and a different range of mechanical properties that render them suitable for various applications. For the first time, these techniques have been divided into those producing bulk materials, coatings and free-standing films. This is due to the fact that the material's application strongly depends on its dimensions and different results have been reported by applying the same technique to produce materials with different sizes. The limitations and capabilities of these methodologies have been also described.

  13. Experimental characterization of sulfate damage of concrete based on the harmonic wave modulation technique

    NASA Astrophysics Data System (ADS)

    Yin, Tingyuan; Meng, Wanlin; Talebzadeh, Neda; Chen, Jun

    2017-02-01

    The objective of this paper is to characterize the cracking progression of concrete samples subjected to sulfate attack cycles by employing a nonlinear wave modulation technique. Sidebands in the frequency domain (f1±f2) are produced by the modulation of two ultrasonic waves (high frequency f1 and low frequency f2), and the relative amplitude of the sidebands is defined as the nonlinear parameter, which serves as an indicator of structural damage. Differing from previous work where the low-frequency signal was generated by an instrumented hammer, the low-frequency signal in this research is a harmonic wave produced by an electromagnetic exciter, avoiding the uncertainty of manual excitation. Experimental results show that the nonlinear parameter presents an excellent correlation with the progress of material deterioration, indicating that the wave modulation method is capable of discriminating different states of damage. The work validates the feasibility and sensitivity of the nonlinear wave modulation technique based on harmonic signals for damage detection in concrete materials suffering from typical durability problems.
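
    A minimal sketch of extracting a sideband-based damage indicator from a measured response spectrum is shown below. The probe and pump frequencies, modulation depth, and the definition of the index as the sideband-to-carrier amplitude ratio are illustrative assumptions, not the paper's exact parameter.

```python
import numpy as np

def sideband_damage_index(signal, fs, f_low, f_high):
    """Relative sideband amplitude: (A(f_high - f_low) + A(f_high + f_low)) / A(f_high)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    return (amp(f_high - f_low) + amp(f_high + f_low)) / amp(f_high)

# Synthetic mixed response: the high-frequency probe is amplitude-modulated by
# the low-frequency pump to mimic the damage-induced nonlinearity.
fs, f1, f2 = 1.0e6, 200.0e3, 10.0e3       # sample rate, probe, pump (placeholders)
t = np.arange(0, 0.01, 1.0 / fs)
undamaged = np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
damaged = (1 + 0.05 * np.sin(2 * np.pi * f2 * t)) * np.sin(2 * np.pi * f1 * t) \
          + 0.5 * np.sin(2 * np.pi * f2 * t)
print(sideband_damage_index(undamaged, fs, f2, f1),   # ~0 for the intact case
      sideband_damage_index(damaged, fs, f2, f1))     # ~0.05 with modulation
```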

  14. An experimental validation method for questioning techniques that assess sensitive issues.

    PubMed

    Moshagen, Morten; Hilbig, Benjamin E; Erdfelder, Edgar; Moritz, Annie

    2014-01-01

    Studies addressing sensitive issues often yield distorted prevalence estimates due to socially desirable responding. Several techniques have been proposed to reduce this bias, including indirect questioning, psychophysiological lie detection, and bogus pipeline procedures. However, the increase in resources required by these techniques is warranted only if there is a substantial increase in validity as compared to direct questions. Convincing demonstration of superior validity necessitates the availability of a criterion reflecting the "true" prevalence of a sensitive attribute. Unfortunately, such criteria are notoriously difficult to obtain, which is why validation studies often proceed indirectly by simply comparing estimates obtained with different methods. Comparative validation studies, however, provide weak evidence only since the exact increase in validity (if any) remains unknown. To remedy this problem, we propose a simple method that allows for measuring the "true" prevalence of a sensitive behavior experimentally. The basic idea is to elicit normatively problematic behavior in a way that ensures conclusive knowledge of the prevalence rate of this behavior. This prevalence measure can then serve as an external validation criterion in a second step. An empirical demonstration of this method is provided.

  15. An experimental comparative study of 20 Italian opera houses: Measurement techniques

    NASA Astrophysics Data System (ADS)

    Farina, Angelo; Armelloni, Enrico; Martignon, Paolo

    2004-05-01

    By ``acoustical photography'' we mean a set of measured impulse responses, which enables us to ``listen'' to the measured room by means of advanced auralization methods. Once these data sets have been measured, they can be employed in two different ways: objective analysis and listening tests. In fact, it is possible to compute dozens of objective acoustical parameters, describing the temporal texture, the spatial effect, and the frequency-domain coloring of each opera house. On the other hand, by means of the auralization technique, it becomes easy to conduct listening experiments with human subjects. This paper focuses principally on the development and specification of the measurement technique, which is the topic assigned to the research unit of Parma, to which the authors belong. It describes the hardware equipment, the software, the electro-acoustic transducers (microphones and loudspeakers), the measurement positions, the system for automatic displacement of the microphones, and the conditions of the room during the measurements. Experimental results are reported for two opera houses which were used to test the measurement procedure and to show the benefits of the new method over those previously employed.
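
    As an example of one objective parameter computed from such measured impulse responses, the sketch below estimates reverberation time by Schroeder backward integration of the squared impulse response. The synthetic exponential-decay impulse response and the fit limits are illustrative assumptions, not the opera-house data.

```python
import numpy as np

def reverberation_time(ir, fs, db_lo=-5.0, db_hi=-25.0):
    """T20-style estimate: Schroeder backward integration of the squared IR,
    line fit between db_lo and db_hi, extrapolated to a 60 dB decay."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]             # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= db_lo) & (edc_db >= db_hi)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # decay rate, dB per second
    return -60.0 / slope

# Synthetic impulse response with a 1.8 s reverberation time.
fs, rt_true = 8000, 1.8
t = np.arange(0, 3.0, 1.0 / fs)
ir = np.random.default_rng(3).normal(size=t.size) * 10 ** (-3.0 * t / rt_true)
print(reverberation_time(ir, fs))  # ~1.8 s
```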

  16. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    NASA Astrophysics Data System (ADS)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high-intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include the heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed the porosity in a region within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.

  17. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

    Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
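
    A minimal, textbook-style illustration of the Lyapunov-based model-reference adaptation idea is sketched below for a first-order plant. The plant parameters, reference model, and gains are placeholders and the scheme is a generic example, not the report's design procedure.

```python
import numpy as np

# First-order plant dx/dt = a*x + b*u with "unknown" a, b (sign of b known);
# reference model dxm/dt = am*xm + bm*r. Lyapunov-derived adaptive law:
#   u = th_r*r + th_x*x,  dth_r/dt = -g*e*r*sign(b),  dth_x/dt = -g*e*x*sign(b)
a, b = 1.0, 3.0          # plant parameters (placeholders, treated as unknown)
am, bm = -2.0, 2.0       # stable reference model
g, dt = 5.0, 1e-3        # adaptation gain, Euler integration step

x = xm = 0.0
th_r = th_x = 0.0
for k in range(int(20.0 / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave command (excitation)
    u = th_r * r + th_x * x
    e = x - xm                              # tracking error vs reference model
    # Euler integration of plant, reference model, and adaptation laws
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    th_r += dt * (-g * e * r * np.sign(b))
    th_x += dt * (-g * e * x * np.sign(b))

# Ideal matching gains are bm/b and (am - a)/b; the estimates drift toward them.
print(round(th_r, 2), round(th_x, 2))
```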

  18. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design.

  19. Experimental Techniques for Evaluating the Effects of Aging on Impact and High Strain Rate Properties of Triaxial Braided Composite Materials

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael; Roberts, Gary D.; Ruggeri, Charles R.; Gilat, Amos; Matrka, Thomas

    2010-01-01

    An experimental program is underway to measure the impact and high strain rate properties of triaxial braided composite materials and to quantify any degradation in properties as a result of thermal and hygroscopic aging typically encountered during service. Impact tests are being conducted on flat panels using a projectile designed to induce high rate deformation similar to that experienced in a jet engine fan case during a fan blade-out event. The tests are being conducted on as-fabricated panels and panels subjected to various numbers of aging cycles. High strain rate properties are being measured using a unique Hopkinson bar apparatus that has a larger diameter than conventional Hopkinson bars. This larger diameter is needed to measure representative material properties because of the large unit cell size of the materials examined in this work. In this paper the experimental techniques used for impact and high strain rate testing are described and some preliminary results are presented for both as-fabricated and aged composites.
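
    For context on how Hopkinson bar gauge signals are typically reduced to specimen stress and strain, a minimal sketch of the classical one-dimensional Kolsky analysis is given below. The bar wave speed, areas, and the placeholder strain pulses are illustrative assumptions, not data from the apparatus described above.

```python
import numpy as np

def kolsky_reduction(eps_r, eps_t, dt, c0, E_bar, A_bar, A_spec, L_spec):
    """Classical split Hopkinson pressure bar (Kolsky) data reduction.

    eps_r, eps_t : reflected and transmitted bar strain signals (arrays)
    dt           : sample interval (s); c0 : bar wave speed (m/s)
    Returns specimen strain rate, strain, and stress histories.
    """
    strain_rate = -2.0 * c0 * eps_r / L_spec            # from reflected pulse
    strain = np.cumsum(strain_rate) * dt                 # time integration
    stress = E_bar * (A_bar / A_spec) * eps_t            # from transmitted pulse
    return strain_rate, strain, stress

# Placeholder signals standing in for measured strain-gauge records.
t = np.arange(0, 200e-6, 1e-6)
eps_r = -5e-4 * np.ones_like(t)                # constant reflected pulse
eps_t = 4e-4 * (1 - np.exp(-t / 3e-5))         # rising transmitted pulse
rate, strain, stress = kolsky_reduction(eps_r, eps_t, dt=1e-6, c0=5000.0,
                                        E_bar=200e9, A_bar=5.07e-4,
                                        A_spec=1.0e-4, L_spec=0.01)
print(rate[0], strain[-1], stress[-1] / 1e6)   # 1/s, -, MPa
```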

  20. Flight control design using a blend of modern nonlinear adaptive and robust techniques

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolong

    In this dissertation, the modern control techniques of feedback linearization, mu synthesis, and neural network based adaptation are used to design novel control laws for two specific applications: F/A-18 flight control and reusable launch vehicle (an X-33 derivative) entry guidance. For both applications, the performance of the controllers is assessed. As a part of a NASA Dryden program to develop and flight test experimental controllers for an F/A-18 aircraft, a novel method of combining mu synthesis and feedback linearization is developed to design longitudinal and lateral-directional controllers. First of all, the open-loop and closed-loop dynamics of F/A-18 are investigated. The production F/A-18 controller as well as the control distribution mechanism are studied. The open-loop and closed-loop handling qualities of the F/A-18 are evaluated using low order transfer functions. Based on this information, a blend of robust mu synthesis and feedback linearization is used to design controllers for a low dynamic pressure envelope of flight conditions. For both the longitudinal and the lateral-directional axes, a robust linear controller is designed for a trim point in the center of the envelope. Then by including terms to cancel kinematic nonlinearities and variations in the aerodynamic forces and moments over the flight envelope, a complete nonlinear controller is developed. In addition, to compensate for the model uncertainty, linearization error and variations between operating points, neural network based adaptation is added to the designed longitudinal controller. The nonlinear simulations, robustness and handling qualities analysis indicate that the performance is similar to or better than that for the production F/A-18 controllers. When the dynamic pressure is very low, the performance of both the experimental and the production flight controllers is degraded, but Level I handling qualities are still achieved. A new generation of Reusable Launch Vehicles

  1. Improved Experimental Techniques for Analyzing Nucleic Acid Transport Through Protein Nanopores in Planar Lipid Bilayers

    NASA Astrophysics Data System (ADS)

    Costa, Justin A.

    The translocation of nucleic acid polymers across cell membranes is a fundamental requirement for complex life and has greatly contributed to genomic molecular evolution. The diversity of pathways that have evolved to transport DNA and RNA across membranes includes protein receptors, active and passive transporters, endocytic and pinocytic processes, and various types of nucleic acid conducting channels known as nanopores. We have developed a series of experimental techniques, collectively known as "Wicking", that greatly improve the biophysical analysis of nucleic acid transport through protein nanopores in planar lipid bilayers. We have verified the Wicking method using numerous types of classical ion channels, including the well-studied chloride-selective channel CLIC1. We used the Wicking technique to reconstitute α-hemolysin and found that DNA translocation events of types A and B could be routinely observed using this method. Furthermore, measurable differences were observed in the duration of blockade events as DNA length and composition were varied, consistent with previous reports. Finally, we tested the ability of the Wicking technology to reconstitute the dsRNA transporter Sid-1. Exposure to dsRNAs of increasing length and complexity showed measurable differences in the current transitions, suggesting that the charge carrier was dsRNA. However, the translocation events occurred so infrequently that a meaningful electrophysiological analysis was not possible. Alterations in the lipid composition of the bilayer had a minor effect on the frequency of translocation events, but not to such a degree as to permit rigorous statistical analysis. We conclude that in many instances the Wicking method is a significant improvement to the lipid bilayer technique, but it is not an optimal method for analyzing transport through Sid-1. Further refinements to the Wicking method might have future applications in high-throughput DNA sequencing, DNA computation, and

  2. A validated spectrofluorimetric method for the determination of nifuroxazide through coumarin formation using experimental design

    PubMed Central

    2013-01-01

    Background Nifuroxazide (NF) is an oral nitrofuran antibiotic with a wide range of bactericidal activity against gram-positive and gram-negative enteropathogenic organisms. It is formulated either in single form, as an intestinal antiseptic, or in combination with drotaverine (DV) for the treatment of gastroenteritis accompanied by gastrointestinal spasm. Spectrofluorimetry is a convenient and sensitive technique for pharmaceutical quality control. The newly proposed spectrofluorimetric method allows its determination either in single form or in a binary mixture with DV. Furthermore, the experimental conditions were optimized using experimental design, which has many advantages over the older one-variable-at-a-time (OVAT) approach. Results A novel and sensitive spectrofluorimetric method was designed and validated for the determination of NF in pharmaceutical formulations. The method was based upon the formation of a highly fluorescent coumarin compound by the reaction between NF and ethyl acetoacetate (EAA) using sulfuric acid as a catalyst. The fluorescence was measured at 390 nm upon excitation at 340 nm. Experimental design was used to optimize the experimental conditions. The volumes of EAA and sulfuric acid, temperature, and heating time were considered the critical factors to be studied in order to establish optimum fluorescence. Each pair of factors was studied at three levels. Regression analysis revealed good correlation between fluorescence intensity and concentration over the range 20–400 ng/ml. The suggested method was successfully applied for the determination of NF in pure and capsule forms. The procedure was validated in terms of linearity, accuracy, precision, limit of detection, and limit of quantification. The selectivity of the method was investigated by analysis of NF in the presence of the co-formulated drug DV, where no interference was observed. The reaction pathway was suggested and the structure of the fluorescent product was proposed
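
    To make the multi-factor optimization concrete, the sketch below builds a two-factor, three-level full factorial in coded units and fits a quadratic response surface to locate an apparent optimum. The factor labels loosely follow the abstract, but the response values are invented placeholders, not the reported fluorescence data.

```python
import itertools
import numpy as np

# Coded levels (-1, 0, +1) for two of the studied factors (e.g. EAA volume and
# heating time); a 3x3 full factorial with placeholder fluorescence responses.
levels = [-1, 0, 1]
design = np.array(list(itertools.product(levels, levels)), dtype=float)
response = np.array([310, 420, 380, 450, 560, 500, 400, 510, 455])

# Fit F = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 by least squares.
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

# Predict over a fine grid to locate the apparent optimum in coded units.
grid = np.array(list(itertools.product(np.linspace(-1, 1, 41), repeat=2)))
pred = np.column_stack([np.ones(len(grid)), grid[:, 0], grid[:, 1],
                        grid[:, 0] * grid[:, 1], grid[:, 0]**2, grid[:, 1]**2]) @ coef
print("optimum (coded units):", grid[np.argmax(pred)],
      "predicted maximum:", round(float(pred.max())))
```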

  3. Tocorime Apicu: design and validation of an experimental search engine

    NASA Astrophysics Data System (ADS)

    Walker, Reginald L.

    2001-07-01

    In the development of an integrated, experimental search engine, Tocorime Apicu, the incorporation and emulation of the evolutionary aspects of the chosen biological model (honeybees) and the field of high-performance knowledge discovery in databases results in the coupling of diverse fields of research: evolutionary computations, biological modeling, machine learning, statistical methods, information retrieval systems, active networks, and data visualization. The use of computer systems provides inherent sources of self-similar traffic that result from the interaction of file transmission, caching mechanisms, and user-related processes. These user-related processes are initiated by the user, application programs, or the operating system (OS) for the user's benefit. The effect of Web transmission patterns, coupled with these inherent sources of self-similarity associated with the above file system characteristics, provides an environment for studying network traffic. The goal of the study was client-based, but with no user interaction. New methodologies and approaches were needed as network packet traffic increased in the LAN, LAN+WAN, and WAN. Statistical tools and methods for analyzing datasets were used to organize data captured at the packet level for network traffic between individual source/destination pairs. Emulation of the evolutionary aspects of the biological model equips the experimental search engine with an adaptive system model which will eventually have the capability to evolve with an ever-changing World Wide Web environment. The results were generated using a LINUX OS.

  4. Facilitating Preemptive Hardware System Design Using Partial Reconfiguration Techniques

    PubMed Central

    Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implement preemptive systems. In real-time systems, the deadline for a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and then force launching a reconfiguration process to implement a high-priority task. If the asynchronous event is previously scheduled, an explicit activation of the reconfiguration process is performed. If the event cannot be previously programmed, such as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the necessary tasks to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration. PMID:24672292

  5. MSE spectrograph optical design: a novel pupil slicing technique

    NASA Astrophysics Data System (ADS)

    Spanò, P.

    2014-07-01

    The Maunakea Spectroscopic Explorer will be devoted mainly to deep, wide-field spectroscopic surveys at spectral resolutions from ~2000 to ~20000, at visible and near-infrared wavelengths. Simultaneous spectral coverage at low resolution is required, while at high resolution only selected windows can be covered. Moreover, very high multiplexing (3200 objects) must be obtained at low resolution; at higher resolutions a reduced number of objects (~800) can be observed. To meet such demanding requirements, a fiber-fed multi-object spectrograph concept has been designed by pupil-slicing the collimated beam, followed by multiple dispersive and camera optics. Different resolution modes are obtained by introducing anamorphic lenslets in front of the fiber arrays. The spectrograph is able to switch between three resolution modes (2000, 6500, 20000) by removing the anamorphic lenses and exchanging gratings. Camera lenses are fixed in place to increase stability. To enhance throughput, VPH first-order gratings have been preferred over echelle gratings. Moreover, throughput is kept high over all wavelength ranges by splitting the light into multiple arms with dichroic beamsplitters and optimizing the efficiency of each channel by proper selection of glass materials, coatings, and grating parameters.

  6. Design and experimental demonstration of optomechanical paddle nanocavities

    NASA Astrophysics Data System (ADS)

    Healey, Chris; Kaviani, Hamidreza; Wu, Marcelo; Khanaliloo, Behzad; Mitchell, Matthew; Hryciw, Aaron C.; Barclay, Paul E.

    2015-12-01

    We present the design, fabrication, and initial characterization of a paddle nanocavity consisting of a suspended sub-picogram nanomechanical resonator optomechanically coupled to a photonic crystal nanocavity. The optical and mechanical properties of the paddle nanocavity can be systematically designed and optimized, and the key characteristics including mechanical frequency can be easily tailored. Measurements under ambient conditions of a silicon paddle nanocavity demonstrate an optical mode with a quality factor Q_o ~ 6000 near 1550 nm and optomechanical coupling to several mechanical resonances with frequencies ω_m/2π ~ 12–64 MHz, effective masses m_eff ~ 350–650 fg, and mechanical quality factors Q_m ~ 44–327. Paddle nanocavities are promising for optomechanical sensing and nonlinear optomechanics experiments.

  7. High-power CMUTs: design and experimental verification.

    PubMed

    Yamaner, F Yalçin; Olçum, Selim; Oğuz, H Kağan; Bozkurt, Ayhan; Köymen, Hayrettin; Atalar, Abdullah

    2012-06-01

    Capacitive micromachined ultrasonic transducers (CMUTs) have great potential to compete with piezoelectric transducers in high-power applications. As the output pressures increase, nonlinearity of the CMUT must be reconsidered and optimization is required to reduce harmonic distortions. In this paper, we describe a design approach in which uncollapsed CMUT array elements are sized so as to operate at the maximum radiation impedance and have gap heights such that the generated electrostatic force can sustain a plate displacement with full swing at the given drive amplitude. The proposed design enables high output pressures and low harmonic distortions at the output. An equivalent circuit model of the array is used that accurately simulates the uncollapsed mode of operation. The model facilitates the design of CMUT parameters for high-pressure output without the need for computationally intensive FEM tools. The optimized design requires a relatively thick plate compared with a conventional CMUT plate. Thus, we used a silicon wafer as the CMUT plate. The fabrication process involves an anodic bonding process for bonding the silicon plate with the glass substrate. To eliminate the bias voltage, which may cause charging problems, the CMUT array is driven with large continuous wave signals at half of the resonant frequency. The fabricated arrays are tested in an oil tank by applying a 125-V peak 5-cycle burst sinusoidal signal at 1.44 MHz. The applied voltage is increased until the plate is about to touch the bottom electrode to get the maximum peak displacement. The observed pressure is about 1.8 MPa with -28 dBc second harmonic at the surface of the array.

  8. Experimental Design for Stochastic Models of Nonlinear Signaling Pathways Using an Interval-Wise Linear Noise Approximation and State Estimation

    PubMed Central

    Zimmer, Christoph

    2016-01-01

    Background: Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for the estimation of kinetic parameters in models of ordinary differential equations (ODEs). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design and even software tools. However, data from single-cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only a few articles focus on experimental design for stochastic models. Methods: The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results: The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic-size models. PMID:27583802
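
    The central quantity here, the Fisher information matrix, can be illustrated with a generic sketch on the mean of an Immigration-Death model: sensitivities of the model output to the parameters are obtained by finite differences and combined under an additive Gaussian noise assumption. This is only a stand-in for the MSS-based calculation of the article; the parameter values, noise level, and candidate sampling times are assumed.

```python
# Generic sketch: a Fisher information matrix from finite-difference
# sensitivities of a model output, assuming additive Gaussian noise.
# This is not the multiple-shooting (MSS) formulation of the paper.
import numpy as np

def immigration_death(theta, t, x0=0.0):
    """Mean of an immigration-death process: dx/dt = k1 - k2*x."""
    k1, k2 = theta
    return k1 / k2 + (x0 - k1 / k2) * np.exp(-k2 * t)

def fisher_information(theta, t, sigma=1.0, h=1e-6):
    """FIM = J^T J / sigma^2, with J the sensitivity matrix d(output)/d(theta)."""
    theta = np.asarray(theta, dtype=float)
    J = np.zeros((len(t), len(theta)))
    for j in range(len(theta)):
        d = np.zeros_like(theta)
        d[j] = h
        J[:, j] = (immigration_death(theta + d, t) -
                   immigration_death(theta - d, t)) / (2 * h)
    return J.T @ J / sigma**2

theta = (10.0, 0.1)   # k1, k2 (illustrative values)
# Compare two candidate sampling schedules by the determinant of the FIM.
for times in (np.linspace(1, 10, 5), np.linspace(1, 50, 5)):
    F = fisher_information(theta, times)
    print("sampling times", times, "-> det(FIM) =", np.linalg.det(F))
```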

  9. A propagation effects handbook for satellite systems design. A summary of propagation impairments on 10-100 GHz satellite links, with techniques for system design. [tropospheric scattering

    NASA Technical Reports Server (NTRS)

    Kaul, R.; Wallace, R.; Kinal, G.

    1980-01-01

    This handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. Rain systems, rain and attenuation models, depolarization and experimental data are described. The design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. The questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results are addressed in order to bridge the gap between the propagation research data and the classical link budget analysis of Earth-space communications system.
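
    As a flavor of the attenuation modeling such a handbook supports, the sketch below evaluates the widely used power-law form for rain specific attenuation, gamma = k * R^alpha (dB/km), over an effective path length. The coefficients, rain rate, and path geometry are assumed illustrative values, not numbers taken from the handbook.

```python
# Illustrative power-law rain-attenuation estimate; the regression
# coefficients, rain rate, path length, and reduction factor are assumed.
k, alpha = 0.02, 1.2      # assumed coefficients near ~12 GHz
rain_rate = 30.0          # mm/h, example point rain rate
path_km = 5.0             # slant-path length through rain
reduction = 0.8           # assumed effective path-reduction factor

gamma = k * rain_rate ** alpha            # specific attenuation, dB/km
attenuation_db = gamma * path_km * reduction
print(f"specific attenuation = {gamma:.2f} dB/km, "
      f"path attenuation = {attenuation_db:.1f} dB")
```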

  10. Implementation of multivariable control techniques with application to Experimental Breeder Reactor II

    SciTech Connect

    Berkan, R.C. (Dept. of Nuclear Engineering); Upadhyaya, B.R.; Kisner, R.A.

    1990-06-01

    After several successful applications in the aerospace industry, modern control theory methods have recently attracted many control engineers from other engineering disciplines. For advanced nuclear reactors, modern control theory may provide major advantages in safety, availability, and economic aspects. This report is intended to illustrate the feasibility of applying the linear quadratic Gaussian (LQG) compensator in nuclear reactor applications. The LQG design is compared with the existing classical control schemes. Both approaches are tested using the Experimental Breeder Reactor 2 (EBR-2) as the system. The experiments are performed using a mathematical model of the EBR-2 plant. Despite the fact that the controller and plant models do not include all known physical constraints, the results are encouraging. This preliminary study provides an informative, introductory picture for future considerations of using modern control theory methods in the nuclear industry. 10 refs., 25 figs.

  11. An experimental technique for performing 3-D LDA measurements inside whirling annular seals

    NASA Technical Reports Server (NTRS)

    Morrison, Gerald L.; Johnson, Mark C.; Deotte, Robert E., Jr.; Thames, H. Davis, III.; Wiedner, Brian G.

    1992-01-01

    During the last several years, the Fluid Mechanics Division of the Turbomachinery Laboratory at Texas A&M University has developed a rather unique facility with the experimental capability for measuring the flow field inside journal bearings, labyrinth seals, and annular seals. The facility consists of a specially designed 3-D LDA system which is capable of measuring the instantaneous velocity vector within 0.2 mm of a wall while the laser beams are aligned almost perpendicular to the wall. This capability was required to measure the flow field inside journal bearings, labyrinth seals, and annular seals. A detailed description of this facility along with some representative results obtained for a whirling annular seal are presented.

  12. Gladstone-Dale constant for CF4. [experimental design

    NASA Technical Reports Server (NTRS)

    Burner, A. W., Jr.; Goad, W. K.

    1980-01-01

    The Gladstone-Dale constant, which relates the refractive index to density, was measured for CF4 by counting fringes of a two-beam interferometer, one beam of which passes through a cell containing the test gas. The experimental approach and sources of systematic and imprecision errors are discussed. The constant for CF4 was measured at several wavelengths in the visible region of the spectrum. A value of 0.122 cu cm/g with an uncertainty of plus or minus 0.001 cu cm/g was determined for use in the visible region. A procedure for noting the departure of the gas density from the ideal-gas law is discussed.
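
    The measurement rests on the Gladstone-Dale relation n - 1 = K * rho, so the reported constant can be used directly to estimate the refractive index of CF4 at a given density. The short sketch below does this for an ideal-gas density at 0 C and 1 atm, which is an assumed illustrative condition rather than one used in the experiment.

```python
# Worked example of the Gladstone-Dale relation n - 1 = K * rho for CF4,
# using the constant reported above; the gas density is an ideal-gas
# estimate at 0 C and 1 atm, assumed purely for illustration.
K = 0.122            # cm^3/g, Gladstone-Dale constant for CF4 (from the abstract)
M = 88.0             # g/mol, molar mass of CF4
V_molar = 22414.0    # cm^3/mol, ideal-gas molar volume at 0 C, 1 atm
rho = M / V_molar    # g/cm^3 (~3.9e-3)
n = 1.0 + K * rho
print(f"estimated refractive index of CF4: n = {n:.6f}")
```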

  13. Quasi-experimental study designs series - Paper 7: assessing the assumptions.

    PubMed

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Cara Ebert; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-03-29

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research - in particular for the evaluation of healthcare practice, programs and policy - because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions.
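
    To make one of the five designs concrete, the sketch below estimates a difference-in-differences effect on simulated two-period data using the standard interaction-term regression. The data-generating model and effect size are invented for illustration and are not taken from the paper.

```python
# Minimal difference-in-differences sketch on simulated two-period data.
# The outcome model and effect size are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
treated = rng.integers(0, 2, n)    # treatment-group indicator
post = rng.integers(0, 2, n)       # pre/post period indicator
true_effect = 2.0
y = (1.0 + 0.5 * treated + 1.5 * post
     + true_effect * treated * post + rng.normal(0, 1, n))

# OLS with an interaction term: y ~ 1 + treated + post + treated*post.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("DiD estimate of the treatment effect:", round(beta[3], 3))
```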

  14. Spray drying formulation of albendazole microspheres by experimental design. In vitro-in vivo studies.

    PubMed

    García, Agustina; Leonardi, Darío; Piccirilli, Gisela N; Mamprin, María E; Olivieri, Alejandro C; Lamas, María C

    2015-02-01

    Experimental design and optimization techniques were employed to develop chitosan-pectin-carboxymethylcellulose microspheres to improve the oral absorption of albendazole as a model drug. The effect of three factors (chitosan, pectin and carboxymethylcellulose concentrations) was studied on five responses: yield, morphology, dissolution rate at 30 and 60 min, and encapsulation efficiency of the microspheres. During the screening phase, the factors were evaluated in order to identify those which exert a significant effect. Simultaneous multiple-response optimization was then used to identify the experimental conditions under which the system gives the most adequate results. The optimal conditions were found to be: chitosan concentration 1.00% w/v, pectin concentration 0.10% w/v and carboxymethylcellulose concentration 0.20% w/v. The bioavailability of the loaded drug in the optimized microspheres was evaluated in Wistar rats, which showed an area under the curve (AUC) almost 10 times higher than that of the pure drug.

  15. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  16. Chapter 5: Methods and protocols in peripheral nerve regeneration experimental research: part II-morphological techniques.

    PubMed

    Raimondo, Stefania; Fornaro, Michele; Di Scipio, Federica; Ronchi, Giulia; Giacobini-Robecchi, Maria G; Geuna, Stefano

    2009-01-01

    This paper critically overviews the main procedures used for carrying out morphological analysis of peripheral nerve fibers in light, confocal, and electron microscopy. In particular, this paper emphasizes the importance of osmium tetroxide post-fixation as a useful procedure to be adopted independently of the embedding medium. In order to facilitate the use of the described techniques, all protocols are presented in full detail. The pros and cons of each method are critically addressed and practical indications on the different imaging approaches are reported. Moreover, the basic rules of morpho-quantitative stereological analysis of nerve fibers are described, addressing the important concepts of design-based sampling and the disector. Finally, a comparison of stereological analysis of myelinated nerve fibers between paraffin- and resin-embedded rat radial nerves is reported, showing that different embedding procedures might influence the distribution of size parameters.

  17. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  18. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  19. Submillimeter Measurements of Photolysis Products in Interstellar Ice Analogs: A New Experimental Technique

    NASA Technical Reports Server (NTRS)

    Milam, Stefanie N.; Weaver, Susanna Widicus

    2012-01-01

    Over 150 molecular species have been confirmed in space, primarily by their rotational spectra at millimeter/submillimeter wavelengths, which yield an unambiguous identification. Many of the known interstellar organic molecules cannot be explained by gas-phase chemistry. It is now presumed that they are produced by surface reactions of the simple ices and/or grains observed and released into the gas phase by sublimation, sputtering, etc. Additionally, the chemical complexity found in meteorites and samples returned from comets far surpasses that of the remote detections for the interstellar medium (ISM), comets, and planetary atmospheres. Laboratory simulations of interstellar/cometary ices have found, from the analysis of the remnant residue of the warmed laboratory sample, that such molecules are readily formed; however, it has yet to be determined if they are formed during the warm phase or within the ice during processing. Most analysis of the ice during processing reveals molecular changes, though the exact quantities and species formed are highly uncertain with current techniques due to overwhelming features of simple ices. Remote sensing with high resolution spectroscopy is currently the only method to detect trace species in the ISM and the primary method for comets and icy bodies in the Solar System due to limitations of sample return. We have recently designed an experiment to simulate interstellar/cometary/planetary ices and detect trace species employing the same techniques used for remote observations. Preliminary results will be presented.

  20. Experimental, computational, and analytical techniques for diagnosing breast cancer using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Palmer, Gregory M.

    This dissertation presents the results of an investigation into experimental, computational, and analytical methodologies for diagnosing breast cancer using fluorescence and diffuse reflectance spectroscopy. First, the optimal experimental methodology for tissue biopsy studies was determined using an animal study. It was found that the use of freshly excised tissue samples preserved the original spectral line shape and magnitude of the fluorescence and diffuse reflectance. Having established the optimal experimental methodology, a clinical study investigating the use of fluorescence and diffuse reflectance spectroscopy for the diagnosis of breast cancer was undertaken. In addition, Monte Carlo-based models of diffuse reflectance and fluorescence were developed and validated to interpret these data. These models enable the extraction of physically meaningful information from the measured spectra, including absorber concentrations, and scattering and intrinsic fluorescence properties. The model was applied to the measured spectra, and using a support vector machine classification algorithm based on physical features extracted from the diffuse reflectance spectra, it was found that breast cancer could be diagnosed with a cross-validated sensitivity and specificity of 82% and 92%, respectively, which are substantially better than that obtained using a conventional, empirical algorithm. It was found that malignant tissues had lower hemoglobin oxygen saturation, were more scattering, and had lower beta-carotene concentration, relative to the non-malignant tissues. It was also found that the fluorescence model could successfully extract the intrinsic fluorescence line shape from tissue samples. One limitation of the previous study is that a priori knowledge of the tissue's absorbers and scatterers is required. To address this limitation, and to improve upon the method with which fiber optic probes are designed, an alternate approach was developed. This method used a
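
    The classification step described here can be sketched generically: a cross-validated support vector machine applied to physically meaningful features (such as hemoglobin oxygen saturation, scattering, and beta-carotene concentration), with sensitivity and specificity computed from the cross-validated predictions. The features and labels below are synthetic stand-ins, not the clinical data.

```python
# Generic sketch of cross-validated SVM classification on spectroscopy-derived
# features; the data are synthetic stand-ins for the clinical measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 120
labels = rng.integers(0, 2, n)                     # 1 = malignant (synthetic)
# Malignant samples: lower saturation, higher scattering, lower beta-carotene.
features = np.column_stack([
    0.8 - 0.2 * labels + rng.normal(0, 0.08, n),   # Hb oxygen saturation
    10 + 5 * labels + rng.normal(0, 2, n),         # reduced scattering
    1.0 - 0.4 * labels + rng.normal(0, 0.2, n),    # beta-carotene concentration
])

pred = cross_val_predict(SVC(kernel="rbf", C=1.0), features, labels, cv=5)
tp = np.sum((pred == 1) & (labels == 1)); fn = np.sum((pred == 0) & (labels == 1))
tn = np.sum((pred == 0) & (labels == 0)); fp = np.sum((pred == 1) & (labels == 0))
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```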

  1. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design - the Ifakara Experimental Huts - and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  2. Fragment-based drug design: computational & experimental state of the art.

    PubMed

    Hoffer, Laurent; Renaud, Jean-Paul; Horvath, Dragos

    2011-07-01

    Fragment-based screening is an emerging technology which is used as an alternative to high-throughput screening (HTS), and often in parallel. Fragment screening focuses on very small compounds. Because of their small size and simplicity, fragments exhibit a low to medium binding affinity (mM to µM) and must therefore be screened at high concentration in order to detect binding events. Since some issues are associated with high-concentration screening in biochemical assays, biophysical methods are generally employed in fragment screening campaigns. Moreover, these techniques are very sensitive and some of them can give precise information about the binding mode of fragments, which facilitates the mandatory hit-to-lead optimization. One of the main advantages of fragment-based screening is that fragment hits generally exhibit a strong binding with respect to their size, and their subsequent optimization should lead to compounds with better pharmacokinetic properties compared to molecules evolved from HTS hits. In other words, fragments are interesting starting points for drug discovery projects. Besides, the chemical space of low-complexity compounds is very limited in comparison to that of drug-like molecules, and thus easier to explore with a screening library of limited size. Furthermore, the "combinatorial explosion" effect ensures that the resulting combinations of interlinked binding fragments may cover a significant part of "drug-like" chemical space. In parallel to experimental screening, virtual screening techniques, dedicated to fragments or wider compounds, are gaining momentum in order to further reduce the number of compounds to test. This article is a review of the latest news in both experimental and in silico virtual screening in the fragment-based discovery field. Given the specificity of this journal, special attention will be given to fragment library design.

  3. Design and testing of 15kv to 35kv porcelain terminations using new connection techniques

    SciTech Connect

    Fox, J.W.; Hill, R.J.

    1982-07-01

    New techniques for conductor connection in underground cable terminations were investigated in a design of a new distribution class porcelain cable termination. Connections to the conductor were accomplished using set screws, building upon previous designs with additions to assure a conservative design approach. The connector design was tested according to applicable standards for load cycling of connections, and the result appears capable of conservative performance in the operating environment.

  4. Computational Design of Creep-Resistant Alloys and Experimental Validation in Ferritic Superalloys

    SciTech Connect

    Liaw, Peter

    2014-12-31

    A new class of ferritic superalloys containing B2-type zones inside parent L21-type precipitates in a disordered solid-solution matrix, also known as a hierarchical-precipitate strengthened ferritic alloy (HPSFA), has been developed for high-temperature structural applications in fossil-energy power plants. These alloys were designed by the addition of Ti to a previously-studied NiAl-strengthened ferritic alloy (denoted as FBB8 in this study). In the present research, systematic investigations, including advanced experimental techniques, first-principles calculations, and numerical simulations, have been integrated and conducted to characterize the complex microstructures and excellent creep resistance of HPSFAs. The experimental techniques include transmission-electron microscopy, scanning-transmission-electron microscopy, neutron diffraction, and atom-probe tomography, which provide detailed microstructural information on HPSFAs. Systematic tension/compression creep tests revealed that HPSFAs exhibit superior creep resistance compared with the FBB8 and conventional ferritic steels (i.e., the creep rates of HPSFAs are about 4 orders of magnitude lower than those of the FBB8 and conventional ferritic steels). First-principles calculations include interfacial free energies, anti-phase boundary (APB) free energies, elastic constants, and impurity diffusivities in Fe. Combined with kinetic Monte Carlo simulations of interdiffusion coefficients, and the integration of computational thermodynamics and kinetics, these calculations provide considerable understanding of the thermodynamic and mechanical properties of HPSFAs. In addition to the systematic experimental approach and first-principles calculations, a series of numerical tools and algorithms, which assist in the optimization of creep properties of ferritic superalloys, are utilized and developed. These numerical simulation results are compared with the available experimental data and previous first

  5. Development and Experimental Verification of Key Techniques to Validate Remote Sensing Products

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, S. G.; Ge, Y.; Jin, R.; Liu, S. M.; Ma, M. G.; Shi, W. Z.; Li, R. X.; Liu, Q. H.

    2013-05-01

    Validation of remote sensing land products is a fundamental issue for Earth observation. The Ministry of Science and Technology of the People's Republic of China (MOST) launched a high-tech R&D program named 'Development and experimental verification of key techniques to validate remote sensing products' in 2011. This paper introduces the background, scientific objectives and research contents of this project, together with the research results already achieved. The objectives of this project are (1) to build a technical specification for the validation of remote sensing products; (2) to investigate its performance through a comprehensive satellite-aircraft-ground remote sensing experiment, modifying the specification from Step 1 until it reaches the predefined requirement; and (3) to establish a validation network of China for remote sensing products. In summer 2012, with support of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER), field observations have been successfully conducted in the central stream of the Heihe River Basin, a typical inland river basin in northwest China. A flux observation matrix composed of eddy covariance (EC) and large aperture scintillometer (LAS) systems, in addition to a densely distributed eco-hydrological wireless sensor network, has been established to capture multi-scale heterogeneities of evapotranspiration (ET), leaf area index (LAI), soil moisture and temperature. Airborne missions have been flown with payloads of an imaging spectrometer, light detection and ranging (LiDAR), an infrared thermal imager and a microwave radiometer that provide aerial remote sensing observations at various scales. Satellite images with high resolution have been collected and pre-processed, e.g. PROBA-CHRIS and TerraSAR-X. Simultaneously, ground measurements have been conducted over specific sampling plots and transects to obtain validation data sets. With this setup complex problems are addressed, e.g. heterogeneity, scaling, uncertainty, and eventually to

  6. Improved field experimental designs and quantitative evaluation of aquatic ecosystems

    SciTech Connect

    McKenzie, D.H.; Thomas, J.M.

    1984-05-01

    The paired-station concept and a log transformed analysis of variance were used as methods to evaluate zooplankton density data collected during five years at an electrical generation station on Lake Michigan. To discuss the example and the field design necessary for a valid statistical analysis, considerable background is provided on the questions of selecting (1) sampling station pairs, (2) experimentwise error rates for multi-species analyses, (3) levels of Type I and II error rates, (4) procedures for conducting the field monitoring program, and (5) a discussion of the consequences of violating statistical assumptions. Details for estimating sample sizes necessary to detect changes of a specified magnitude are included. Both statistical and biological problems with monitoring programs (as now conducted) are addressed; serial correlation of successive observations in the time series obtained was identified as one principal statistical difficulty. The procedure reduces this problem to a level where statistical methods can be used confidently. 27 references, 4 figures, 2 tables.
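
    The sample-size estimation mentioned above can be illustrated with a back-of-the-envelope calculation on log-transformed paired data: the number of paired samples needed to detect a given fractional change at chosen Type I and II error rates. The formula is the standard normal-approximation expression, and the standard deviation of the paired log-differences and the target change are assumed values, not those of the report.

```python
# Back-of-the-envelope sample-size sketch for paired, log-transformed density
# data; uses the standard normal-approximation formula with assumed inputs.
import math
from statistics import NormalDist

alpha, power = 0.05, 0.80
sigma_logdiff = 0.4      # assumed SD of the log-ratio between paired stations
change = 0.5             # detect a 50% change in density

delta = abs(math.log(1 + change))
z_a = NormalDist().inv_cdf(1 - alpha / 2)
z_b = NormalDist().inv_cdf(power)
n_pairs = math.ceil(((z_a + z_b) * sigma_logdiff / delta) ** 2)
print(f"~{n_pairs} paired samples needed (normal approximation)")
```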

  7. A passive exoskeleton with artificial tendons: design and experimental evaluation.

    PubMed

    van Dijk, Wietse; van der Kooij, Herman; Hekman, Edsko

    2011-01-01

    We developed a passive exoskeleton that was designed to minimize joint work during walking. The exoskeleton makes use of passive structures, called artificial tendons, acting in parallel with the leg. Artificial tendons are elastic elements that are able to store and redistribute energy over the human leg joints. The elastic characteristics of the tendons have been optimized to minimize the mechanical work of the human leg joints. In simulation the maximal reduction was 40 percent. The performance of the exoskeleton was evaluated in an experiment in which nine subjects participated. Energy expenditure and muscle activation were measured during three conditions: Normal walking, walking with the exoskeleton without artificial tendons, and walking with the exoskeleton with the artificial tendons. Normal walking was the most energy efficient. While walking with the exoskeleton, the artificial tendons only resulted in a negligibly small decrease in energy expenditure.

  8. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at .001 and .01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.

  9. GMPLS-based multiterabit optical router: design and experimentation

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Zeng, QingJi; Ouyang, Yong; Liu, Jimin; Luo, Xuan; Huang, Xuejun

    2002-09-01

    The Internet backbone network is undergoing a large-scale transformation from the current complex, static and multi-layer electronic-based architecture to an emerging simplified, dynamic and single-layer photonic-based architecture. The explosive growth in the Internet, multimedia services, and IP router links is demanding a next-generation Internet that can accommodate the entire traffic in a cost-effective manner. There is a consensus in industry that IP over WDM integration technologies will be viable for the next generation of the optical Internet, where the simplified flat network architecture can facilitate networking performance and network management. In this paper, we first propose a novel node architecture, the Terabit Optical Router (TOR), for building the next-generation optical Internet, and analyze each of its key functional units, including the multi-granularity electrical-optical hybrid switching fabric and the unified control plane unit. Secondly, we discuss the unified control plane unit of the TOR in detail. Thirdly, we describe our cost vs. performance analysis for various applications of the TOR; according to our evaluation, carriers can obtain a cost reduction of more than 60 percent by using the TOR. Finally, we conclude that TORs, rather than OBS or BFR (Big Fat Router) routers, offer a cost-effective multi-granularity switching and routing technique and are feasible for building the next-generation Internet.

  10. NCTR computer systems designed for toxicologic experimentation. I. Overview.

    PubMed

    Cranmer, M F; Lawrence, L R; Konvicka, A J; Herrick, S S

    1978-01-01

    Established in 1971 by the Food and Drug Administration (FDA) and the Environmental Protection Agency (EPA), the National Center for Toxicological Research is committed to the study of long-term, low-dose effects of potentially toxic substances, including carcinogens. The Scientific Information Systems Division (SISD) facility provides logistic support for complex experiments involving large numbers of test animals. Animal population at the Center, including the breeding colony and animals on experiment, can be as high as 80,000. Each animal must be accounted for, fed and watered under strict control, and continually observed. From birth to final examination, an individual animal might have as many as 3,000 individual elements of information associated with it. This paper introduces a series of reports dealing with an integrated and comprehensive system of experiment planning and implementation (including information gathering, classification, analyses and reporting), employing state-of-the-art data processing techniques. This extensive use of computer technology has permitted the collection, proper classification, and rapid retrieval of virtually error-free data, resulting in cost-sensitive experiment planning.

  11. Experimental Studies of Active and Passive Flow Control Techniques Applied in a Twin Air-Intake

    PubMed Central

    Joshi, Shrey; Jindal, Aman; Maurya, Shivam P.; Jain, Anuj

    2013-01-01

    Flow control in twin air-intakes is necessary to improve the performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known techniques of flow control: active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in a twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to inject flow from the side wall at pitch angles of 90 degrees and 45 degrees. Corotating (parallel) and counter-rotating (V-shape) are the configurations of the vane-type VG. It is observed that the VGJ has the potential to change the flow pattern drastically as compared to the vane-type VG. When the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degrees, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, which is the best among all cases tested for the VGJ. For the bigger-sized VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% as compared to all other cases of VG. PMID:23935422

  12. Design and development of a novel nuclear magnetic resonance detection for the gas phase ions by magnetic resonance acceleration technique

    NASA Astrophysics Data System (ADS)

    Fuke, K.; Tona, M.; Fujihara, A.; Sakurai, M.; Ishikawa, H.

    2012-08-01

    Nuclear magnetic resonance (NMR) is a well-established and powerful tool for studying the physical and chemical properties of a wide range of materials. At present, however, NMR applications are essentially limited to materials in the condensed phase. Although magnetic resonance was originally demonstrated in gas-phase molecular beam experiments, no application to gas-phase molecular ions has yet been demonstrated. Here, we present a novel principle of NMR detection for gas-phase ions based on a "magnetic resonance acceleration" technique and describe the design and construction of an apparatus which we are developing. We also present an experimental technique and some results on the formation and manipulation of cold ion packets in a strong magnetic field, which are the key innovations needed to detect the NMR signal with the present method. We expect this novel method to open a new realm for the study of mass-selected gas-phase ions, with interesting applications in both fundamental and applied sciences.

  13. Statistical evaluation of SAGE libraries: consequences for experimental design.

    PubMed

    Ruijter, Jan M; Van Kampen, Antoine H C; Baas, Frank

    2002-10-29

    Since the introduction of serial analysis of gene expression (SAGE) as a method to quantitatively analyze the differential expression of genes, several statistical tests have been published for the pairwise comparison of SAGE libraries. Testing the difference between the number of specific tags found in two SAGE libraries is hampered by the fact that each SAGE library is only one measurement: the necessary information on biological variation or experimental precision is not available. In the currently available tests, a measure of this variance is obtained from simulation or based on the properties of the tag distribution. To help the user of SAGE to decide between these tests, five different pairwise tests have been compared by determining the critical values, that is, the lowest number of tags that, given an observed number of tags in one library, needs to be found in the other library to result in a significant P value. The five tests included in this comparison are SAGE300, the tests described by Madden et al. (Oncogene 15: 1079-1085, 1997) and by Audic and Claverie (Genome Res 7: 986-995, 1997), Fisher's Exact test, and the Z test, which is equivalent to the chi-squared test. The comparison showed that, for SAGE libraries of equal as well as different size, SAGE300, Fisher's Exact test, Z test, and the Audic and Claverie test have critical values within 1.5% of each other. This indicates that these four tests will give essentially the same results when applied to SAGE libraries. The Madden test, which can only be used for libraries of similar size, is, with 25% higher critical values, more conservative, probably because the variance measure in its test statistic is not appropriate for hypothesis testing. The consequences for the choice of SAGE library sizes are discussed.
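
    As a concrete illustration of the kind of pairwise comparison these tests perform, the sketch below applies Fisher's exact test and a two-proportion Z test to the counts of one tag in two libraries. The counts are invented, and the implementation is a generic one rather than the SAGE300 or Audic-Claverie code.

```python
# Two of the pairwise tests discussed above, applied to a single tag:
# Fisher's exact test and the Z test on tag counts from two SAGE libraries.
# The counts are illustrative, not taken from the article.
import math
from scipy.stats import fisher_exact

x1, n1 = 12, 50_000    # tag count and total tags, library 1
x2, n2 = 35, 60_000    # tag count and total tags, library 2

# Fisher's exact test on the 2x2 table [this tag vs. all other tags].
table = [[x1, n1 - x1], [x2, n2 - x2]]
_, p_fisher = fisher_exact(table)

# Z test for two proportions (pooled standard error), two-sided p-value.
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
p_z = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Fisher exact p = {p_fisher:.4f}, Z test p = {p_z:.4f}")
```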

  14. The Guided Reading Procedure: An Experimental Analysis of Its Effectiveness as a Technique for Improving Reading Comprehension Skills.

    ERIC Educational Resources Information Center

    Culver, Victor Irwin

    The primary purpose of this study was to experimentally evaluate the Guided Reading Procedure (GRP) as a teaching strategy designed to improve reading comprehension. The chief experimental strategy was compared with a current instructional strategy, the Directed Reading-Thinking Activity (DRTA) described by Stauffer (1969). The effects on reading…

  15. Application of Optimization Techniques to Spectrally Modulated, Spectrally Encoded Waveform Design

    DTIC Science & Technology

    2008-09-01

    communication applications include television (TV), AM radio, FM radio, and early cellular telephones. Digital communication techniques differ from analog...fourth generation (4G) communication systems based on Cognitive Radio (CR) and Software Defined Radio (SDR) techniques. As 4G SMSE communications...design that replaces previous trial and error methods. The research objective has been achieved in the sense that 4G communication design engineers now

  16. Experimental design for assessing the effectiveness of autonomous countermine systems

    NASA Astrophysics Data System (ADS)

    Chappell, Isaac; May, Michael; Moses, Franklin L.

    2010-04-01

    The countermine mission (CM) is a compelling example of what autonomous systems must address to reduce risks that Soldiers take routinely. The list of requirements is formidable and includes autonomous navigation, autonomous sensor scanning, platform mobility and stability, mobile manipulation, automatic target recognition (ATR), and systematic integration and control of components. This paper compares and contrasts how the CM is done today against the challenges of achieving comparable performance using autonomous systems. The Soldier sets a high standard with, for example, over 90% probability of detection (Pd) of metallic and low-metal mines and a false alarm rate (FAR) as low as 0.05/m2. In this paper, we suggest a simplification of the semi-autonomous CM by breaking it into three components: sensor head maneuver, robot navigation, and kill-chain prosecution. We also discuss the measurements required to map the system's physical and state attributes to performance specifications and note that current Army countermine metrics are insufficient to guide the design of a semi-autonomous countermine system.

  17. Experimental design for dynamics identification of cellular processes.

    PubMed

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
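
    A minimal sketch of the MINE selection rule is given below: a sample of parameter vectors stands in for the distribution defined by existing data, and the next measurement time is chosen where the variance of the model output across that sample is largest. The model and the parameter distribution are illustrative stand-ins, not the cellular-process models of the paper.

```python
# Sketch of a MINE-style selection rule: pick the next measurement time where
# the model output variance over a parameter sample is largest. The model and
# parameter distribution are illustrative stand-ins.
import numpy as np

def model(theta, t):
    """Simple decaying oscillation, y = a * exp(-b t) * cos(c t)."""
    a, b, c = theta
    return a * np.exp(-b * t) * np.cos(c * t)

rng = np.random.default_rng(1)
# Parameter sample standing in for the distribution defined by existing data.
thetas = rng.normal(loc=[1.0, 0.2, 2.0], scale=[0.1, 0.05, 0.3], size=(200, 3))

candidate_times = np.linspace(0.1, 10.0, 100)
outputs = np.array([[model(th, t) for t in candidate_times] for th in thetas])
variance = outputs.var(axis=0)

next_t = candidate_times[np.argmax(variance)]
print(f"next measurement time suggested by the MINE-style criterion: t = {next_t:.2f}")
```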

  18. Experimental design and study of Free Rotor River Turbine

    SciTech Connect

    Nepali, D.B.

    1987-01-01

    Terrace irrigation along the rivers of Nepal is a vital problem for farmers in remote villages. The existing turbines and irrigation systems are not feasible without civil structures, and suffer from a lack of resources and financial problems. A simple and inexpensive underwater Free Rotor River Turbine (FRRT), which extracts power ranging from a fraction of a horsepower up to 25 HP from the velocity of the running water in a river or stream, was developed. The power obtained from the turbine can be used to run a pump to lift water for drinking purposes and for irrigation along the river banks during the dry season and the early part of the wet season. Various model designs have been tested in the laboratory to find the optimum pitch angle, shape and size of blades, and optimum number of blades in order to accomplish the cheapest, simplest, and most efficient turbine. The effects of turbine diameter and water velocity, and the torque produced by the turbines, were studied, and the effect of a simple linear twist on the blades is discussed.

  19. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to ensure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  20. Modular Exhaust Design and Manufacturing Techniques for Low Cost Mid Volume Rapid Build to Order Systems

    DTIC Science & Technology

    2014-08-06

    The technical data package will contain the following pieces of information: manufacturing drawings, code for running CNC machinery, and documentation...customizable mufflers, as well as modular manufacturing techniques targeted at mid-volume manufacturing quantities. A successful solution would reduce

  1. Aseptic colon resection by an invagination technique. Experimental study on dogs.

    PubMed

    Jørgensen, L S; Raundahl, U; Knudsen, L L; Aksglaede, K; Søgaard, P

    1991-07-01

    A new aseptic colon resection by an invagination technique is presented. The bowel to be resected is invaginated down into the healthy intestine, and the anastomosis is sutured in one layer of continuous suture before transection by a diathermy wire, placed in the intestinal lumen via the anus. Sections of bowel that cannot be invaginated, e.g., because of a tumor, are first removed by transection between pairs of cable ties, which close the lumen. Twenty dogs were operated on without receiving prophylactic antibiotics. In 10, the intestine was transected between cable ties. An imprint, taken from the anastomosis and subcutis, was cultured. The bacterial count at the anastomosis exceeded 100 in only three cases; in the subcutis, this was the case in one dog. One wound infection developed. Serial barium enemas at 1, 2, 3, and 4 weeks revealed no anastomotic leakage. One early death because of a total anastomotic dehiscence was encountered, and two dogs were killed because of wound dehiscence and anastomotic stricture, respectively. It is concluded that, in dogs, the method is easily and safely performed, but further experimental studies are needed.

  2. Experimental techniques for the characterization of carbon nanoparticles – a brief overview

    PubMed Central

    Łoś, Szymon; Kempiński, Mateusz; Markowski, Damian

    2014-01-01

    A review of four experimental methods (X-ray diffraction, Raman spectroscopy, electron paramagnetic resonance, and four-point electrical conductivity measurements) used to characterize carbon nanoparticles is presented. Two types of carbon nanoparticle systems are discussed: one comprising a powder of individual carbon nanoparticles, and the second a structurally interconnected nanoparticle matrix in the form of a fiber. X-ray diffraction and Raman spectroscopy reveal the atomic structure of the carbon nanoparticles and allow for observation of the changes in quasi-graphitic ordering induced by ultrasonic irradiation and by the so-called quasi-high-pressure effect under adsorption conditions. Structural changes have a strong influence on the electronic properties, especially the localization of charge carriers within the nanoparticles, which can be observed with the EPR technique. This in turn can be well correlated with the four-point electrical conductivity measurements, which directly show the character of charge carrier transport within the examined structures. PMID:25383287

  3. Bayesian experimental design of a multichannel interferometer for Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hirsch, M.; Kornejew, P.

    2008-10-01

    Bayesian experimental design (BED) is a framework for the optimization of diagnostics based on probability theory. In this work it is applied to the design of a multichannel interferometer at the Wendelstein 7-X stellarator experiment. BED offers the possibility to compare diverse designs quantitatively, which is shown for beam-line designs resulting from different plasma configurations. The applicability of this method is discussed with respect to its computational effort.
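
    The core BED computation, ranking candidate designs by an expected utility, can be sketched with a toy linear Gaussian measurement model for which the expected information gain has a closed form. The prior width, noise level, and candidate beam-line sensitivities below are assumptions standing in for the interferometer forward model used in the work.

```python
# Toy sketch of Bayesian experimental design: rank candidate designs by the
# expected information gain of a linear Gaussian measurement y = d*theta + noise.
# All numerical values are assumed for illustration.
import math

sigma_theta = 1.0    # prior standard deviation of the parameter of interest
sigma_noise = 0.3    # measurement noise standard deviation

def expected_information_gain(d):
    """Expected KL(posterior || prior) for y = d*theta + N(0, sigma_noise^2)."""
    return 0.5 * math.log(1.0 + (d * sigma_theta / sigma_noise) ** 2)

candidate_designs = {"beam-line A": 0.2, "beam-line B": 0.8, "beam-line C": 1.5}
for name, d in sorted(candidate_designs.items(),
                      key=lambda kv: -expected_information_gain(kv[1])):
    gain = expected_information_gain(d)
    print(f"{name}: expected information gain = {gain:.3f} nats")
```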

  4. Design and experimental tests of free electron laser wire scanners

    NASA Astrophysics Data System (ADS)

    Orlandi, G. L.; Heimgartner, P.; Ischebeck, R.; Loch, C. Ozkan; Trovati, S.; Valitutti, P.; Schlott, V.; Ferianis, M.; Penco, G.

    2016-09-01

    SwissFEL is an X-ray free-electron laser (FEL) driven by a 5.8 GeV linac under construction at the Paul Scherrer Institut. In SwissFEL, wire scanners (WSCs) will be complementary to view-screens for emittance measurements and routinely used to monitor the transverse profile of the electron beam during FEL operations. The SwissFEL WSC is composed of an in-vacuum beam probe, motorized by a stepper motor, and an out-of-vacuum pick-up of the wire signal. The mechanical stability of the WSC in-vacuum hardware has been characterized on a test bench. In particular, the motor-induced vibrations of the wire have been measured and mapped for different motor speeds. Electron-beam tests of the entire WSC setup together with different wire materials have been carried out at the 250 MeV SwissFEL Injector Test Facility (SITF, Paul Scherrer Institut, CH) and at FERMI (Elettra-Sincrotrone Trieste, Italy). In particular, a comparative study of the relative measurement accuracy and the radiation-dose release of Al(99):Si(1) and tungsten (W) wires has been carried out. On the basis of the outcome of the bench and electron-beam tests, the SwissFEL WSC can be qualified as a high-resolution and machine-saving diagnostic tool, in consideration of the mechanical stability of the scanning wire at the micrometer level and the choice of wire material ensuring a drastic reduction of the radiation-dose release with respect to conventional metallic wires. The main aspects of the design, laboratory characterization and electron beam tests of the SwissFEL WSCs are presented.

  5. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    SciTech Connect

    Dobler, Gregory; Fassnacht, Christopher D.; Rumbaugh, Nicholas; Treu, Tommaso; Liao, Kai; Marshall, Phil; Hojjati, Alireza; Linder, Eric

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ∼10³ strongly lensed quasars. In an effort to assess the present capabilities of the community, to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.
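
    The exact metric definitions are given in the paper itself; purely as an illustration of the kind of bookkeeping involved, the sketch below computes plausible forms of an efficiency (fraction of light curves attempted), a chi-squared-like goodness of fit, a precision and an accuracy from submitted point estimates and uncertainties. The data and the specific formulas are assumptions for the example.

    import numpy as np

    def tdc_style_metrics(dt_true, dt_est, dt_err):
        """Illustrative challenge metrics over the subset of light curves with submissions.

        dt_true -- true time delays (days)
        dt_est  -- submitted point estimates (NaN where no estimate was submitted)
        dt_err  -- submitted 1-sigma uncertainties
        """
        dt_true, dt_est, dt_err = map(np.asarray, (dt_true, dt_est, dt_err))
        submitted = ~np.isnan(dt_est)
        f = submitted.mean()                                   # efficiency: fraction attempted
        resid = dt_est[submitted] - dt_true[submitted]
        chi2 = np.mean((resid / dt_err[submitted]) ** 2)       # goodness of fit
        precision = np.mean(dt_err[submitted] / np.abs(dt_true[submitted]))
        accuracy = np.mean(resid / dt_true[submitted])         # mean fractional bias
        return f, chi2, precision, accuracy

    # Three hypothetical light curves, one of them left unanswered.
    print(tdc_style_metrics([10.0, 25.0, 40.0], [11.0, np.nan, 39.0], [1.0, np.nan, 2.0]))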

  6. An Experimentally Validated Numerical Modeling Technique for Perforated Plate Heat Exchangers.

    PubMed

    White, M J; Nellis, G F; Kelin, S A; Zhu, W; Gianchandani, Y

    2010-11-01

    Cryogenic and high-temperature systems often require compact heat exchangers with a high resistance to axial conduction in order to control the heat transfer induced by axial temperature differences. One attractive design for such applications is a perforated plate heat exchanger that utilizes high conductivity perforated plates to provide the stream-to-stream heat transfer and low conductivity spacers to prevent axial conduction between the perforated plates. This paper presents a numerical model of a perforated plate heat exchanger that accounts for axial conduction, external parasitic heat loads, variable fluid and material properties, and conduction to and from the ends of the heat exchanger. The numerical model is validated by experimentally testing several perforated plate heat exchangers that are fabricated using microelectromechanical systems based manufacturing methods. This type of heat exchanger was investigated for potential use in a cryosurgical probe. One of these heat exchangers included perforated plates with integrated platinum resistance thermometers. These plates provided in situ measurements of the internal temperature distribution in addition to the temperature, pressure, and flow rate measured at the inlet and exit ports of the device. The platinum wires were deposited between the fluid passages on the perforated plate and are used to measure the temperature at the interface between the wall material and the flowing fluid. The experimental testing demonstrates the ability of the numerical model to accurately predict both the overall performance and the internal temperature distribution of perforated plate heat exchangers over a range of geometry and operating conditions. The parameters that were varied include the axial length, temperature range, mass flow rate, and working fluid.

  7. An Experimentally Validated Numerical Modeling Technique for Perforated Plate Heat Exchangers

    PubMed Central

    Nellis, G. F.; Kelin, S. A.; Zhu, W.; Gianchandani, Y.

    2010-01-01

    Cryogenic and high-temperature systems often require compact heat exchangers with a high resistance to axial conduction in order to control the heat transfer induced by axial temperature differences. One attractive design for such applications is a perforated plate heat exchanger that utilizes high conductivity perforated plates to provide the stream-to-stream heat transfer and low conductivity spacers to prevent axial conduction between the perforated plates. This paper presents a numerical model of a perforated plate heat exchanger that accounts for axial conduction, external parasitic heat loads, variable fluid and material properties, and conduction to and from the ends of the heat exchanger. The numerical model is validated by experimentally testing several perforated plate heat exchangers that are fabricated using microelectromechanical systems based manufacturing methods. This type of heat exchanger was investigated for potential use in a cryosurgical probe. One of these heat exchangers included perforated plates with integrated platinum resistance thermometers. These plates provided in situ measurements of the internal temperature distribution in addition to the temperature, pressure, and flow rate measured at the inlet and exit ports of the device. The platinum wires were deposited between the fluid passages on the perforated plate and are used to measure the temperature at the interface between the wall material and the flowing fluid. The experimental testing demonstrates the ability of the numerical model to accurately predict both the overall performance and the internal temperature distribution of perforated plate heat exchangers over a range of geometry and operating conditions. The parameters that were varied include the axial length, temperature range, mass flow rate, and working fluid. PMID:20976021

  8. Design and experimental tests of a novel neutron spin analyzer for wide angle spin echo spectrometers

    SciTech Connect

    Fouquet, Peter; Farago, Bela; Andersen, Ken H.; Bentley, Phillip M.; Pastrello, Gilles; Sutton, Iain; Thaveron, Eric; Thomas, Frederic; Moskvin, Evgeny; Pappas, Catherine

    2009-09-15

    This paper describes the design and experimental tests of a novel neutron spin analyzer optimized for wide angle spin echo spectrometers. The new design is based on nonremanent magnetic supermirrors, which are magnetized by vertical magnetic fields created by NdFeB high-field permanent magnets. The solution presented here gives stable performance at moderate cost, in contrast to designs invoking remanent supermirrors. In the experimental part of this paper we demonstrate that the new design performs well in terms of polarization and transmission, and that high-quality neutron spin echo spectra can be measured.

  9. Question-Answering-Technique to Support Freshman and Senior Engineers in Processes of Engineering Design

    ERIC Educational Resources Information Center

    Winkelmann, Constance; Hacker, Winfried

    2010-01-01

    In two experimental studies, the influence of question-based reflection on the quality of design solutions was investigated. Students and experts with different know-how and professional experience had to design an artefact that should meet a list of requirements. Subsequently, they were asked to answer a system of interrogative questions…

  10. Some Occupational and Organizational Implications for Designing an Experimental Program in Educational Administration

    ERIC Educational Resources Information Center

    Evan, William M.

    1973-01-01

    Attempts to design an experimental program for the training of a new generation of educational administrators, with the rationale being based on selected concepts and propositions of occupational sociology, organizational theory, and systems theory. (Author)

  11. Understanding Complexity and Self-Organization in a Defense Program Management Organization (Experimental Design)

    DTIC Science & Technology

    2016-03-18

    SPONSORED REPORT SERIES: Understanding Complexity and Self-Organization in a Defense Program Management Organization (Experimental Design), 18 March 2016. Raymond Jones, Lecturer, Graduate School of Business; Program Manager with multiple operational and acquisition-related tours, and a 1995 graduate of the U.S. Naval Test Pilot School.

  12. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.

    2013-11-01

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c² mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  13. 78 FR 3381 - Endangered and Threatened Species: Designation of a Nonessential Experimental Population of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-16

    ... Species: Designation of a Nonessential Experimental Population of Central Valley Spring-Run Chinook Salmon... experimental population of Central Valley spring-run Chinook salmon (Oncorhynchus tshawytscha) under section 10... Restoration Goal shall include the reintroduction of Central Valley spring-run Chinook salmon (hereafter,...

  14. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  15. Development of Observation Techniques in Reactor Vessel of Experimental Fast Reactor Joyo

    NASA Astrophysics Data System (ADS)

    Takamatsu, Misao; Imaizumi, Kazuyuki; Nagai, Akinori; Sekine, Takashi; Maeda, Yukimoto

    In-Vessel Observation (IVO) techniques for Sodium-cooled Fast Reactors (SFRs) are important in confirming their safety and integrity, and several IVO equipment systems for SFRs have been developed. However, in order to secure the reliability of IVO techniques, it was necessary to demonstrate their performance under the actual reactor environment with high temperature, high radiation dose and remaining sodium. During the investigation of an incident that occurred at Joyo, IVO using a standard Video Camera (VC) and a Radiation-Resistant Fiberscope (RRF) took place at (1) the top of the Sub-Assemblies (S/As) and the In-Vessel Storage rack (IVS), and (2) the bottom face of the Upper Core Structure (UCS). A simple 6 m overhead view of each S/A, through the fuel handling or inspection holes etc., was photographed using a VC for making observations of the top of the S/As and IVS. About 650 photographs were required to create a composite photograph of the top of the entire S/As and IVS, and the resolution was estimated to be approximately 1 mm. In order to observe the bottom face of the UCS, a Remote Handling Device (RHD) equipped with RRFs (approximately 13 m long) was specifically developed for Joyo, with a tip that could be inserted into the 70 mm gap between the top of the S/As and the bottom of the UCS. A total of about 35,000 photographs were needed for the full investigation. Regarding the resolution, the sodium flow regulating grid of 0.8 mm in thickness could be discriminated. The performance of the IVO equipment under the actual reactor environment was successfully confirmed, and the results provided useful information on incident investigations. In addition, fundamental findings and the experience gained during this study, which included the design of equipment, operating procedures, resolution, lighting adjustments, photograph composition and the durability of the RRF under radiation exposure, provided valuable insights into further improvements and verifications for IVO techniques to

  16. Design of Optical Systems with Extended Depth of Field: An Educational Approach to Wavefront Coding Techniques

    ERIC Educational Resources Information Center

    Ferran, C.; Bosch, S.; Carnicer, A.

    2012-01-01

    A practical activity designed to introduce wavefront coding techniques as a method to extend the depth of field in optical systems is presented. The activity is suitable for advanced undergraduate students since it combines different topics in optical engineering such as optical system design, aberration theory, Fourier optics, and digital image…

  17. Electro fluido dynamic techniques to design instructive biomaterials for tissue engineering and drug delivery

    NASA Astrophysics Data System (ADS)

    Guarino, Vincenzo; Altobelli, Rosaria; Cirillo, Valentina; Ambrosio, Luigi

    2015-12-01

    A large variety of processes and tools is continuously being investigated to discover new solutions for designing instructive materials with controlled chemical, physical and biological properties for tissue engineering and drug delivery. Among them, electro fluido dynamic techniques (EFDTs) are emerging as an interesting strategy, based on highly flexible and low-cost processes, to revisit older biomaterial manufacturing approaches by utilizing electrostatic forces as the driving force for the fabrication of 3D architectures with controlled physical and chemical functionalities to guide in vitro and in vivo cell activities. By a rational selection of polymer solution properties and process conditions, EFDTs allow the production of fibres and/or particles at the micro- and/or nanometric size scale, which may be variously assembled by tailored experimental setups, thus giving the chance to generate a plethora of different 3D devices able to incorporate biopolymers (i.e., proteins, polysaccharides) or active molecules (e.g., drugs) for different applications. Here, we focus on the optimization of basic EFDTs - namely electrospinning, electrospraying and electrodynamic atomization - to develop active platforms (i.e., monocomponent, protein- and drug-loaded scaffolds and µ-scaffolds) made of synthetic (PCL, PLGA) or natural (chitosan, alginate) polymers. In particular, we investigate how to set material and process parameters to impart specific morphological, biochemical or physical cues to trigger all the fundamental cell-biomaterial and cell-cell cross-talking elicited during regenerative processes, in order to reproduce the complex microenvironment of native or pathological tissues.

  18. Electro fluido dynamic techniques to design instructive biomaterials for tissue engineering and drug delivery

    SciTech Connect

    Guarino, Vincenzo Altobelli, Rosaria; Cirillo, Valentina; Ambrosio, Luigi

    2015-12-17

    A large variety of processes and tools is continuously being investigated to discover new solutions for designing instructive materials with controlled chemical, physical and biological properties for tissue engineering and drug delivery. Among them, electro fluido dynamic techniques (EFDTs) are emerging as an interesting strategy, based on highly flexible and low-cost processes, to revisit older biomaterial manufacturing approaches by utilizing electrostatic forces as the driving force for the fabrication of 3D architectures with controlled physical and chemical functionalities to guide in vitro and in vivo cell activities. By a rational selection of polymer solution properties and process conditions, EFDTs allow the production of fibres and/or particles at the micro- and/or nanometric size scale, which may be variously assembled by tailored experimental setups, thus giving the chance to generate a plethora of different 3D devices able to incorporate biopolymers (i.e., proteins, polysaccharides) or active molecules (e.g., drugs) for different applications. Here, we focus on the optimization of basic EFDTs - namely electrospinning, electrospraying and electrodynamic atomization - to develop active platforms (i.e., monocomponent, protein- and drug-loaded scaffolds and µ-scaffolds) made of synthetic (PCL, PLGA) or natural (chitosan, alginate) polymers. In particular, we investigate how to set material and process parameters to impart specific morphological, biochemical or physical cues to trigger all the fundamental cell-biomaterial and cell-cell cross-talking elicited during regenerative processes, in order to reproduce the complex microenvironment of native or pathological tissues.

  19. Shape and Surface: The challenges and advantages of 3D techniques in innovative fashion, knitwear and product design

    NASA Astrophysics Data System (ADS)

    Bendt, E.

    2016-07-01

    The presentation shows what kinds of problems fashion and textile designers face in 3D knitwear design, especially regarding fashionable flat-knit styles, and how they can use different kinds of techniques and processes to generate new types of 3D designs and structures. To create genuinely new things we have to overcome standard development methods and traditional thinking, and should start to open our minds again to the material itself in order to generate new advanced textile solutions. This paper mainly introduces different results of research projects worked out in the master program “Textile Produkte” during lectures in “Innovative Product Design” and “Experimental Knitting”.

  20. Experimental design for the eXtreme Adaptive Optics Planet Imager (XAOPI)

    NASA Astrophysics Data System (ADS)

    Graham, J. R.; Macintosh, B.; Ghez, A.; Kalas, P.; Lloyd, J.; Makidon, R.; Olivier, S.; Patience, J.; Perrin, M.; Poyneer, L.; Severson, S.; Sheinis, A.; Sivaramakrishnan, A.; Troy, M.; Wallace, J.; Wilhelmsen, J.

    2002-12-01

    Direct detection of the light emitted by extra-solar planets represents the next major hurdle in the study of extra-solar planets. The NSF Center for Adaptive Optics is carrying out a design study for a dedicated ultra-high-contrast "extreme" adaptive optics (ExAO) planet imager for an 8-m class telescope. The phase space for such a system is large and trade studies are required to choose optimal values of fundamental parameters such as the telescope diameter and delivered Strehl ratio. To predict the performance of hypothetical AO systems we use models based on Kolmogorov phase screens and Fourier optics. We incorporate additional noise sources such as wavefront measurement error and time-lag errors, and distinguish between the different speckle decorrelation times of each independent error source. To compute a figure of merit for a particular AO system we need to predict the distribution of contrast and angular separation on the sky for planets. There is a large and growing sample of precision radial velocity detected planets, which can be used to constrain the orbital elements and masses of the underlying population. When combined with the star formation history of the solar neighborhood (or ages of local, young associations), cooling curves and young planet model atmospheres, this information can be used to predict how many systems can be detected with different experimental designs. We present results which allow us to evaluate the impact of different AO design choices, observing wavelengths, and target selection. Our technique also allows us to compare and quantify the selection effects associated with precision radial velocity, astrometric and direct imaging searches. This work was supported by the NSF Science and Technology Center for Adaptive Optics, managed by UC Santa Cruz under AST-9876783. Portions of this work were performed under the auspices of the U.S. Department of Energy, under contract No. W-7405-Eng-48.
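
    As a minimal sketch of the phase-screen modelling mentioned above, the function below generates a Kolmogorov phase screen by FFT-filtering complex white noise with the square root of the Kolmogorov spectral density. The grid size, pixel scale, Fried parameter and the overall normalization are illustrative assumptions (low-order subharmonics are ignored), not the parameters of the XAOPI study.

    import numpy as np

    def kolmogorov_phase_screen(n=256, pixel_scale=0.02, r0=0.15, seed=0):
        """Generate an n x n phase screen (radians) with a Kolmogorov power spectrum.

        pixel_scale -- metres per pixel
        r0          -- Fried parameter in metres
        The screen is obtained by filtering complex white noise with the square root of
        the Kolmogorov spectral density, Phi(k) ~ 0.023 r0^(-5/3) k^(-11/3).
        """
        rng = np.random.default_rng(seed)
        fx = np.fft.fftfreq(n, d=pixel_scale)
        kx, ky = np.meshgrid(fx, fx)
        k = np.hypot(kx, ky)
        k[0, 0] = np.inf                                   # suppress the undefined piston term
        psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
        noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        df = fx[1] - fx[0]                                 # frequency-space sample spacing
        screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
        return screen.real

    screen = kolmogorov_phase_screen()
    print(screen.shape, screen.std())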

  1. A smart experimental technique for the optimization of dielectric elastomer actuator (DEA) systems

    NASA Astrophysics Data System (ADS)

    Hodgins, M.; Rizzello, G.; York, A.; Naso, D.; Seelecke, S.

    2015-09-01

    In order to aid in moving dielectric elastomer actuator (DEA) technology from the laboratory into a commercial product, DEA prototypes should be tested against a variety of loading conditions and eventually in the end-user conditions. An experimental test setup to seamlessly perform mechanical characterization and loading of the DEA would be a great asset toward this end. Therefore, this work presents the design, control and systematic validation of a benchtop testing station for miniature silicone-based circular DEAs. The versatile benchtop tester is able to characterize and apply programmable loading forces to the DEA while measuring actuator performance. The tester successfully applied mechanical loads to the DEA (including positive, constant and negative stiffness loads) simulating biasing systems, via an electromagnetic linear motor operating in closed loop with a force/mechanical impedance control scheme. The tester expedites mechanical testing of the DEA by eliminating the need to build intricate pre-load mechanisms or use multiple testing jigs for characterizing the DEA response. The results show that proper mechanical loading of the DEA increases the overall electromechanical sensitivity of the system and thereby the actuator output. This approach to characterizing and applying variable loading forces to DEAs in a single test system will enable faster realization of higher performance actuators.
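
    A minimal sketch of the programmable biasing-load idea described above: the force commanded to the linear motor is a simple affine function of displacement, so positive, zero or negative stiffness loads can be emulated in software. The preload and stiffness numbers are made-up values, not those of the paper.

    def biasing_force(x, preload=2.0, stiffness=-0.5):
        """Programmable biasing load for a DEA test: F = preload + stiffness * displacement.
        stiffness > 0 emulates a linear spring, = 0 a constant force, < 0 a negative-rate
        bias mechanism of the kind used to boost DEA stroke."""
        return preload + stiffness * x

    # Force commands (N) sent to the linear motor controller over a 3 mm stroke sweep.
    for x_mm in (0.0, 1.0, 2.0, 3.0):
        print(f"x = {x_mm} mm -> F = {biasing_force(x_mm):.2f} N")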

  2. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    NASA Astrophysics Data System (ADS)

    Baldwin, K. A.

    2014-12-01

    Tektites are small stones formed from rapidly cooling drops of molten rock ejected from high velocity asteroid impacts with the Earth, that freeze into a myriad of shapes during flight. Many splash-form tektites have an elongated or dumb-bell shape owing to their rotation prior to solidification[1]. Here we present a novel method for creating 'artificial tektites' from spinning drops of molten wax, using diamagnetic levitation to suspend the drops[2]. We find that the solid wax models produced this way are the stable equilibrium shapes of a spinning liquid droplet held together by surface tension. In addition to the geophysical interest in tektite formation, the stable equilibrium shapes of liquid drops have implications for many physical phenomena, covering a wide range of length scales, from nuclear physics (e.g. in studies of rapidly rotating atomic nuclei), to astrophysics (e.g. in studies of the shapes of astronomical bodies such as asteroids, rapidly rotating stars and event horizons of rotating black holes). For liquid drops bound by surface tension, analytical and numerical methods predict a series of stable equilibrium shapes with increasing angular momentum. Slowly spinning drops have an oblate-like shape. With increasing angular momentum these shapes become secularly unstable to a series of triaxial pseudo-ellipsoids that then evolve into a family of two-lobed 'dumb-bell' shapes as the angular momentum is increased still further. Our experimental method allows accurate measurements of the drops to be taken, which are useful to validate numerical models. This method has provided a means for observing tektite formation, and has additionally confirmed experimentally the stable equilibrium shapes of liquid drops, distinct from the equivalent shapes of rotating astronomical bodies. Potentially, this technique could be applied to observe the non-equilibrium dynamic processes that are also important in real tektite formation, involving, e.g. viscoelastic

  3. Simulation and Prototype Design of Variable Step Angle Techniques Based Asteroid Deflection for Future Planetary Mission

    NASA Astrophysics Data System (ADS)

    Sathiyavel, C.

    2016-07-01

    Asteroids are minor planets, especially those of the inner Solar System. The most desirable asteroids that cross the geosynchronous orbit are the carbonaceous C-type asteroids, which are deemed by the astronomy community to have a planetary protection categorization of unrestricted Earth return. The mass of a near-Earth asteroid (assuming a spherical asteroid) as a function of its diameter varies for diameters from 2 m to 10 m, with corresponding densities from 1.9 g/cm3 to 3.8 g/cm3. For example, a 6.5-m diameter asteroid with a density of 2.8 g/cm3 has a mass of order 400,000 kg. If this asteroid falls on Earth, the Earth will be destroyed when the inclination angles of the Earth and the asteroid are equal. The proposed work addresses how this great danger from an asteroid of the above mass can be averted in the near future. The present work is the simulation and prototype design of a Variable Step Angle Techniques based asteroid deflection for a future planetary mission. The proposed method is compared with a previous method and will be very useful for directing the ion velocity onto the asteroid surface in several directions at a static position of the Asteroid Deviation Mission (ADM). The deviation angle is varied from α1 to α2 with the help of variable step angle techniques, comprising a stepper motor with an attached ion propulsion module system. The VASAT module is located at the top edge of the three-axis stabilized platform in the ADM. The three-axis stabilization method includes devices such as a gyroscope sensor, an Arduino microcontroller system and ion propulsion. The Arduino microcontroller determines the orientation from the gyroscope sensor and then uses the ion propulsion modules to control the required motion, such as the pitch, yaw and roll attitude of the ADM. The exhaust thrust value is 1500 mN and the velocity is 10,000 m/s [from simulation results; the experimental output is smaller because low-quality components were used in the research lab]. The propulsion technique is also used at a static position of the ADM mission [both
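
    The order-of-magnitude mass quoted above follows from spherical geometry; as a worked check with the abstract's own numbers,

    m = ρ · (4/3)πr³ = 2800 kg/m³ × (4/3)π × (3.25 m)³ ≈ 2800 kg/m³ × 143.8 m³ ≈ 4.0 × 10⁵ kg.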

  4. Determination of calibration constants for the hole-drilling residual stress measurement technique applied to orthotropic composites. II - Experimental evaluations

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Prabhakaran, R.; Tompkins, S.

    1987-01-01

    The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.

  5. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique

    PubMed Central

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-01-01

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and the zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and the measurement of polarization curves. The test results show that the galvanic current noise can provide detailed information on the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate, whereas the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method show a linear correlation in chloride-containing saturated Ca(OH)2 solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate the localized corrosion in reinforced concrete. PMID:27618054

  6. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique.

    PubMed

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-09-08

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and the zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and the measurement of polarization curves. The test results show that the galvanic current noise can provide detailed information on the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate, whereas the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method show a linear correlation in chloride-containing saturated Ca(OH)₂ solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate the localized corrosion in reinforced concrete.
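
    The ER principle used above infers metal loss from the rise in resistance of a coupon whose cross-section shrinks as it corrodes. A minimal sketch of that inference follows; the coupon thickness and resistance readings are hypothetical values, not the paper's data, and temperature compensation is ignored.

    def remaining_thickness(R_now, R_initial, t_initial):
        """For a rectangular ER coupon of fixed length and width, R = rho*L/(w*t),
        so at constant temperature the remaining thickness scales as R_initial/R_now."""
        return t_initial * R_initial / R_now

    # Hypothetical readings: a 1.00 mm coupon whose resistance has risen by 5 %.
    t = remaining_thickness(R_now=1.05e-3, R_initial=1.00e-3, t_initial=1.00)
    print(f"remaining thickness ~ {t:.3f} mm, corrosion depth ~ {1.00 - t:.3f} mm")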

  7. An experimental evaluation of error seeding as a program validation technique

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Ammann, P. E.

    1985-01-01

    A previously reported experiment in error seeding as a program validation technique is summarized. The experiment was designed to test the validity of three assumptions on which the alleged effectiveness of error seeding is based. Errors were seeded into 17 functionally identical but independently programmed Pascal programs in such a way as to produce 408 programs, each with one seeded error. Using mean time to failure as a metric, results indicated that it is possible to generate seeded errors that are arbitrarily but not equally difficult to locate. Examination of indigenous errors demonstrated that these are also arbitrarily difficult to locate. These two results support the assumption that seeded and indigenous errors are approximately equally difficult to locate. However, the assumption that, for each type of error, all errors are equally difficult to locate was not borne out. Finally, since a seeded error occasionally corrected an indigenous error, the assumption that errors do not interfere with each other was proven wrong. Error seeding can be made useful by taking these results into account in modifying the underlying model.
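
    The abstract tests the assumptions behind error seeding rather than the estimator itself, but the classical capture-recapture estimate that those assumptions underpin is easy to state. A minimal sketch, with made-up counts:

    def estimate_indigenous_errors(seeded_total, seeded_found, indigenous_found):
        """Classical error-seeding (capture-recapture) estimate: if seeded and indigenous
        errors are equally hard to find, the detection rate of seeded errors estimates the
        detection rate of indigenous ones."""
        detection_rate = seeded_found / seeded_total
        return indigenous_found / detection_rate

    # Hypothetical debugging session: 24 seeded errors, 18 found, plus 6 indigenous errors found.
    print(estimate_indigenous_errors(24, 18, 6))   # ~ 8 indigenous errors estimated in total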

  8. Enhancements of Tow-Steering Design Techniques: Design of Rectangular Panel Under Combined Loads

    NASA Technical Reports Server (NTRS)

    Tatting, Brian F.; Setoodeh, Shahriar; Gurdal, Zafer

    2005-01-01

    An extension to existing design tools that utilize tow-steering is presented which is used to investigate the use of elastic tailoring for a flat panel with a central hole under combined loads of compression and shear. The elastic tailoring is characterized by tow-steering within individual lamina as well as a novel approach based on selective reinforcement, which attempts to minimize compliance through the use of Cellular Automata design concepts. The selective reinforcement designs lack any consideration of manufacturing constraints, so a new tow-steered path definition was developed to translate the prototype selective reinforcement designs into manufacturable plies. The minimum weight design of a flat panel under combined loading was based on a model provided by NASA-Langley personnel and analyzed by STAGS within the OLGA design environment. Baseline designs using traditional straight fiber plies were generated, as well as tow-steered designs which incorporated parallel, tow-drop, and overlap plies within the laminate. These results indicated that the overlap method provided the best improvement with regards to weight and performance as compared to traditional constant stiffness monocoque panels, though the laminates did not measure up to similar designs from the literature using sandwich and isogrid constructions. Further design studies were conducted using various numbers of the selective reinforcement plies at the core and outer surface of the laminate. None of these configurations exhibited notable advantages with regard to weight or buckling performance. This was due to the fact that the minimization of the compliance tended to direct the major stresses toward the center of the panel, which decreased the ability of the structure to withstand loads leading to instability.

  9. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  10. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  11. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  12. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  13. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  14. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  15. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  16. Exploiting Distance Technology to Foster Experimental Design as a Neglected Learning Objective in Labwork in Chemistry

    ERIC Educational Resources Information Center

    d'Ham, Cedric; de Vries, Erica; Girault, Isabelle; Marzin, Patricia

    2004-01-01

    This paper deals with the design process of a remote laboratory for labwork in chemistry. In particular, it focuses on the mutual dependency of theoretical conjectures about learning in the experimental sciences and technological opportunities in creating learning environments. The design process involves a detailed analysis of the expert task and…

  17. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  18. A Cross-Over Experimental Design for Testing Audiovisual Training Materials.

    ERIC Educational Resources Information Center

    Stolovitch, Harold D.; Bordeleau, Pierre

    This paper contains a description of the cross-over type of experimental design as well as a case study of its use in field testing audiovisual materials related to teaching handicapped children. Increased efficiency is an advantage of the cross-over design, while difficulty in selecting similar format audiovisual materials for field testing is a…

  19. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  20. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  1. Experimental Control and Threats to Internal Validity of Concurrent and Nonconcurrent Multiple Baseline Designs

    ERIC Educational Resources Information Center

    Christ, Theodore J.

    2007-01-01

    Single-case research designs are often applied within school psychology. This article provides a critical review of the scientific merit of both concurrent and nonconcurrent multiple baseline (MB) designs, relative to their capacity to assess threats of internal validity and establish experimental control. Distinctions are established between AB…

  2. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    PubMed

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
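
    As a minimal illustration of the simplest case discussed (a one-way, fixed-effects ANOVA), the sketch below uses SciPy on made-up data for three treatment groups; the data and group labels are assumptions for the example, not clinical measurements.

    import numpy as np
    from scipy import stats

    # Hypothetical visual-acuity scores under three treatments (one-way, fixed effects).
    group_a = np.array([1.00, 0.95, 1.05, 0.98, 1.02])
    group_b = np.array([0.90, 0.88, 0.93, 0.91, 0.89])
    group_c = np.array([1.10, 1.08, 1.12, 1.07, 1.11])

    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4g}")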

  3. How experimental design can improve the validation process. Studies in pharmaceutical analysis.

    PubMed

    Furlanetto, S; Orlandini, S; Mura, P; Sergent, M; Pinzauti, S

    2003-11-01

    A critical discussion about the possibility of improving the method validation process by means of experimental design is presented. The reported multivariate strategies concern the evaluation of the performance parameters robustness and intermediate precision, and the optimisation of bias and repeatability. In particular, accuracy and precision improvement constitutes a special subset of experimental design in which the bias and the relative standard deviation of the assay are optimised. D-optimal design was used in order to plan experiments for this aim. The analytical methods considered were capillary electrophoresis, HPLC, adsorptive stripping voltammetry and differential pulse polarography. All methods were applied to real pharmaceutical analysis problems.
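
    D-optimal designs of the kind mentioned above choose the runs that maximize the determinant of the information matrix XᵀX for the assumed model. The sketch below selects a 4-run design from a 3 × 3 candidate grid for a first-order model with interaction; the candidate set and model are illustrative assumptions, and an exhaustive search stands in for the exchange algorithms real software uses.

    import numpy as np
    from itertools import combinations

    def d_criterion(X):
        """D-optimality criterion: determinant of the information matrix X^T X."""
        return np.linalg.det(X.T @ X)

    def model_matrix(points):
        """First-order model with interaction: columns 1, x1, x2, x1*x2."""
        x1, x2 = points[:, 0], points[:, 1]
        return np.column_stack([np.ones(len(points)), x1, x2, x1 * x2])

    # Candidate points on a 3 x 3 grid in coded units; pick the best 4-run design exhaustively.
    grid = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
    best = max(combinations(range(len(grid)), 4),
               key=lambda idx: d_criterion(model_matrix(grid[list(idx)])))
    print("best 4-run design:", grid[list(best)].tolist())   # the four factorial corners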

  4. Gradient vs. approximation design optimization techniques in low-dimensional convex problems

    NASA Astrophysics Data System (ADS)

    Fedorik, Filip

    2013-10-01

    The application of design optimization methods in structural design represents a suitable way to obtain efficient designs for practical problems. The implementation of optimization techniques in multi-physics software allows designers to use them in a wide range of engineering problems. These methods are usually based on modified mathematical programming techniques and/or their combinations to improve universality and robustness for various human and technical problems. The presented paper deals with the analysis of optimization methods and tools within the frame of one- to three-dimensional strictly convex optimization problems, which represent a component of the Design Optimization module in the Ansys program. The First Order method, based on a combination of the steepest descent and conjugate gradient methods, and the Subproblem Approximation method, which uses approximations of the dependent variables' functions, accompanied by the Random, Sweep, Factorial and Gradient tools, are analyzed, and the different characteristics of the methods are observed.
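
    As a minimal sketch of the two ingredients of the First Order method named above, the code below contrasts steepest descent (with exact line search) and conjugate gradients on a strictly convex two-dimensional quadratic; the test function is illustrative and not taken from the paper.

    import numpy as np

    # Strictly convex quadratic f(x) = 0.5 x^T A x - b^T x with a mildly ill-conditioned A.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x_star = np.linalg.solve(A, b)

    def steepest_descent(x, iters=50):
        for _ in range(iters):
            g = A @ x - b                      # gradient
            alpha = (g @ g) / (g @ A @ g)      # exact line search along -g
            x = x - alpha * g
        return x

    def conjugate_gradient(x, iters=2):
        r = b - A @ x
        p = r.copy()
        for _ in range(iters):                 # converges in at most dim(A) steps
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            p = r_new + (r_new @ r_new) / (r @ r) * p
            r = r_new
        return x

    print("exact:", x_star, "SD:", steepest_descent(np.zeros(2)), "CG:", conjugate_gradient(np.zeros(2)))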

  5. The User-Assisted Automated Experimental (TEST) Design Program (AED): Version II.

    DTIC Science & Technology

    1983-01-01

    System Development Corporation, 4134 Linden Avenue Suite 305 ...procedures and which maximize information return while minimizing the number of observations (tests) required. The overall experimental design...E. Taylor, SDC Colorado Springs, CO, for his work on the Central Composite Design, Mr. Edwin G. Meyer who developed many of the algorithms

  6. Experimental techniques for measuring Rayleigh-Taylor instability in inertial confinement fusion (ICF)

    SciTech Connect

    Smalyuk, V A

    2012-06-07

    Rayleigh-Taylor (RT) instability is one of the major concerns in inertial confinement fusion (ICF) because it amplifies target modulations in both acceleration and deceleration phases of implosion, which leads to shell disruption and performance degradation of imploding targets. This article reviews experimental results of the RT growth experiments performed on OMEGA laser system, where targets were driven directly with laser light. RT instability was studied in the linear and nonlinear regimes. The experiments were performed in acceleration phase, using planar and spherical targets, and in deceleration phase of spherical implosions, using spherical shells. Initial target modulations consisted of 2-D pre-imposed modulations, and 2-D and 3-D modulations imprinted on targets by the non-uniformities in laser drive. In planar geometry, the nonlinear regime was studied using 3-D modulations with broadband spectra near nonlinear saturation levels. In acceleration-phase, the measured modulation Fourier spectra and nonlinear growth velocities are in good agreement with those predicted by Haan's model [Haan S W 1989 Phys. Rev. A 39 5812]. In a real-space analysis, the bubble merger was quantified by a self-similar evolution of bubble size distributions [Oron D et al 2001 Phys. Plasmas 8, 2883]. The 3-D, inner-surface modulations were measured to grow throughout the deceleration phase of spherical implosions. RT growth rates are very sensitive to the drive conditions, therefore they can be used to test and validate drive physics in hydrodynamic codes used to design ICF implosions. Measured growth rates of pre-imposed 2-D target modulations below nonlinear saturation levels were used to validate non-local thermal electron transport model in laser-driven experiments.
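
    For reference, the classical linear growth rate that such experiments probe (before ablative stabilization and nonlinear saturation, as in Haan's model, modify it) is

    γ = √(A_T k g),  with Atwood number A_T = (ρ_heavy − ρ_light)/(ρ_heavy + ρ_light),  k = 2π/λ the modulation wavenumber and g the acceleration.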

  7. Experimental verification of subcooled flow boiling for tokamak pump limiter designs

    SciTech Connect

    Koski, J.A.; Beattie, A.G.; Whitley, J.B.; Croessmann, C.D.

    1987-01-01

    In fusion energy research devices such as tokamaks, limiters are used to define the plasma boundary, and may serve the additional functions of plasma density and impurity control by removing neutralized particles from the plasma edge region. Because the devices must operate in the plasma edge or "scrape-off-layer," they are subject to high heat fluxes. In this paper, experimental studies for a pump limiter design currently under development are discussed. Subcooled flow boiling of water and twisted tape flow enhancement are combined to enable heat removal of highly peaked local heat fluxes at the tube-water boundary in the 40 to 50 MW/m² range. Experiments were conducted with the use of a rastered 30 kV electron beam apparatus which is capable of producing the desired steady state heat flux levels. Objectives of the experiment were (1) to verify the heat removal model used for finite element thermal and stress analyses, (2) selection of appropriate critical heat flux (CHF) margins and criteria, and (3) development of acoustic techniques to monitor the onset of CHF during actual limiter operation.

  8. Optimizing the spectrofluorimetric determination of cefdinir through a Taguchi experimental design approach.

    PubMed

    Abou-Taleb, Noura Hemdan; El-Wasseef, Dalia Rashad; El-Sherbiny, Dina Tawfik; El-Ashry, Saadia Mohamed

    2016-05-01

    The aim of this work is to optimize a spectrofluorimetric method for the determination of cefdinir (CFN) using the Taguchi method. The proposed method is based on the oxidative coupling reaction of CFN and cerium(IV) sulfate. The quenching effect of CFN on the fluorescence of the produced cerous ions is measured at an emission wavelength (λem) of 358 nm after excitation (λex) at 301 nm. A Taguchi orthogonal array L9 (3^4) was designed to determine the optimum reaction conditions. The results were analyzed using the signal-to-noise (S/N) ratio and analysis of variance (ANOVA). The optimal experimental conditions obtained from this study were 1 mL of 0.2% MBTH, 0.4 mL of 0.25% Ce(IV), a reaction time of 10 min and methanol as the diluting solvent. The calibration plot displayed a good linear relationship over a range of 0.5-10.0 µg/mL. The proposed method was successfully applied to the determination of CFN in bulk powder and pharmaceutical dosage forms. The results are in good agreement with those obtained using the comparison method. Finally, the Taguchi method provided a systematic and efficient methodology for this optimization, with considerably less effort than would be required for other optimization techniques.
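
    A minimal sketch of the Taguchi bookkeeping described above: a standard L9 (3^4) orthogonal array, a larger-is-better signal-to-noise ratio, and the per-factor level means used to pick the optimum setting. The response values are made up for the example and are not the cefdinir data.

    import numpy as np

    # Standard L9 (3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0, 1, 2).
    L9 = np.array([
        [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
        [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
        [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
    ])

    def sn_larger_is_better(y):
        """Taguchi larger-is-better S/N ratio in dB: -10 log10(mean(1/y^2))."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # Hypothetical responses (e.g. fluorescence quenching) for the nine runs.
    response = np.array([52, 61, 58, 70, 66, 55, 64, 72, 60], dtype=float)
    sn = np.array([sn_larger_is_better([r]) for r in response])

    # Mean S/N at each level of each factor identifies the best setting per factor.
    for factor in range(4):
        means = [sn[L9[:, factor] == level].mean() for level in range(3)]
        print(f"factor {factor}: best level = {int(np.argmax(means))}, level means = {np.round(means, 2)}")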

  9. Single-Case Experimental Designs to Evaluate Novel Technology-Based Health Interventions

    PubMed Central

    Cassidy, Rachel N; Raiff, Bethany R

    2013-01-01

    Technology-based interventions to promote health are expanding rapidly. Assessing the preliminary efficacy of these interventions can be achieved by employing single-case experiments (sometimes referred to as n-of-1 studies). Although single-case experiments are often misunderstood, they offer excellent solutions to address the challenges associated with testing new technology-based interventions. This paper provides an introduction to single-case techniques and highlights advances in developing and evaluating single-case experiments, which help ensure that treatment outcomes are reliable, replicable, and generalizable. These advances include quality control standards, heuristics to guide visual analysis of time-series data, effect size calculations, and statistical analyses. They also include experimental designs to isolate the active elements in a treatment package and to assess the mechanisms of behavior change. The paper concludes with a discussion of issues related to the generality of findings derived from single-case research and how generality can be established through replication and through analysis of behavioral mechanisms. PMID:23399668
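
    One of the simple non-overlap effect sizes used with single-case time-series data is straightforward to compute; the sketch below implements nonoverlap of all pairs (NAP) on made-up baseline and treatment phases. The data are assumptions for the example.

    from itertools import product

    def nonoverlap_of_all_pairs(baseline, treatment):
        """NAP effect size: proportion of (baseline, treatment) pairs in which the treatment
        observation improves on the baseline observation (ties count half)."""
        pairs = list(product(baseline, treatment))
        wins = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
        return wins / len(pairs)

    # Hypothetical daily step counts before and during a technology-based intervention.
    print(nonoverlap_of_all_pairs([4200, 3900, 4100, 4000], [5200, 4800, 3950, 5100]))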

  10. All optical experimental design for neuron excitation, inhibition, and action potential detection

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Tolstykh, Gleb; Martens, Stacey; Sedelnikova, Anna; Ibey, Bennett L.; Beier, Hope T.

    2016-03-01

    Recently, infrared light has been shown both to stimulate and to inhibit excitatory cells. However, studies of infrared light for excitatory cell inhibition have been constrained by the use of invasive and cumbersome electrodes for cell excitation and action potential recording. Here, we present an all-optical experimental design for neuronal excitation, inhibition, and action potential detection. Primary rat neurons were transfected with plasmids containing the light-sensitive ion channel CheRiff. CheRiff has a peak excitation around 450 nm, allowing excitation of transfected neurons with pulsed blue light. Additionally, primary neurons were transfected with QuasAr2, a fast and sensitive fluorescent voltage indicator. QuasAr2 is excited with yellow or red light and therefore does not spectrally overlap with CheRiff, enabling simultaneous imaging and action potential activation. Using an optic fiber, neurons were exposed to blue light sequentially to generate controlled action potentials. A second optic fiber delivered a single pulse of 1869 nm light to the neuron, causing inhibition of the action potentials evoked by the blue light. When used in concert, these optical techniques enable electrode-free neuron excitation, inhibition, and action potential recording, allowing research into neuronal behaviors with high spatial fidelity.

  11. Rotor burst protection program: Experimentation to provide guidelines for the design of turbine rotor burst fragment containment rings

    NASA Technical Reports Server (NTRS)

    Mangano, G. J.; Salvino, J. T.; Delucia, R. A.

    1977-01-01

    Empirical guidelines for the design of minimum weight turbine rotor disk fragment containment rings made from a monolithic metal were generated by experimentally establishing the relationship between a variable that provides a measure of containment ring capability and several other variables that both characterized the configurational aspects of the rotor fragments and containment ring, and had been found from exploratory testing to have had significant influence on the containment process. Test methodology and data analysis techniques are described. Results are presented in graphs and tables.

  12. An Automated Tool for Developing Experimental Designs: The Computer-Aided Design Reference for Experiments (CADRE)

    DTIC Science & Technology

    2009-01-01

    survey procedures, and cognitive task analysis), system design methods (e.g., focus groups, design guidelines, specifications, and requirements), and...

  13. Integrated flight/propulsion control design for a STOVL aircraft using H-infinity control design techniques

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Ouzts, Peter J.

    1991-01-01

    Results are presented from an application of H-infinity control design methodology to a centralized integrated flight propulsion control (IFPC) system design for a supersonic Short Takeoff and Vertical Landing (STOVL) fighter aircraft in transition flight. The emphasis is on formulating the H-infinity control design problem such that the resulting controller provides robustness to modeling uncertainties and model parameter variations with flight condition. Experience gained from a preliminary H-infinity based IFPC design study performed earlier is used as the basis to formulate the robust H-infinity control design problem and improve upon the previous design. Detailed evaluation results are presented for a reduced order controller obtained from the improved H-infinity control design showing that the control design meets the specified nominal performance objectives as well as provides stability robustness for variations in plant system dynamics with changes in aircraft trim speed within the transition flight envelope. A controller scheduling technique which accounts for changes in plant control effectiveness with variation in trim conditions is developed and off design model performance results are presented.
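
    Robustness-oriented H-infinity designs of this kind are commonly posed as a weighted mixed-sensitivity problem; in standard notation (the specific weights of the STOVL study are not reproduced here),

    minimize over stabilizing K:  ‖ [ W1 S ; W2 K S ; W3 T ] ‖∞,  with S = (I + GK)⁻¹ and T = GK (I + GK)⁻¹,

    where W1, W2 and W3 weight tracking performance, control effort and robustness to unmodelled dynamics, respectively.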

  14. Designing for Damage: Robust Flight Control Design using Sliding Mode Techniques

    NASA Technical Reports Server (NTRS)

    Vetter, T. K.; Wells, S. R.; Hess, Ronald A.; Bacon, Barton (Technical Monitor); Davidson, John (Technical Monitor)

    2002-01-01

    A brief review of sliding mode control is undertaken, with particular emphasis upon the effects of neglected parasitic dynamics. Sliding mode control design is interpreted in the frequency domain. The inclusion of asymptotic observers and control 'hedging' is shown to reduce the effects of neglected parasitic dynamics. An investigation into the application of observer-based sliding mode control to the robust longitudinal control of a highly unstable aircraft is described. The sliding mode controller is shown to exhibit stability and performance robustness superior to that of a classical loop-shaped design when significant changes in vehicle and actuator dynamics are employed to model airframe damage.

  15. Experimental determination of Grunieisen gamma for two dissimilar materials (PEEK and Al 5083) via the shock-reverberation technique

    NASA Astrophysics Data System (ADS)

    Roberts, Andrew; Appleby-Thomas, Gareth; Hazell, Paul

    2011-06-01

    Following multiple loading events the resultant shock state of a material will lie away from the principal Hugoniot. Prediction of such states requires knowledge of a material's equation of state. The material-specific variable Grüneisen gamma (Γ) defines the shape of "off-Hugoniot" points in energy-volume-pressure space. Experimentally, the shock-reverberation technique (based on the principle of impedance matching) has previously allowed estimation of the first-order Grüneisen gamma term (Γ1) for a silicone elastomer. Here, this approach was employed to calculate Γ1 for two dissimilar materials, polyether ether ketone (PEEK) and the armour-grade aluminium alloy 5083 (H32), thereby allowing discussion of the limitations of this technique in the context of plate-impact experiments employing Manganin stress gauges. Finally, the experimentally determined values for Γ1 were further refined by comparison between experimental records and numerical simulations carried out using the commercial code ANSYS Autodyn®.
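
    Off-Hugoniot states of the kind discussed are conventionally computed from the Mie-Grüneisen relation; in standard notation,

    Γ = V (∂P/∂E)_V,   P(V, E) = P_H(V) + (Γ(V)/V) [E − E_H(V)],

    where P_H(V) and E_H(V) are the pressure and specific internal energy on the principal Hugoniot; a common further assumption is Γ/V = Γ0/V0 = constant.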

  16. Adhesive measurements of polymer bonded explosive constituents using the JKR experimental technique with a viscoelastic contact description

    NASA Astrophysics Data System (ADS)

    Hamilton, N. R.; Williamson, D. M.; Lewis, D.; Glauser, A.; Jardine, A. P.

    2017-01-01

    It has been shown experimentally that under many circumstances the strength limiting factor of Polymer Bonded Explosives (PBXs) is the adhesion which exists between the filler crystals and the polymer matrix. Experimental measurements of the Work of Adhesion between different binders and glass have been conducted using the JKR experimental technique, a reversible axisymmetric fracture experiment, during which the area of contact and the applied force are both measured during loading and unloading of the interface. The data taken with this technique show a rate dependence not present in the analytical JKR theory which is normally used to describe the adhesive contact of two elastic bodies, and which arises from the viscoelastic properties of the bulk polymer. The data is intended to inform the development, and validate the predictions of, microstructural models of PBX deformation and failure.
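
    The classical (elastic, rate-independent) JKR relation that the measured contacts deviate from is reproduced below for context; the symbols follow the usual convention and are not taken from the paper.

```latex
% JKR relation between contact radius a, applied load P, sphere radius R,
% reduced elastic modulus K = (4/3)E*, and work of adhesion w.
\[
  a^{3} \;=\; \frac{R}{K}\left[\,P + 3\pi w R
        + \sqrt{\,6\pi w R P + (3\pi w R)^{2}\,}\right],
  \qquad K = \tfrac{4}{3}\,E^{*}.
\]
% Rate dependence of the measured pull-off behaviour indicates viscoelastic
% contributions not captured by this purely elastic description.
```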

  17. Systematic design of output filters for audio class-D amplifiers via Simplified Real Frequency Technique

    NASA Astrophysics Data System (ADS)

    Hintzen, E.; Vennemann, T.; Mathis, W.

    2014-11-01

    In this paper, a new filter design concept is proposed and implemented which takes into account the complex loudspeaker impedance. By means of broadband matching techniques, which have been successfully applied in radio technology, we are able to optimize the reconstruction filter to achieve an overall linear frequency response. Here, a passive filter network is inserted between source and load that matches the complex load impedance to the complex source impedance within a desired frequency range. The design and calculation of the filter is usually done using numerical approximation methods which are known as Real Frequency Techniques (RFT). A first approach to systematic design of reconstruction filters for class-D amplifiers is proposed, using the Simplified Real Frequency Technique (SRFT). Some fundamental considerations are introduced as well as the benefits and challenges of impedance matching between class-D amplifiers and loudspeakers. Current simulation data using MATLAB is presented and supports some first conclusions.
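
    The motivating effect, namely that a reconstruction filter designed for a resistive load behaves differently into a real loudspeaker, can be illustrated with a short numerical sketch. The LC values and the crude series Re + jωLe loudspeaker model below are hypothetical, and this is not the SRFT procedure itself.

```python
# Frequency response of a simple LC reconstruction filter into a complex
# loudspeaker load (hypothetical component values; illustration only).
import numpy as np

f = np.logspace(2, 5, 400)          # 100 Hz to 100 kHz
w = 2 * np.pi * f
L, C = 22e-6, 680e-9                # hypothetical filter inductor and capacitor
Re, Le = 4.0, 0.3e-3                # crude speaker model: DC resistance + voice-coil inductance

Z_load = Re + 1j * w * Le
Z_C = 1.0 / (1j * w * C)
Z_shunt = Z_C * Z_load / (Z_C + Z_load)     # filter capacitor in parallel with the speaker
H = Z_shunt / (Z_shunt + 1j * w * L)        # voltage divider with the series inductor

i_10k = np.argmin(np.abs(f - 1e4))
print("gain at 10 kHz into the complex load: %.2f dB" % (20 * np.log10(np.abs(H[i_10k]))))
```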

  18. Study on the Ring Type Stator Design Technique for a Traveling Wave Rotary Type Ultrasonic Motor

    NASA Astrophysics Data System (ADS)

    Oh, Jin-Heon; Yuk, Hyung-Sang; Lim, Kee-Joe

    2012-09-01

    In this paper, a design technique for the stator of a traveling wave rotary type ultrasonic motor is proposed. To establish the design technique, the distribution of internal stresses in the stator was analyzed by applying the cylindrical-bodies contact model of Hertz theory, and the concept of the “horn effect” was used to account for the influence of the projection structure. To verify the proposed technique, a prototype motor was fabricated according to the projection dimensions and the design specification, and its performance was evaluated. By extrapolating the experimental results, we confirmed that the values obtained in the verification experiment agree well with those predicted by the proposed method.

  19. A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Watts, Stephen R.

    1995-01-01

    This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear engine model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and the H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.

  20. Research in advanced formal theorem-proving techniques. [design and implementation of computer languages

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Fikes, R.; Waldinger, R.

    1973-01-01

    The results of a project aimed at the design and implementation of computer languages to aid in expressing problem solving procedures in several areas of artificial intelligence, including automatic programming, theorem proving, and robot planning, are summarised. The principal results of the project were the design and implementation of two complete systems, QA4 and QLISP, and their preliminary experimental use. The various applications of both QA4 and QLISP are given.

  1. Experimental hydrogen-fueled automotive engine design data-base project. Volume 2. Main technical report

    SciTech Connect

    Swain, M.R.; Adt, R.R. Jr.; Pappas, J.M.

    1983-05-01

    Operational performance and emissions characteristics of hydrogen-fueled engines are reviewed. The project activities are reviewed, including descriptions of the test engine and its components, the test apparatus, experimental techniques, experiments performed, and the results obtained. Analyses of other hydrogen engine project data are also presented and compared with the results of the present effort.

  2. Experimental techniques for determination of the role of diffusion and convection in crystal growth from solution

    NASA Technical Reports Server (NTRS)

    Zefiro, L.

    1980-01-01

    Various studies of the concentration of the solution around a growing crystal using interferometric techniques are reviewed. A holographic interferometric technique used in laboratory experiments shows that a simple description of the solution based on the assumption of a purely diffusive mechanism appears inadequate since the convection, effective even in reduced columns, always affects the growth.

  3. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study.

    PubMed

    Hashim, H A; Abido, M A

    2015-01-01

    This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS) considering the most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed.
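
    To illustrate the gain-tuning idea (though not the TRMS dynamics, the fuzzy membership functions, or the exact cost used by the authors), the following is a minimal sketch that tunes PD gains with a basic global-best particle swarm on a toy second-order plant, using ITAE as the objective.

```python
# PSO tuning of PD gains on a toy second-order plant (all values illustrative).
import numpy as np

rng = np.random.default_rng(0)

def itae_cost(gains, dt=2e-3, t_end=3.0):
    """Integral of time-weighted absolute error for a unit step; plant: x'' = -x' - x + u."""
    kp, kd = gains
    x = x_dot = 0.0
    t, itae = 0.0, 0.0
    while t < t_end:
        e = 1.0 - x
        u = kp * e - kd * x_dot
        x_ddot = -x_dot - x + u
        x_dot += x_ddot * dt
        x += x_dot * dt
        t += dt
        itae += t * abs(e) * dt
    return itae

# Basic global-best PSO over (kp, kd).
n_particles, n_iters = 20, 30
lo, hi = 0.1, 20.0
pos = rng.uniform(lo, hi, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([itae_cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()]
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([itae_cost(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()]
print("best (kp, kd):", gbest, "ITAE:", pbest_val.min())
```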

  4. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study

    PubMed Central

    Hashim, H. A.; Abido, M. A.

    2015-01-01

    This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS) considering the most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738

  5. Time-domain technique for optimal design of digital-filter equalizers.

    NASA Technical Reports Server (NTRS)

    Burlage, D. W.; Houts, R. C.

    1972-01-01

    A technique is presented for the design of frequency-sampling and transversal digital filters from specified unit-impulse responses. The multiplier coefficients for the digital filter are specified by the use of a linear-programming algorithm. Examples include the design of digital filters to generate intersymbol-free pulses for data transmission over ideal bandlimited channels and to equalize data transmission channels that have known unit-impulse responses.
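
    A small sketch of the underlying idea (not the paper's frequency-sampling formulation) follows: transversal equalizer taps are chosen by a linear program that minimizes the worst-case deviation of the cascaded channel-plus-equalizer impulse response from a delayed unit pulse. The channel response, tap count, and delay below are hypothetical.

```python
# Transversal (FIR) equalizer taps via linear programming (minimax residual).
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

h = np.array([0.1, 1.0, 0.4, 0.2])          # hypothetical channel unit-impulse response
n_taps, delay = 9, 5
n_out = len(h) + n_taps - 1

# Convolution matrix: (H @ c) is the cascaded channel+equalizer impulse response.
H = toeplitz(np.r_[h, np.zeros(n_taps - 1)], np.r_[h[0], np.zeros(n_taps - 1)])
d = np.zeros(n_out)
d[delay] = 1.0

# Variables: [c (free-sign taps), t (worst-case error)]; minimize t subject to
#  H c - d <= t  and  -(H c - d) <= t.
A_ub = np.block([[H, -np.ones((n_out, 1))], [-H, -np.ones((n_out, 1))]])
b_ub = np.r_[d, -d]
cvec = np.r_[np.zeros(n_taps), 1.0]
res = linprog(cvec, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_taps + [(0, None)])
print("taps:", np.round(res.x[:-1], 3))
print("worst-case residual intersymbol interference:", res.x[-1])
```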

  6. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J.; Kaul, R. D.; Wallace, R. G.

    1983-01-01

    This Propagation Handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in some detail, in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. In order to make the Handbook readily usable to many engineers, it has been arranged in two parts. Chapters 2-5 comprise the descriptive part. They deal in some detail with rain systems, rain and attenuation models, depolarization and experimental data. Chapters 6 and 7 make up the design part of the Handbook and may be used almost independently of the earlier chapters. In Chapter 6, the design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. Chapter 7 addresses the questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results.

  7. Development of the Neuron Assessment for Measuring Biology Students’ Use of Experimental Design Concepts and Representations

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students’ competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new “experimentation assessments,” 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  8. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy J

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new "experimentation assessments," 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines.

  9. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors x₁ and x₂: one was a formulation factor (the amount of magnesium stearate) and the other was a processing factor (mixing time). Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g, 2.76 min (mixing time) for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g, 7.99 min for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence obtaining optimum formulations, allowing for a systematic and reliable experimental design method.

  10. Statistical evaluation and experimental design of a psoriasis xenograft transplantation model treated with cyclosporin A.

    PubMed

    Stenderup, Karin; Rosada, Cecilia; Alifrangis, Lene; Andersen, Søren; Dam, Tomas Norman

    2011-05-01

    Psoriasis xenograft transplantation models where human skin is transplanted onto immune-deficient mice are generally accepted in psoriasis research. Over the last decade, they have been widely employed to screen for new therapeutics with a potential anti-psoriatic effect. However, experimental designs differ in several parameters. Especially, the number of donors and grafts per experimental design varies greatly; numbers that are directly related to the probability of detecting statistically significant drug effects. In this study, we performed a statistical evaluation of the effect of cyclosporine A, a recognized anti-psoriatic drug, to generate a statistical model employable to simulate different scenarios of experimental designs and to calculate the associated statistical study power, defined as the probability of detecting a statistically significant anti-psoriatic drug treatment effect. Results showed that to achieve a study power of 0.8, at least 20 grafts per treatment group and a minimum of five donors should be included in the chosen experimental setting. To our knowledge, this is the first time that study power calculations have been performed to evaluate treatment effects in a psoriasis xenograft transplantation model. This study was based on a defined experimental protocol, thus other parameters such as drug potency, treatment protocol, mouse strain and graft size should, also, be taken into account when designing an experiment. We propose that the results obtained in this study may lend a more quantitative support to the validity of results obtained when exploring new potential anti-psoriatic drug effects.
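
    The notion of study power used above can be made concrete with a simulation sketch. The effect size, variance components, grouping scheme, and test below are hypothetical stand-ins rather than the published statistical model; the point is only to show how power is estimated as the fraction of simulated experiments that reach significance.

```python
# Monte Carlo power estimate for a two-group comparison with grafts nested in donors
# (hypothetical effect size and variance components; illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_power(n_donors=5, grafts_per_group=20, effect=0.8,
                   donor_sd=0.5, graft_sd=1.0, n_sim=2000, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        donors = rng.normal(0.0, donor_sd, n_donors)
        idx = np.arange(grafts_per_group) % n_donors   # spread grafts across donors
        ctrl = donors[idx] + rng.normal(0.0, graft_sd, grafts_per_group)
        trt = donors[idx] + effect + rng.normal(0.0, graft_sd, grafts_per_group)
        if stats.ttest_ind(trt, ctrl).pvalue < alpha:
            hits += 1
    return hits / n_sim

print("estimated power:", simulate_power())
```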

  11. Identification of Tools and Techniques to Enhance Interdisciplinary Collaboration During Design and Construction Projects.

    PubMed

    Keys, Yolanda; Silverman, Susan R; Evans, Jennie

    2016-01-01

    The purpose of this study was to collect the perceptions of design professionals and clinicians regarding design process success strategies and elements of interprofessional engagement and communication during healthcare design and construction projects. Additional objectives were to gather best practices to maximize clinician engagement and provide tools and techniques to improve interdisciplinary collaboration for future projects. Strategies are needed to enhance the design and construction process and create interactions that benefit not only the project but the individuals working to see its completion. Meaningful interprofessional collaboration is essential to any healthcare design project and making sure the various players communicate is a critical element. This was a qualitative study conducted via an online survey. Respondents included architects, construction managers, interior designers, and healthcare personnel who had recently been involved in a building renovation or new construction project for a healthcare facility. Responses to open-ended questions were analyzed for themes, and descriptive statistics were used to provide insight into participant demographics. Information on the impressions, perceptions, and opportunities related to clinician involvement in design projects was collected from nurses, architects, interior designers, and construction managers. Qualitative analysis revealed themes of clinician input, organizational dynamics, and a variety of communication strategies to be the most frequently mentioned elements of successful interprofessional collaboration. This study validates the need to include clinician input in the design process, to consider the importance of organizational dynamics on design team functioning, and to incorporate effective communication strategies during design and construction projects.

  12. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel

    PubMed Central

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-01-01

    In this Data in Brief paper, a central composite experimental design was planned to collect surface roughness data from an end milling operation of AISI 1045 steel. The surface roughness values are expected to vary under the action of several factors. The main objective here was to present a multivariate experimental design and data collection including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explore the influence of noise factors on the process performance. PMID:26909374
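
    As a sketch of this kind of plan (in coded units, with a hypothetical factor count and center-point replication rather than the study's actual factors), a two-factor rotatable central composite design with five center points yields the 13 runs mentioned above:

```python
# Two-factor central composite design in coded units (illustrative layout only).
import itertools
import numpy as np

def ccd_two_factor(alpha=np.sqrt(2.0), n_center=5):
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))   # 4 corner runs
    axial = np.array([[ alpha, 0.0], [-alpha, 0.0],
                      [0.0,  alpha], [0.0, -alpha]])                       # 4 axial runs
    center = np.zeros((n_center, 2))                                       # replicated center runs
    return np.vstack([factorial, axial, center])

design = ccd_two_factor()
print(design.shape[0], "runs")   # 4 factorial + 4 axial + 5 center = 13 runs
print(design)
```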

  13. The effectiveness of family planning programs evaluated with true experimental designs.

    PubMed Central

    Bauman, K E

    1997-01-01

    OBJECTIVES: This paper describes the magnitude of effects for family planning programs evaluated with true experimental designs. METHODS: Studies that used true experimental designs to evaluate family planning programs were identified and their results subjected to meta-analysis. RESULTS: For the 14 studies with the information needed to calculate effect size, the Pearson r between program and effect variables ranged from -.08 to .09 and averaged .08. CONCLUSIONS: The programs evaluated in the studies considered have had, on average, smaller effects than many would assume and desire. PMID:9146451

  14. Experimental techniques for the characterization and development of thermal barrier coating bond coat alloys

    NASA Astrophysics Data System (ADS)

    Thompson, Robert J.

    Thermal barrier coatings, commonly used in modern gas turbines and jet engines, are dynamic, multilayered structures consisting of a superalloy substrate, an Al-rich bond coat, a thermally grown oxide, and a ceramic top coat. Knowledge of the disparate material properties for each of the constituents of a thermal barrier coating is crucial to both better understanding and improving the performance of the system. The efforts of this dissertation quantify fundamental aspects of two intrinsic strain mechanisms that arise during thermal cycling. This includes measurement of the thermal expansion behavior for bond coats and superalloys as well as establishing specific ternary compositions associated with a strain-inducing martensitic phase transformation, which is known to occur in Ni-rich bond coat alloys. In order to quantify the coefficient of thermal expansion for a number of actual alloys extracted from contemporary thermal barrier coating systems, this work employs a noncontact high temperature digital image correlation technique to nearly 1100°C. The examined materials include: two commercial superalloys, two as-deposited commercial bond coat alloys, and three experimental bond coat alloys. The as-deposited specimens were created using a diffusion aluminizing and a low pressure plasma spray procedure to thicknesses on the order of 50 and 100 μm, respectively. For the plasma sprayed bond coat, a comparison with a bulk counterpart of identical composition indicated that deposition procedures have little effect on thermal expansion. An analytical model of oxide rumpling is used to show that the importance of thermal expansion mismatch between a commercial bond coat and its superalloy substrate is relatively small. Considerably higher expansion values are noted for a Ni-rich bond coat alloy, however, and modeling which includes this layer suggests that it may have a substantial influence on rumpling. Combinatorial methods based on diffusion multiples are also

  15. Design of Experimental Data Publishing Software for Neutral Beam Injector on EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Zhang, Xiaodan; Wu, Deyun

    2015-02-01

    Neutral Beam Injection (NBI) is one of the most effective means for plasma heating. Experimental Data Publishing Software (EDPS) is developed to publish experimental data so that the NBI system can be monitored remotely. In this paper, the architecture and implementation of EDPS, including the design of the communication module and the web page display module, are presented. EDPS is developed based on the Browser/Server (B/S) model, and works under the Linux operating system. Using the data source and communication mechanism of the NBI Control System (NBICS), EDPS publishes experimental data on the Internet.

  16. Experimental Assessment of Double Langmuir Probe Analysis Techniques in a Hall Thruster Plume

    DTIC Science & Technology

    2012-07-25

    ...magnitude higher than the extended far-field plume. Langmuir probes, an electrostatic diagnostic developed by Irving Langmuir in 1924 [5], are widely used... A recent theoretical study of double Langmuir probes led to the development of improved analytical techniques that account for probe electrode sheath

  17. Assessment of the Design Efficacy of a Preschool Vocabulary Instruction Technique

    ERIC Educational Resources Information Center

    Roskos, Kathleen; Burstein, Karen

    2011-01-01

    Broad-stroke approaches to vocabulary teaching in preschool include effective instructional elements, yet may be too ill-structured to affect the vocabulary learning of children experiencing serious delays. Using a formative research approach, this study examines the design potential of a supplemental vocabulary instruction technique that…

  18. Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.

    ERIC Educational Resources Information Center

    Bose, Anindya

    The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
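
    Since the methodology rests on PERT/CPM scheduling, a minimal critical path computation is sketched below; the activity network, durations, and task names are invented for illustration.

```python
# Forward/backward-pass critical path computation on a toy activity network.
from collections import defaultdict

durations = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 3}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
order = ["A", "B", "C", "D", "E"]            # topological order of activities

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for task in order:
    ES[task] = max((EF[p] for p in preds[task]), default=0)
    EF[task] = ES[task] + durations[task]

# Backward pass: latest finish (LF) and latest start (LS).
project_end = max(EF.values())
succs = defaultdict(list)
for task, ps in preds.items():
    for p in ps:
        succs[p].append(task)
LF, LS = {}, {}
for task in reversed(order):
    LF[task] = min((LS[s] for s in succs[task]), default=project_end)
    LS[task] = LF[task] - durations[task]

critical = [t for t in order if ES[t] == LS[t]]   # zero total float
print("project duration:", project_end, "critical path:", critical)
```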

  19. The Ticket to Retention: A Classroom Assessment Technique Designed to Improve Student Learning

    ERIC Educational Resources Information Center

    Divoll, Kent A.; Browning, Sandra T.; Vesey, Winona M.

    2012-01-01

    Classroom assessment techniques (CATs) or other closure activities are widely promoted for use in college classrooms. However, research on whether CATs improve student learning is mixed. The authors posit that the results are mixed because CATs were designed to "help teachers find out what students are learning in the classroom and how well…

  20. Reducing wrong patient selection errors: exploring the design space of user interface techniques.

    PubMed

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed.