Science.gov

Sample records for taguchi experimental design

  1. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages, it is felt that the Taguchi Method is extremely useful both for teaching experimental design and as a research tool, as will be shown with a number of brief examples.

  2. Spacecraft design optimization using Taguchi analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1991-01-01

    The quality engineering methods of Dr. Genichi Taguchi, employing design of experiments, are important statistical tools for designing high quality systems at reduced cost. The Taguchi method was utilized to study several simultaneous parameter level variations of a lunar aerobrake structure to arrive at the lightest weight configuration. Finite element analysis was used to analyze the unique experimental aerobrake configurations selected by the Taguchi method. Important design parameters affecting weight and global buckling were identified and the lowest weight design configuration was selected.

  3. Optimizing the spectrofluorimetric determination of cefdinir through a Taguchi experimental design approach.

    PubMed

    Abou-Taleb, Noura Hemdan; El-Wasseef, Dalia Rashad; El-Sherbiny, Dina Tawfik; El-Ashry, Saadia Mohamed

    2016-05-01

    The aim of this work is to optimize a spectrofluorimetric method for the determination of cefdinir (CFN) using the Taguchi method. The proposed method is based on the oxidative coupling reaction of CFN and cerium(IV) sulfate. The quenching effect of CFN on the fluorescence of the produced cerous ions is measured at an emission wavelength (λem) of 358 nm after excitation (λex) at 301 nm. A Taguchi L9 (3^4) orthogonal array was designed to determine the optimum reaction conditions. The results were analyzed using the signal-to-noise (S/N) ratio and analysis of variance (ANOVA). The optimal experimental conditions obtained from this study were 1 mL of 0.2% MBTH, 0.4 mL of 0.25% Ce(IV), a reaction time of 10 min and methanol as the diluting solvent. The calibration plot displayed a good linear relationship over the range of 0.5-10.0 µg/mL. The proposed method was successfully applied to the determination of CFN in bulk powder and pharmaceutical dosage forms. The results are in good agreement with those obtained using the comparison method. Finally, the Taguchi method provided a systematic and efficient methodology for this optimization, with considerably less effort than would be required by other optimization techniques.
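
    The L9 (3^4) array mentioned above assigns four three-level factors to nine runs and scores each run with a signal-to-noise ratio. A minimal illustrative sketch of that workflow in Python follows; the factor labels and response values are assumptions for demonstration only, not data from the study.

      import numpy as np

      # Standard Taguchi L9 orthogonal array: 9 runs, 4 factors, 3 levels (coded 0..2).
      L9 = np.array([
          [0, 0, 0, 0],
          [0, 1, 1, 1],
          [0, 2, 2, 2],
          [1, 0, 1, 2],
          [1, 1, 2, 0],
          [1, 2, 0, 1],
          [2, 0, 2, 1],
          [2, 1, 0, 2],
          [2, 2, 1, 0],
      ])

      factors = ["MBTH_volume", "CeIV_volume", "reaction_time", "diluent"]   # assumed labels
      y = np.array([61.0, 74.0, 68.0, 83.0, 77.0, 70.0, 66.0, 72.0, 79.0])   # hypothetical responses

      # "Larger is better" S/N ratio for a single replicate per run.
      sn = -10.0 * np.log10(1.0 / y**2)

      # Main effects: mean S/N at each level of each factor; the level with the
      # highest mean S/N is taken as the optimum setting for that factor.
      for j, name in enumerate(factors):
          level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
          print(name, [round(m, 2) for m in level_means], "-> best level:", int(np.argmax(level_means)))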

  4. Microcosm assays and Taguchi experimental design for treatment of oil sludge containing high concentration of hydrocarbons.

    PubMed

    Castorena-Cortés, G; Roldán-Carrillo, T; Zapata-Peñasco, I; Reyes-Avila, J; Quej-Aké, L; Marín-Cruz, J; Olguín-Lora, P

    2009-12-01

    Microcosm assays and a Taguchi experimental design were used to assess the biodegradation of an oil sludge produced by a gas processing unit. The study showed that biodegradation of the sludge sample is feasible despite the high level of pollutants and the complexity of the sludge. The physicochemical and microbiological characterization of the sludge revealed a high concentration of hydrocarbons (334,766 ± 7001 mg kg⁻¹ dry matter, d.m.) containing a variety of compounds with between 6 and 73 carbon atoms in their structure, whereas the concentration of Fe was 60,000 mg kg⁻¹ d.m. and that of sulfide was 26,800 mg kg⁻¹ d.m. A Taguchi L9 experimental design comprising 4 variables at 3 levels (moisture, nitrogen source, surfactant concentration and oxidant agent) was performed, showing that moisture and nitrogen source are the major variables affecting CO2 production and total petroleum hydrocarbon (TPH) degradation. The best experimental treatment yielded a TPH removal of 56,092 mg kg⁻¹ d.m. The treatment was carried out under the following conditions: 70% moisture, no oxidant agent, 0.5% surfactant and NH4Cl as nitrogen source.

  5. Parametric Appraisal of Process Parameters for Adhesion of Plasma Sprayed Nanostructured YSZ Coatings Using Taguchi Experimental Design

    PubMed Central

    Mantry, Sisir; Mishra, Barada K.; Chakraborty, Madhusudan

    2013-01-01

    This paper presents the application of the Taguchi experimental design in developing nanostructured yttria-stabilized zirconia (YSZ) coatings by the plasma spraying process. The paper examines the dependence of the adhesion strength of as-sprayed nanostructured YSZ coatings on various process parameters, and the effect of those process parameters on performance output has been studied using Taguchi's L16 orthogonal array design. Particle velocity prior to impacting the substrate, stand-off distance, and particle temperature are found to be the most significant parameters affecting the bond strength. To achieve retention of the nanostructure, the molten state of the nanoagglomerates (temperature and velocity) was monitored using a particle diagnostics tool. A maximum adhesion strength of 40.56 MPa was obtained experimentally by selecting optimum levels of the selected factors. The enhanced bond strength of the nano-YSZ coating may be attributed to higher interfacial toughness due to cracks being interrupted by adherent nanozones. PMID:24288490

  6. Optimization of Wear Behavior of Magnesium Alloy AZ91 Hybrid Composites Using Taguchi Experimental Design

    NASA Astrophysics Data System (ADS)

    Girish, B. M.; Satish, B. M.; Sarapure, Sadanand; Basawaraj

    2016-06-01

    In the present paper, the statistical investigation of the wear behavior of magnesium alloy (AZ91) hybrid metal matrix composites using the Taguchi technique is reported. The composites were reinforced with SiC and graphite particles of average size 37 μm. The specimens were processed by the stir casting route. Dry sliding wear of the hybrid composites was tested on a pin-on-disk tribometer under dry conditions at different normal loads (20, 40, and 60 N), sliding speeds (1.047, 1.57, and 2.09 m/s), and compositions (1, 2, and 3 wt pct of each of SiC and graphite). The design of experiments approach using the Taguchi technique was employed to statistically analyze the wear behavior of the hybrid composites. Signal-to-noise ratio and analysis of variance were used to investigate the influence of the parameters on the wear rate.

  7. Bioslurry phase remediation of chlorpyrifos contaminated soil: process evaluation and optimization by Taguchi design of experimental (DOE) methodology.

    PubMed

    Venkata Mohan, S; Sirisha, K; Sreenivasa Rao, R; Sarma, P N

    2007-10-01

    A design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence of eight biotic and abiotic factors (substrate-loading rate, slurry phase pH, slurry phase dissolved oxygen (DO), soil:water ratio, temperature, soil microflora load, application of bioaugmentation and humic substance concentration) on the bioremediation of soil-bound chlorpyrifos in a bioslurry phase reactor. The selected eight factors were considered at three levels (18 experiments) in the experimental design. Among the selected factors, substrate-loading rate showed the most significant influence on the bioremediation process. The optimum operating conditions derived by the methodology enhanced chlorpyrifos degradation from 1479.99 to 2458.33 µg/g (an overall 39.82% enhancement). The proposed method provided a systematic mathematical approach to understanding the complex bioremediation process and to optimizing the near-optimum design parameters with only a few well-defined experimental sets.

  8. Vertically aligned N-doped CNTs growth using Taguchi experimental design

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo M.; Fernandes, António J. S.; Ferro, Marta C.; Pinna, Nicola; Silva, Rui F.

    2015-07-01

    The Taguchi method with a parameter-design L9 orthogonal array was implemented to optimize nitrogen incorporation in the structure of vertically aligned N-doped CNTs grown by thermal chemical vapour deposition (TCVD). The maximization of the ID/IG ratio of the Raman spectra was selected as the target value. As a result, the optimal deposition configuration was NH3 = 90 sccm, growth temperature = 825 °C and catalyst pretreatment time of 2 min, the first parameter having the main effect on nitrogen incorporation. A confirmation experiment with these values was performed, ratifying the predicted ID/IG ratio of 1.42. Scanning electron microscopy (SEM) characterization revealed a uniform, completely vertically aligned array of multiwalled CNTs which individually exhibit a bamboo-like structure consisting of periodically curved graphitic layers, as depicted by high resolution transmission electron microscopy (HRTEM). The X-ray photoelectron spectroscopy (XPS) results indicated 2.00 at.% N incorporation in the CNTs, with pyridine-like and graphite-like nitrogen as the predominant species.

  9. Optimization of experimental parameters based on the Taguchi robust design for the formation of zinc oxide nanocrystals by solvothermal method

    SciTech Connect

    Yiamsawas, Doungporn; Boonpavanitchakul, Kanittha; Kangwansupamonkon, Wiyong

    2011-05-15

    Research highlights: Taguchi robust design can be applied to study ZnO nanocrystal growth. Spherical-like and rod-like ZnO nanocrystals can be obtained from the solvothermal method. The [NaOH]/[Zn2+] ratio is the most important factor for the aspect ratio of the prepared ZnO. -- Abstract: Zinc oxide (ZnO) nanoparticles and nanorods were successfully synthesized by a solvothermal process. Taguchi robust design was applied to study the factors which result in stronger ZnO nanocrystal growth. The factors studied were the molar concentration ratio of sodium hydroxide to zinc acetate, the amount of polymer template and the molecular weight of the polymer template. Transmission electron microscopy and the X-ray diffraction technique were used to analyze the experimental results. The results show that the concentration ratio of sodium hydroxide to zinc acetate has the greatest effect on ZnO nanocrystal growth.

  10. Removal of Bisphenol A from aqueous solution using surfactant-modified natural zeolite: Taguchi's experimental design, adsorption kinetic, equilibrium and thermodynamic study.

    PubMed

    Genç, Nevim; Kılıçoğlu, Ödül; Narci, Ali Oğuzhan

    2017-02-01

    In this study, surfactant-modified natural zeolite was used to remove Bisphenol A (BPA) from aqueous solutions. The kinetics, equilibrium and thermodynamics of BPA adsorption on the adsorbent surfaces were investigated. The experimental data were described by the Temkin isotherm and the pseudo-second-order kinetic model. Taguchi's robust design approach was used to optimize the adsorption of BPA. Experimentation was planned as per Taguchi's L27 orthogonal array. Tests were conducted with different adsorbent amounts, pH values, contact times, initial concentrations of BPA, temperatures and agitation speeds. The optimum levels of the control factors for maximum total organic carbon removal were defined (adsorbent amount of 0.25 g, pH of 7, time of 30 min, initial BPA concentration of 50 mg/L, temperature of 30°C and agitation speed of 200 rpm). The ANOVA analysis showed that the most effective control factor is adsorbent dosage, with a contribution of 56.4%; the contributions of pH and agitation speed are 7.5% and 7.6%, respectively. A confirmation experiment was conducted to verify the feasibility and effectiveness of the optimal combination. The observed value of the S/N ratio (ηobs = 39) was compared with the predicted value (ηopt = 48). The prediction error, that is, ηopt - ηobs = 9, is within the confidence interval (CI).

  11. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but they are time-, cost- and labor-intensive, and sometimes impossible to conduct as the number of factors and levels to consider increases. The Taguchi design is a simple, economical and efficient statistical tool that uses only a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h⁻¹) was conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted. Further work should test the applicability of the Taguchi design to a wider range of conditions.
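
    For three factors at five levels, the full factorial above needs 125 runs while an orthogonal 25-run subset suffices for main-effect estimates. The sketch below shows one standard way such a subset can be pulled from a full-factorial dataset; the modular construction and the synthetic erosion-rate values are assumptions for illustration, not the array or data used by the authors.

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      levels = range(5)

      # Hypothetical full-factorial erosion rates: one value per (length, gradient, intensity) combination.
      full = {combo: float(rng.normal(10.0, 2.0)) for combo in itertools.product(levels, levels, levels)}

      # One orthogonal 25-run layout for three five-level factors: columns (i, j, (i + j) % 5);
      # every pair of columns contains each level combination exactly once.
      taguchi_runs = [(i, j, (i + j) % 5) for i in levels for j in levels]
      subset = {run: full[run] for run in taguchi_runs}

      # Compare main effects estimated from the 25-run subset with the full 125-run dataset.
      for col, name in enumerate(["slope_length", "slope_gradient", "rain_intensity"]):
          sub_means = [np.mean([v for k, v in subset.items() if k[col] == lvl]) for lvl in levels]
          full_means = [np.mean([v for k, v in full.items() if k[col] == lvl]) for lvl in levels]
          print(name, np.round(sub_means, 2), np.round(full_means, 2))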

  12. Effect of Additives on Green Sand Molding Properties using Design of Experiments and Taguchi's Quality Loss Function - An Experimental Study

    NASA Astrophysics Data System (ADS)

    Desai, Bhagyashree; Mokashi, Pavani; Anand, R. L.; Burli, S. B.; Khandal, S. V.

    2016-09-01

    The experimental study aims to investigate the effect of various additives on green sand molding properties, since a particular combination of additives could yield the desired sand properties. The input parameters (factors) selected were water and powder (fly ash, coconut shell and tamarind), each at three levels. Experiments were planned using design of experiments (DOE). On the basis of these plans, experiments were conducted to understand the behavior of sand mould properties such as compression strength, shear strength and permeability number with the various additives. From the experimental results it could be concluded that the factors have a significant effect on the sand properties, as the P-value was found to be less than 0.05 for all the cases studied. An optimization based on the quality loss function was also performed. The study revealed that the quality loss associated with the tamarind powder was lower than that of the other additives selected for the study. The optimization based on the quality loss function and the parametric analysis using ANOVA suggested that tamarind powder at 8 g per kg of molding sand and a moisture content of 7% yield better properties for obtaining sound castings.

  13. A Comparison of Central Composite Design and Taguchi Method for Optimizing Fenton Process

    PubMed Central

    Asghar, Anam; Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. [Dye]ini, Dye:Fe2+, H2O2:Fe2+, and pH were identified as control variables, while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and a face-centered CCD were used for the experimental design. A maximum of 99% decolorization and 80% COD removal efficiency was obtained under optimum conditions. R² values of 0.97 and 0.95 for CCD and the Taguchi method, respectively, indicate that both models are statistically significant and in good agreement with each other. Furthermore, Prob > F less than 0.0500 and the ANOVA results indicate a good fit of the selected model to the experimental results. Nevertheless, the possibility of ranking the input variables in terms of percent contribution to the response value makes the Taguchi method a suitable approach for scrutinizing the operating parameters. For the present case, pH, with percent contributions of 87.62% and 66.2%, was ranked as the most contributing and significant factor. This finding of the Taguchi method was also verified by 3D contour plots of the CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications. PMID:25258741

  14. A comparison of central composite design and Taguchi method for optimizing Fenton process.

    PubMed

    Asghar, Anam; Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. [Dye]ini, Dye:Fe2+, H2O2:Fe2+, and pH were identified as control variables, while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and a face-centered CCD were used for the experimental design. A maximum of 99% decolorization and 80% COD removal efficiency was obtained under optimum conditions. R² values of 0.97 and 0.95 for CCD and the Taguchi method, respectively, indicate that both models are statistically significant and in good agreement with each other. Furthermore, Prob > F less than 0.0500 and the ANOVA results indicate a good fit of the selected model to the experimental results. Nevertheless, the possibility of ranking the input variables in terms of percent contribution to the response value makes the Taguchi method a suitable approach for scrutinizing the operating parameters. For the present case, pH, with percent contributions of 87.62% and 66.2%, was ranked as the most contributing and significant factor. This finding of the Taguchi method was also verified by 3D contour plots of the CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications.
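
    The percent contributions reported in both records above (for example 87.62% for pH) come from partitioning the total sum of squares in the ANOVA table. A minimal sketch of that calculation follows; the sums of squares are assumed values for illustration, not the study's results.

      # Percent contribution of each factor = its sum of squares / total sum of squares * 100.
      ss = {"pH": 430.0, "Dye_initial": 38.0, "Dye_to_Fe": 15.0, "H2O2_to_Fe": 8.0, "residual": 9.0}
      ss_total = sum(ss.values())

      for factor, s in sorted(ss.items(), key=lambda kv: -kv[1]):
          print(f"{factor:>12}: {100.0 * s / ss_total:5.1f} % contribution")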

  15. A Taguchi study of the aeroelastic tailoring design process

    NASA Technical Reports Server (NTRS)

    Bohlmann, Jonathan D.; Scott, Robert C.

    1991-01-01

    A Taguchi study was performed to determine the important players in the aeroelastic tailoring design process and to find the best composition of the optimization's objective function. The Wing Aeroelastic Synthesis Procedure (TSO) was used to ascertain the effects that factors such as composite laminate constraints, roll effectiveness constraints, and built-in wing twist and camber have on the optimum, aeroelastically tailored wing skin design. The results show the Taguchi method to be a viable engineering tool for computational inquiries, and provide some valuable lessons about the practice of aeroelastic tailoring.

  16. Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique

    NASA Astrophysics Data System (ADS)

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2014-01-01

    The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.

  17. Which is better for optimizing the biosorption process of lead - central composite design or the Taguchi technique?

    PubMed

    Azari, Ali; Mesdaghinia, Alireza; Ghanizadeh, Ghader; Masoumbeigi, Hossein; Pirsaheb, Meghdad; Ghafari, Hamid Reza; Khosravi, Touba; Sharafi, Kiomars

    2016-09-01

    The aim of this study is to evaluate central composite design (CCD) and the Taguchi technique for the adsorption process. Contact time, initial concentration, and pH were selected as the variables, and the removal efficiency of Pb was chosen as the designated response. In addition, a face-centered CCD and the L9 orthogonal array were used for the experimental design. The results indicated that, at optimum conditions, the removal efficiency of Pb was 80%. The value of R² was greater than 0.95 for both the CCD and Taguchi techniques, which revealed that both techniques were suitable and in conformity with each other. Moreover, the results of analysis of variance and Prob > F < 0.05 showed an appropriate fit of the designated model to the experimental results. The possibility of ranking the contributing variables by the percentage of the response quantity (Pb removal) they explain makes the Taguchi model an appropriate method for examining the effectiveness of different factors. pH was evaluated as the most influential input factor, as it contributed 66.2% of Pb removal. The Taguchi technique was additionally confirmed by three-dimensional contour plots of the CCD. Consequently, the Taguchi method, with nine experimental runs and easy interaction plots, is an appropriate substitute for CCD for several chemical engineering functions.

  18. Using Taguchi robust design method to develop an optimized synthesis procedure of nanocrystalline cancrinite

    NASA Astrophysics Data System (ADS)

    Azizi, Seyed Naser; Asemi, Neda; Samadi-Maybodi, Abdolrouf

    2012-09-01

    In this study, perlite was used as a low-cost source of Si and Al for the synthesis of nanocrystalline cancrinite zeolite. The synthesis of cancrinite zeolite from perlite by alkaline hydrothermal treatment under saturated steam pressure was investigated. A statistical Taguchi design of experiments was employed to evaluate the effects of process variables such as type of aging, aging time and hydrothermal crystallization time on the crystallinity of the synthesized zeolite. From statistical analysis of the experimental results using the Taguchi design, the optimum conditions for maximum crystallinity of nanocrystalline cancrinite were obtained as microwave-assisted aging, 60 min aging time and 6 h hydrothermal crystallization time. The synthesized samples were characterized by XRD, FT-IR and FE-SEM techniques. The results showed that microwave-assisted aging can shorten the crystallization time and reduce the crystal size to form nanocrystalline cancrinite zeolite.

  19. Taguchi design-based optimization of sandwich immunoassay microarrays for detecting breast cancer biomarkers.

    PubMed

    Luo, Wen; Pla-Roca, Mateu; Juncker, David

    2011-07-15

    Taguchi design, a statistics-based design of experiment method, is widely used for the optimization of products and complex production processes in many different industries. However, its use for antibody microarray optimization has remained underappreciated. Here, we provide a brief explanation of Taguchi design and present its use for the optimization of an antibody sandwich immunoassay microarray with five breast cancer biomarkers: CA15-3, CEA, HER2, MMP9, and uPA. Two successive optimization rounds, each with 16 experimental trials, were performed. We tested three factors (capture antibody, detection antibody, and analyte) at four different levels (concentrations) in the first round and seven factors (including buffer solution, streptavidin-Cy5 dye conjugate concentration, and incubation times for five assay steps) with two levels each in the second round; five two-factor interactions between selected pairs of factors were also tested. The optimal levels for each factor as measured by net assay signal increase were determined graphically, and the significance of each factor was analyzed statistically. The concentration of capture antibody, streptavidin-Cy5, and buffer composition were identified as the most significant factors for all assays; analyte incubation time and detection antibody concentration were significant only for MMP9 and CA15-3, respectively. Interactions between pairs of factors were identified, but were less influential compared with single-factor effects. After Taguchi optimization, the assay sensitivity was improved between 7 and 68 times, depending on the analyte, reaching 640 fg/mL for uPA, and the maximal signal intensity increased between 1.8 and 3 times. These results suggest that Taguchi design is an efficient and useful approach for the rapid optimization of antibody microarrays.

  20. Fabrication and optimization of camptothecin loaded Eudragit S 100 nanoparticles by Taguchi L4 orthogonal array design

    PubMed Central

    Mahalingam, Manikandan; Krishnamoorthy, Kannan

    2015-01-01

    Introduction: The objective of this investigation was to design and optimize the experimental conditions for the fabrication of camptothecin (CPT) loaded Eudragit S 100 nanoparticles, and to understand the effect of various process parameters on the average particle size, particle size uniformity and surface area of the prepared polymeric nanoparticles using Taguchi design. Materials and Methods: CPT loaded Eudragit S 100 nanoparticles were prepared by the nanoprecipitation method and characterized with a particle size analyzer. A Taguchi orthogonal array design was implemented to study the influence of seven independent variables on three dependent variables. Eight experimental trials involving the seven independent variables at higher and lower levels were generated by Design-Expert. Results: The factorial design results showed that (a) except for β-cyclodextrin concentration, none of the parameters significantly influenced the average particle size (R1); (b) except for sonication duration and aqueous phase volume, all of the process parameters significantly influenced the particle size uniformity; (c) none of the process parameters significantly influenced the surface area. Conclusion: The R1, particle size uniformity and surface area of the prepared drug-loaded polymeric nanoparticles were found to be 120 nm, 0.237 and 55.7 m²/g, and the results correlated well with the data generated by the Taguchi design method. PMID:26258056
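
    The eight trials with seven two-level factors described above correspond to an L8 (2^7) type layout. A minimal sketch of how such a layout can be generated and checked for orthogonality follows; the column construction is generic and the factors are placeholders, not the study's variables.

      from collections import Counter
      from itertools import product

      # Build eight runs for seven two-level columns from three basis bits and their XOR combinations.
      runs = [(a, b, a ^ b, c, a ^ c, b ^ c, a ^ b ^ c) for a, b, c in product((0, 1), repeat=3)]
      for i, run in enumerate(runs, start=1):
          print(f"trial {i}: levels {run}")

      # Orthogonality check: every pair of columns contains each of the four
      # two-level combinations exactly twice.
      cols = list(zip(*runs))
      balanced = Counter({(0, 0): 2, (0, 1): 2, (1, 0): 2, (1, 1): 2})
      assert all(Counter(zip(cols[p], cols[q])) == balanced
                 for p in range(7) for q in range(p + 1, 7))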

  1. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy saving and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) materials. To strengthen a UHSS material such as boron steel, it must undergo a hot stamping process in which it is heated at a certain temperature for a certain time. In this paper, the Taguchi method is applied to determine the appropriate thickness, heating temperature and heating time to achieve optimum strength of boron steel. The experiment is conducted using a flat, square hot stamping tool with a tensile dog-bone specimen as the blank product. The tensile strength and hardness are then measured as responses. The results showed that lower thickness and higher heating temperature and heating time give higher strength and hardness in the final product. In conclusion, boron steel blanks are able to achieve up to 1200 MPa tensile strength and 650 HV hardness.

  2. Formulation Development and Evaluation of Hybrid Nanocarrier for Cancer Therapy: Taguchi Orthogonal Array Based Design

    PubMed Central

    Tekade, Rakesh K.; Chougule, Mahavir B.

    2013-01-01

    Taguchi orthogonal array design is a statistical approach that helps to overcome the limitations associated with time-consuming full factorial experimental designs. In this study, the Taguchi orthogonal array design was applied to establish the optimum conditions for bovine serum albumin (BSA) nanocarrier (ANC) preparation. The Taguchi method with an L9-type robust orthogonal array design was adopted to optimize the experimental conditions. Three key factors, namely BSA concentration (% w/v), volume of BSA solution to total ethanol ratio (v : v), and concentration of diluted ethanolic aqueous solution (% v/v), were studied at three levels: 3%, 4%, and 5% w/v; 1 : 0.75, 1 : 0.90, and 1 : 1.05 v/v; and 40%, 70%, and 100% v/v, respectively. The ethanolic aqueous solution was used to impart a less harsh condition for desolvation and attain controlled nanoparticle formation. The interaction plot studies inferred the ethanolic aqueous solution concentration to be the most influential parameter affecting the particle size of the nanoformulation. This method (BSA, 4% w/v; volume of BSA solution to total ethanol ratio, 1 : 0.90 v/v; concentration of diluted ethanolic solution, 70% v/v) was able to successfully develop a Gemcitabine (G) loaded modified albumin nanocarrier (M-ANC-G) of size 25.07 ± 2.81 nm (ζ = −23.03 ± 1.015 mV), as against 78.01 ± 4.99 nm (ζ = −24.88 ± 1.37 mV) for the albumin nanocarrier prepared by the conventional method (C-ANC-G). Hybrid nanocarriers were generated by chitosan layering (solvent gelation technique) of the respective ANCs to form C-HNC-G and M-HNC-G of sizes 125.29 ± 5.62 nm (ζ = 12.01 ± 0.51 mV) and 46.28 ± 2.21 nm (ζ = 15.05 ± 0.39 mV), respectively. Zeta potential, entrapment, in vitro release, and pH-based stability studies were investigated and the influence of formulation parameters is discussed. Cell-line-based cytotoxicity assay (A549 and H460 cells) and cell internalization assay (H460 cell line) were

  3. Taguchi statistical design and analysis of cleaning methods for spacecraft materials

    NASA Technical Reports Server (NTRS)

    Lin, Y.; Chung, S.; Kazarians, G. A.; Blosiu, J. O.; Beaudet, R. A.; Quigley, M. S.; Kern, R. G.

    2003-01-01

    In this study, we extensively tested various cleaning protocols. The parameters varied included the type and concentration of solvent, type of wipe, pretreatment conditions, and various rinsing systems. The Taguchi statistical method was used to design and evaluate the various cleaning conditions on ten common spacecraft materials.

  4. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method.

    PubMed

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-06-13

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using a Taguchi orthogonal array experimental design and Partial Least Squares Regression, we investigated the impact and comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrate release from green macroalgae of the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that, within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest conversion yields: 54.134-57.500 g ethanol kg⁻¹ Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis.

  5. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method

    PubMed Central

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-01-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using a Taguchi orthogonal array experimental design and Partial Least Squares Regression, we investigated the impact and comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrate release from green macroalgae of the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that, within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest conversion yields: 54.134-57.500 g ethanol kg⁻¹ Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis. PMID:27291594

  6. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method

    NASA Astrophysics Data System (ADS)

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-06-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using a Taguchi orthogonal array experimental design and Partial Least Squares Regression, we investigated the impact and comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrate release from green macroalgae of the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that, within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest conversion yields: 54.134-57.500 g ethanol kg⁻¹ Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis.

  7. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations to the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. In order for SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high quality space systems at low cost. In recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. Taguchi methods have been used successfully in Japan and the United States in designing reliable, high quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, and to describe the current state of applications and their role in identifying cost-sensitive design parameters.

  8. Economic design of X̄ & S control charts based on Taguchi's loss function and its optimization

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Yang, Wen'an; Liao, Wenhe; Gao, Shiwen

    2012-05-01

    Much research effort has been devoted to the economic design of X̄ & S control charts; however, there are some problems with the usual methods. On the one hand, it is difficult to estimate the relationship between costs and other model parameters, so the economic design method is often not effective in producing charts that can quickly detect small shifts before substantial losses occur; on the other hand, in many cases only one type of process shift or only one pair of process shifts is taken into consideration, which may not correctly reflect actual process conditions. To improve the behavior of the economic design of control charts, a cost & loss model incorporating Taguchi's loss function for the economic design of X̄ & S control charts is established and is regarded as an optimization problem with multiple statistical constraints. The optimization design is also carried out based on a number of combinations of process shifts collected from field operation of conventional control charts, so that more hidden information about the shift combinations is mined and employed in the optimization design of the control charts. At the same time, an improved particle swarm optimization (IPSO) is developed to solve this optimization problem in the design of X̄ & S control charts. IPSO is first tested on several benchmark problems from the literature and evaluated with standard performance metrics. Experimental results show that the proposed algorithm has significant advantages in obtaining the optimal design parameters of the charts. The proposed method can substantially reduce the total cost (or loss) of the control charts, and it will be a promising tool for the economic design of control charts.
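
    The loss model referred to above rests on Taguchi's quadratic loss function, which charges a cost that grows with the squared deviation of the quality characteristic from its target. A minimal sketch follows; the coefficient k, the target and the example numbers are assumptions for illustration only.

      # Taguchi quadratic loss: L(y) = k * (y - m)^2 for a single item, and
      # expected loss E[L] = k * (sigma^2 + (mu - m)^2) for a process with mean mu and std sigma.
      def taguchi_loss(y: float, target: float, k: float) -> float:
          return k * (y - target) ** 2

      def expected_loss(mu: float, sigma: float, target: float, k: float) -> float:
          return k * (sigma ** 2 + (mu - target) ** 2)

      # Illustrative numbers: target 10.0, k = 2.5 cost units per squared unit of deviation.
      print(taguchi_loss(10.4, target=10.0, k=2.5))        # single item 0.4 off target -> loss 0.4
      print(expected_loss(10.1, 0.2, target=10.0, k=2.5))  # slightly shifted, noisy process -> loss 0.125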

  9. Application of Taguchi robust design method to SAW mass sensing device.

    PubMed

    Wu, Der Ho; Chen, Hsin Hua

    2005-12-01

    It is essential that measurement systems provide accurate and robust performance over a wide range of input conditions. This paper adopts Taguchi's signal-to-noise ratio (SNR) analysis to develop a robust design for a Rayleigh surface acoustic wave (SAW) gas sensing device operated in a conventional delay-line configuration. The goal of the present Taguchi design activity is to increase the sensitivity of this sensor while simultaneously reducing its variability. A time- and cost-efficient finite-element analysis method is used to investigate the effects of variations in the deposited carbon dioxide (CO2) gas mass on the sensor's response output. The simulation results for the resonant frequency and wave mode analysis are all shown to be in good agreement with the theoretically predicted values.

  10. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Walberg, Gerald D.

    1993-01-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  11. Application of Taguchi Design and Response Surface Methodology for Improving Conversion of Isoeugenol into Vanillin by Resting Cells of Psychrobacter sp. CSW4

    PubMed Central

    Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir

    2013-01-01

    For all industrial processes, modelling, optimisation and control are the keys to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed to screen the significant variables in the bioconversion medium. Sequentially, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (initial concentrations of isoeugenol, NaCl, biomass and Tween 80), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. With the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial Tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized values of the factors were validated in a triplicate shaking-flask study, and an average vanillin concentration of 2.19 g/L, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first to report the application of Taguchi design and Response Surface Methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions. PMID:24250648

  12. Application of Taguchi Design and Response Surface Methodology for Improving Conversion of Isoeugenol into Vanillin by Resting Cells of Psychrobacter sp. CSW4.

    PubMed

    Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir

    2013-01-01

    For all industrial processes, modelling, optimisation and control are the keys to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed to screen the significant variables in the bioconversion medium. Sequentially, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (initial concentrations of isoeugenol, NaCl, biomass and Tween 80), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. With the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial Tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized values of the factors were validated in a triplicate shaking-flask study, and an average vanillin concentration of 2.19 g/L, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first to report the application of Taguchi design and Response Surface Methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions.

  13. Mixed matrix membrane application for olive oil wastewater treatment: process optimization based on Taguchi design method.

    PubMed

    Zirehpour, Alireza; Rahimpour, Ahmad; Jahanshahi, Mohsen; Peyravi, Majid

    2014-01-01

    Olive oil mill wastewater (OMW) is a concentrated effluent with a high organic load, containing high levels of organic chemical oxygen demand (COD) and phenolic compounds. This study presents a unique process to treat OMW using ultrafiltration (UF) membranes modified with functionalized multi-wall carbon nanotubes (F-MWCNTs) with an inner diameter of 15-30 nm, added to improve membrane performance. Tests were done to evaluate the operating parameters of the UF system (pressure, pH and temperature) and the performance parameters (permeate flux, flux decline, COD removal and total phenol rejection). The Taguchi robust design method was applied for an optimization evaluation of the experiments. Analysis of variance (ANOVA) was used to determine the most significant parameters affecting permeate flux, flux decline, COD removal and total phenol rejection. The results demonstrated coagulation and pH to be the most important factors affecting the permeate flux of the UF. Moreover, pH and the F-MWCNT UF membrane had significant positive effects on flux decline, COD removal and total phenol rejection. Under the optimum conditions determined by the Taguchi method, the permeate flux, flux decline, COD removal and total phenol rejection were about 21.2 kg/(m² h), 12.6%, 72.6% and 89.5%, respectively. These results were in good agreement with those predicted by the Taguchi method (i.e., 22.8 kg/(m² h), 11.9%, 75.8% and 94.7%, respectively). The mechanical performance of the membrane and its suitability for treating wastewater with a high organic load were found to be strong.

  14. Statistical Design of MOS VLSI (Very Large Scale Integrated) Circuits with Designed Experiments

    DTIC Science & Technology

    1990-03-01

    ... prohibitively large number of experimental runs, however. The Taguchi method collapses data from many (circuit simulator) runs into his so-called "signal-to-noise" ... designable parameters. We give a circuit example where the Taguchi objectives are met with about two-thirds fewer runs than [9]. The Taguchi method is ... In Chapter 5 we applied the circuit performance modeling method to achieve off-line quality control. The Taguchi method for off-line quality control is reviewed. Taguchi's ...

  15. Applying Taguchi Methods To Brazing Of Rocket-Nozzle Tubes

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Bellows, William J.; Deily, David C.; Brennan, Alex; Somerville, John G.

    1995-01-01

    This report describes an experimental study in which Taguchi Methods were applied with a view toward improving the brazing of coolant tubes in the nozzle of the space shuttle main engine. Dr. Taguchi's parameter design technique was used to define proposed modifications of the brazing process that reduce manufacturing time and cost by reducing the number of furnace brazing cycles and the number of tube-gap inspections needed to achieve the desired small gaps between tubes.

  16. Hydrometallurgical Extraction of Vanadium from Mechanically Milled Oil-Fired Fly Ash: Analytical Process Optimization by Using Taguchi Design Method

    NASA Astrophysics Data System (ADS)

    Parvizi, Reza; Khaki, Jalil Vahdati; Moayed, Mohammad Hadi; Ardani, Mohammad Rezaei

    2012-12-01

    In this study, the Taguchi design method was employed to determine the optimum experimental parameters for the extraction of vanadium by NaOH leaching of oil-fired fly ash. Prior to the designed experiments, the raw precipitates were mechanically milled using a high-energy planetary ball mill. The experimental parameters investigated were as follows: mechanical milling (MM) time (2 and 5 hours), NaOH concentration (1 and 2 molar) as reaction solution (RS), powder-to-solution (P/S) ratio (100/400 and 100/600 mg/mL), temperature (T) of the reaction system (303 K and 333 K [30 °C and 60 °C]), stirring time (ST) of the reaction media (4 and 12 hours), stirring speed (SS) adjusted to 400 and 600 rpm, and rinsing time (RT) of the remaining filtrates (1 and 3 hours). Statistical analysis of the signal-to-noise ratio followed by analysis of variance was performed in order to estimate the optimum levels and their relative contributions. Data analysis was carried out using an L8 orthogonal array consisting of seven parameters, each with two levels. The optimum conditions were MM1 (3 hours), RS2 (2 molar NaOH), P/S2 (100/600 mg/mL), T2 (333 K [60 °C]), ST2 (12 hours), SS1 (400 rpm), and RT1 (1 hour). Finally, from environmental and economical points of view, the process is faster and better organized when this analytical design method is employed.

  17. Interactive design optimization of magnetorheological-brake actuators using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Erol, Ozan; Gurocak, Hakan

    2011-10-01

    This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied for re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.

  18. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.

  19. Application of Taguchi method in optimization of cervical ring cage.

    PubMed

    Yang, Kai; Teo, Ee-Chon; Fuss, Franz Konstantin

    2007-01-01

    The Taguchi method is a statistical approach that overcomes the limitations of factorial and fractional factorial experiments by simplifying and standardizing the fractional factorial design. The objective of the current study is to illustrate the procedures and strengths of the Taguchi method in biomechanical analysis by using a case study of cervical ring cage optimization. A three-dimensional finite element (FE) model of C5-C6 with a generic cervical ring cage inserted was modelled. The Taguchi method was applied to optimize the material properties and dimensions of the cervical ring cage for producing the lowest stress on the endplate, to reduce the risk of cage subsidence, in the following steps: (1) establishment of the objective function; (2) determination of controllable factors and their levels; (3) identification of uncontrollable factors and test conditions; (4) design of the Taguchi crossed array layout; (5) execution of experiments according to the trial conditions; (6) analysis of results; (7) determination of the optimal run; (8) confirmation of the optimum run. The results showed that a cage with larger width, depth and wall thickness produces lower von Mises stress under various conditions. The contribution of the implant material was found to be trivial. The current case study illustrates that the strengths of the Taguchi method lie in (1) consistency in experimental design and analysis; (2) reduction of the time and cost of experiments; (3) robustness of performance through the handling of noise factors. The Taguchi method has great potential for application in the biomechanical field when the factors of interest are at discrete levels.
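
    Steps (4)-(7) above can be pictured as an inner (control-factor) array crossed with an outer (noise) array, with a signal-to-noise ratio summarizing each inner run across the noise conditions. A minimal sketch follows; the factor names, noise conditions and stress values are invented placeholders, not the parameters or results of the study.

      import numpy as np

      # Inner L4 (2^3) array for three two-level control factors (placeholders: width, depth, wall thickness).
      inner = np.array([
          [0, 0, 0],
          [0, 1, 1],
          [1, 0, 1],
          [1, 1, 0],
      ])

      # Hypothetical endplate stress (MPa) for each inner run under three noise/loading
      # conditions forming the outer array: rows = inner runs, columns = noise conditions.
      stress = np.array([
          [4.2, 4.8, 5.1],
          [3.6, 3.9, 4.4],
          [3.1, 3.4, 3.8],
          [2.9, 3.0, 3.5],
      ])

      # "Smaller is better" S/N ratio per inner run, computed across the noise columns.
      sn = -10.0 * np.log10((stress ** 2).mean(axis=1))

      # Main effects: mean S/N per level of each control factor; the level with the
      # higher mean S/N is the more robust (lower, less variable stress) choice.
      for j, name in enumerate(["width", "depth", "wall_thickness"]):
          level_means = [sn[inner[:, j] == lvl].mean() for lvl in (0, 1)]
          print(name, np.round(level_means, 2), "-> optimal level:", int(np.argmax(level_means)))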

  20. Workspace design for crane cabins applying a combined traditional approach and the Taguchi method for design of experiments.

    PubMed

    Spasojević Brkić, Vesna K; Veljković, Zorica A; Golubović, Tamara; Brkić, Aleksandar Dj; Kosić Šotić, Ivana

    2016-01-01

    Procedures in the development process of crane cabins are arbitrary and subjective. Since approximately 42% of incidents in the construction industry are linked to them, there is a need to collect fresh anthropometric data and provide additional recommendations for design. In this paper, dimensioning of the crane cabin interior space was carried out using a sample of 64 crane operators' anthropometric measurements in the Republic of Serbia, describing the workspace with 10 parameters derived from nine anthropometric measurements of each crane operator. The paper applies experiments run via full factorial designs using a combined traditional and Taguchi approach. The experiments indicated which design parameters are influenced by which anthropometric measurements and to what degree. The results are expected to be of use to crane cabin designers and should assist them in designing a cabin that leads to less strenuous sitting postures and less fatigue for operators, thus improving safety and accident prevention.

  1. Evaluation of contributions of orthodontic mini-screw design factors based on FE analysis and the Taguchi method.

    PubMed

    Lin, Chun-Li; Yu, Jian-Hong; Liu, Heng-Liang; Lin, Chih-Hao; Lin, Yang-Sung

    2010-08-10

    This study determines the relative effects of changes in bone/mini-screw osseointegration and mini-screw design factors (length, diameter, thread shape, thread depth, material, head diameter and head exposure length) on the biomechanical response of a single mini-screw insertion. Eighteen CAD and finite element (FE) models corresponding to a Taguchi L18 array were constructed to perform numerical simulations of the mechanical response of a mini-screw placed in a cylindrical bone. The Taguchi method was employed to determine the significance of each design factor in controlling strain. Simulation results indicated that mini-screw material, screw exposure length and screw diameter were the major factors affecting bone strain, with percentage contributions of 63%, 24% and 7%, respectively. Bone strain decreased markedly when the screw material had the high elastic modulus of stainless-steel/titanium alloys, the exposure length was small and the diameter was large. The other factors had no significant effect on bone strain. The FE analysis combined with the Taguchi method efficiently identified the relative contributions of several mini-screw design factors, indicating that using a strong stainless-steel/titanium alloy as the screw material is advantageous, and that an increase in mechanical stability can be achieved by reducing the screw exposure length. Simulation results also revealed that mini-screw and bone surface contact can provide sufficient mechanical retention to permit immediate loading in clinical treatment.

  2. Indole-3-acetic acid (IAA) production in symbiotic and non-symbiotic nitrogen-fixing bacteria and its optimization by Taguchi design.

    PubMed

    Shokri, Dariush; Emtiazi, Giti

    2010-09-01

    Production of indole-3-acetic acid (IAA) in 35 different symbiotic and non-symbiotic nitrogen-fixing bacterial strains isolated from soil and plant roots was studied and assayed by chromatography and colorimetric methods. These bacteria included Agrobacterium, Paenibacillus, Rhizobium, Klebsiella oxytoca, and Azotobacter. The best general medium and the synergistic effects of the isolates on IAA production were investigated. The effects of different variables, comprising physical parameters and key media components, were examined and the conditions for IAA production were optimized using design of experiments; Qualitek-4 (W32b) software, based on the Taguchi method, was used for automatic design and analysis of the experiments. The results showed that the symbiotic Rhizobium strains and the non-symbiotic Paenibacillus strains yielded the highest concentrations of IAA (in the range of 0.27-5.23 and 0.19-4.90 ppm IAA/mg biomass, respectively), and IAA production was further increased by their synergistic effect. Yeast Extract Mannitol medium supplemented with L-tryptophan was the best general medium for IAA production. The analysis of the experimental data using the Taguchi method indicated that the nitrogen source is the most prominent variable affecting the yield, and that mannitol as the carbon source, potassium nitrate (1%) and L-tryptophan (3 g/l) as the nitrogen sources, and 72 h of incubation at 30 degrees C were the optimum conditions for production of IAA. Under these optimal conditions, 5.89 ppm IAA/mg biomass was produced.

  3. Cytotoxic effects of Reactive Blue 33 on Allium cepa determined using Taguchi's L₈ orthogonal array.

    PubMed

    Al, Gonca; Özdemir, Utkan; Aksoy, Özlem

    2013-12-01

    In this study, a Taguchi L₈ experimental design was applied to determine the cytotoxic effects of Reactive Blue 33, which is the most toxic azo reactive dye species, on Allium cepa. To this end, the A. cepa test system was used to realize the targeted experimental design with three factors (dye concentration, pH and volume) at two levels each. Toxic conditions were identified from the calculated signal-to-noise ratios. The "smaller is better" approach was followed in calculating the signal-to-noise ratios, as the aim was to obtain lower root lengths. The toxic effects of the azo dye were also predicted using the Taguchi method. The Taguchi model showed that the experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach.
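
    The "smaller is better" signal-to-noise ratio referred to above is S/N = -10 log10(mean(y_i^2)), computed per run over its replicate measurements. A minimal sketch (Python, hypothetical root-length data):

        import numpy as np

        def sn_smaller_is_better(replicates):
            # Higher S/N corresponds to smaller responses (here: shorter root lengths).
            r = np.asarray(replicates, dtype=float)
            return -10.0 * np.log10(np.mean(r ** 2))

        # Hypothetical root-length replicates (cm) for two of the L8 runs.
        print(sn_smaller_is_better([2.1, 2.3, 1.9]))
        print(sn_smaller_is_better([0.8, 0.7, 0.9]))   # more toxic run, higher S/N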

  4. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    Throat back-up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat back-up liners are made from E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. Four control factors (machine speed, roller pressure, tape tension and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back-up liners. The quality characteristic identified was the back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed and successfully used to achieve the minimum back wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests. The influence of back wall temperature on the performance of the throat back-up liners was verified by a ground firing test.

  5. Taguchi Experimental Design for Cleaning PWAs with Ball Grid Arrays

    NASA Technical Reports Server (NTRS)

    Bonner, J. K.; Mehta, A.; Walton, S.

    1997-01-01

    Ball grid arrays (BGAs), and other area array packages, are becoming more prominent as a way to increase component pin count while avoiding the manufacturing difficulties inherent in processing quad flat packs (QFPs)...Cleaning printed wiring assemblies (PWAs) with BGA components mounted on the surface is problematic...Currently, a low flash point semi-aqueous material, in conjunction with a batch cleaning unit, is being used to clean PWAs. The approach taken at JPL was to investigate the use of (1) semi-aqueous materials having a high flash point and (2) aqueous cleaning involving a saponifier.

  6. Taguchi versus Full Factorial Design to Determine the Influence of Process Parameters on the Impact Forces Produced by Water Jets Used in Sewer Cleaning

    NASA Astrophysics Data System (ADS)

    Medan, N.; Banica, M.

    2016-11-01

    The regular cleaning of the materials deposited in sewer networks is carried out mainly with equipment that uses high-pressure water jets. The functioning of this equipment depends on certain process parameters that can vary, causing variations in the impact forces. The impact force directly affects the cleaning of sewer systems. To determine the influence of the process parameters on the impact forces produced by water jets, an experimental approach is used, applying both a Taguchi design and a full factorial design. For the experimental determination of the impact forces, a stand for generating water jets and a device for measuring the impact forces are used. The data are processed using Minitab 17 software.

  7. An Exploratory Exercise in Taguchi Analysis of Design Parameters: Application to a Shuttle-to-space Station Automated Approach Control System

    NASA Technical Reports Server (NTRS)

    Deal, Don E.

    1991-01-01

    The chief goals of the summer project have been twofold - first, for my host group and myself to learn as much of the working details of Taguchi analysis as possible in the time allotted, and, secondly, to apply the methodology to a design problem with the intention of establishing a preliminary set of near-optimal (in the sense of producing a desired response) design parameter values from among a large number of candidate factor combinations. The selected problem is concerned with determining design factor settings for an automated approach program which is to have the capability of guiding the Shuttle into the docking port of the Space Station under controlled conditions so as to meet and/or optimize certain target criteria. The candidate design parameters under study were glide path (i.e., approach) angle, path intercept and approach gains, and minimum impulse bit mode (a parameter which defines how Shuttle jets shall be fired). Several performance criteria were of concern: terminal relative velocity at the instant the two spacecraft are mated; docking offset; number of Shuttle jet firings in certain specified directions (of interest due to possible plume impingement on the Station's solar arrays); and total RCS (a measure of the energy expended in performing the approach/docking maneuver). In the material discussed here, we have focused on a single performance criterion - total RCS. An analysis of the possibility of employing a multiobjective function composed of a weighted sum of the various individual criteria has been undertaken, but is, at this writing, incomplete. Results from the Taguchi statistical analysis indicate that only three of the original four posited factors are significant in affecting the RCS response. A comparison of model simulation output (via Monte Carlo) with predictions based on estimated factor effects inferred from the Taguchi experiment array data suggested acceptable or close agreement between the two except at the predicted optimum

  8. Use of Taguchi Design of Experiments to Determine ALPLS Ascent Delta-V Sensitivities and Total Mass Sensitivities to Release Conditions and Vehicle Parameters

    NASA Technical Reports Server (NTRS)

    Carrasco, Hector Ramon

    1991-01-01

    The objective of this study is to evaluate the use of Taguchi's Design of Experiment Methods to improve the effectiveness of this and future parametric studies. Taguchi Methods will be applied in addition to the typical approach to provide a mechanism for comparing the results and the cost or effort necessary to complete the studies. It is anticipated that results of this study should include an improved systematic analysis process, an increase in information obtained at a lower cost, and a more robust, cost effective vehicle design.

  9. Evaluating space transportation sensitivities with Taguchi methods

    NASA Technical Reports Server (NTRS)

    Brown, Norman S.; Patel, Saroj

    1992-01-01

    The lunar and Mars transportation system sensitivities and their effect on cost is discussed with reference to several design concepts using Taguchi analysis. The general features of the approach are outlined, and the selected Taguchi matrix (L18) is described. The modeling results are displayed in a Design of Experiments format to aid the evaluation of sensitivities.

  10. Evaluation of B. subtilis SPB1 biosurfactants' potency for diesel-contaminated soil washing: optimization of oil desorption using Taguchi design.

    PubMed

    Mnif, Inès; Sahnoun, Rihab; Ellouze-Chaabouni, Semia; Ghribi, Dhouha

    2014-01-01

    The low solubility of certain hydrophobic soil contaminants limits the remediation process. Surface-active compounds can improve the solubility and removal of hydrophobic compounds from contaminated soils and, consequently, their biodegradation. Hence, this paper studies the efficiency of the SPB1 lipopeptide biosurfactant in desorbing oil from soil. The effect of different physicochemical parameters on desorption potency was assessed. The Taguchi experimental design method was applied in order to enhance the desorption capacity and establish the best washing parameters. The mobilization potency was compared to those of chemical surfactants under the newly defined conditions. Better desorption capacity was obtained using a 0.1% biosurfactant solution, and the mobilization potency showed great tolerance to acidic and alkaline pH values and to salinity. Results show an optimum value of oil removal from diesel-contaminated soil of about 87%. The optimum washing conditions for surfactant solution volume, biosurfactant concentration, agitation speed, temperature, and time were found to be 12 ml/g of soil, 0.1% biosurfactant, 200 rpm, 30 °C, and 24 h, respectively. The obtained results were compared to those of SDS and Tween 80 at the optimal conditions described above, and the study reveals an effectiveness of the SPB1 biosurfactant comparable to the reported chemical emulsifiers. (1) The obtained findings suggest (a) the competence of the Bacillus subtilis biosurfactant in promoting diesel desorption from soil relative to chemical surfactants and (b) the applicability of this method in decontaminating crude oil-contaminated soil and, therefore, improving the bioavailability of hydrophobic compounds. (2) The obtained findings also suggest the adequacy of the Taguchi design in promoting process efficiency. Our findings suggest that a preoptimized desorption process using a microbial-derived emulsifier can contribute significantly to the enhancement of hydrophobic pollutants' bioavailability. This study can be

  11. Wear performance optimization of stir cast Al-TiB2 metal matrix composites using Taguchi design of experiments

    NASA Astrophysics Data System (ADS)

    Poria, Suswagata; Sahoo, Prasanta; Sutradhar, Goutam

    2016-09-01

    The present study outlines the use of Taguchi parameter design to minimize the wear of Al-TiB2 metal matrix composites by optimizing the tribological process parameters. Different weight percentages of micro-TiB2 powders with average sizes of 5-40 micron are incorporated into a molten LM4 aluminium matrix by the stir casting method. The wear performance of the Al-TiB2 composites is evaluated in a block-on-roller type Multitribo tester at room temperature. Three parameters, viz. weight percentage of TiB2, load and speed, are considered at three levels each in the experiment. An L27 orthogonal array is used to carry out the experiments, accommodating all the factors and their levels, including their interaction effects. The optimal combination of parameters for wear performance is obtained by Taguchi analysis. Analysis of variance (ANOVA) is used to find the percentage contribution of each parameter, and of their interactions, to the wear performance. The weight percentage of TiB2 is found to be the most effective parameter in controlling the wear behaviour of the Al-TiB2 metal matrix composite.
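
    The percentage contribution reported by such an ANOVA is the sum of squares of a factor, computed from its level means, expressed as a share of the total sum of squares. A minimal sketch (Python, hypothetical wear data on a simplified L9 layout rather than the L27 used in the study, interactions omitted):

        import numpy as np

        design = np.array([          # three 3-level factors arranged as in an L9
            [0, 0, 0], [0, 1, 1], [0, 2, 2],
            [1, 0, 1], [1, 1, 2], [1, 2, 0],
            [2, 0, 2], [2, 1, 0], [2, 2, 1],
        ])
        wear = np.array([12.1, 10.4, 9.8, 11.2, 9.1, 10.6, 9.9, 10.8, 8.7])  # hypothetical

        grand_mean = wear.mean()
        ss_total = np.sum((wear - grand_mean) ** 2)

        for f, name in enumerate(["wt% TiB2", "load", "speed"]):
            ss_factor = sum(
                (design[:, f] == lvl).sum() * (wear[design[:, f] == lvl].mean() - grand_mean) ** 2
                for lvl in range(3)
            )
            print(f"{name}: contribution = {100 * ss_factor / ss_total:.1f} %")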

  12. Optimal design of loudspeaker arrays for robust cross-talk cancellation using the Taguchi method and the genetic algorithm.

    PubMed

    Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung

    2005-05-01

    An optimal design technique for loudspeaker arrays for cross-talk cancellation, with application in three-dimensional audio, is presented. An array focusing scheme is presented on the basis of inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore the cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and an optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.

  13. Design of retrodiffraction gratings for polarization-insensitive and polarization-sensitive characteristics by using the Taguchi method.

    PubMed

    Lee, ChaBum; Hane, Kazuhiro; Kim, WanSoo; Lee, Sun-Kyu

    2008-06-20

    We present the design of retrodiffraction gratings that utilize total internal reflection (TIR) in a lamellar configuration to achieve high performance for both TE and TM polarized light, as well as polarization-sensitive performance for gratings acting as polarizer filters; the design was based on rigorous coupled wave analysis (RCWA) and the Taguchi method. The components can thus be fabricated from a single dielectric material and do not have to be coated with a metallic or dielectric film layer to enhance the reflectance. The effects of the structural and optical parameters of lamellar gratings were investigated, and the TIR gratings in a lamellar configuration were structurally and optically optimized in terms of the signal-to-noise ratio (S/N) and a statistical analysis of variance (ANOVA), with the refractive index, grating period, filling factor, and grating depth as control factors and the efficiency estimated by RCWA as a noise factor. For greater robustness, a two-step optimization process was used for each purpose. For TIR gratings designed to perform similarly for TE and TM incident polarization, the -1st-order efficiencies were estimated to be up to 92.0% and 88.5% for TE and TM polarization, respectively. For the TIR gratings designed to achieve polarization-sensitive performance when acting as polarizer filters, the -1st-order diffraction efficiencies for TE and TM polarization were estimated to be up to 95.5% and 2.7%, respectively. These analysis results confirm that the Taguchi method is a feasible optimization approach for designing optical devices.

  14. Taguchi design and flower pollination algorithm application to optimize the shrinkage of triaxial porcelain containing palm oil fuel ash

    NASA Astrophysics Data System (ADS)

    Zainudin, A.; Sia, C. K.; Ong, P.; Narong, O. L. C.; Nor, N. H. M.

    2017-01-01

    In the preparation of triaxial porcelain from palm oil fuel ash (POFA), new parameter values must be determined. The parameters involved are the particle size of POFA, the percentage of POFA in the triaxial porcelain composition, the moulding pressure, the sintering temperature and the soaking time, while the shrinkage is the dependent variable. The optimization process was investigated using a hybrid of Taguchi design and the flower pollination algorithm (FPA). The interaction model of shrinkage was derived from regression analysis, and it was found that the shrinkage is most dependent on the sintering temperature, followed by POFA composition, moulding pressure, POFA particle size and soaking time. The interaction between sintering temperature and soaking time strongly affects the shrinkage. From the FPA process, a targeted shrinkage approaching zero was predicted for a POFA particle size of 142 μm, 22.5 wt% of POFA, 3.4 tonne moulding pressure, 948.5 °C sintering temperature and 264 minutes soaking time.

  15. Optimized selection of benchmark test parameters for image watermark algorithms based on Taguchi methods and corresponding influence on design decisions for real-world applications

    NASA Astrophysics Data System (ADS)

    Rodriguez, Tony F.; Cushman, David A.

    2003-06-01

    With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating the product's performance if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design of experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.

  16. Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs

    NASA Technical Reports Server (NTRS)

    Carrasco, Hector R.

    1992-01-01

    The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, providing support for the development of the test matrices for the plume experiments in the Plume Model Investigation Team Project, and aiding in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued with the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 orthogonal arrays. Although the data were evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost Project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, an airlock module, and consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.

  18. Numerical and experimental study of the MIG welding process on the 6063-T5 alloy using the Taguchi method (Estudio numerico y experimental del proceso de soldeo MIG sobre la aleacion 6063-T5 utilizando el metodo de Taguchi)

    NASA Astrophysics Data System (ADS)

    Meseguer Valdenebro, Jose Luis

    Electric arc welding processes represent one of the most widely used techniques in the manufacturing of mechanical components in modern industry. These processes have been adapted to current needs, becoming a flexible and versatile way to manufacture. Numerical results for the welding process are validated experimentally. The numerical methods most commonly used today are three: the finite difference method, the finite element method and the finite volume method. The most widely used numerical method for the modelling of welded joints is the finite element method, because it adapts well to the geometric and boundary conditions and because a variety of commercial programs use the finite element method as their calculation basis. This thesis presents an experimental study of a welded joint produced by the MIG welding process on aluminium alloy 6063-T5. The numerical process is validated experimentally by applying the finite element method through the calculation program ANSYS. The experimental results of this work are the cooling curves, the critical cooling time t4/3, the weld bead geometry, the microhardness obtained in the welded joint and in the heat-affected zone of the base metal, the process dilution, and the critical regions where the cooling curves intersect the TTP curve. The numerical results obtained in this thesis are the thermal cycle curves, which represent both the heating to maximum temperature and the subsequent cooling. The critical cooling time t4/3 and the thermal efficiency of the process are calculated, and the bead geometry obtained experimentally is represented. The heat-affected zone is obtained by differentiating the zones that reach different temperatures, together with the critical regions where the cooling curves intersect the TTP curve. To conclude this doctoral thesis, an optimization of the welding parameters has been conducted by means of the Taguchi method in order to obtain an

  19. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design

    PubMed Central

    Emami, J.; Mohiti, H.; Hamishehkar, H.; Varshosaz, J.

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with strong anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery, with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous phase volumes, and sonication time, was assessed on the particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A Taguchi design with 12 formulations along with a Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using a scanning electron microscope. Based on the optimization made with Design Expert 7® software, a formulation made of glycerol monostearate, 1.2% polyvinyl alcohol (PVA), a lipid/drug weight ratio of 10 and a sonication time of 90 s was selected. The particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52%, 5.8 ± 0.3%, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with a low error percentage, the modeling and optimization were justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. Our results provide fundamental data for the

  20. Workbook for Taguchi Methods for Product Quality Improvement.

    ERIC Educational Resources Information Center

    Zarghami, Ali; Benbow, Don

    Taguchi methods are product quality improvement methods that analyze the major contributors to variation and how they can be controlled to reduce variability and poor performance. In this approach, knowledge is used to shorten testing. Taguchi methods are concerned with process improvement rather than with process measurement. This manual is designed to be used…

  1. Experimental design and husbandry.

    PubMed

    Festing, M F

    1997-01-01

    Rodent gerontology experiments should be carefully designed and correctly analyzed so as to provide the maximum amount of information for the minimum amount of work. There are five criteria for a "good" experimental design. These are applicable both to in vivo and in vitro experiments: (1) The experiment should be unbiased so that it is possible to make a true comparison between treatment groups in the knowledge that no one group has a more favorable "environment." (2) The experiment should have high precision so that if there is a true treatment effect there will be a good chance of detecting it. This is obtained by selecting uniform material such as isogenic strains, which are free of pathogenic microorganisms, and by using randomized block experimental designs. It can also be increased by increasing the number of observations. However, increasing the size of the experiment beyond a certain point will only marginally increase precision. (3) The experiment should have a wide range of applicability so it should be designed to explore the sensitivity of the observed experimental treatment effect to other variables such as the strain, sex, diet, husbandry, and age of the animals. With in vitro data, variables such as media composition and incubation times may also be important. The importance of such variables can often be evaluated efficiently using "factorial" experimental designs, without any substantial increase in the overall number of animals. (4) The experiment should be simple so that there is little chance of groups becoming muddled. Generally, formal experimental designs that are planned before the work starts should be used. (5) The experiment should provide the ability to calculate uncertainty. In other words, it should be capable of being statistically analyzed so that the level of confidence in the results can be quantified.
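
    A minimal sketch (Python, hypothetical treatments and block names) of the randomized block allocation recommended in point (2): every block receives each treatment once, with the order randomized within the block.

        import random

        treatments = ["control", "diet_A", "diet_B", "diet_C"]
        blocks = [f"block_{i}" for i in range(1, 6)]   # e.g. five litters or cage racks

        random.seed(1)                                 # reproducible allocation
        for block in blocks:
            order = treatments[:]                      # one animal per treatment per block
            random.shuffle(order)                      # randomize within the block
            print(block, order)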

  2. Taguchi methods in electronics: A case study

    NASA Technical Reports Server (NTRS)

    Kissel, R.

    1992-01-01

    Total Quality Management (TQM) is becoming more important as a way to improve productivity. One of the technical aspects of TQM is a system called the Taguchi method. This is an optimization method that, with a few precautions, can reduce test effort by an order of magnitude over conventional techniques. The Taguchi method is specifically designed to minimize a product's sensitivity to uncontrollable system disturbances such as aging, temperature, voltage variations, etc., by simultaneously varying both design and disturbance parameters. The analysis produces an optimum set of design parameters. A 3-day class on the Taguchi method was held at the Marshall Space Flight Center (MSFC) in May 1991. A project was needed as a follow-up after the class was over, and the motor controller was selected at that time. Exactly how to proceed was the subject of discussion for some months. It was not clear exactly what to measure, and design kept getting mixed with optimization. There was even some discussion about why the Taguchi method should be used at all.

  3. Design of a robust fuzzy controller for the arc stability of CO(2) welding process using the Taguchi method.

    PubMed

    Kim, Dongcheol; Rhee, Sehun

    2002-01-01

    CO(2) welding is a complex process. Weld quality depends on arc stability and on minimizing the effects of disturbances or changes in the operating conditions that commonly occur during the welding process. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used in order to stabilize the arc during CO(2) welding. The input variable of the controller was the Mita index, which quantitatively estimates the arc stability that is influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without using a mathematical model of the controlled process. As a solution, the Taguchi method was used to determine the optimal control parameter settings of the fuzzy controller so as to make the control performance robust and insensitive to changes in the operating conditions.

  4. Taguchi Method Applied in Optimization of Shipley 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, Allan; Wiberg, Dean V.; Blosiu, Julian

    1997-01-01

    Taguchi methods of robust design present a way to optimize process output performance through organized experiments, using orthogonal arrays for the evaluation of the controllable process parameters.

  5. Application of the Taguchi method in poultry science: estimation of the in vitro optimum intrinsic phytase activity of rye, wheat and barley.

    PubMed

    Sedghi, M; Golian, A; Esmaeilipour, O; Van Krimpen, M M

    2014-01-01

    1. In poultry investigations, the main interest is often to study the effects of many factors simultaneously. Two- or three-level factorial designs are the most commonly used for this type of investigation. However, they are often too costly to perform when the number of factors increases, so a fractional factorial design, which is a subset or fraction of a full factorial design, is an alternative. The Taguchi method has been proposed for simplifying and standardising fractional factorial designs. 2. An experiment was conducted to evaluate the applicability of the Taguchi method to optimise the in vitro intrinsic phytase activity (IPA) of rye, wheat and barley under different culture conditions. 3. In order to have a solid basis for judging the suitability of the Taguchi method, its results were compared with those of an experiment conducted as a 3^4 full factorial arrangement with three feed ingredients (rye, wheat and barley), three temperatures (20°C, 38°C and 55°C), three pH values (3.0, 5.5 and 8.0) and three incubation times (30, 60 and 120 min), with two replicates per treatment. 4. After data collection, a Taguchi L9 (3^4) orthogonal array was used to estimate the effects of the different factors on the IPA, based on a subset of only 9 of the 81 treatments. The data were analysed with both the Taguchi and full factorial methods, and the main effects and the optimal combinations of the 4 factors were obtained for each method. 5. The results indicated that, according to both the full factorial experimental design and the Taguchi method, the optimal culture conditions were obtained with the following combination: rye, pH = 3, temperature = 20°C and incubation time = 30 min. The comparison between the Taguchi and full factorial results showed that the Taguchi method is a sufficient and resource-saving alternative to the full factorial design in poultry science.
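
    The run-count saving described above can be made concrete with a minimal sketch (Python, factor levels taken from the abstract): the full 3^4 factorial enumerates 81 treatment combinations, while the L9 orthogonal array selects a balanced subset of 9.

        from itertools import product

        ingredients = ["rye", "wheat", "barley"]
        temperatures = [20, 38, 55]          # degrees C
        ph_values = [3.0, 5.5, 8.0]
        times = [30, 60, 120]                # minutes

        full_factorial = list(product(ingredients, temperatures, ph_values, times))
        print(len(full_factorial))           # 81 treatment combinations

        # Standard L9 orthogonal array, levels coded 0/1/2 for each factor.
        L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2), (1, 0, 1, 2), (1, 1, 2, 0),
              (1, 2, 0, 1), (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]
        l9_runs = [(ingredients[a], temperatures[b], ph_values[c], times[d])
                   for a, b, c, d in L9]
        print(len(l9_runs))                  # 9 treatment combinations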

  6. Optimization of inulinase production from low cost substrates using Plackett-Burman and Taguchi methods.

    PubMed

    Abd El Aty, Abeer A; Wehaidy, Hala R; Mostafa, Faten A

    2014-02-15

    Four marine-derived fungal isolates were screened for the production of inulinase enzyme from low-cost substrates under solid state fermentation (SSF); one of them, identified as Aspergillus terreus, showed the highest inulinase activity using artichoke leaves as the solid substrate. A sequential optimization strategy based on statistical experimental designs was employed to optimize the composition of the medium, including Plackett-Burman and Taguchi (L9, 3^4) orthogonal array designs. Under the optimized conditions, the inulinase activity (21.058 U/gds) reached the maximum activity predicted by the Taguchi methodology, which was about 4.79-fold higher than that obtained with the initial production medium. Fructose was produced as an end product of inulin hydrolysis, proving that the enzyme produced was an exoinulinase. The marine-derived A. terreus is suggested as a new potential candidate for the industrial enzymatic production of fructose from a low-cost substrate containing inulin as an economic source.

  7. Response surface methodology and process optimization of sustained release pellets using Taguchi orthogonal array design and central composite design

    PubMed Central

    Singh, Gurinder; Pai, Roopa S.; Devi, V. Kusum

    2012-01-01

    Furosemide is a powerful diuretic and antihypertensive drug which has low bioavailability due to hepatic first-pass metabolism and has a short half-life of 2 hours. To overcome these drawbacks, the present study was carried out to formulate and evaluate sustained release (SR) pellets of furosemide for oral administration, prepared by extrusion/spheronization. Drug Coat L-100 was used within the pellet core along with microcrystalline cellulose as the diluent, and the concentration of the selected binder was optimized to be 1.2%. The formulation was prepared with a drug to polymer ratio of 1:3. It was optimized using design of experiments by employing a 3^2 central composite design, which was used to systematically optimize the process parameters in combination with response surface methodology. Dissolution studies were carried out with USP apparatus Type I (basket type) in both simulated gastric and intestinal pH. The statistical analysis, i.e., the two-tailed paired t-test and one-way ANOVA of the in vitro data, showed that there was a very significant (P≤0.05) difference in the dissolution profile of the furosemide SR pellets when compared with the pure drug and a commercial product. Validation of the process optimization study indicated an extremely high degree of prognostic ability. The study effectively undertook the development of optimized process parameters for the pelletization of furosemide pellets with excellent SR characteristics. PMID:22470891

  8. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.

  9. Application of Taguchi methods to dual mixture ratio propulsion system optimization for SSTO vehicles

    NASA Astrophysics Data System (ADS)

    Stanley, Douglas O.; Unal, Resit; Joyner, C. R.

    1992-01-01

    The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies is presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of each of the parameters with the others. This parametric optimization and sensitivity study employs a Taguchi design method. The Taguchi method is an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies, as compared to traditional single-variable parametric trade studies, are also discussed.

  10. The Taguchi methodology as a statistical tool for biotechnological applications: a critical appraisal.

    PubMed

    Rao, Ravella Sreenivas; Kumar, C Ganesh; Prakasham, R Shetty; Hobbs, Phil J

    2008-04-01

    Success in experiments and/or technology mainly depends on a properly designed process or product. The traditional method of process optimization involves the study of one variable at a time, which requires a number of combinations of experiments that are time, cost and labor intensive. The Taguchi method of design of experiments is a simple statistical tool involving a system of tabulated designs (arrays) that allows a maximum number of main effects to be estimated in an unbiased (orthogonal) fashion with a minimum number of experimental runs. It has been applied to predict the significant contribution of the design variable(s) and the optimum combination of each variable by conducting experiments on a real-time basis. The modeling that is performed essentially relates signal-to-noise ratio to the control variables in a 'main effect only' approach. This approach enables both multiple response and dynamic problems to be studied by handling noise factors. Taguchi principles and concepts have made extensive contributions to industry by bringing focused awareness to robustness, noise and quality. This methodology has been widely applied in many industrial sectors; however, its application in biological sciences has been limited. In the present review, the application and comparison of the Taguchi methodology has been emphasized with specific case studies in the field of biotechnology, particularly in diverse areas like fermentation, food processing, molecular biology, wastewater treatment and bioremediation.
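
    The 'main effect only' modelling mentioned above can be sketched as a least-squares fit of the per-run signal-to-noise ratio on indicator variables for the factor levels, with no interaction terms (Python, hypothetical S/N values):

        import numpy as np

        L9 = np.array([                      # saturated L9 design for four 3-level factors
            [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
            [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
            [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
        ])
        sn = np.array([18.2, 19.5, 21.1, 18.9, 22.4, 20.0, 21.7, 19.3, 23.0])  # hypothetical

        # Dummy-code levels 1 and 2 of each factor (level 0 is the baseline).
        columns = [np.ones(len(sn))]
        for f in range(L9.shape[1]):
            for lvl in (1, 2):
                columns.append((L9[:, f] == lvl).astype(float))
        X = np.column_stack(columns)

        coeffs, *_ = np.linalg.lstsq(X, sn, rcond=None)
        print(np.round(coeffs, 3))           # intercept, then two level effects per factor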

  11. Evaluation of Listeria monocytogenes survival in ice cream mixes flavored with herbal tea using Taguchi method.

    PubMed

    Ozturk, Ismet; Golec, Adem; Karaman, Safa; Sagdic, Osman; Kayacier, Ahmed

    2010-10-01

    In this study, the effects of the incorporation of some herbal teas at different concentrations into the ice cream mix on the population of Listeria monocytogenes were studied using Taguchi method. The ice cream mix samples flavored with herbal teas were prepared using green tea and sage at different concentrations. Afterward, fresh culture of L. monocytogenes was inoculated into the samples and the L. monocytogenes was counted at different storage periods. Taguchi method was used for experimental design and analysis. In addition, some physicochemical properties of samples were examined. Results suggested that there was some effect, although little, on the population of L. monocytogenes when herbal tea was incorporated into the ice cream mix. Additionally, the use of herbal tea caused a decrease in the pH values of the samples and significant changes in the color values.

  12. Experimental study of optimal self compacting concrete with spent foundry sand as partial replacement for M-sand using Taguchi approach

    NASA Astrophysics Data System (ADS)

    Nirmala, D. B.; Raviraj, S.

    2016-06-01

    This paper presents the application of the Taguchi approach to obtain an optimal mix proportion for Self Compacting Concrete (SCC) containing spent foundry sand and M-sand. Spent foundry sand is used as a partial replacement for M-sand. The SCC mix has seven control factors, namely coarse aggregate, M-sand with spent foundry sand, cement, fly ash, water, superplasticizer and viscosity modifying agent. The modified Nan Su method is used to proportion the initial SCC mix. An L18 (2^1 × 3^7) orthogonal array (OA) with the seven control factors at three levels is used in the Taguchi approach, which resulted in 18 SCC mix proportions. All mixtures are extensively tested in both fresh and hardened states to verify whether they meet the practical and technical requirements of SCC. The quality characteristic for the "nominal the better" situation is applied to the test results to arrive at the optimal SCC mix proportion. Test results indicate that the optimal mix satisfies the requirements for the fresh and hardened properties of SCC. The study reveals the feasibility of using spent foundry sand as a partial replacement for M-sand in SCC, and also that the Taguchi method is a reliable tool for arriving at an optimal mix proportion of SCC.
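
    The "nominal the better" signal-to-noise ratio applied above is S/N = 10 log10(ybar^2 / s^2), which rewards trial mixes whose replicate measurements cluster tightly around their mean. A minimal sketch (Python, hypothetical slump-flow replicates):

        import numpy as np

        def sn_nominal_the_better(replicates):
            r = np.asarray(replicates, dtype=float)
            return 10.0 * np.log10(r.mean() ** 2 / r.var(ddof=1))

        # Hypothetical slump-flow replicates (mm) for two SCC trial mixes.
        print(sn_nominal_the_better([680, 675, 690]))   # tighter spread, higher S/N
        print(sn_nominal_the_better([640, 700, 660]))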

  13. A new insight into the Taguchi method.

    PubMed

    Leysi-Derilou, Y; Antony, J

    To obtain high quality products at low cost and in a short time is an economical and technological challenge to today's engineering community. Design of Experiments based on the Taguchi approach is a powerful technique to attain this objective. In some processes, it is necessary to consider not only two factors but also the ratio of their levels 'as a factor.' This paper introduces a new look at the Taguchi method that makes it possible, by choosing the proper levels, to evaluate the ratio of two factors as a new factor in the same orthogonal array. An experiment to study four three-level factors was designed, and a case study is presented to illustrate the ratio of the two three-level factors as a new factor using the same L9 orthogonal array.

  14. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    PubMed

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

    Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chance of thermal invasion of the bone, which can cause thermal osteonecrosis, resulting in increased healing time or a reduction in the stability and strength of the fixation. Therefore, drilling of bone with minimum temperature rise is a major challenge in orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and the Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments were conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). The optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of the individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determining the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process.
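
    The triangular membership functions used in such fuzzy models rise linearly from a lower bound to a peak and fall linearly back to zero. A minimal sketch (Python, arbitrary temperature ranges, not the parameters fitted in the study):

        def triangular(x, a, b, c):
            # Membership is 0 outside [a, c], 1 at the peak b, and linear in between.
            if x <= a or x >= c:
                return 0.0
            if x <= b:
                return (x - a) / (b - a)
            return (c - x) / (c - b)

        # Example fuzzy sets for drilling temperature (degrees C), chosen arbitrarily.
        low = lambda t: triangular(t, 20.0, 35.0, 50.0)
        high = lambda t: triangular(t, 45.0, 60.0, 75.0)

        for t in (30.0, 47.0, 62.0):
            print(t, round(low(t), 2), round(high(t), 2))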

  15. Optimisation of Lime-Soda process parameters for reduction of hardness in aqua-hatchery practices using Taguchi methods.

    PubMed

    Yavalkar, S P; Bhole, A G; Babu, P V Vijay; Prakash, Chandra

    2012-04-01

    This paper presents the optimisation of Lime-Soda process parameters for the reduction of hardness in aqua-hatchery practices in the context of M. rosenbergii. Fresh water used in the development of fisheries needs to be of suitable quality, and the lack of desirable quality in the available fresh water is generally the main constraint. On the Indian subcontinent, groundwater is the only source of raw water; it has varying degrees of hardness and is thus unsuitable for fresh water prawn hatchery practices (M. rosenbergii). In order to make use of hard water in the context of aqua-hatchery, the Lime-Soda process has been recommended. The efficacy of the various process parameters, such as lime, soda ash and detention time, on the reduction of hardness needs to be examined. This paper determines the parameter settings for the CIFE well water, which is quite hard, using the Taguchi experimental design method. Taguchi orthogonal arrays, the signal-to-noise ratio and analysis of variance (ANOVA) have been applied to determine the dosages and to analyse their effect on hardness reduction. Tests carried out with the optimal levels of the Lime-Soda process parameters confirmed the efficacy of the Taguchi optimisation method. Emphasis has been placed on optimising the chemical doses required to reduce the total hardness using the Taguchi method and ANOVA, to suit the available raw water quality for aqua-hatchery practices, especially for the fresh water prawn M. rosenbergii.

  16. Parameter optimization of the fungicide (Vapam) sorption onto soil modified with clinoptilolite by Taguchi method.

    PubMed

    Azizi, Seyed N; Asemi, Neda

    2010-11-01

    This study employs the Taguchi optimization methodology to optimize the effective parameters for the sorption of the pesticide Vapam onto soil modified with natural zeolite (clinoptilolite). The experimental factors and their ranges chosen for determination of the effective parameters were: initial Vapam concentration (0.4-1.6 mg/L), initial pH of the pesticide solution (2-12), percentage of clinoptilolite in the modified soil (0-6%), temperature (15-35°C) and shaking time (2-24 h). The L16 orthogonal array (OA) and the bigger-the-better response category of the Taguchi method were selected, and the optimum conditions were determined to be: initial Vapam concentration of 1.2 mg/L, initial pH of the pesticide solution of 2, percentage of clinoptilolite in the modified soil of 4%, temperature of 15°C and shaking time of 2 h. The results showed that, in comparison with the other parameters, the initial Vapam concentration was the most effective one for the sorption of this pesticide onto soil modified with clinoptilolite. Moreover, after determining the optimum levels of the sorption process parameters, confirmation experiments were performed to prove the effectiveness of Taguchi's experimental design methodology.
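
    The "bigger the better" response category referred to above uses S/N = -10 log10(mean(1 / y_i^2)), so runs with consistently high sorption receive a higher S/N. A minimal sketch (Python, hypothetical sorption replicates):

        import numpy as np

        def sn_larger_is_better(replicates):
            r = np.asarray(replicates, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / r ** 2))

        # Hypothetical Vapam sorption values (mg/g) for two of the L16 runs.
        print(sn_larger_is_better([0.82, 0.79, 0.85]))   # higher sorption, higher S/N
        print(sn_larger_is_better([0.41, 0.46, 0.39]))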

  17. True Experimental Design.

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    1991-01-01

    This poem, with stanzas in limerick form, refers humorously to the many threats to validity posed by problems in research design, including problems of sample selection, data collection, and data analysis. (SLD)

  18. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τpulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using design of experiments (DOE) methods: the Taguchi methodology and response surface methodology (RSM). A design is first created using the MINITAB program, and the laser marking process is then performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.
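
    The response-surface step described above amounts to fitting a second-order polynomial in the coded process factors to a measured response by least squares. A minimal sketch (Python, hypothetical roughness data for two coded factors, scanning speed and pumping intensity):

        import numpy as np

        # Coded factor settings (-1, 0, +1) over a 3 x 3 grid and hypothetical responses.
        x1 = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)   # scanning speed
        x2 = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], dtype=float)   # pumping intensity
        y = np.array([3.1, 2.6, 2.9, 2.4, 1.9, 2.2, 2.8, 2.3, 2.7])  # roughness (um)

        # Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
        X = np.column_stack([np.ones_like(y), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(np.round(coeffs, 3))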

  19. Applying Taguchi design and large-scale strategy for mycosynthesis of nano-silver from endophytic Trichoderma harzianum SYA.F4 and its application against phytopathogens

    PubMed Central

    EL-Moslamy, Shahira H.; Elkady, Marwa F.; Rezk, Ahmed H.; Abdel-Fattah, Yasser R.

    2017-01-01

    The development of reliable, low-cost methods for the large-scale, eco-friendly biogenic synthesis of metallic nanoparticles is an important step for industrial applications of bionanotechnology. In the present study, the mycosynthesis of spherical nano-Ag (12.7 ± 0.8 nm) from the extracellular filtrate of the local endophytic T. harzianum SYA.F4 strain, which contains a mixture of bioactive metabolites (alkaloids, flavonoids, tannins, phenols), nitrate reductase (320 nmol/hr/ml), carbohydrate (25 μg/μl) and a total protein concentration of 2.5 g/l, is reported. Industrial mycosynthesis of nano-Ag can be induced with different characteristics depending on the fungal cultivation and physical conditions. A Taguchi design was applied to improve the physicochemical conditions for nano-Ag production, and the optimum conditions, which increased its mass yield to three times that obtained under the basal conditions, were as follows: AgNO3 (0.01 M) and diluted reductant (10 v/v, pH 5), incubated at 30 °C and 200 rpm for 24 hr. For the kinetic conversion rates in submerged batch cultivation in a 7 L stirred tank bioreactor using a semi-defined cultivation medium, the maximum biomass production (Xmax) and the maximum nano-Ag mass yield (Pmax) were calculated to be 60.5 g/l and 78.4 g/l, respectively. The best nano-Ag concentration, forming the largest inhibition zones, was 100 μg/ml, observed against A. alternata (43 mm), followed by Helminthosporium sp. (35 mm), Botrytis sp. (32 mm) and P. arenaria (28 mm). PMID:28349997

  20. Near Field and Far Field Effects in the Taguchi-Optimized Design of an InP/GaAs-Based Double Wafer-Fused MQW Long-Wavelength Vertical-Cavity Surface-Emitting Laser

    NASA Astrophysics Data System (ADS)

    Menon, P. S.; Kandiah, K.; Mandeep, J. S.; Shaari, S.; Apte, P. R.

    Long-wavelength VCSELs (LW-VCSELs) operating in the 1.55 μm wavelength regime offer the advantages of low dispersion and low optical loss in fiber optic transmission systems, which are crucial for increasing data transmission speed and reducing the implementation cost of fiber-to-the-home (FTTH) access networks. LW-VCSELs are attractive light sources because they offer unique features such as low power consumption, narrow beam divergence and ease of fabrication in two-dimensional arrays. This paper compares the near-field and far-field characteristics of the numerically investigated LW-VCSEL for various design parameters of the device. The optical intensity profile far from the device surface, in the Fraunhofer region, is important for the optical coupling of the laser with other optical components. The near-field pattern is obtained from the structure output, whereas the far-field pattern is essentially a two-dimensional fast Fourier transform (FFT) of the near-field pattern. Design parameters such as the number of wells in the multi-quantum-well (MQW) region and the thickness of the MQW, together with the effect of using Taguchi's orthogonal array method to optimize the device design parameters, are evaluated with respect to the near/far-field patterns. We have successfully increased the peak lasing power from an initial 4.84 mW to 12.38 mW at a bias voltage of 2 V and an optical wavelength of 1.55 μm using Taguchi's orthogonal array. As a result of the Taguchi optimization and fine tuning, the device threshold current is found to increase, along with a slight decrease in the modulation speed due to the increased device widths.

  1. Optimization of glucose formation in karanja biomass hydrolysis using Taguchi robust method.

    PubMed

    Radhakumari, M; Ball, Andy; Bhargava, Suresh K; Satyavathi, B

    2014-08-01

    The main objective of the present study is to optimize the process parameters for the production of glucose from karanja seed cake. The Taguchi robust design method with an L9 orthogonal array was applied to optimize the hydrolysis reaction conditions and maximize the sugar yield. Temperature, acid concentration, and acid-to-cake weight ratio were considered as the main factors influencing the percentage and amount of glucose formed. The experimental results indicated that acid concentration and liquid-to-solid ratio had a greater effect on the amount of glucose formed than temperature. The maximum glucose formed was 245 g/kg of extractive-free cake.

  2. Synthesis of graphene by cobalt-catalyzed decomposition of methane in plasma-enhanced CVD: Optimization of experimental parameters with Taguchi method

    NASA Astrophysics Data System (ADS)

    Mehedi, H.-A.; Baudrillart, B.; Alloyeau, D.; Mouhoub, O.; Ricolleau, C.; Pham, V. D.; Chacon, C.; Gicquel, A.; Lagoute, J.; Farhat, S.

    2016-08-01

    This article describes the significant roles of process parameters in the deposition of graphene films via cobalt-catalyzed decomposition of methane diluted in hydrogen using plasma-enhanced chemical vapor deposition (PECVD). The influence of growth temperature (700-850 °C), molar concentration of methane (2%-20%), growth time (30-90 s), and microwave power (300-400 W) on graphene thickness and defect density is investigated using the Taguchi method, which enables the optimal parameter settings to be reached with a reduced number of experiments. Growth temperature is found to be the most influential parameter in minimizing the number of graphene layers, whereas microwave power has the second largest effect on crystalline quality and a minor role in the thickness of graphene films. The structural properties of PECVD graphene obtained with optimized synthesis conditions are investigated with Raman spectroscopy and corroborated with atomic-scale characterization performed by high-resolution transmission electron microscopy and scanning tunneling microscopy, which reveals the formation of a continuous film consisting of 2-7 high quality graphene layers.

  3. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  4. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  5. Optimizing Cu(II) removal from aqueous solution by magnetic nanoparticles immobilized on activated carbon using Taguchi method.

    PubMed

    Ebrahimi Zarandi, Mohammad Javad; Sohrabi, Mahmoud Reza; Khosravi, Morteza; Mansouriieh, Nafiseh; Davallo, Mehran; Khosravan, Azita

    2016-01-01

    This study synthesized magnetic nanoparticles (Fe(3)O(4)) immobilized on activated carbon (AC) and used them as an effective adsorbent for Cu(II) removal from aqueous solution. The effects of three parameters, namely the concentration of Cu(II), the dosage of the Fe(3)O(4)/AC magnetic nanocomposite and pH, on the removal of Cu(II) using the Fe(3)O(4)/AC nanocomposite were studied. In order to examine and describe the optimum condition for each of the mentioned parameters, Taguchi's optimization method was used in a batch system and an L9 orthogonal array was used for the experimental design. The removal percentage (R%) of Cu(II) and uptake capacity (q) were transformed into a signal-to-noise ratio (S/N) for a 'larger-the-better' response. The Taguchi results, in which the best run was chosen by examining the S/N ratios, were statistically tested using analysis of variance; the tests showed that all the parameters' main effects were significant within a 95% confidence level. The best conditions for removal of Cu(II) were determined to be pH 7, a nanocomposite dosage of 0.1 gL(-1) and an initial Cu(II) concentration of 20 mg L(-1) at a constant temperature of 25 °C. Overall, the results showed that the simple Taguchi method is suitable for optimizing the Cu(II) removal experiments.
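
    The 'larger-the-better' transformation mentioned in this record is a one-line formula, S/N = -10 * log10[(1/n) * sum(1/y_i^2)]. The sketch below (Python with NumPy; the removal percentages are invented placeholders, not values from the paper) shows how a set of replicate results from one L9 trial would be collapsed into a single S/N value.

    ```python
    import numpy as np

    def sn_larger_the_better(y):
        """Taguchi signal-to-noise ratio for a 'larger-the-better' response:
        S/N = -10 * log10( mean(1 / y^2) )."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # Hypothetical removal percentages from replicate runs of one L9 trial.
    removal_pct = [78.2, 80.5, 79.1]
    print(f"S/N (dB) = {sn_larger_the_better(removal_pct):.2f}")
    ```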

  6. Mixing behavior of the rhombic micromixers over a wide Reynolds number range using Taguchi method and 3D numerical simulations.

    PubMed

    Chung, C K; Shih, T R; Chen, T C; Wu, B H

    2008-10-01

    A planar micromixer with rhombic microchannels and a converging-diverging element has been systematically investigated by the Taguchi method, CFD-ACE simulations and experiments. To reduce the footprint and extend the operating range of Reynolds number, the Taguchi method was used to numerically study the performance of the micromixer in an L(9) orthogonal array. Mixing efficiency is prominently influenced by geometrical parameters and Reynolds number (Re). The four factors in the L(9) orthogonal array are the number of rhombi, turning angle, width of the rhombic channel and width of the throat. The sensitivity of these factors, as ranked by the Taguchi method, is: number of rhombi > width of the rhombic channel > width of the throat > turning angle of the rhombic channel. Increasing the number of rhombi, reducing the width of the rhombic channel and throat and lowering the turning angle resulted in better fluid mixing efficiency. The optimal design of the micromixer in simulations indicates over 90% mixing efficiency at both Re > or = 80 and Re < or = 0.1. Experimental results for the optimal design are consistent with the simulations. This planar rhombic micromixer has simplified the complex fabrication process of multi-layer or three-dimensional micromixers and improved the performance of a previous rhombic micromixer at a reduced footprint and lower Re.
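
    The sensitivity ranking quoted in this record is what a standard Taguchi range ('delta') analysis produces: average the response at each level of a factor and take the spread of those level means. A minimal sketch under assumed data follows; the L9 matrix is the standard L9(3^4) array, the factor names echo the abstract, and the mixing efficiencies are made-up placeholders.

    ```python
    import numpy as np

    # Standard L9(3^4) orthogonal array (levels coded 1..3).
    L9 = np.array([
        [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
        [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
        [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
    ])
    factors = ["number of rhombi", "turning angle", "channel width", "throat width"]

    # Hypothetical mixing efficiencies (%) for the nine trials.
    efficiency = np.array([62, 71, 80, 68, 77, 85, 74, 83, 90], dtype=float)

    ranking = []
    for j, name in enumerate(factors):
        level_means = [efficiency[L9[:, j] == lv].mean() for lv in (1, 2, 3)]
        delta = max(level_means) - min(level_means)   # range of the level means
        ranking.append((name, delta))

    # Larger delta means the factor is more influential.
    for name, delta in sorted(ranking, key=lambda t: -t[1]):
        print(f"{name:20s} delta = {delta:5.2f}")
    ```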

  7. Application of the nonlinear, double-dynamic Taguchi method to the precision positioning device using combined piezo-VCM actuator.

    PubMed

    Liu, Yung-Tien; Fung, Rong-Fong; Wang, Chun-Chao

    2007-02-01

    In this research, the nonlinear, double-dynamic Taguchi method was used as the design and analysis method for a high-precision positioning device using a combined piezo-voice-coil motor (VCM) actuator. An experimental investigation into the effects of two input signals and three control factors was carried out to determine the optimum parametric configuration of the positioning device. The double-dynamic Taguchi method, which permits optimization of several control factors concurrently, is particularly suitable for optimizing the performance of a positioning device with multiple actuators. In this study, matrix experiments were conducted with L9(3(4)) orthogonal arrays (OAs). The two most critical processes for the optimization of the positioning device are the identification of the nonlinear ideal function and the combination of the double-dynamic signal factors for the ideal function's response. The driving voltage of the VCM and the waveform amplitude of the PZT actuator are combined into a single quality characteristic to evaluate the positioning response. The application of the double-dynamic Taguchi method, with the dynamic signal-to-noise ratio (SNR) and L9(3(4)) OAs, reduced the number of necessary experiments. Analysis of variance (ANOVA) was applied to set the optimum parameters for the high-precision positioning process.

  8. Taguchi Optimization of Pulsed Current GTA Welding Parameters for Improved Corrosion Resistance of 5083 Aluminum Welds

    NASA Astrophysics Data System (ADS)

    Rastkerdar, E.; Shamanian, M.; Saatchi, A.

    2013-04-01

    In this study, the Taguchi method was used as a design of experiment (DOE) technique to optimize the pulsed current gas tungsten arc welding (GTAW) parameters for improved pitting corrosion resistance of AA5083-H18 aluminum alloy welds. An L9 (3^4) orthogonal array of the Taguchi design was used, involving nine experiments for four parameters at three levels each: peak current (P), base current (B), percent pulse-on time (T), and pulse frequency (F). Pitting corrosion resistance in 3.5 wt.% NaCl solution was evaluated by anodic polarization tests at room temperature and by calculating the width of the passive region (∆E pit). Analysis of variance (ANOVA) was performed on the measured data and the S/N (signal-to-noise) ratios. The "bigger is better" criterion was selected as the quality characteristic (QC). The optimum conditions were found to be 170 A, 85 A, 40%, and 6 Hz for the P, B, T, and F factors, respectively. The study showed that the percent pulse-on time has the highest influence on the pitting corrosion resistance (50.48%), followed by pulse frequency (28.62%), peak current (11.05%) and base current (9.86%). The range of optimum ∆E pit at the optimum conditions, with a confidence level of 90%, was predicted to be between 174.81 and 177.74 mVSCE. Under the optimum conditions, a confirmation test was carried out, and the experimental ∆E pit value of 176 mVSCE was in agreement with the value predicted from the Taguchi model. In this regard, the model can be effectively used to predict the ∆E pit of pulsed current gas tungsten arc welded joints.
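
    Percent contributions such as the 50.48% quoted for pulse-on time come from an ANOVA decomposition of the S/N ratios: each factor's sum of squares (the number of runs per level times the squared deviation of each level mean from the grand mean, summed over levels) divided by the total sum of squares. The sketch below assumes the standard L9(3^4) array and invented S/N values, since the paper's raw data are not reproduced here; with four three-level factors in an L9 there are no residual degrees of freedom, so the four contributions sum to 100%.

    ```python
    import numpy as np

    # Standard L9(3^4) orthogonal array (levels coded 1..3).
    L9 = np.array([
        [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
        [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
        [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
    ])
    factors = ["peak current", "base current", "pulse-on time", "pulse frequency"]

    # Hypothetical S/N ratios (dB) of the pitting-resistance response, one per trial.
    sn = np.array([44.1, 44.6, 45.0, 44.3, 45.2, 44.8, 44.9, 45.4, 44.5])

    grand = sn.mean()
    ss_total = ((sn - grand) ** 2).sum()

    for j, name in enumerate(factors):
        # Factor sum of squares from the deviations of its level means.
        ss = sum(
            (L9[:, j] == lv).sum() * (sn[L9[:, j] == lv].mean() - grand) ** 2
            for lv in (1, 2, 3)
        )
        print(f"{name:16s} contribution = {100 * ss / ss_total:5.1f} %")
    ```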

  9. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment.

  10. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  11. Optimization of the Machining parameters of LM6 Aluminium alloy in CNC Turning using Taguchi method

    NASA Astrophysics Data System (ADS)

    Arunkumar, S.; Muthuraman, V.; Baskaralal, V. P. M.

    2017-03-01

    Due to the widespread use of highly automated machine tools in industry, manufacturing requires reliable models and methods for the prediction of the output performance of the machining process. In the machining of parts, surface quality is one of the most frequently specified customer requirements. In order for manufacturers to maximize their gains from utilizing CNC turning, accurate predictive models for surface roughness must be constructed. The prediction of optimum machining conditions for good surface finish plays an important role in process planning. This work deals with the study and development of a surface roughness prediction model for machining LM6 aluminium alloy. Two important tools used in parameter design are Taguchi orthogonal arrays and the signal-to-noise (S/N) ratio. Speed, feed, depth of cut and coolant are taken as process parameters at three levels. Taguchi parameter design is employed here to perform the experiments based on the various levels of the chosen parameters. The statistical analysis yields the optimum combination of speed, feed, depth of cut and coolant for obtaining good surface roughness on the cylindrical components. The result obtained through the Taguchi analysis is confirmed with real-time experimental work.
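
    For surface roughness the usual Taguchi quality characteristic is 'smaller-the-better', S/N = -10 * log10[(1/n) * sum(y_i^2)], and the recommended setting of each factor is the level with the highest mean S/N. The sketch below follows that recipe on an assumed L9(3^4) layout; the factor names mirror the abstract, but the Ra replicates are invented placeholders.

    ```python
    import numpy as np

    def sn_smaller_the_better(y):
        """S/N = -10*log10(mean(y^2)); higher S/N means lower, more consistent roughness."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # Standard L9(3^4) orthogonal array (levels coded 1..3).
    L9 = np.array([
        [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
        [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
        [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
    ])
    factors = ["speed", "feed", "depth of cut", "coolant"]

    # Hypothetical Ra replicates (um) for the nine trials.
    trials_Ra = [[2.1, 2.3], [1.8, 1.7], [1.5, 1.6],
                 [2.4, 2.2], [1.9, 2.0], [1.4, 1.5],
                 [2.6, 2.5], [2.0, 2.1], [1.6, 1.7]]
    sn = np.array([sn_smaller_the_better(r) for r in trials_Ra])

    for j, name in enumerate(factors):
        level_sn = [sn[L9[:, j] == lv].mean() for lv in (1, 2, 3)]
        best = int(np.argmax(level_sn)) + 1   # level with the highest mean S/N
        print(f"{name:12s} best level = {best}  level S/N means = "
              + ", ".join(f"{v:.2f}" for v in level_sn))
    ```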

  12. Parametric Optimization of Wire Electrical Discharge Machining of Powder Metallurgical Cold Worked Tool Steel using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sudhakara, Dara; Prasanthi, Guvvala

    2016-08-01

    Wire cut EDM is an unconventional machining process used to build components of complex shape. The current work mainly deals with the optimization of surface roughness while machining P/M cold-worked tool steel by wire cut EDM using the Taguchi method. The process parameters of the wire cut EDM are ON, OFF, IP, SV, WT, and WP. An L27 orthogonal array is used to design the experiments. In order to identify the parameters affecting the surface roughness, ANOVA analysis is employed. The optimum levels for obtaining minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.

  13. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  14. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  15. Modified artificial diet for rearing of tobacco budworm, Helicoverpa armigera, using the Taguchi method and Derringer's desirability function.

    PubMed

    Assemi, H; Rezapanah, M; Vafaei-Shoushtari, R; Mehrvar, A

    2012-01-01

    With the aim of improving the mass rearing feasibility of the tobacco budworm, Helicoverpa armigera Hübner (Lepidoptera: Noctuidae), a design of experiments methodology using a Taguchi orthogonal array was applied. To do so, the effect of 16 ingredients and rearing conditions of an artificial diet, including bean, wheat germ powder, Nipagin, ascorbic acid, formaldehyde, oil, agar, distilled water, ascorbate, yeast, chloramphenicol, benomyl, penicillin, temperature, humidity, and container size, on some biological characteristics of H. armigera was evaluated. The selected 16 factors were considered at two levels (32 experiments) in the experimental design. Among the selected factors, penicillin, container size, formaldehyde, chloramphenicol, wheat germ powder, and agar showed a significant effect on the mass rearing performance. Derringer's desirability function was used for simultaneous optimization of the mass rearing of the tobacco budworm, H. armigera, on a modified artificial diet. The optimum operating conditions derived by Derringer's desirability function and the Taguchi methodology decreased the larval period from 19 to 15.5 days (18.42% improvement), decreased the pupal period from 12.29 to 11 days (10.49% improvement), increased the longevity of adults from 14.51 to 21 days (44.72% improvement), increased the number of eggs/female from 211.21 to 260, and increased egg hatchability from 54.2% to 72% (32.84% improvement). The proposed method facilitated a systematic mathematical approach with a few well-defined experimental sets.

  16. Modified Artificial Diet for Rearing of Tobacco Budworm, Helicoverpa armigera, using the Taguchi Method and Derringer's Desirability Function

    PubMed Central

    Assemi, H.; Rezapanah, M.; Vafaei-Shoushtari, R.

    2012-01-01

    With the aim of improving the mass rearing feasibility of the tobacco budworm, Helicoverpa armigera Hübner (Lepidoptera: Noctuidae), a design of experiments methodology using a Taguchi orthogonal array was applied. To do so, the effect of 16 ingredients and rearing conditions of an artificial diet, including bean, wheat germ powder, Nipagin, ascorbic acid, formaldehyde, oil, agar, distilled water, ascorbate, yeast, chloramphenicol, benomyl, penicillin, temperature, humidity, and container size, on some biological characteristics of H. armigera was evaluated. The selected 16 factors were considered at two levels (32 experiments) in the experimental design. Among the selected factors, penicillin, container size, formaldehyde, chloramphenicol, wheat germ powder, and agar showed a significant effect on the mass rearing performance. Derringer's desirability function was used for simultaneous optimization of the mass rearing of the tobacco budworm, H. armigera, on a modified artificial diet. The optimum operating conditions derived by Derringer's desirability function and the Taguchi methodology decreased the larval period from 19 to 15.5 days (18.42% improvement), decreased the pupal period from 12.29 to 11 days (10.49% improvement), increased the longevity of adults from 14.51 to 21 days (44.72% improvement), increased the number of eggs/female from 211.21 to 260, and increased egg hatchability from 54.2% to 72% (32.84% improvement). The proposed method facilitated a systematic mathematical approach with a few well-defined experimental sets. PMID:23425103
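
    Derringer's desirability function, used in the two records above, rescales each response onto [0, 1] (1 = ideal) and combines the individual desirabilities into a geometric mean, so that one very poor response drags the overall score down. A minimal sketch under assumed target ranges follows; the bounds and response values are illustrative, not those used by the authors.

    ```python
    import numpy as np

    def d_larger_is_better(y, low, high, weight=1.0):
        """Desirability for a response to maximize: 0 below 'low', 1 above 'high'."""
        d = (y - low) / (high - low)
        return float(np.clip(d, 0.0, 1.0) ** weight)

    def d_smaller_is_better(y, low, high, weight=1.0):
        """Desirability for a response to minimize: 1 below 'low', 0 above 'high'."""
        d = (high - y) / (high - low)
        return float(np.clip(d, 0.0, 1.0) ** weight)

    # Hypothetical responses for one candidate diet formulation.
    responses = {
        "larval period (days)":   (16.0,  d_smaller_is_better, dict(low=14, high=20)),
        "adult longevity (days)": (19.0,  d_larger_is_better,  dict(low=14, high=22)),
        "eggs per female":        (240.0, d_larger_is_better,  dict(low=200, high=280)),
    }

    ds = [f(y, **kw) for y, f, kw in responses.values()]
    overall = float(np.prod(ds)) ** (1.0 / len(ds))   # geometric mean of desirabilities
    print(f"individual desirabilities = {[round(d, 3) for d in ds]}")
    print(f"overall desirability      = {overall:.3f}")
    ```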

  17. Optimization of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design

    NASA Astrophysics Data System (ADS)

    Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.

    2016-04-01

    Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Studying the impact of different factors simultaneously is a time- and cost-consuming process. Experimental design methods can be useful in minimizing the number of experiments and finding locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode, RDE, was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, the L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. The scanning electron microscopy and energy dispersive x-ray spectroscopy techniques verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the Taguchi analysis results, the mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate under an electrode rotation rate of 1000 rpm was found to give the optimum conditions.

  18. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  19. Total Quality Management: Statistics and Graphics III - Experimental Design and Taguchi Methods. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schwabe, Robert A.

    Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical methods used for TQM. This paper, the third in the series, first discusses some of the…

  20. Rapid development of xylanase assay conditions using Taguchi methodology.

    PubMed

    Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Bhunia, Biswanath

    2016-11-01

    The present investigation is mainly concerned with the rapid development of extracellular xylanase assay conditions by using the Taguchi methodology. The extracellular xylanase was produced from Aspergillus niger (KP874102.1), a new strain isolated from a soil sample of the Baramura forest, Tripura West, India. Four physical parameters, including temperature, pH, buffer concentration and incubation time, were considered as key factors for xylanase activity and were optimized using the Taguchi robust design methodology for enhanced xylanase activity. The main effects, interaction effects and optimal levels of the process factors were determined using the signal-to-noise (S/N) ratio. The Taguchi method recommends the use of the S/N ratio to measure quality characteristics. Based on analysis of the S/N ratio, optimal levels of the process factors were determined. Analysis of variance (ANOVA) was performed to evaluate the statistically significant process factors. The ANOVA results showed that temperature contributed the maximum impact (62.58%) on xylanase activity, followed by pH (22.69%), buffer concentration (9.55%) and incubation time (5.16%). The predicted results showed that enhanced xylanase activity (81.47%) can be achieved with pH 2, temperature 50°C, buffer concentration 50 mM and incubation time 10 min.
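
    The 'predicted' activity at the optimum settings is typically obtained from the Taguchi additive model: the grand mean of the response plus the gain contributed by the best level of each factor. The snippet below illustrates that calculation on made-up L9-style data; the factor names follow the abstract, but the activity values are not those of the study.

    ```python
    import numpy as np

    # Standard L9(3^4) orthogonal array (levels coded 1..3).
    L9 = np.array([
        [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
        [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
        [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
    ])
    factors = ["temperature", "pH", "buffer concentration", "incubation time"]

    # Hypothetical relative xylanase activities (%) for the nine trials.
    y = np.array([55, 60, 58, 66, 72, 69, 61, 65, 63], dtype=float)

    grand = y.mean()
    prediction = grand
    for j, name in enumerate(factors):
        level_means = np.array([y[L9[:, j] == lv].mean() for lv in (1, 2, 3)])
        best = level_means.max()
        prediction += best - grand          # additive (main-effects-only) model
        print(f"{name:22s} best-level mean = {best:.2f}")

    print(f"grand mean        = {grand:.2f}")
    print(f"predicted optimum = {prediction:.2f}")
    ```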

  1. Optimizing Aqua Splicer Parameters for Lycra-Cotton Core Spun Yarn Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Midha, Vinay Kumar; Hiremath, ShivKumar; Gupta, Vaibhav

    2015-10-01

    In this paper, optimization of the aqua splicer parameters, viz. opening time, splicing time, feed arm code (i.e. splice length) and duration of water joining, was carried out for 37 tex lycra-cotton core spun yarn for better retained splice strength (RSS%), splice abrasion resistance (RYAR%) and splice appearance (RYA%) using a Taguchi experimental design. It is observed that as opening time, splicing time and duration of water joining increase, the RSS% and RYAR% increase, whereas an increase in feed arm code leads to a decrease in both. The opening time and feed arm code do not have a significant effect on RYA%. The optimum RSS% of 92.02% was obtained at splicing parameters of 350 ms opening time, 180 ms splicing time, 65 feed arm code and 600 ms duration of water joining.

  2. Light Experimental Supercruiser Conceptual Design

    DTIC Science & Technology

    1976-07-01


  3. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  4. Experimental Design: Review and Comment.

    DTIC Science & Technology

    1984-02-01

    and early work in the subject was done by Wald (1943), Hotelling (1944), and Elfving (1952). The major contributions to the area, however, were made by...Kiefer (1958, 1959) and Kiefer and Wolfowitz (1959, 1960), who synthesized and greatly extended the previous work. Although the ideas of optimal...design theory is the general equivalence theorem (Kiefer and Wolfowitz 1960), which links D- and G-optimality. The theorem is phrased in terms of

  5. Experimental Design For Photoresist Characterization

    NASA Astrophysics Data System (ADS)

    Luckock, Larry

    1987-04-01

    In processing a semiconductor product (from discrete devices up to the most complex products produced) we find more photolithographic steps in wafer fabrication than any other kind of process step. Thus, the success of a semiconductor manufacturer hinges on the optimization of their photolithographic processes. Yet, we find few companies that have taken the time to properly characterize this critical operation; they are sitting in the "passenger's seat", waiting to see what will come out, hoping that the yields will improve someday. There is no "black magic" involved in setting up a process at its optimum conditions (i.e. minimum sensitivity to all variables at the same time). This paper gives an example of a real world situation for optimizing a photolithographic process by the use of a properly designed experiment, followed by adequate multidimensional analysis of the data. Basic SPC practices like plotting control charts will not, by themselves, improve yields; the control charts are, however, among the necessary tools used in the determination of the process capability and in the formulation of the problems to be addressed. The example we shall consider is the twofold objective of shifting the process average, while tightening the variance, of polysilicon line widths. This goal was identified from a Pareto analysis of yield-limiting mechanisms, plus inspection of the control charts. A key issue in a characterization of this type of process is the number of interactions between variables; this example rules out two-level full factorial and three-level fractional factorial designs (which cannot detect all of the interactions). We arrive at an experiment with five factors at five levels each. A full factorial design for five factors at five levels would require 3125 wafers. Instead, we will use a design that allows us to run this experiment with only 25 wafers, for a significant reduction in time, materials and manufacturing interruption in order to complete the
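
    The jump from 3125 runs to 25 comes from replacing the five-level full factorial with a five-level orthogonal array. An L25(5^6) array can be generated from two base columns over GF(5): with a, b in {0,...,4}, the columns a, b, a+b, a+2b, a+3b and a+4b (mod 5) are pairwise balanced, so up to six five-level factors fit in 25 runs. The sketch below builds the array and checks that balance; it is a generic construction, not the specific design used in this paper.

    ```python
    import itertools
    import numpy as np

    def l25():
        """Build an L25(5^6) orthogonal array over GF(5): columns a, b, a+k*b (k=1..4)."""
        rows = []
        for a, b in itertools.product(range(5), repeat=2):
            rows.append([a, b, *((a + k * b) % 5 for k in range(1, 5))])
        return np.array(rows)

    oa = l25()
    print(oa.shape)          # (25, 6): 25 runs, up to 6 five-level factors

    # Orthogonality check: every pair of columns contains each of the 25
    # level combinations exactly once.
    for i, j in itertools.combinations(range(oa.shape[1]), 2):
        pairs = {(int(x), int(y)) for x, y in zip(oa[:, i], oa[:, j])}
        assert len(pairs) == 25
    print("all column pairs balanced")
    ```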

  6. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  7. Study on the interaction of palladium(II)-Linezolid chelate with eosin by resonance Rayleigh scattering, second order of scattering and frequency doubling scattering methods using Taguchi orthogonal array design.

    PubMed

    Thakkar, Disha; Gevriya, Bhavesh; Mashru, R C

    2014-03-25

    Linezolid reacted with palladium to form a 1:1 binary cationic chelate, which further reacted with eosin dye to form a 1:1 ternary ion association complex at pH 4 in Walpole's acetate buffer in the presence of methyl cellulose. As a result, not only were the absorption spectra changed, but the Resonance Rayleigh Scattering (RRS), Second-order Scattering (SOS) and Frequency Doubling Scattering (FDS) intensities were greatly enhanced. The analytical wavelengths of RRS, SOS and FDS (λex/λem) of the ternary complex were located at 538 nm/538 nm, 240 nm/480 nm and 660 nm/330 nm, respectively. The linearity ranges for the RRS, SOS and FDS methods were 0.01-0.5 μg mL(-1), 0.1-2 μg mL(-1) and 0.2-1.8 μg mL(-1), respectively. The sensitivity order of the three methods was RRS>SOS>FDS. The accuracy of all methods was determined by recovery studies and showed recoveries between 98% and 102%. Intraday and inter-day precision were checked for all methods and the %RSD was found to be less than 2 for all methods. The effect of foreign substances was tested on the RRS method and showed that the method had good selectivity. For optimization of the process parameters, a Taguchi orthogonal array design L8(2(4)) was used and ANOVA was adopted to determine the statistically significant control factors that affect the scattering intensities of the methods. The reaction mechanism, the composition of the ternary ion association complex and the reasons for the scattering intensity enhancement are discussed in this work.

  8. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism.

  9. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of the Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  10. Multi-response optimization in the development of oleo-hydrophobic cotton fabric using Taguchi based grey relational analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Naseer; Kamal, Shahid; Raza, Zulfiqar Ali; Hussain, Tanveer; Anwar, Faiza

    2016-03-01

    The present study undertakes multi-response optimization of the water- and oil-repellent finishing of bleached cotton fabric using Taguchi-based grey relational analysis. We considered three input variables, viz. the concentrations of the finish (Oleophobol CP-C) and cross-linking agent (Knittex FEL), and the curing temperature. The responses included: water and oil contact angles, air permeability, crease recovery angle, stiffness, and tear and tensile strengths of the finished fabric. The experiments were conducted under an L9 orthogonal array in the Taguchi design. Grey relational analysis was included to set the quality characteristics as the reference sequence and to decide the optimal parameter combinations. Additionally, analysis of variance was employed to determine the most significant factor. The results demonstrate great improvement in the desired quality parameters of the developed fabric. The optimization approach reported in this study could be effectively used to reduce expensive trial-and-error experimentation for new product development and process optimization involving multiple responses. The product optimized in this study was characterized by using advanced analytical techniques, and has potential applications in rainwear and other outdoor apparel.
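
    Taguchi-based grey relational analysis condenses several responses into a single 'grey relational grade': each response is normalized to [0, 1], a grey relational coefficient xi = (delta_min + zeta*delta_max) / (delta + zeta*delta_max) is computed from the deviation delta of each normalized value from the ideal value of 1, and the coefficients are averaged (here with equal weights) across responses. The sketch below uses invented data for three of the responses named in the abstract; zeta = 0.5 is the customary distinguishing coefficient.

    ```python
    import numpy as np

    # Hypothetical responses for 9 Taguchi trials (rows): contact angle (maximize),
    # tear strength (maximize), stiffness (minimize).
    data = np.array([
        [130., 900., 0.80], [135., 880., 0.85], [128., 940., 0.78],
        [142., 860., 0.95], [138., 910., 0.88], [133., 930., 0.82],
        [145., 850., 1.00], [140., 905., 0.90], [136., 925., 0.84],
    ])
    maximize = [True, True, False]
    zeta = 0.5   # distinguishing coefficient

    # 1) Normalize each response to [0, 1], with 1 being ideal.
    norm = np.empty_like(data)
    for j, bigger in enumerate(maximize):
        col = data[:, j]
        if bigger:
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:
            norm[:, j] = (col.max() - col) / (col.max() - col.min())

    # 2) Grey relational coefficients from deviations to the ideal sequence (all ones).
    delta = 1.0 - norm
    xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # 3) Grey relational grade = mean coefficient per trial; rank the trials.
    grade = xi.mean(axis=1)
    for rank, trial in enumerate(np.argsort(-grade), start=1):
        print(f"rank {rank}: trial {trial + 1}  grade = {grade[trial]:.3f}")
    ```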

  11. Optimal experimental design strategies for detecting hormesis.

    PubMed

    Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee

    2011-12-01

    Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations.

  12. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. Goal of this paper is to show the real advantages in terms of reduced experimental effort and of increased quality of information that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. The readers being interested to deepen their knowledge of the mathematical and algorithmical part can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].

  13. Application of Taguchi Method to Investigate the Effects of Process Factors on the Production of Industrial Piroxicam Polymorphs and Optimization of Dissolution Rate of Powder.

    PubMed

    Shahbazian, Alen; Davood, Asghar; Dabirsiaghi, Alireza

    2016-01-01

    Piroxicam has two different crystalline forms (known as the needle and cubic forms), which differ in physicochemical properties such as biological solubility. In the current research, using a Taguchi experimental design approach, the influences of five operating variables on the formation of the piroxicam polymorph shapes during recrystallization were studied. The variables were the type of solvent, cooling method, type of mixer paddle, pH, and agitator speed. Statistical analysis of the results revealed the order of significance of the factors affecting the product quality and quantity. First, using the Taguchi experimental method, the influence of the process factors on the yield, particle size and dissolution rate of the piroxicam powder was statistically investigated. The optimum conditions to achieve the best dissolution rate of piroxicam were then determined experimentally. The results were analyzed using the Qualitek4 software, and it was revealed that the type of solvent and the method of cooling are, in that order, the most important factors affecting the dissolution rate. It was also found experimentally that factors such as the type of agitator paddle, pH and agitation rate have no significant effect on the dissolution rate.

  14. Application of Taguchi Method to Investigate the Effects of Process Factors on the Production of Industrial Piroxicam Polymorphs and Optimization of Dissolution Rate of Powder

    PubMed Central

    Shahbazian, Alen; Davood, Asghar; Dabirsiaghi, Alireza

    2016-01-01

    Piroxicam has two different crystalline forms (known as the needle and cubic forms), which differ in physicochemical properties such as biological solubility. In the current research, using a Taguchi experimental design approach, the influences of five operating variables on the formation of the piroxicam polymorph shapes during recrystallization were studied. The variables were the type of solvent, cooling method, type of mixer paddle, pH, and agitator speed. Statistical analysis of the results revealed the order of significance of the factors affecting the product quality and quantity. First, using the Taguchi experimental method, the influence of the process factors on the yield, particle size and dissolution rate of the piroxicam powder was statistically investigated. The optimum conditions to achieve the best dissolution rate of piroxicam were then determined experimentally. The results were analyzed using the Qualitek4 software, and it was revealed that the type of solvent and the method of cooling are, in that order, the most important factors affecting the dissolution rate. It was also found experimentally that factors such as the type of agitator paddle, pH and agitation rate have no significant effect on the dissolution rate. PMID:27642310

  15. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  16. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  17. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns. That is, we see that the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all. Another outcome
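
    For the ellipse-fitting problem mentioned above, a linearized design criterion can be evaluated directly: for an origin-centred ellipse, 1/r^2(theta) is linear in {1, cos(2*theta), sin(2*theta)}, so the D-criterion of a set of observation angles is log det(A^T A) with design matrix rows [1, cos(2*theta), sin(2*theta)]. The sketch below merely compares two candidate angle sets under that linear model; it is a generic illustration of evaluating one criterion, not the authors' algorithms or objective functions.

    ```python
    import numpy as np

    def logdet_d_criterion(angles_deg):
        """log det(A^T A) for the linear model 1/r^2 = b0 + b1*cos(2t) + b2*sin(2t)."""
        t = np.radians(np.asarray(angles_deg, dtype=float))
        A = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
        sign, logdet = np.linalg.slogdet(A.T @ A)
        return logdet if sign > 0 else -np.inf

    # Two candidate surveys with the same number of measurements (hypothetical).
    clustered = np.linspace(0, 40, 9)    # angles bunched in one quadrant
    spread = np.linspace(0, 160, 9)      # angles spread over the half-plane

    for name, design in [("clustered", clustered), ("spread", spread)]:
        print(f"{name:10s} log det(A^T A) = {logdet_d_criterion(design):7.3f}")
    ```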

  18. Adsorption of cefixime from aqueous solutions using modified hardened paste of Portland cement by perlite; optimization by Taguchi method.

    PubMed

    Rasoulifard, Mohammad Hossein; Khanmohammadi, Soghra; Heidari, Azam

    In the present study, we have used a simple and cost-effective removal technique based on a commercially available Fe-Al-SiO2-containing complex material (hardened paste of Portland cement, HPPC). The adsorption performance of HPPC and of HPPC modified with perlite for the removal of cefixime from aqueous solutions was investigated comparatively using batch adsorption studies. HPPC was selected because of its main advantages, such as high efficiency, simple separation of the sludge, low cost and abundant availability. A Taguchi orthogonal array experimental design with an OA16 (4(5)) matrix was employed to optimize the affecting factors of adsorbate concentration, adsorbent dosage, type of adsorbent, contact time and pH. On the basis of the equilibrium adsorption data, the Langmuir, Freundlich and Temkin adsorption isotherm models were also evaluated.

  19. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  20. Sequential experimental design based generalised ANOVA

    SciTech Connect

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-15

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  1. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  2. Determination of the optimal time and cost of manufacturing flow of an assembly using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Petrila, S.; Brabie, G.; Chirita, B.

    2016-08-01

    The optimization of the manufacturing operations for the parts and the assembly was carried out in order to minimize both the time and the cost of production. The optimization was performed using the Taguchi method, which is based on designs of experiments that vary the input and output factors. The Taguchi method was applied to optimize the production flow of the analyzed assembly in the following ways: to find the optimal combination of manufacturing operations; to choose the variant that makes best use of the equipment's performance; and to base delivery operations on automation. The final aim of applying the Taguchi method is for the entire assembly to be produced at minimum cost and in a short time. The Taguchi philosophy of optimizing product quality is synthesized from three basic concepts: quality must be designed into the product, not inspected into it after it has been manufactured; the highest quality is obtained when the deviation from the proposed target is low or when the action of uncontrollable factors has no influence on it, which translates into robustness; and quality costs are expressed as a function of the deviation from the nominal value [1]. When determining the number of experiments required to study a phenomenon by this method, more restrictive conditions must be followed [2].
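
    The third concept listed above is usually written as the Taguchi quadratic loss function L(y) = k*(y - m)^2, with m the nominal target and k a cost coefficient fixed by the loss incurred at the tolerance limit. The expected loss over a production run is k*[s^2 + (ybar - m)^2], which is why both the offset of the mean from the target and the variance matter. A small illustrative sketch follows; the target, tolerance and cost figures are invented.

    ```python
    import numpy as np

    def taguchi_loss(y, target, k):
        """Quadratic loss L(y) = k * (y - target)^2 for a nominal-the-best characteristic."""
        return k * (np.asarray(y, dtype=float) - target) ** 2

    # Suppose exceeding the tolerance of +/-0.5 mm costs 40 currency units to rework.
    target, tolerance, cost_at_tolerance = 10.0, 0.5, 40.0
    k = cost_at_tolerance / tolerance ** 2            # k = A0 / d0^2

    # Hypothetical measurements from two process settings.
    setting_a = np.array([9.9, 10.2, 10.1, 9.8, 10.3])
    setting_b = np.array([10.4, 10.5, 10.3, 10.45, 10.5])

    for name, y in [("setting A", setting_a), ("setting B", setting_b)]:
        # Expected loss = k * (variance + squared offset of the mean from target).
        expected = k * (y.var() + (y.mean() - target) ** 2)
        print(f"{name}: mean loss per part = {taguchi_loss(y, target, k).mean():.2f}, "
              f"expected loss = {expected:.2f}")
    ```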

  3. More efficiency in fuel consumption using gearbox optimization based on Taguchi method

    NASA Astrophysics Data System (ADS)

    Goharimanesh, Masoud; Akbari, Aliakbar; Akbarzadeh Tootoonchi, Alireza

    2014-05-01

    Automotive emissions are becoming a critical threat to human health. Many researchers are studying engine designs that lead to lower fuel consumption. Gearbox selection plays a key role in engine design. In this study, the Taguchi quality engineering method is employed, and the optimum gear ratios for a five-speed gearbox are obtained. A table of various gear ratios is suggested by design of experiments techniques. Fuel consumption is calculated by simulating the corresponding combustion dynamics model. Using a 95% confidence level, the optimal parameter combinations are determined with the Taguchi method. The level of importance of the parameters for fuel efficiency is resolved using analysis of the signal-to-noise ratio as well as analysis of variance.

  4. Human Factors Experimental Design and Analysis Reference

    DTIC Science & Technology

    2007-07-01

    and R2Adj – PRESS Statistic – Mallows C(p). A linear regression model that includes all predictors investigated may not be the best model in terms of...as the Adjusted Coefficient of Determination, R2Adj, the PRESS statistic, and the Mallows C(p) value. Human Factors Experimental Design and Analysis...equations with highest R2 using R2Adj, PRESS, and Mallows C(p). Evaluation: cumbersome as the number of X's increases; 10 X's gives (2^10 - 1) = 1,023 regression equations.

  5. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2016-12-01

    High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in the setting up of less-than-optimal values. Hence, there arises a necessity to determine the optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 08 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and the results proved to be consistent with the main experimental findings; the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.

  6. Optimization of laccase production by Pleurotus ostreatus IMI 395545 using the Taguchi DOE methodology.

    PubMed

    Periasamy, Rathinasamy; Palvannan, Thayumanavan

    2010-12-01

    Production of laccase using a submerged culture of Pleurotus ostreatus IMI 395545 was optimized by the Taguchi orthogonal array (OA) design of experiments (DOE) methodology. This approach facilitates the study of the interactions of a large number of variables spanned by factors and their settings, with a small number of experiments, leading to considerable savings in time and cost for process optimization. The methodology optimizes the number of influential factors and enables their interactions in the production of industrial enzymes to be calculated. Eight factors, viz. glucose, yeast extract, malt extract, inoculum, mineral solution, inducer (1 mM CuSO₄) and amino acid (l-asparagine) at three levels and pH at two levels, with an OA layout of L18 (2¹ × 3⁷), were selected for the proposed experimental design. The laccase yield obtained from the 18 sets of fermentation experiments performed with the selected factors and levels was further processed with the Qualitek-4 software. The optimized conditions showed an enhanced laccase expression of 86.8% (from 485.0 to 906.3 U). The combination of factors was further validated for laccase production and reactive blue 221 decolorization. The results revealed an enhanced laccase yield of 32.6% and dye decolorization of up to 84.6%. This methodology allows the complete evaluation of main and interaction factors.

  7. Using Design of Experiments and Response Surface Methodology as an Approach to Understand and Optimize Operational Air Power

    DTIC Science & Technology

    2010-06-01

    experimental design is orthogonal, it is possible to separate the effect of each parameter ( Bryne & Taguchi, 1986). The average weights for each factor...pp. 1-14. Box G.E. and Draper N.R. (1987) Empirical Model Building and Response Surfaces, John Wiley, New York, 1987. Bryne D. M. and

  8. Taguchi's off line method and Multivariate loss function approach for quality management and optimization of process parameters -A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered to be an effective approach to improving product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratios. The results reveal that the advantages of this approach are that, for a single quality characteristic, the optimal parameter design is the same as in the traditional Taguchi method, and that the optimal design maximizes the reduction of the total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
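
    One common way of handling the multi-response case surveyed in this review is to normalize each response's S/N ratio to [0, 1] and combine the normalized values with user-chosen weights into a single index (equivalently, to minimize a weighted total quality loss), then select factor levels on that combined index. The sketch below shows the weighted-normalized-S/N variant on invented numbers; it is one of several schemes discussed in this literature, not a specific author's algorithm.

    ```python
    import numpy as np

    # Hypothetical S/N ratios (dB) for 9 trials and two quality characteristics.
    sn = np.array([
        [30.1, 12.4], [31.0, 11.8], [29.5, 13.0],
        [32.2, 10.9], [31.5, 12.1], [30.4, 12.8],
        [33.0, 10.2], [31.9, 11.5], [30.8, 12.6],
    ])
    weights = np.array([0.6, 0.4])   # relative importance of the two responses

    # Normalize each S/N column to [0, 1] (all S/N ratios are larger-the-better).
    norm = (sn - sn.min(axis=0)) / (sn.max(axis=0) - sn.min(axis=0))

    # Combined performance index per trial; the best trial maximizes it.
    mrpi = norm @ weights
    best = int(np.argmax(mrpi))
    print("combined index per trial:", np.round(mrpi, 3))
    print(f"best trial = {best + 1} (index = {mrpi[best]:.3f})")
    ```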

  9. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G(exp 2)) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  10. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the controllable process parameters to the realization of the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.

  11. Optimization on Impact Strength of Woven Kenaf Reinforced Polyester Composites using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Khalid, S. N. A.; Ismail, A. E.; Zainulabidin, M. H.

    2017-01-01

    This paper focuses on the effect of weaving patterns and orientations on the energy absorption of woven kenaf reinforced polyester composites. Kenaf fiber in the form of yarn is woven to produce different weaving patterns such as plain, twill and basket. Three woven mats are stacked together and mixed with polyester resin before being compressed to squeeze out any excessive resin. Nine different stacking orientations are used, following the Taguchi orthogonal array method. The hardened composites are cured for 24 hours before being shaped to the specific dimensions required for impact testing. The composites are perforated with a blunted projectile at 1 m/s. According to the experimental findings, weaving pattern and orientation have distinct effects on the energy absorption. The optimization using the Taguchi method reveals the preferable orientation for each weaving pattern. Based on the fracture observations, the fragmentation after optimization indicated a shorter surface fracture distance around the perforation.
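
    A minimal sketch of the orthogonal-array bookkeeping used in studies like this one: the standard Taguchi L9 (3⁴) array is laid out and the mean response at each factor level is tabulated to pick the preferable level. The column assignment and energy-absorption values below are invented placeholders, not data from this study.

        import numpy as np

        # Standard Taguchi L9 (3^4) orthogonal array, levels coded 1-3.
        L9 = np.array([
            [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
            [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
            [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
        ])

        # Invented energy-absorption results (J) for the nine trials.
        energy = np.array([8.2, 9.1, 7.5, 10.3, 8.8, 9.6, 11.0, 9.9, 8.4])

        factors = {"weave pattern": 0, "stacking orientation": 1}   # assumed column assignment
        for name, j in factors.items():
            level_means = [energy[L9[:, j] == lv].mean() for lv in (1, 2, 3)]
            best = int(np.argmax(level_means)) + 1    # larger-the-better response
            print(f"{name}: level means {np.round(level_means, 2)}, preferred level {best}")
        # Columns 3 and 4 are left unassigned here and can serve as an error estimate.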

  12. Taguchi Optimization on the Initial Thickness and Pre-aging of Nano-/Ultrafine-Grained Al-0.2 wt.%Sc Alloy Produced by ARB

    NASA Astrophysics Data System (ADS)

    Yousefieh, Mohammad; Tamizifar, Morteza; Boutorabi, Seyed Mohammad Ali; Borhani, Ehsan

    2016-10-01

    In this study, the Taguchi design method with an L9 orthogonal array has been used to optimize the initial thickness and pre-aging parameters (temperature and time) for the mechanical properties of an Al-0.2 wt.% Sc alloy heavily deformed by accumulative roll bonding (ARB) up to ten cycles. Analysis of variance was performed on the measured data and signal-to-noise ratios. The pre-aging temperature was found to be the most significant parameter affecting the mechanical properties, with a percentage contribution of 64.51%. Pre-aging time (19.29%) has the next most significant effect, while initial thickness (5.31%) has a statistically smaller effect. In order to confirm the experimental conclusions, verification experiments were carried out at the optimum working conditions. Under these conditions, the yield strength was 6.51 times higher and the toughness was 6.86% lower compared with the starting Al-Sc material. Moreover, the mean grain size was decreased to 220 nm by setting the control parameters accordingly, the lowest value obtained in this study. It was concluded that the Taguchi method is a promising technique for obtaining the optimum conditions in such studies. Consequently, by controlling the parameter levels, high-strength and high-toughness Al-Sc samples were fabricated through pre-aging and a subsequent ARB process.
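
    The percentage contributions quoted above (64.51%, 19.29%, 5.31%) come from an analysis of variance on the L9 results. A minimal sketch of that calculation, using invented response values rather than the study's data, is:

        import numpy as np

        # Three L9 columns (pre-aging temperature, pre-aging time, initial thickness)
        # and a response such as yield strength; all values are invented.
        levels = np.array([
            [1, 1, 1], [1, 2, 2], [1, 3, 3],
            [2, 1, 2], [2, 2, 3], [2, 3, 1],
            [3, 1, 3], [3, 2, 1], [3, 3, 2],
        ])
        y = np.array([310., 345., 330., 400., 415., 390., 460., 455., 470.])

        grand_mean = y.mean()
        total_ss = np.sum((y - grand_mean) ** 2)

        ss = {}
        for j, name in enumerate(["pre-aging temperature", "pre-aging time", "initial thickness"]):
            # Factor sum of squares: for each level, n_level * (level mean - grand mean)^2.
            ss[name] = sum(
                (levels[:, j] == lv).sum() * (y[levels[:, j] == lv].mean() - grand_mean) ** 2
                for lv in (1, 2, 3)
            )
        ss["residual"] = total_ss - sum(ss.values())

        for name, s in ss.items():
            print(f"{name}: {100.0 * s / total_ss:.1f}% contribution")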

  13. Surface Roughness Prediction Model using Zirconia Toughened Alumina (ZTA) Turning Inserts: Taguchi Method and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Mandal, Nilrudra; Doloi, Biswanath; Mondal, Biswanath

    2016-01-01

    In the present study, an attempt has been made to apply the Taguchi parameter design method and regression analysis to optimize the cutting conditions for surface finish while machining AISI 4340 steel with newly developed yttria-based Zirconia Toughened Alumina (ZTA) inserts. These inserts are prepared through a wet chemical co-precipitation route followed by a powder metallurgy process. Experiments have been carried out based on an L9 orthogonal array with three parameters (cutting speed, depth of cut and feed rate) at three levels (low, medium and high). Based on the mean response and signal-to-noise ratio (SNR), using the smaller-the-better criterion, the optimal cutting condition was found to be A3B1C1, i.e. a cutting speed of 420 m/min, a depth of cut of 0.5 mm and a feed rate of 0.12 m/min. Analysis of Variance (ANOVA) is applied to find the significance and percentage contribution of each parameter. A mathematical model of surface roughness has been developed using regression analysis as a function of the above-mentioned independent variables. The predicted values from the developed model and the experimental values are found to be very close to each other, justifying the significance of the model. A confirmation run has been carried out at the 95% confidence level to verify the optimized result, and the values obtained are within the prescribed limit.
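
    A minimal sketch of the two analysis steps described in this record: the smaller-the-better S/N ratio for surface roughness and a first-order regression model fitted by least squares. The cutting conditions and roughness values below are invented placeholders, not the published measurements.

        import numpy as np

        # Cutting speed (m/min), depth of cut (mm), feed rate and measured surface
        # roughness Ra; the numbers are invented placeholders, not the paper's data.
        X = np.array([
            [140, 0.5, 0.12], [140, 1.0, 0.16], [140, 1.5, 0.20],
            [280, 0.5, 0.16], [280, 1.0, 0.20], [280, 1.5, 0.12],
            [420, 0.5, 0.20], [420, 1.0, 0.12], [420, 1.5, 0.16],
        ])
        Ra = np.array([1.9, 2.4, 2.9, 1.6, 2.2, 1.8, 1.4, 1.1, 1.5])

        # Smaller-the-better signal-to-noise ratio for each trial (single replicate).
        sn = -10.0 * np.log10(Ra ** 2)

        # First-order regression model Ra = b0 + b1*speed + b2*depth + b3*feed.
        A = np.column_stack([np.ones(len(Ra)), X])
        coef, *_ = np.linalg.lstsq(A, Ra, rcond=None)
        predicted = A @ coef

        print("S/N ratios:", np.round(sn, 2))
        print("regression coefficients:", np.round(coef, 4))
        print("max |prediction error|:", np.abs(predicted - Ra).max())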

  14. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities.
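
    A minimal sketch of two of the purposive selection criteria described above (samples typical of the surveyed population, and diversity maximization), applied to simulated abundance profiles. This is a generic illustration of the idea, not the microPITA implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        # Rows = surveyed community samples, columns = relative taxon abundances (simulated).
        profiles = rng.dirichlet(np.ones(20), size=100)

        def most_representative(X, k):
            # Follow-up samples closest to the survey centroid ("typical" communities).
            d = np.linalg.norm(X - X.mean(axis=0), axis=1)
            return np.argsort(d)[:k]

        def maximize_diversity(X, k):
            # Greedy max-min selection: repeatedly add the sample farthest from those
            # already selected (one simple flavour of diversity maximization).
            chosen = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
            while len(chosen) < k:
                dists = np.linalg.norm(X[:, None, :] - X[chosen][None, :, :], axis=2)
                nearest = dists.min(axis=1)
                nearest[chosen] = -np.inf
                chosen.append(int(np.argmax(nearest)))
            return np.array(chosen)

        print("representative:", most_representative(profiles, 5))
        print("max diversity: ", maximize_diversity(profiles, 5))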

  15. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
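
    MRED itself is not reproduced here, but the following sketch illustrates the general experimental-design flavour of batch active learning: greedily selecting a batch of unlabeled points that maximizes a D-optimal style information criterion. The data and the selection criterion are generic assumptions, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 10))          # pool of unlabeled feature vectors

        def greedy_batch_selection(X, batch_size, ridge=1e-3):
            # Greedily pick samples that most increase log det(X_S^T X_S + ridge*I),
            # a classic D-optimal style experimental-design criterion for choosing a
            # batch of informative points to label (not the MRED criterion itself).
            d = X.shape[1]
            A = ridge * np.eye(d)
            chosen = []
            for _ in range(batch_size):
                _, base = np.linalg.slogdet(A)
                gains = np.full(len(X), -np.inf)
                for i in range(len(X)):
                    if i in chosen:
                        continue
                    _, val = np.linalg.slogdet(A + np.outer(X[i], X[i]))
                    gains[i] = val - base
                best = int(np.argmax(gains))
                chosen.append(best)
                A += np.outer(X[best], X[best])
            return chosen

        print(greedy_batch_selection(X, batch_size=5))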

  16. Comparison of the experimental aerodynamic characteristics of theoretically and experimentally designed supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1974-01-01

    A lifting airfoil theoretically designed for shockless supercritical flow utilizing a complex hodograph method has been evaluated in the Langley 8-foot transonic pressure tunnel at design and off-design conditions. The experimental results are presented and compared with those of an experimentally designed supercritical airfoil which were obtained in the same tunnel.

  17. Optimizing quality of digital mammographic imaging using Taguchi analysis with an ACR accreditation phantom.

    PubMed

    Chen, Ching-Yuan; Pan, Lung-Fa; Chiang, Fu-Tsai; Yeh, Da-Ming; Pan, Lung-Kwang

    2016-07-03

    This work demonstrated the improvement of lesion visualization achieved by modulating the factors of an X-ray mammography imaging system using Taguchi analysis. Optimal combinations of X-ray operating factors were determined using the Taguchi method, in which all factors were organized into only 18 experimental groups, yielding analytical results with the same confidence as if each factor had been examined independently. The 4 operating factors of the X-ray machine considered were (1) anode material (target), (2) kVp, (3) mAs and (4) field of view (FOV). Each of these factors had 2 or 3 levels; therefore, 54 (2×3×3×3 = 54) combinations were generated. The optimal settings were Rh as the target, 28 kVp, 80 mAs and a 19×23 cm(2) FOV. The grade of the exposed mammographic phantom image increased from 70.92 under the automatic exposure control (AEC) setting to 72.00 under the optimal setting, meeting the minimum standard (70.00) set by Taiwan's Department of Health. The average glandular dose (AGD) of the exposed phantom, 0.182 cGy, was lower than the 0.203 cGy recorded under the AEC setting. The Taguchi method is extremely promising for the design of imaging protocols in clinical diagnosis.

  18. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
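
    A minimal sketch of the inquiry-engine idea described above: each candidate experiment is scored by the entropy of its predicted outcomes under posterior samples, and the maximum-entropy experiment is selected. The toy model y = sin(theta*x) and all numbers are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        # Pretend posterior: 500 draws of a single unknown parameter theta obtained
        # from a previous round of Bayesian inference.
        theta = rng.normal(loc=1.2, scale=0.4, size=500)

        def predictive_entropy(x, theta, noise=0.1, bins=30):
            # Predicted measurement at candidate location x under each posterior draw
            # (toy model y = sin(theta * x) plus noise), then the entropy of the
            # histogram of those predictions.
            y = np.sin(theta * x) + rng.normal(0.0, noise, size=theta.shape)
            counts, _ = np.histogram(y, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log(p))

        candidates = np.linspace(0.0, 6.0, 61)
        entropies = [predictive_entropy(x, theta) for x in candidates]
        best = candidates[int(np.argmax(entropies))]
        print("most informative measurement location:", best)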

  19. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  20. Application of Taguchi's method to optimize fiber Raman amplifier

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif

    2016-04-01

    Taguchi's method is introduced to perform multiobjective optimization of fiber Raman amplifier (FRA). The optimization requirements are to maximize gain and keep gain ripple minimum over the operating bandwidth of a wavelength division multiplexed (WDM) communication link. Mathematical formulations of FRA and corresponding numerical solution techniques are discussed. A general description of Taguchi's method and how it can be integrated with the FRA optimization problem are presented. The proposed method is used to optimize two different configurations of FRA. The performance of Taguchi's method is compared with genetic algorithm and particle swarm optimization in terms of output performance and convergence rate. Taguchi's method is found to produce good results with fast convergence rate, which makes it well suited for the nonlinear optimization problems.

  1. Application of Taguchi based Response Surface Method (TRSM) for Optimization of Multi Responses in Drilling Al/SiC/Al2O3 Hybrid Composite

    NASA Astrophysics Data System (ADS)

    Adalarasan, R.; Santhanakumar, M.

    2015-01-01

    The emerging industrial applications of second generation hybrid composites demand an organised study of their drilling characteristics, as drilling is an essential metal removal process in the final fabrication stage. In the present work, surface finish and burr height were observed while drilling an Al6061/SiC/Al2O3 composite for various combinations of drilling parameters, namely the feed rate, spindle speed and point angle of the tool. The experimental trials were designed with an L18 orthogonal array, and a Taguchi based response surface method was presented for optimizing the drilling parameters. The significant improvements in the responses observed for the optimal parameter setting have validated the TRSM approach, permitting its application in other areas of manufacturing.

  2. Preconcentration and determination of ceftazidime in real samples using dispersive liquid-liquid microextraction and high-performance liquid chromatography with the aid of experimental design.

    PubMed

    Razmi, Rasoul; Shahpari, Behrouz; Pourbasheer, Eslam; Boustanifar, Mohammad Hasan; Azari, Zhila; Ebadi, Amin

    2016-11-01

    A rapid and simple method for the extraction and preconcentration of ceftazidime in aqueous samples has been developed using dispersive liquid-liquid microextraction followed by high-performance liquid chromatography analysis. The extraction parameters, such as the volume of extraction solvent and disperser solvent, salt effect, sample volume, centrifuge rate, centrifuge time, extraction time, and temperature in the dispersive liquid-liquid microextraction process, were studied and optimized with experimental design methods. First, the Taguchi design was used for preliminary screening of the parameters, and then a fractional factorial design was used to optimize the significant factors. At the optimum conditions, the calibration curves for ceftazidime indicated good linearity over the range of 0.001-10 μg/mL with correlation coefficients higher than 0.98, and the limits of detection were 0.13 and 0.17 ng/mL for water and urine samples, respectively. The proposed method was successfully employed to determine ceftazidime in water and urine samples, and good agreement between the experimental data and predicted values was achieved.
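
    A minimal sketch of the calibration-linearity check mentioned above, fitting a straight line to (concentration, response) pairs and reporting the correlation coefficient. The concentrations and peak areas are invented placeholders, not the published calibration data.

        import numpy as np

        # Spiked ceftazidime concentrations (ug/mL) and instrument responses (peak
        # areas); the values are invented for illustration only.
        conc = np.array([0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0])
        area = np.array([0.9, 8.7, 92.0, 455.0, 905.0, 4480.0, 9010.0])

        slope, intercept = np.polyfit(conc, area, 1)
        pred = slope * conc + intercept
        ss_res = np.sum((area - pred) ** 2)
        ss_tot = np.sum((area - area.mean()) ** 2)
        r = np.sqrt(1.0 - ss_res / ss_tot)    # correlation coefficient of the fit

        print(f"slope = {slope:.1f}, intercept = {intercept:.2f}, r = {r:.4f}")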

  3. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  4. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  5. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    ERIC Educational Resources Information Center

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  6. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  7. Teaching Experimental Design to Elementary School Pupils in Greece

    ERIC Educational Resources Information Center

    Karampelas, Konstantinos

    2016-01-01

    This research is a study about the possibility to promote experimental design skills to elementary school pupils. Experimental design and the experiment process are foundational elements in current approaches to Science Teaching, as they provide learners with profound understanding about knowledge construction and science inquiry. The research was…

  8. Optimization of parameters for the synthesis of Y2Cu2O5 nanoparticles by Taguchi method and comparison of their magnetic and optical properties with their bulk counterpart

    NASA Astrophysics Data System (ADS)

    Farbod, Mansoor; Rafati, Zahra; Shoushtari, Morteza Zargar

    2016-06-01

    Y2Cu2O5 nanoparticles were synthesized by a sol-gel combustion method and the effects of different factors on the size of the nanoparticles were investigated. In order to reduce the number of experimental stages, the Taguchi robust design method was employed. The citric acid:Cu(2+) molar ratio, pH, sintering temperature and sintering time were chosen as the parameters for optimization. Among these factors the solution pH had the most influence, and the others had nearly the same influence on the nanoparticle size. Based on the conditions predicted by the Taguchi design, a sample with a minimum particle size of 47 nm was prepared. The magnetic behavior of the Y2Cu2O5 nanoparticles was measured; at low fields they are soft ferromagnetic, while at high fields they behave paramagnetically. Compared with their bulk counterparts, the Mr of the samples was only slightly different, but the Hc of the nanoparticles was 76% of that of the bulk sample. The maximum absorbance peak of the UV-vis spectrum showed a blue shift for the smaller particles.

  9. [Treatment of bedsores--combination of therapies depended the experimental design method].

    PubMed

    Miyaji, Hiroko; Sakurai, Hirofumi; Kikawada, Masayuki; Yamaguchi, Katsuhiko; Kimura, Akihiro; Fujiwara, Takayuki; Imada, Nobuo; Imai, Mihoko; Iwamoto, Toshihiko; Takasaki, Masaru

    2005-01-01

    The treatment of bedsores is a particular problem in geriatric medicine. We selected standard drugs that may be effective for decubitus ulcers and investigated combination therapy to develop an efficient treatment. The subjects were 16 patients in whom the grade of the bedsore was evaluated as II to IV according to Shea's depth classification. Treatment was performed while all patients were on air mats. We selected drugs and treatment methods based on the previously established experimental design of Taguchi. Based on this, we created and applied 16 different combination treatment programs in accordance with an L16 orthogonal array. The following component factors were adopted: A: type of covering substance on the wound surface (Elase ointment, isodine sugar, isodine gel, solcoseryl ointment); B: Isalopan powder; C: spray of 10 ml physiological saline containing 500 microg of prostaglandin (concentration 0.005%); D: daily number of treatments; and F: presence or absence of tapping. We serially measured the wound surface area as an index of the speed of wound healing, and measured the interval (days) until the area decreased to one half of the original size (T1/2, half life). We analyzed data on one combination treatment each in the 16 patients. Analysis of variance of the above factors showed significant F values for factors A, B, D and F. The contribution rates for factors A, B, D and F were 37.84%, 8.47%, 14.98% and 13.81%, respectively. The error term (e) was 16.37%. Optimal results were seen in the groups in which solcoseryl ointment had been applied twice a day. In this study, prostaglandin, which had been anticipated to be effective, did not show any effect. The error term (e) suggests the presence of other healing factors, including individual differences. Concerning this point, it will be necessary to examine a larger number of patients in the future. With ointment treatment alone, without using an air mat, it was confirmed that bedsore

  10. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still a lot of unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6MV Linac. This rodent model is focused on the research of the inflammatory effects produced by ionizing radiation in the brain. In this work comparisons between Pencil Beam and Monte Carlo techniques, were used in order to evaluate accuracy of the calculated dose using a commercial planning system. Challenges in this murine model are discussed.

  11. Irradiation Design for an Experimental Murine Model

    NASA Astrophysics Data System (ADS)

    Ballesteros-Zebadúa, P.; Lárraga-Gutierrez, J. M.; García-Garduño, O. A.; Rubio-Osornio, M. C.; Custodio-Ramírez, V.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Paz, C.; Celis, M. A.

    2010-12-01

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still a lot of unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6MV Linac. This rodent model is focused on the research of the inflammatory effects produced by ionizing radiation in the brain. In this work comparisons between Pencil Beam and Monte Carlo techniques, were used in order to evaluate accuracy of the calculated dose using a commercial planning system. Challenges in this murine model are discussed.

  12. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  13. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.

  14. Optimization of Binder Jetting Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Shrestha, Sanjay; Manogharan, Guha

    2017-01-01

    Among several additive manufacturing (AM) methods, binder-jetting has undergone a recent advancement in its ability to process metal powders through selective deposition of binders on a powder bed followed by curing, sintering, and infiltration. This study analyzes the impact of various process parameters in binder jetting on mechanical properties of sintered AM metal parts. The Taguchi optimization method has been employed to determine the optimum AM parameters to improve transverse rupture strength (TRS), specifically: binder saturation, layer thickness, roll speed, and feed-to-powder ratio. The effects of the selected process parameters on the TRS performance of sintered SS 316L samples are studied with the American Society for Testing and Materials (ASTM) standard test method. It was found that binder saturation and feed-to-powder ratio were the most critical parameters, which reflects the strong influence of binder-powder interaction and density of the powder bed on the resulting mechanical properties. This article serves as an aid in understanding the optimum process parameters for binder jetting of SS 316L.

  16. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

    Sachs, Emanuel; Prueger, George. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic...

  17. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  18. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  19. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  20. Laser Doppler vibrometer: unique use of DOE/Taguchi methodologies in the arena of pyroshock (10 to 100,000 HZ) response spectrum

    NASA Astrophysics Data System (ADS)

    Litz, C. J., Jr.

    1994-09-01

    Discussed is the unique application of design of experiments (DOE) to structure and test a Taguchi L9 (3²) factorial experimental matrix (nine tests to study two factors, each factor at three levels), utilizing an HeNe laser Doppler vibrometer and piezocrystal accelerometers to monitor explosively induced vibrations over the frequency range of 10 to 10⁵ Hz on a flat steel plate (96 X 48 X 0.25 in.). An initial discussion is presented of pyrotechnic shock, or pyroshock, which is a short-duration, high-amplitude, high-frequency transient structural response in aerospace vehicle structures following firing of an ordnance item to separate, sever missile skin, or release a structural member. The development of the shock response spectrum (SRS) is detailed. The use of the laser Doppler vibrometer for generating velocity-acceleration-time histories near and at a separation distance from the explosive, and the resulting shock response spectrum plots, is detailed together with the laser Doppler vibrometer setup as used. The use of DOE/Taguchi as a means of generating performance metrics, prediction equations, and response surface plots is presented as a means to statistically compare and rate the performance of the HeNe laser Doppler vibrometer with respect to two different contact-type piezoelectric crystal accelerometers mounted directly to the test plate at frequencies in the 300, 3000, and 10,000 Hz range. Specific constructive conclusions and recommendations are presented on the totally new dimension of understanding the pyroshock phenomenon with respect to the effects and interrelationships of explosive charge weight, location, and the laser Doppler recording system. The use of these valuable statistical tools in other experiments can be cost-effective and provide valuable insight to aid understanding of testing or process control by the engineering community. The superiority of the HeNe laser Doppler vibrometer performance is demonstrated.
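
    A minimal sketch of how a shock response spectrum of the kind discussed above can be computed from an acceleration time history: each natural frequency is represented by a damped single-degree-of-freedom oscillator driven by the base acceleration, and the peak absolute response is recorded. The synthetic pyroshock record, sampling rate and damping ratio are assumptions for illustration.

        import numpy as np
        from scipy import signal

        def srs(accel, dt, freqs, damping=0.05):
            # Maximax shock response spectrum: for each natural frequency, drive a
            # damped single-degree-of-freedom oscillator with the measured base
            # acceleration and record the peak absolute response acceleration.
            t = np.arange(len(accel)) * dt
            peaks = []
            for fn in freqs:
                wn = 2.0 * np.pi * fn
                sdof = signal.lti([2.0 * damping * wn, wn**2],
                                  [1.0, 2.0 * damping * wn, wn**2])
                _, y, _ = signal.lsim(sdof, accel, t)
                peaks.append(np.max(np.abs(y)))
            return np.array(peaks)

        # Synthetic pyroshock-like record: a decaying high-frequency burst.
        fs = 1.0e6                                  # 1 MHz sampling (assumed)
        t = np.arange(0.0, 0.02, 1.0 / fs)
        accel = 500.0 * np.exp(-300.0 * t) * np.sin(2.0 * np.pi * 8000.0 * t)

        freqs = np.logspace(1, 5, 60)               # 10 Hz to 100 kHz
        spectrum = srs(accel, 1.0 / fs, freqs)
        print(spectrum[:5])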

  1. Preparation of photocatalytic ZnO nanoparticles and application in photochemical degradation of betamethasone sodium phosphate using taguchi approach

    NASA Astrophysics Data System (ADS)

    Giahi, M.; Farajpour, G.; Taghavi, H.; Shokri, S.

    2014-07-01

    In this study, ZnO nanoparticles were prepared by a sol-gel method for the first time. The Taguchi method was used to identify the several factors that may affect the degradation percentage of betamethasone sodium phosphate in wastewater in a UV/K2S2O8/nano-ZnO system. Our experimental design consisted of testing five factors, i.e., dosage of K2S2O8, concentration of betamethasone sodium phosphate, amount of ZnO, irradiation time and initial pH, with four levels of each factor tested. The optimum parameters were found to be: irradiation time, 180 min; pH 9.0; betamethasone sodium phosphate, 30 mg/L; amount of ZnO, 13 mg; K2S2O8, 1 mM. The percentage contribution of each factor was determined by analysis of variance (ANOVA). The results showed that irradiation time, pH, amount of ZnO, drug concentration and dosage of K2S2O8 contributed 46.73, 28.56, 11.56, 6.70, and 6.44%, respectively. Finally, the kinetics of the process was studied and the photodegradation of betamethasone sodium phosphate was found to obey a pseudo-first-order kinetics equation represented by the Langmuir-Hinshelwood model.
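
    A minimal sketch of the pseudo-first-order (Langmuir-Hinshelwood) kinetics fit mentioned at the end of this record: ln(C0/C) is regressed against irradiation time to obtain the apparent rate constant. The concentration-time values are invented, not the study's measurements.

        import numpy as np

        # Residual betamethasone concentration (mg/L) versus irradiation time (min);
        # the values are invented to illustrate the fit, not the study's measurements.
        t = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
        C = np.array([30.0, 22.8, 17.4, 13.2, 10.1, 7.7, 5.9])

        # Pseudo-first-order (Langmuir-Hinshelwood, dilute limit): ln(C0/C) = k_app * t,
        # so the apparent rate constant is the slope of ln(C0/C) against time.
        y = np.log(C[0] / C)
        k_app, _ = np.polyfit(t, y, 1)
        half_life = np.log(2.0) / k_app
        print(f"apparent rate constant = {k_app:.4f} 1/min, t1/2 = {half_life:.1f} min")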

  2. Effect of olive mill waste addition on the properties of porous fired clay bricks using Taguchi method.

    PubMed

    Sutcu, Mucahit; Ozturk, Savas; Yalamac, Emre; Gencel, Osman

    2016-10-01

    The production of porous clay bricks lightened by adding olive mill waste as a pore-making additive was investigated. Factors influencing the brick manufacturing process were analyzed by an experimental design, the Taguchi method, to find the most favorable conditions for the production of bricks. The optimum process conditions for brick preparation were investigated by studying the effects of mixture ratios (0, 5 and 10 wt%) and firing temperatures (850, 950 and 1050 °C) on the physical, thermal and mechanical properties of the bricks. Apparent density, bulk density, apparent porosity, water absorption, compressive strength, thermal conductivity, microstructure and crystalline phase formation of the fired brick samples were measured. It was found that the use of 10% waste addition reduced the bulk density of the samples to 1.45 g/cm(3). As the porosity increased from 30.8 to 47.0%, the compressive strength decreased from 36.9 to 10.26 MPa at a firing temperature of 950 °C. The thermal conductivity of samples fired at the same temperature showed a decrease of 31%, from 0.638 to 0.436 W/mK, which is promising for heat insulation in buildings. Increasing the firing temperature also affected the mechanical and physical properties. This study showed that olive mill waste can be used as a pore maker in brick production.

  3. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  4. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  5. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate concentration from each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
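
    A minimal sketch of the multiplex titration model described above: each run titrates a mixture of aliquots from several samples, and the individual concentrations are recovered by least squares (the same calculation the MS Excel LINEST function performs). The aliquot volumes, concentrations and noise level are invented for illustration.

        import numpy as np

        # Multiplexed titration sketch: each run titrates a mixture of aliquots drawn
        # from three vinegar samples.  A[i, j] is the aliquot volume (mL) of sample j
        # in run i, and b[i] is the moles of NaOH consumed in run i.  All numbers are
        # invented for illustration.
        A = np.array([
            [5.0, 0.0, 0.0],
            [0.0, 5.0, 0.0],
            [0.0, 0.0, 5.0],
            [2.5, 2.5, 0.0],
            [2.5, 0.0, 2.5],
            [0.0, 2.5, 2.5],
        ])
        true_conc = np.array([0.85, 0.90, 0.78])                    # mol/L acetic acid
        b = (A @ true_conc) / 1000.0                                # mol NaOH at the endpoint
        b += np.random.default_rng(3).normal(0.0, 2e-6, b.shape)    # titration error

        # Least-squares estimate of the three concentrations (what LINEST performs).
        conc_hat, *_ = np.linalg.lstsq(A / 1000.0, b, rcond=None)
        print(np.round(conc_hat, 3))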

  6. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  7. Inulinase production by Geotrichum candidum OC-7 using migratory locusts as a new substrate and optimization process with Taguchi DOE.

    PubMed

    Canli, Ozden; Tasar, Gani Erhan; Taskin, Mesut

    2013-09-01

    The utilization of migratory locusts (Locusta migratoria), owing to their high protein content, as a main substrate for inulinase (2,1-β-d-fructan fructanohydrolase) production by Geotrichum candidum OC-7 was investigated in this study. To optimize the fermentation conditions, four influential factors (locust powder (LP) concentration, sucrose concentration, pH and fermentation time) at three levels were investigated using a Taguchi orthogonal array (OA) design of experiment (DOE). The inulinase yield obtained from the experiments designed according to the Taguchi L9 OA was processed with Minitab 15 software using 'larger is better' as the quality character. The results showed that the optimal fermentation conditions were LP 30 g/l, sucrose 20 g/l, pH 6.0 and time 48 h. The maximum inulinase activity was recorded as 30.12 U/ml, which was close to the predicted value (30.56 U/ml). To verify the results, an analysis of variance test was employed. LP had the greatest contribution (71.96%) among the factors, while sucrose had a lower contribution (13.96%). This result demonstrated that LP has a strong effect on inulinase activity and can be used for enzyme production. The Taguchi DOE application enhanced enzyme activity about 3.05-fold versus the unoptimized condition and 2.34-fold versus the control medium. Consequently, higher inulinase production can be achieved by the utilization of an edible insect material as an alternative substrate, and Taguchi DOE presents a suitable optimization method for this biotechnological process.

  8. Optimization of Nanostructuring Burnishing Technological Parameters by Taguchi Method

    NASA Astrophysics Data System (ADS)

    Kuznetsov, V. P.; Dmitriev, A. I.; Anisimova, G. S.; Semenova, Yu V.

    2016-04-01

    On the basis of the Taguchi optimization method, an approach for investigating the influence of nanostructuring burnishing technological parameters on the surface layer microhardness criterion is developed. Optimal values of burnishing force, feed and number of tool passes for the hardening treatment of AISI 420 steel are defined.

  9. Identification of Dysfunctional Cooperative Learning Teams Using Taguchi Quality Indexes

    ERIC Educational Resources Information Center

    Hsiung, Chin-Min

    2011-01-01

    In this study, dysfunctional cooperative learning teams are identified by comparing the Taguchi "larger-the-better" quality index for the academic achievement of students in a cooperative learning condition with that of students in an individualistic learning condition. In performing the experiments, 42 sophomore mechanical engineering…

  10. Dysprosium sorption by polymeric composite bead: robust parametric optimization using Taguchi method.

    PubMed

    Yadav, Kartikey K; Dasgupta, Kinshuk; Singh, Dhruva K; Varshney, Lalit; Singh, Harvinderpal

    2015-03-06

    Polyethersulfone-based beads encapsulating di-2-ethylhexyl phosphoric acid have been synthesized and evaluated for the recovery of rare earth values from aqueous media. The percentage recovery and the sorption behavior of Dy(III) have been investigated over a wide range of experimental parameters using these beads. The Taguchi method utilizing an L-18 orthogonal array has been adopted to identify the most influential process parameters responsible for a higher degree of recovery with enhanced sorption of Dy(III) from chloride medium. Analysis of variance indicated that the feed concentration of Dy(III) is the most influential factor for equilibrium sorption capacity, whereas the aqueous phase acidity most influences the percentage recovery. The presence of polyvinyl alcohol and multiwalled carbon nanotubes modified the internal structure of the composite beads and resulted in a uniform distribution of the organic extractant inside the polymeric matrix. The experiment performed under the optimum process conditions predicted by the Taguchi method resulted in enhanced Dy(III) recovery and sorption capacity by the polymeric beads with minimum standard deviation.

  11. Application of the Taguchi method to the analysis of the deposition step in microarray production.

    PubMed

    Severgnini, Marco; Pattini, Linda; Consolandi, Clarissa; Rizzi, Ermanno; Battaglia, Cristina; De Bellis, Gianluca; Cerutti, Sergio

    2006-09-01

    Every microarray experiment is affected by many possible sources of variability that may even corrupt biological evidence on the analyzed sequences. We applied a "Taguchi method" strategy, based on the use of orthogonal arrays, to optimize the deposition step of oligonucleotide sequences on glass slides. We chose three critical deposition parameters (humidity, surface, and buffer) at two levels each, in order to establish the optimum settings. An L8 orthogonal array was used in order to monitor both the main effects and interactions on the deposition of a 25-mer oligonucleotide hybridized to its fluorescently labeled complementary strand. The signal-to-background ratio and deposition homogeneity, in terms of mean intensity and spot diameter, were considered the significant outputs. An analysis of variance (ANOVA) was applied to raw data and to mean results for each slide and experimental run. Finally, we calculated an overall evaluation coefficient to group the important outputs together into one number. Environmental humidity and the surface-buffer interaction were recognized as the most critical factors, for which 50% humidity, associated with a chitosan-covered slide and a sodium phosphate + 25% dimethyl sulfoxide (DMSO) buffer, gave the best performance. Our results also suggest that Taguchi methods can be efficiently applied in the optimization of microarray procedures.
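
    A minimal sketch of the L8 bookkeeping described above: the standard L8 (2⁷) array with factors assigned so that one column carries the humidity x surface interaction, and an overall evaluation coefficient formed as a weighted sum of scaled outputs. The output values and weights are invented placeholders, not the study's measurements.

        import numpy as np

        # Standard L8 (2^7) orthogonal array, levels coded 1/2.
        L8 = np.array([
            [1, 1, 1, 1, 1, 1, 1],
            [1, 1, 1, 2, 2, 2, 2],
            [1, 2, 2, 1, 1, 2, 2],
            [1, 2, 2, 2, 2, 1, 1],
            [2, 1, 2, 1, 2, 1, 2],
            [2, 1, 2, 2, 1, 2, 1],
            [2, 2, 1, 1, 2, 2, 1],
            [2, 2, 1, 2, 1, 1, 2],
        ])
        # Conventional assignment (0-indexed columns): humidity in column 0, surface in
        # column 1, their interaction in column 2, buffer in column 3.
        cols = {"humidity": 0, "surface": 1, "humidity x surface": 2, "buffer": 3}

        # Two outputs per run (signal/background ratio, spot homogeneity), invented.
        snr = np.array([3.1, 4.0, 5.2, 4.4, 2.8, 3.5, 4.9, 4.1])
        homog = np.array([0.70, 0.78, 0.88, 0.81, 0.66, 0.72, 0.85, 0.79])

        # Overall evaluation coefficient: weighted sum of outputs scaled to [0, 1].
        def scale(x):
            return (x - x.min()) / (x.max() - x.min())
        oec = 0.6 * scale(snr) + 0.4 * scale(homog)

        for name, j in cols.items():
            effect = oec[L8[:, j] == 2].mean() - oec[L8[:, j] == 1].mean()
            print(f"{name}: effect on OEC = {effect:+.3f}")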

  12. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, support vector machine (SVM) has excellent performance on classification and prediction and is widely used on disease diagnosis or medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for Dermatology and Zoo databases. Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; and the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, Taguchi method was jointly combined with SVM classifier in order to optimize parameters C and γ to increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for Dermatology and Zoo databases. PMID:25295306

  13. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, support vector machine (SVM) has excellent performance on classification and prediction and is widely used on disease diagnosis or medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for Dermatology and Zoo databases. Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; and the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, Taguchi method was jointly combined with SVM classifier in order to optimize parameters C and γ to increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for Dermatology and Zoo databases.
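
    A minimal sketch of jointly tuning the SVM parameters C and gamma by cross-validation over a small set of candidate levels. For two three-level factors the nine combinations below coincide with a full factorial (an L9-style plan); the levels, dataset and scoring are assumptions, not the authors' settings.

        from itertools import product

        from sklearn.datasets import load_iris
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Candidate levels for the two SVM hyperparameters (levels are assumptions).
        C_levels = [0.1, 1.0, 10.0]
        gamma_levels = [0.01, 0.1, 1.0]

        X, y = load_iris(return_X_y=True)    # stand-in dataset, not the study's data

        best = None
        for C, gamma in product(C_levels, gamma_levels):    # 9 runs for 2 three-level factors
            score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
            if best is None or score > best[0]:
                best = (score, C, gamma)

        print(f"best accuracy {best[0]:.3f} at C={best[1]}, gamma={best[2]}")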

  14. Reliability of single sample experimental designs: comfortable effort level.

    PubMed

    Brown, W S; Morris, R J; DeGroot, T; Murry, T

    1998-12-01

    This study was designed to ascertain the intrasubject variability across multiple recording sessions, which is most often disregarded in reporting group mean data or unavailable because of single sample experimental designs. Intrasubject variability was assessed within and across several experimental sessions from measures of speaking fundamental frequency, vocal intensity, and reading rate. Three age groups of men and women--young, middle-aged, and elderly--repeated the vowel /a/, read a standard passage, and spoke extemporaneously during each experimental session. Statistical analyses were performed to assess each speaker's variability from his or her own mean, and that which consistently varied for any one speaking sample type, both within and across days. Results indicated that intrasubject variability was minimal, with approximately 4% of the data exhibiting significant variation across experimental sessions.

  15. Anaerobic treatment of complex chemical wastewater in a sequencing batch biofilm reactor: process optimization and evaluation of factor interactions using the Taguchi dynamic DOE methodology.

    PubMed

    Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N

    2005-06-20

    The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). To optimize the process and to evaluate the influence of different factors on it, uncontrollable (noise) factors have been considered. The Taguchi methodology adopting a dynamic approach is the first of its kind applied to anaerobic process evaluation and optimization. The designed experimental methodology consisted of four phases--planning, conducting, analysis, and validation--connected sequence-wise to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, at two levels were considered for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effects of individual factors, the interactions between factors, and the signal-to-noise (S/N) ratios. Attempts were also made to determine the optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In the multiple factor interaction studies, biodegradability with other factors such as temperature, pH, and sulfate showed the maximum influence on process performance. The optimum conditions obtained for efficient performance of the anaerobic system in treating complex wastewater, considering the dynamic (noise) factors, are a higher organic loading rate of 3.5 kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), along with a mesophilic temperature range (40 degrees C), and
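
    A minimal sketch of the robust (noise-factor) evaluation described above: each inner-array setting of the controllable factors is evaluated under several noise conditions, and a larger-the-better S/N ratio computed across those replicates rewards settings that are both effective and insensitive to the noise factors. The degradation rates below are invented placeholders.

        import numpy as np

        # Inner array: 4 settings of the controllable factors (rows).  Outer array:
        # each setting is run under 3 noise conditions (VFA/alkalinity combinations).
        # Entries are substrate degradation rates; all numbers are invented.
        response = np.array([
            [0.62, 0.58, 0.55],
            [0.71, 0.69, 0.64],
            [0.55, 0.49, 0.47],
            [0.75, 0.74, 0.70],
        ])

        # Larger-the-better S/N computed across the noise replicates of each run:
        # high values mean the setting performs well and resists the noise factors.
        sn = -10.0 * np.log10(np.mean(1.0 / response**2, axis=1))
        best_run = int(np.argmax(sn))
        print("S/N per inner-array run:", np.round(sn, 2), "-> best run:", best_run)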

  16. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiment and the wide dynamic range of transcription it measures make it an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity in experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER.

  17. Color removal from textile dyebath effluents in a zeolite fixed bed reactor: determination of optimum process conditions using Taguchi method.

    PubMed

    Engin, Ahmet Baki; Ozdemir, Ozgür; Turan, Mustafa; Turan, Abdullah Z

    2008-11-30

    The Taguchi method was applied as an experimental design to determine the optimum conditions for color removal from textile dyebath house effluents in a zeolite fixed bed reactor. After the parameters for treating real textile wastewater were determined, adsorption experiments were carried out. The breakthrough curves for the adsorption studies were constructed under different conditions by plotting the normalized effluent color intensity (C/C(0)) versus time (min) or bed volumes (BV). The chosen experimental parameters and their ranges are: HTAB concentration (C(htab)), 1-7.5 g L(-1); HTAB feeding flowrate (Q(htab)), 0.015-0.075 L min(-1); textile wastewater flowrate (Q(dye)), 0.025-0.050 L min(-1); and zeolite bed height (H(bed)), 25-50 cm. A mixed orthogonal array L(16) (4(2)x2(2)) for the experimental plan and the larger-the-better response category were selected to determine the optimum conditions. The optimum conditions were found to be as follows: HTAB concentration (C(htab)) = 1 g L(-1), HTAB feeding flowrate (Q(htab)) = 0.015 L min(-1), textile wastewater flowrate (Q(dye)) = 0.025 L min(-1) and bed height (H(bed)) = 50 cm. Under these conditions, the treated wastewater volume reached a maximum at about 217 bed volumes (BV). HTAB concentration, g L(-1) (A), zeolite bed height, cm (D), and wastewater flowrate, L min(-1) (C), were found to be significant parameters, in that order, whereas HTAB flowrate, L min(-1) (B) was found to be insignificant.

  18. Multiresponse Optimization of Laser Cladding Steel + VC Using Grey Relational Analysis in the Taguchi Method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Kovacevic, Radovan

    2016-07-01

    Laser cladding of metal matrix composite coatings (MMCs) has become an effective and economic method to improve the wear resistance of mechanical components. The clad quality characteristics such as clad height, carbide fraction, carbide dissolution, and matrix hardness in MMCs determine the wear resistance of the coatings. These clad quality characteristics are influenced greatly by the laser cladding processing parameters. In this study, American Iron and Steel Institute (AISI) 420 + 20% vanadium carbide (VC) was deposited on mild steel with a high powder direct diode laser. The Taguchi-based Grey relational method was used to optimize the laser cladding processing parameters (laser power, scanning speed, and powder feed rate) with the consideration of multiple clad characteristics related to wear resistance (clad height, carbide volume fraction, and Fe-matrix hardness). A Taguchi L9 orthogonal array was designed to study the effects of processing parameters on each response. The contribution and significance of each processing parameter on each clad characteristic were investigated by the analysis of variance (ANOVA). The Grey relational grade acquired from Grey relational analysis was used as the performance characteristic to obtain the optimal combination of processing parameters. Based on the optimal processing parameters, the phases and microstructure of the laser-cladded coating were characterized by using x-ray diffraction (XRD) and scanning electron microscopy (SEM) with energy-dispersive spectroscopy (EDS).
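
    A minimal sketch of the Grey relational calculation described above: each response is normalized (larger-the-better or smaller-the-better), converted to a Grey relational coefficient, and the coefficients are averaged into a Grey relational grade used to rank the L9 runs. The clad responses and equal weights are assumptions, not the study's data.

        import numpy as np

        # Three clad quality characteristics over a Taguchi L9 plan (invented values):
        # carbide fraction and matrix hardness are larger-the-better, clad height
        # deviation is smaller-the-better.
        carbide = np.array([12., 15., 14., 18., 17., 16., 20., 19., 21.])
        hardness = np.array([520., 560., 540., 600., 590., 570., 640., 620., 650.])
        height_dev = np.array([0.30, 0.22, 0.25, 0.18, 0.20, 0.24, 0.15, 0.17, 0.16])

        def normalize(x, larger_is_better=True):
            if larger_is_better:
                return (x - x.min()) / (x.max() - x.min())
            return (x.max() - x) / (x.max() - x.min())

        def grey_relational_coefficient(z, zeta=0.5):
            delta = 1.0 - z                   # deviation from the ideal sequence
            return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

        grc = np.column_stack([
            grey_relational_coefficient(normalize(carbide)),
            grey_relational_coefficient(normalize(hardness)),
            grey_relational_coefficient(normalize(height_dev, larger_is_better=False)),
        ])
        grade = grc.mean(axis=1)              # equal weights assumed
        print("grey relational grades:", np.round(grade, 3), "best run:", int(np.argmax(grade)))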

  19. Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by Taguchi grey relational theory

    NASA Astrophysics Data System (ADS)

    Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj

    2015-05-01

    The electric discharge drill machine (EDDM) uses a spark-erosion process to produce micro-holes in conductive materials and is widely used in the aerospace, medical, dental and automobile industries. Evaluating the performance of an electric discharge drilling machine requires studying the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generated output responses such as tool wear rate (TWR). Parameters such as pulse on-time, pulse off-time and water pressure were studied to obtain the best machining characteristics. This investigation presents the use of the Taguchi approach for improved TWR in the drilling of Al-7075. A plan of experiments based on the Taguchi L27 design was selected for drilling of the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 in EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.
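    The percentage contributions reported by such an ANOVA can be computed from the factor-level means of an orthogonal array, as in the sketch below. The nine-run layout, factor names, and response values are illustrative assumptions, not the paper's L27 data.

```python
import numpy as np

# columns: pulse on-time, pulse off-time, water pressure (levels 1..3)
levels = np.array([
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])                            # a balanced 9-run, 3-level (L9-style) layout
y = np.array([0.021, 0.018, 0.025, 0.030, 0.027, 0.024, 0.035, 0.033, 0.029])

grand_mean = y.mean()
ss_total = np.sum((y - grand_mean) ** 2)

# sum of squares for each factor from its level means, expressed as % of total
for f, name in enumerate(["pulse on-time", "pulse off-time", "water pressure"]):
    ss_factor = 0.0
    for L in np.unique(levels[:, f]):
        mask = levels[:, f] == L
        ss_factor += mask.sum() * (y[mask].mean() - grand_mean) ** 2
    print(f"{name}: contribution = {100 * ss_factor / ss_total:.1f} %")
```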

  20. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  1. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  2. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  3. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  4. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is to compare, through Monte Carlo simulation, several propensity score methods for approximating factorial experimental designs, and to identify the best approaches for reducing bias and mean square error in the parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  5. Model Selection in Systems Biology Depends on Experimental Design

    PubMed Central

    Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.

    2014-01-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483

  6. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  7. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  8. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    ...computational tools have been inadequate. Our goal has been to develop new mathematical formulations, estimation approaches, and approximation strategies...previous suboptimal approaches. Subject terms: computational mathematics; optimal experimental design; uncertainty quantification; Bayesian inference.

  9. Optimization of wire Electrical Discharge turning operations using robust design of experiment

    NASA Astrophysics Data System (ADS)

    Mohammadi, Aminollah; Fadaei Tehrani, Alireza; Safari, Mahdi

    2011-01-01

    In the present study, a multi-response optimization method using Taguchi's robust design approach is proposed for wire electrical discharge turning (WEDT) operations. Experimentation was planned as per Taguchi's L18 orthogonal array. Each experiment was performed under different machining conditions of power, servo, voltage, pulse off time, wire tension, wire feed speed, and rotational speed. Three responses, namely material removal rate (MRR), surface roughness, and roundness, were considered for each experiment. The machining parameters are optimized with respect to the multiple response characteristics of material removal rate, surface roughness, and roundness. A multi-response S/N (MRSN) ratio is applied to measure the deviation of the performance characteristics from their desired values. Analysis of variance (ANOVA) is employed to identify the level of importance of the machining parameters for the considered multiple performance characteristics. Finally, a confirmation experiment was carried out to verify the effectiveness of the proposed method.
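    A minimal sketch of one way to form a multi-response S/N (MRSN) index: compute an S/N ratio per response, rescale each, and combine them with weights into a single performance value per run. The run data, response directions, and equal weights below are assumptions for illustration, not the paper's L18 results.

```python
import numpy as np

# rows = runs; columns = MRR (larger-better), Ra (smaller-better), roundness (smaller-better)
y = np.array([
    [0.80, 2.1, 12.0],
    [1.10, 2.6, 15.0],
    [0.95, 1.9, 10.0],
    [1.30, 3.0, 18.0],
])
larger_is_better = [True, False, False]
weights = np.array([1 / 3, 1 / 3, 1 / 3])   # assumed equal importance

sn = np.empty_like(y)
for j, lb in enumerate(larger_is_better):
    col = y[:, j]
    if lb:   # larger-the-better: S/N = -10 log10(mean(1/y^2))
        sn[:, j] = -10 * np.log10(1.0 / col**2)
    else:    # smaller-the-better: S/N = -10 log10(mean(y^2))
        sn[:, j] = -10 * np.log10(col**2)

# rescale each S/N column to [0, 1] so responses with different ranges are comparable
scaled = (sn - sn.min(axis=0)) / (sn.max(axis=0) - sn.min(axis=0))
mrsn = scaled @ weights
print("MRSN per run:", np.round(mrsn, 3))
print("best run:", int(mrsn.argmax()) + 1)
```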

  10. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. Results We performed the initial stages of characterizing our system

  11. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    Some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response are unified. The techniques have the potential to reduce the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.
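    The closing step described above can be illustrated with a short least-squares sketch: fit a second-order prediction equation to test data and solve for its stationary point inside the region. The two-variable data and coefficients are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 2))          # coded design variables x1, x2

def true_response(x):
    return 5 + 2*x[:, 0] - 3*x[:, 1] - 4*x[:, 0]**2 - 2*x[:, 1]**2 + x[:, 0]*x[:, 1]

y = true_response(X) + rng.normal(0, 0.1, size=20)

# model: y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b11, b22, b12 = coef

# stationary point: solve grad = 0, i.e. b + 2 B x = 0
B = np.array([[b11, b12 / 2], [b12 / 2, b22]])
b = np.array([b1, b2])
x_star = np.linalg.solve(2 * B, -b)
kind = "maximum" if np.all(np.linalg.eigvals(B) < 0) else "minimum or saddle"
print("stationary point:", np.round(x_star, 3), "->", kind)
```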

  12. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  13. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the computational simulation method development. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations to screen the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  14. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of one given design via the objective function) and stochastic optimization methods like the genetic algorithm (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.

  15. Fabrication optimisation of carbon fiber electrode with Taguchi method.

    PubMed

    Cheng, Ching-Ching; Young, Ming-Shing; Chuang, Chang-Lin; Chang, Ching-Chang

    2003-07-01

    In this study, we describe an optimised procedure for fabricating carbon fiber electrodes using the Taguchi quality engineering method (TQEM). The preliminary results show an S/N ratio improvement from 22 to 30 dB. The optimised parameters were tested using a glass micropipette (0.3 mm outer/2.5 mm inner length of carbon fiber) dipped into PBS solution under 2.9 V triangle-wave electrochemical processing for 15 s, followed by a coating treatment of the micropipette at 2.6 V DC for 45 s in 5% Nafion solution. It is thus shown that Taguchi process optimisation can improve the cost, manufacturing time and quality of carbon fiber electrodes.

  16. New Design of Control and Experimental System of Windy Flap

    NASA Astrophysics Data System (ADS)

    Yu, Shanen; Wang, Jiajun; Chen, Zhangping; Sun, Weihua

    Experiments associated with control principles for automation majors are generally based on MATLAB simulation and are not well integrated with physical control objects. The experimental system described here aims to meet teaching and study requirements and to provide an experimental platform for learning the principles of automatic control, MCUs, embedded systems, etc. The main research content comprises the design of angle measurement, the control and drive module, and PC software. An MPU6050 is used for angle measurement, a PID control algorithm is used to drive the flap to the target angle, and PC software is used for display, analysis, and processing.
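    A minimal sketch of the control idea, assuming a toy first-order flap model: a discrete PID loop drives the flap angle toward a target. The gains, sample time, and plant dynamics are invented for illustration; this is not the authors' implementation.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# toy plant: flap angle responds to drive u with a first-order lag
dt, angle, target = 0.01, 0.0, 30.0          # seconds, degrees, degrees
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
for step in range(500):
    u = pid.update(target, angle)
    angle += dt * (u - 0.5 * angle)          # crude first-order dynamics
    if step % 100 == 0:
        print(f"t = {step * dt:4.2f} s, angle = {angle:6.2f} deg")
```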

  17. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  18. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  19. Influence of process parameters on the content of biomimetic calcium phosphate coating on titanium: a Taguchi analysis.

    PubMed

    Thammarakcharoen, Faungchat; Suvannapruk, Waraporn; Suwanprateeb, Jintamai

    2014-10-01

    In this study, a statistical design-of-experiments methodology based on a Taguchi orthogonal design has been used to study the effect of various processing parameters on the amount of biomimetic calcium phosphate coating produced on titanium. Seven control factors with three levels each, including sodium hydroxide concentration, pretreatment temperature, pretreatment time, cleaning method, coating time, coating temperature and surface area to solution volume ratio, were studied. X-ray diffraction revealed that all the coatings consisted of a mixture of octacalcium phosphate (OCP) and hydroxyapatite (HA), and the presence of each phase depended on the process conditions used. Depending on the process conditions employed, either isolated spheroid particles of varying content and size (~1-100 μm) with a nanosized plate-like morphology were deposited on the titanium surface, or a continuous layer of plate-like nanocrystals was formed, with plate thicknesses in the range of ~100-300 nm and plate widths in the range of 3-8 μm. The optimum condition for producing the greatest amount of coating on the titanium surface (sodium hydroxide concentration of 1 M, pretreatment temperature of 70 °C, pretreatment time of 24 h, ultrasonic cleaning, coating time of 6 h, coating temperature of 50 °C and surface area to solution volume ratio of 32.74) was predicted and validated. In addition, coating temperature was found to be the dominant factor with the greatest contribution to coating formation, while coating time and cleaning method were also significant factors. The other factors had negligible effects on the coating performance.

  20. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter boom interferometer to operate in the FIR (30-90 micron) on a high altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  1. Preliminary Findings: Design of Experiments University XXI (FY05) Research and Support for the US Army

    DTIC Science & Technology

    2007-03-01

    ...Taguchi methods. OTC primarily uses Taguchi and OFAT-type methods in its testing methods and designs. A fundamental concept of DOE is determining which...allows for the interaction of variables. Whereas identifying interactions is a pronounced strength of DOE, most other methods, including the Taguchi...method and one-factor-at-a-time (OFAT), cannot calculate variable interactions. Variable interactions are statistically attributed as a main effect of...

  2. Study on surface finish of AISI 2080 steel based on the Taguchi method

    NASA Astrophysics Data System (ADS)

    Yalcinkaya, S.; Şahin, Y.

    2017-02-01

    Surface finish and dimensional accuracy play a vital role in manufacturing engineering applications. Grinding is one of the most important methods for producing a better surface quality. This paper describes a study of the influence of cutting parameters such as table speed, depth of cut and feed rate on the surface finish of AISI 2080 steel, based on the Taguchi (L27) method. The experimental results showed that table speed was the machining parameter with the greatest effect on the surface finish, followed by depth of cut, whereas feed rate showed no significant effect. Analysis of variance indicated that a better surface finish was obtained at 190 m/min table speed, 0.003 mm depth of cut and 0.08 mm/rev feed rate.

  3. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  4. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    SciTech Connect

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coil, the shielding and cooling requirements, and the heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  5. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  6. Design and Experimental Results for the S414 Airfoil

    DTIC Science & Technology

    2010-08-01

    ...of most current general-aviation aircraft, including business jets, as well as unmanned aerial vehicles and all sailplanes. It does, however... (RDECOM TR 10-D-112, U.S. Army Research, Development and Engineering Command; Design and Experimental Results for the S414 Airfoil, by Dan M. Somers and Mark D. Maughmer, Airfoils, Incorporated, 122 Rose Drive, Port Matilda, PA 16870-7535; August 2010; final report.)

  7. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career, also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  8. Experimental Design for Combinatorial and High Throughput Materials Development

    NASA Astrophysics Data System (ADS)

    Cawse, James N.

    2002-12-01

    In the past decade, combinatorial and high throughput experimental methods have revolutionized the pharmaceutical industry, allowing researchers to conduct more experiments in a week than was previously possible in a year. Now high throughput experimentation is rapidly spreading from its origins in the pharmaceutical world to larger industrial research establishments such as GE and DuPont, and even to smaller companies and universities. Consequently, researchers need to know the kinds of problems, desired outcomes, and appropriate patterns for these new strategies. Editor James Cawse's far-reaching study identifies and applies, with specific examples, these important new principles and techniques. Experimental Design for Combinatorial and High Throughput Materials Development progresses from methods that are now standard, such as gradient arrays, to mathematical developments that are breaking new ground. The former will be particularly useful to researchers entering the field, while the latter should inspire and challenge advanced practitioners. The book's contents are contributed by leading researchers in their respective fields. Chapters include: High Throughput Synthetic Approaches for the Investigation of Inorganic Phase Space; Combinatorial Mapping of Polymer Blends Phase Behavior; Split-Plot Designs; Artificial Neural Networks in Catalyst Development; and The Monte Carlo Approach to Library Design and Redesign. This book also contains over 200 useful charts and drawings. Industrial chemists, chemical engineers, materials scientists, and physicists working in combinatorial and high throughput chemistry will find James Cawse's study to be an invaluable resource.

  9. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design

    PubMed Central

    YANG, YU; BAI, WENKUN; CHEN, YINI; LIN, YANDUAN; HU, BING

    2015-01-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximize the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm2; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3^6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained. PMID:26722279

  10. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design.

    PubMed

    Yang, Y U; Bai, Wenkun; Chen, Yini; Lin, Yanduan; Hu, Bing

    2015-11-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximize the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm(2); frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3^6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained.

  11. Robust Design: Seeking the Best of All Possible Worlds

    DTIC Science & Technology

    2000-12-01

    ...S. M. Sanchez. 1992. A critique and enhancement of the Taguchi method. ASQC Quality Congress Transactions 491–498. Ramberg, J. S., S. M. Sanchez, P...J. Sanchez and L. W. Hollick. 1991. Designing simulation experiments: Taguchi methods and response surface metamodels. In Proceedings of the 1994...

  12. Design and experimental study of a novel giant magnetostrictive actuator

    NASA Astrophysics Data System (ADS)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Dongwei; Huang, Yingjie; Xie, Wenqiang

    2016-12-01

    Giant magnetostrictive actuators have been widely used in precision driving applications because of their excellent performance. However, in driving a switching valve, especially the ball valve in an electronically controlled injector, the actuator cannot exhibit its full performance because of limits in output displacement and response speed. A novel giant magnetostrictive actuator, which can reach its maximum displacement without a bias magnetic field, is designed in this paper. At the same time, elongation of the giant magnetostrictive material is converted into shortening of the actuator's axial dimension with the help of a T-shaped output rod. Furthermore, to reduce the response time, a driving voltage with a high opening voltage and a low holding voltage is designed. Response time and output displacement are studied experimentally with the help of a measuring system. The measured results show that the designed driving voltage improves the response speed of the actuator displacement quite effectively, and that the actuator can output various steady-state displacements to achieve a range of driving effects.

  13. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2012-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on the acquisition of suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for the source and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has proved to be an efficient technique for examining a wide range of possible surveys and selecting the one that gives superior resolution. Each configuration is associated with one value of the objective function that characterizes the quality of this particular design. Here, we describe the method used to optimize an experimental design. Then, we validate this new technique and explore the different issues of experimental design by simulating a CSEM survey with a realistic 1D layered model.

  14. Study of Titanium Alloy Sheet During H-sectioned Rolling Forming Using the Taguchi Method

    SciTech Connect

    Chen, D.-C.; Gu, W.-S.; Hwang, Y.-M.

    2007-05-17

    This study employs commercial DEFORM three-dimensional finite element code to investigate the plastic deformation behavior of Ti-6Al-4V titanium alloy sheet during the H-sectioned rolling process. The simulations are based on a rigid-plastic model and assume that the upper and lower rolls are rigid bodies and that the temperature rise induced during rolling is sufficiently small that it can be ignored. The effects of the roll profile, the friction factor between the rolls and the titanium alloy, the rolling temperature and the roll radii on the rolling force, the roll torque and the effective strain induced in the rolled product are examined. The Taguchi method is employed to optimize the H-sectioned rolling process parameters. The results confirm the effectiveness of this robust design methodology in optimizing the H-sectioned rolling process parameters for the current Ti-6Al-4V titanium alloy.

  15. Nitric acid treated multi-walled carbon nanotubes optimized by Taguchi method

    NASA Astrophysics Data System (ADS)

    Shamsuddin, Shahidah Arina; Derman, Mohd Nazree; Hashim, Uda; Kashif, Muhammad; Adam, Tijjani; Halim, Nur Hamidah Abdul; Tahir, Muhammad Faheem Mohd

    2016-07-01

    The electron transfer rate (ETR) of CNTs can be enhanced by increasing the amount of COOH groups on their walls and opened tips. With the aim of achieving the highest yield of COOH, Taguchi robust design has been used for the first time to optimize the surface modification of MWCNTs by nitric acid oxidation. Three main oxidation parameters, namely the acid concentration, treatment temperature and treatment time, were selected as the control factors to be optimized. The amount of COOH produced was measured using FTIR spectroscopy through the absorbance intensity. From the analysis, we found that acid concentration and treatment time had the most important influence on the production of COOH, while the treatment temperature had only an intermediate effect. The optimum amount of COOH can be achieved by treatment with 8.0 M nitric acid at 120 °C for 2 hours.

  16. Process improvement in laser hot wire cladding for martensitic stainless steel based on the Taguchi method

    NASA Astrophysics Data System (ADS)

    Huang, Zilin; Wang, Gang; Wei, Shaopeng; Li, Changhong; Rong, Yiming

    2016-09-01

    Laser hot wire cladding, with the prominent features of low heat input, high energy efficiency, and high precision, is widely used for remanufacturing metal parts. The cladding process, however, needs to be improved by using a quantitative method. In this work, the volumetric defect ratio was proposed as the criterion to describe the forming-quality integrity of cladding layers. Laser deposition experiments with FV520B, a martensitic stainless steel, were designed using the Taguchi method. Four process variables, namely, laser power (P), scanning speed (Vs), wire feed rate (Vf), and wire current (I), were optimized based on the analysis of the signal-to-noise (S/N) ratio. Metallurgical observation of the cladding layer was conducted to compare the forming quality and to validate the analysis method. A stable and continuous process with the optimum parameter combination produced a uniform microstructure with minimal defects and cracks, which resulted in a good metallurgical bonding interface.

  17. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
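    A brute-force illustration of the entropy criterion described above (not the nested entropy sampling algorithm itself): for each candidate experiment, the predictions of a set of probable models are histogrammed, and the experiment whose predicted-outcome distribution has the largest Shannon entropy is selected. The exponential-decay model and parameter samples are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# probable models: y = a * exp(-b * t); the (a, b) samples stand in for a
# posterior over model parameters
models = rng.uniform([0.5, 0.1], [2.0, 1.0], size=(200, 2))

candidate_times = np.linspace(0.1, 10.0, 50)   # candidate experiments (measurement times)

def outcome_entropy(t, models, n_bins=15):
    predictions = models[:, 0] * np.exp(-models[:, 1] * t)
    counts, _ = np.histogram(predictions, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))              # Shannon entropy of predicted outcomes

entropies = [outcome_entropy(t, models) for t in candidate_times]
best = candidate_times[int(np.argmax(entropies))]
print(f"most informative measurement time ~ {best:.2f}")
```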

  18. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  19. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  20. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  1. Plant metabolomics: from experimental design to knowledge extraction.

    PubMed

    Rai, Amit; Umashankar, Shivshankar; Swarup, Sanjay

    2013-01-01

    Metabolomics is one of the most recent additions to the functional genomics approaches. It involves the use of analytical chemistry techniques to provide high-density data of metabolic profiles. The data are then analyzed using advanced statistics and databases to extract biological information, thus providing the metabolic phenotype of an organism. The large variety of metabolites produced by plants through their complex metabolic networks, and the dynamic changes of these metabolites in response to various perturbations, can be studied using metabolomics. Here, we describe the basic features of plant metabolic diversity and the analytical methods used to characterize it, including experimental workflows that span experimental design, sample preparation, hardware and software choices, and knowledge extraction methods. Finally, we describe a scenario for using these workflows to identify differential metabolites and their pathways from complex biological samples.

  2. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  3. Technological issues and experimental design of gene association studies.

    PubMed

    Distefano, Johanna K; Taverna, Darin M

    2011-01-01

    Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.

  4. Teaching Experimental Design Using an Exercise in Protein Fractionation

    NASA Astrophysics Data System (ADS)

    Loke, J. P.; Hancock, D.; Johnston, J. M.; Dimauro, J.; Denyer, G. S.

    2001-11-01

    This experiment, suitable for introductory biochemistry courses, presents the techniques of protein purification as a problem-solving exercise. Students must identify and purify three proteins from an unknown mixture using the techniques of gel filtration, ion exchange chromatography, UV and visible spectrophotometry, and gel electrophoresis. To aid construction of a strategy, they are given some information about each of the possible proteins: source, function, molecular weight, pI, and UV and visible spectra. From this they must design their own purification protocols and carry out the experimental work. To develop students' computer skills, the experimental results and the logic used in the identification are presented as a short computer-generated report.

  5. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of an experimental design multivariate strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase were the variables for the hydrophilic phase (quantitative factors). The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximizing the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data for the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug's anaesthetic effect with respect to the starting reference liposomal formulation, thus demonstrating its better therapeutic effectiveness.
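    As a sketch of the D-optimality idea behind such designs, the code below picks, from a small candidate grid, the subset of runs that maximizes det(X'X) for an assumed two-factor model with interaction. The candidate grid, the model, and the run budget are illustrative assumptions, not the study's formulation factors.

```python
import itertools
import numpy as np

# candidate settings for two quantitative factors (coded -1, 0, +1)
candidates = np.array(list(itertools.product([-1, 0, 1], repeat=2)), dtype=float)

def model_matrix(points):
    x1, x2 = points[:, 0], points[:, 1]
    return np.column_stack([np.ones(len(points)), x1, x2, x1 * x2])

n_runs = 5
best_det, best_subset = -np.inf, None
for subset in itertools.combinations(range(len(candidates)), n_runs):
    X = model_matrix(candidates[list(subset)])
    d = np.linalg.det(X.T @ X)           # D-optimality criterion
    if d > best_det:
        best_det, best_subset = d, subset

print("D-optimal runs:\n", candidates[list(best_subset)])
print("det(X'X) =", round(float(best_det), 2))
```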

  6. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rely on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed.

  7. Considerations in Writing About Single-Case Experimental Design Studies.

    PubMed

    Skolasky, Richard L

    2016-12-01

    Single-case experimental design (SCED) studies are particularly useful for examining the processes and outcomes of psychological and behavioral studies. Accurate reporting of SCED studies is critical in explaining the study to the reader and allowing replication. This paper outlines important elements that authors should cover when reporting the results of a SCED study. Authors should provide details on the participant, independent and dependent variables under examination, materials and procedures, and data analysis. Particular emphasis should be placed on justifying the assumptions made and explaining how violations of these assumptions may alter the results of the SCED study.

  8. Designing artificial enzymes from scratch: Experimental study and mesoscale simulation

    NASA Astrophysics Data System (ADS)

    Komarov, Pavel V.; Zaborina, Olga E.; Klimova, Tamara P.; Lozinsky, Vladimir I.; Khalatur, Pavel G.; Khokhlov, Alexey R.

    2016-09-01

    We present a new concept for designing biomimetic analogs of enzymatic proteins; these analogs are based on synthetic protein-like copolymers. α-Chymotrypsin is used as a prototype of the artificial catalyst. Our experimental study shows that, in the course of free radical copolymerization of hydrophobic and hydrophilic monomers, the target globular nanostructures with a "core-shell" morphology appear in a selective solvent. Using a mesoscale computer simulation, we show that the protein-like globules can have a large number of catalytic centers located at the hydrophobic core/hydrophilic shell interface.

  9. Design of vibration compensation interferometer for Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Li, G. S.; Liu, H. Q.; Jie, Y. X.; Ding, W. X.; Brower, D. L.; Zhu, X.; Wang, Z. X.; Zeng, L.; Zou, Z. Y.; Wei, X. C.; Lan, T.

    2014-11-01

    A vibration compensation interferometer (wavelength 0.532 μm) has been designed and tested for the Experimental Advanced Superconducting Tokamak (EAST). It is designed as a sub-system of the EAST far-infrared (wavelength 432.5 μm) polarimeter/interferometer system. Two acousto-optic modulators are used to produce the 1 MHz intermediate frequency. The path length drift of the system was lower than 2 wavelengths over a 10 min test, demonstrating the system's stability. The system sensitivity was tested by applying a periodic vibration source to one mirror in the system; the measured vibration matches the source period. The system is expected to be installed on EAST by the end of 2014.

  10. Logical Experimental Design and Execution in the Biomedical Sciences.

    PubMed

    Holder, Daniel J; Marino, Michael J

    2017-03-17

    Lack of reproducibility has been highlighted as a significant problem in biomedical research. The present unit is devoted to describing ways to help ensure that research findings can be replicated by others, with a focus on the design and execution of laboratory experiments. Essential components for this include clearly defining the question being asked, using available information or information from pilot studies to aid in the design of the experiment, and choosing manipulations under a logical framework based on Mill's "methods of knowing" to build confidence in putative causal links. Final experimental design requires systematic attention to detail, including the choice of controls, sample selection, blinding to avoid bias, and the use of power analysis to determine the sample size. Execution of the experiment is done with care to ensure that the independent variables are controlled and the measurements of the dependent variables are accurate. While there are always differences among laboratories with respect to technical expertise, equipment, and suppliers, execution of the steps itemized in this unit will ensure well-designed and well-executed experiments to answer any question in biomedical research. © 2017 by John Wiley & Sons, Inc.

  11. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating-reactor accident, can be ensured by spraying water drops. The spray reduces the pressure and temperature by condensation of steam on the cold water drops. The most stringent thermodynamic conditions are a pressure of 5 × 10^5 Pa (due to steam emission) and a temperature of 413 K. Moreover, beyond its energy-dissipation function, the spray washes out the fission-product particles emitted into the reactor building atmosphere. The present study is part of a larger program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed around fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of these criteria on the design, and the need to be representative of the real conditions, will be described.

  12. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.

  13. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
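
    The pair-of-models search described above can be sketched as a simple penalized optimization: find two parameter sets that both fit the data acceptably while their predictions at an unobserved point differ as much as possible. The model form, data, tolerance, prediction time and penalty reformulation below are all illustrative placeholders, not the authors' algorithm or data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup: exponential-decay model fit to sparse, noisy observations (made-up values).
model = lambda th, t: th[0] * np.exp(-th[1] * t)
t_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.array([4.9, 2.1, 0.9])
sse = lambda th: np.sum((model(th, t_obs) - y_obs) ** 2)

best = minimize(sse, x0=[5.0, 0.8])
tol = 1.5 * best.fun + 1e-6            # "good fit" = SSE within 50% of the best fit (assumed)

t_star = 6.0                            # prediction time not covered by the data

def neg_deviation(z, penalty=1e3):
    """Negative prediction gap at t_star, penalizing fit-constraint violations."""
    th1, th2 = z[:2], z[2:]
    dev = abs(model(th1, t_star) - model(th2, t_star))
    slack = max(0.0, sse(th1) - tol) + max(0.0, sse(th2) - tol)
    return -dev + penalty * slack

res = minimize(neg_deviation, x0=np.concatenate([best.x, best.x * 1.1]),
               method="Nelder-Mead", options={"maxiter": 5000})
th1, th2 = res.x[:2], res.x[2:]
print("prediction deviation at t* =", round(abs(model(th1, t_star) - model(th2, t_star)), 4))
```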

  14. Parametric study of the biopotential equation for breast tumour identification using ANOVA and Taguchi method.

    PubMed

    Ng, Eddie Y K; Ng, W Kee

    2006-03-01

    Extensive literature has shown a significant trend of progressive electrical changes according to the proliferative characteristics of breast epithelial cells. Physiologists have further postulated that malignant transformation results from sustained depolarization and a failure of the cell to repolarize after cell division, making the area where cancer develops relatively depolarized compared with its non-dividing or resting counterparts. In this paper, we present a new approach, the Biofield Diagnostic System (BDS), which might have the potential to augment the process of diagnosing breast cancer. This technique is based on the efficacy of analysing skin surface electrical potentials for the differential diagnosis of breast abnormalities. We developed a female breast model, close to the actual breast, by considering the breast as a hemisphere in the supine position with various layers of unequal thickness. Isotropic homogeneous conductivity was assigned to each of these compartments, and the volume conductor problem was solved using the finite element method to determine the potential distribution developed due to a dipole source. Furthermore, four important parameters were identified and analysis of variance (ANOVA, Yates' method) was performed using a 2^n factorial design (n = number of parameters = 4). The effect and importance of these parameters were analysed. The Taguchi method was further used to optimise the parameters in order to ensure that the signal from the tumour is maximal compared with the noise from other factors. The Taguchi analysis showed that the probe source strength, tumour size and tumour location have a great effect on the surface potential field. For the best results on the breast surface, low amplitudes of current should be applied as near as possible to the breast surface, with the largest possible tumour size.
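
    The Yates' method step mentioned above can be illustrated by running Yates' algorithm on a full 2^4 factorial. The sketch below uses randomly generated surface-potential values purely for illustration; only the algorithm itself, not the data or factor labels, reflects the abstract.

```python
import numpy as np

def yates_effects(responses, factor_names):
    """Yates' algorithm for a full 2^n factorial given in standard order.

    responses: observations ordered (1), a, b, ab, c, ac, bc, abc, ...
    Returns a dict mapping effect labels (e.g. 'A', 'AB') to estimated effects.
    """
    y = np.asarray(responses, dtype=float)
    n = len(factor_names)
    assert len(y) == 2 ** n, "need 2**n responses in standard order"
    col = y.copy()
    for _ in range(n):                        # n passes: pairwise sums, then differences
        pairs = col.reshape(-1, 2)
        col = np.concatenate([pairs.sum(axis=1), pairs[:, 1] - pairs[:, 0]])
    effects = {}
    for idx in range(1, 2 ** n):              # idx 0 holds the grand total
        label = "".join(f for bit, f in enumerate(factor_names) if (idx >> bit) & 1)
        effects[label] = col[idx] / 2 ** (n - 1)
    return effects

# Hypothetical surface-potential responses for four parameters (not the paper's data).
rng = np.random.default_rng(0)
demo = rng.normal(10.0, 1.0, size=16)
for name, eff in sorted(yates_effects(demo, "ABCD").items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>4s}: {eff:+.3f}")
```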

  15. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  16. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence

  17. Experimental design in phylogenetics: testing predictions from expected information.

    PubMed

    San Mauro, Diego; Gower, David J; Cotton, James A; Zardoya, Rafael; Wilkinson, Mark; Massingham, Tim

    2012-07-01

    Taxon and character sampling are central to phylogenetic experimental design; yet, we lack general rules. Goldman introduced a method to construct efficient sampling designs in phylogenetics, based on the calculation of expected Fisher information given a probabilistic model of sequence evolution. The considerable potential of this approach remains largely unexplored. In an earlier study, we applied Goldman's method to a problem in the phylogenetics of caecilian amphibians and made an a priori evaluation and testable predictions of which taxon additions would increase information about a particular weakly supported branch of the caecilian phylogeny by the greatest amount. We have now gathered mitogenomic and rag1 sequences (some newly determined for this study) from additional caecilian species and studied how information (both expected and observed) and bootstrap support vary as each new taxon is individually added to our previous data set. This provides the first empirical test of specific predictions made using Goldman's method for phylogenetic experimental design. Our results empirically validate the top 3 (more intuitive) taxon addition predictions made in our previous study, but only information results validate unambiguously the 4th (less intuitive) prediction. This highlights a complex relationship between information and support, reflecting that each measures different things: Information is related to the ability to estimate branch length accurately and support to the ability to estimate the tree topology accurately. Thus, an increase in information may be correlated with but does not necessitate an increase in support. Our results also provide the first empirical validation of the widely held intuition that additional taxa that join the tree proximal to poorly supported internal branches are more informative and enhance support more than additional taxa that join the tree more distally. Our work supports the view that adding more data for a single (well

  18. An intelligent approach to the discovery of luminescent materials using a combinatorial approach combined with Taguchi methodology.

    PubMed

    Chen, Lei; Chu, Cheng-I; Chen, Kuo-Ju; Chen, Po-Yuan; Hu, Shu-Fen; Liu, Ru-Shi

    2011-01-01

    A significant advance in combinatorial-approach research has been the shift in emphasis from simple mixing to intelligent screening, so as to improve the efficiency and accuracy of discovering new materials from a larger number of diverse compositions. In this study, the long-lasting luminescence of SrAl(2)O(4) co-doped with Eu(2+), Ce(3+), Dy(3+), Li(+) and H(3)BO(3) was investigated using a combinatorial approach in conjunction with the Taguchi method. A minimal set of 16 test samples (five dopants, each at four concentration levels) was designed using the Taguchi method. The samples to be screened were synthesized using a parallel combinatorial strategy based on ink-jetting of precursors into an array of micro-reactor wells. The relative brightness of the luminescence of the different phosphors over a particular period was assessed. Ce(3+) was identified as the constituent that detrimentally affected long-lasting luminescence, and its concentration was optimized to zero. Li(+) had a minor effect on long-lasting luminescence, but the main factors contributing to the objective property (long-lasting luminescence) were Eu(2+), Dy(3+) and H(3)BO(3); the concentrations of these dopants were optimized to 0.020, 0.030 and 0.300, respectively, for co-doping into SrAl(2)O(4). This study demonstrates that the utility of the combinatorial approach for evaluating the effect of components on an objective property (e.g. phosphorescence) and estimating the expected performance under the optimal conditions can be improved by the Taguchi method.

  19. Experimental design schemes for learning Boolean network models

    PubMed Central

    Atias, Nir; Gershenzon, Michal; Labazin, Katia; Sharan, Roded

    2014-01-01

    Motivation: A holy grail of biological research is a working model of the cell. Current modeling frameworks, especially in the protein–protein interaction domain, are mostly topological in nature, calling for stronger and more expressive network models. One promising alternative is logic-based or Boolean network modeling, which was successfully applied to model signaling regulatory circuits in human. Learning such models requires observing the system under a sufficient number of different conditions. To date, the amount of measured data is the main bottleneck in learning informative Boolean models, underscoring the need for efficient experimental design strategies. Results: We developed novel design approaches that greedily select an experiment to be performed so as to maximize the difference or the entropy in the results it induces with respect to current best-fit models. Unique to our maximum difference approach is the ability to account for all (a possibly exponential number of) Boolean models displaying high fit to the available data. We applied both approaches to simulated and real data from the EGFR and IL1 signaling systems in human. We demonstrate the utility of the developed strategies in substantially improving on a random selection approach. Our design schemes highlight the redundancy in these datasets, leading to up to 11-fold savings in the number of experiments to be performed. Availability and implementation: Source code will be made available upon acceptance of the manuscript. Contact: roded@post.tau.ac.il PMID:25161232
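
    The greedy selection idea can be illustrated with a toy example: given a handful of candidate Boolean models that all fit the current data, pick the next experiment whose predicted outcome they disagree on most (maximum entropy). The models and inputs below are invented for illustration and are unrelated to the EGFR/IL1 networks or the authors' code.

```python
from collections import Counter
from itertools import product
from math import log2

# Toy candidate Boolean models: each maps a 3-bit input (stimuli on/off) to one output.
# These hypotheses stand in for alternative network extensions; they are illustrative only.
candidates = [
    lambda a, b, c: a and b,
    lambda a, b, c: a or c,
    lambda a, b, c: (a and b) or c,
    lambda a, b, c: a and not c,
]

def prediction_entropy(inp, models):
    """Entropy (bits) of the output distribution predicted by the candidate model set."""
    counts = Counter(bool(m(*inp)) for m in models)
    total = sum(counts.values())
    return -sum(k / total * log2(k / total) for k in counts.values())

# Greedy design: pick the experiment whose outcome the candidate models disagree on most.
experiments = list(product([0, 1], repeat=3))
best = max(experiments, key=lambda inp: prediction_entropy(inp, candidates))
print("next experiment (inputs a, b, c):", best,
      "entropy =", round(prediction_entropy(best, candidates), 3))
```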

  20. Protein design algorithms predict viable resistance to an experimental antifolate.

    PubMed

    Reeve, Stephanie M; Gainza, Pablo; Frey, Kathleen M; Georgiev, Ivelin; Donald, Bruce R; Anderson, Amy C

    2015-01-20

    Methods to accurately predict potential drug target mutations in response to early-stage leads could drive the design of more resilient first generation drug candidates. In this study, a structure-based protein design algorithm (K* in the OSPREY suite) was used to prospectively identify single-nucleotide polymorphisms that confer resistance to an experimental inhibitor effective against dihydrofolate reductase (DHFR) from Staphylococcus aureus. Four of the top-ranked mutations in DHFR were found to be catalytically competent and resistant to the inhibitor. Selection of resistant bacteria in vitro reveals that two of the predicted mutations arise in the background of a compensatory mutation. Using enzyme kinetics, microbiology, and crystal structures of the complexes, we determined the fitness of the mutant enzymes and strains, the structural basis of resistance, and the compensatory relationship of the mutations. To our knowledge, this work illustrates the first application of protein design algorithms to prospectively predict viable resistance mutations that arise in bacteria under antibiotic pressure.

  1. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

    Experimental allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS)1,2. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset3-5. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents6-9, and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  2. Effect and interaction study of acetamiprid photodegradation using experimental design.

    PubMed

    Tassalit, Djilali; Chekir, Nadia; Benhabiles, Ouassila; Mouzaoui, Oussama; Mahidine, Sarah; Merzouk, Nachida Kasbadji; Bentahar, Fatiha; Khalil, Abbas

    2016-10-01

    An experimental design methodology, implemented with the MODDE 6.0 software, was used to study acetamiprid photodegradation as a function of the operating parameters, such as the initial concentration of acetamiprid, the concentration and type of catalyst used, and the initial pH of the medium. The results showed the importance of the pollutant concentration for the acetamiprid degradation rate. On the other hand, the amount and type of catalyst used have a considerable influence on the elimination kinetics of this pollutant. The degradation of acetamiprid, an environmental pesticide pollutant, via UV irradiation in the presence of titanium dioxide was assessed and optimized using response surface methodology with a D-optimal design. The acetamiprid degradation ratio was found to be sensitive to the different factors studied. The maximum value of discoloration under the optimum operating conditions was determined to be 99% after 300 min of UV irradiation.

  3. Purification of d-α-tocopheryl polyethylene glycol 1000 succinate (TPGS) by a temperature-modulated silica gel column chromatography: use of Taguchi method to optimize purification conditions.

    PubMed

    Chang, Yinzi; Cao, Yucheng; Zhang, Jin; Wen, Yangyi; Ren, Qilong

    2011-12-05

    The demand for high-purity d-α-tocopheryl polyethylene glycol 1000 succinate (TPGS) is increasing with the exploitation of TPGS-related products. Previously, we synthesized a TPGS mixture by esterifying vitamin E succinate with polyethylene glycol 1000. In this study, a temperature-modulated silica gel chromatographic column was used to purify the synthesized TPGS. The Taguchi method was used to optimize purification conditions associated with column temperature, loading amount, feedstock concentration and the flow rate of the mobile phases. Purification efficacy under the Taguchi-optimized conditions was predicted theoretically, and the predicted results were verified experimentally. High-performance liquid chromatography was used to quantify the unpurified and purified TPGS. The Taguchi-based analysis produced separate optimum combinations of purification conditions for TPGS purity and for recovery. Under the optimized conditions, both the theoretical prediction and the confirmatory experiment yielded TPGS purity and recovery of approximately 98% each. Notably, the study also found that column temperature had a considerable effect on purification efficacy, in particular on TPGS purity, although it was a less influential factor than loading amount and feedstock concentration.

  4. Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.

    1992-01-01

    An aerobrake concept for a Lunar transfer vehicle was weight optimized through the use of the Taguchi design method, structural finite element analyses and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter to depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum weight aerobrake configuration resulting from the study was approx. half the weight of the average of all twenty seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.

  5. The Concept of Fashion Design on the Basis of Color Coordination Using White LED Lighting

    NASA Astrophysics Data System (ADS)

    Mizutani, Yumiko; Taguchi, Tsunemasa

    This thesis focuses on the development of fashion design, in particular a dress coordinated with white LED lighting (LED). The design concept aimed at a fusion of advanced science and local culture, which makes this development highly experimental. In particular, I describe an Imperial Court dinner dress for the then Japanese First Lady, Mrs. Akie Abe, who wore it at the Imperial Court dinner for the Indonesian First Couple held in November 2006. This dress, made by Prof. T. Taguchi and the author, opens up a new field in dress design.

  6. Study of cryopreservation of articular chondrocytes using the Taguchi method.

    PubMed

    Lyu, Shaw-Ruey; Wu, Wei Te; Hou, Chien Chih; Hsieh, Wen-Hsin

    2010-04-01

    This study evaluates the effect of control factors on the cryopreservation of articular cartilage chondrocytes using the Taguchi method. Freeze-thaw experiments based on the L8(2^7) two-level orthogonal array of the Taguchi method were conducted, and ANOVA (analysis of variance) was adopted to determine the statistically significant control factors that affect cell viability. Results show that the type of cryoprotectant, freezing rate, thawing rate, and concentration of cryoprotectant (listed in order of influence) are the statistically significant control factors that affect post-thaw viability. The end temperature and the durations of the first and second stages of freezing do not affect post-thaw viability. Within the ranges of the control factors studied in this work, the optimal condition is found to be a freezing rate of 0.61+/-0.03 °C/min, a thawing rate of 126.84+/-5.57 °C/min, Me(2)SO as cryoprotectant, and a cryoprotectant concentration of 10% (v/v) for maximum cell viability. In addition, this study also explores the effect of cryopreservation on the expression of type II collagen using immunocytochemical staining and digital image processing. The results show that the ability of cryopreserved chondrocytes to express type II collagen is reduced within the first five days of monolayer culture.
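
    For readers unfamiliar with the L8(2^7) layout, the sketch below builds the array from three basic two-level contrasts and their interactions, then ranks the columns by their main effect on viability. The factor-to-column assignment and the viability values are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from itertools import product

# Build the L8(2^7) two-level orthogonal array: columns are the three basic
# contrasts a, b, c and their products ab, ac, bc, abc (coded as +/-1).
runs = np.array(list(product([-1, 1], repeat=3)))          # 8 runs x (a, b, c)
a, b, c = runs.T
L8 = np.column_stack([a, b, a * b, c, a * c, b * c, a * b * c])

factors = ["cryo_type", "cryo_conc", "freeze_rate", "thaw_rate",
           "end_temp", "stage1_time", "stage2_time"]        # assumed column assignment

# Hypothetical post-thaw viability (%) for the 8 runs -- not the paper's measurements.
viability = np.array([62, 55, 71, 80, 58, 52, 77, 85], dtype=float)

# Main effect of each column: mean response at level +1 minus mean at level -1.
effects = {name: viability[L8[:, j] == 1].mean() - viability[L8[:, j] == -1].mean()
           for j, name in enumerate(factors)}
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>12s}: {eff:+.2f}")
```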

  7. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. The proposal directly supports the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop novel molecular dynamics methods to improve the efficiency of simulations of novel TBC materials; we will perform high performance computing (HPC) studies of complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed high-temperature/high-pressure durability test rig under real syngas product compositions.

  8. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  9. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  10. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations give the first definitive look at the charging behavior of Orion UltraFlex arrays in both the low Earth orbit (LEO) and geosynchronous (GEO) environments. Note that the LEO charging environment also applies to the International Space Station (ISS), and the GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior, to answer some of the basic performance and survivability questions, and to ascertain whether a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  11. Experimental design considerations in microbiota/inflammation studies

    PubMed Central

    Moore, Robert J; Stanley, Dragana

    2016-01-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes have been studied both in human and in animal, mainly rodent, models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065

  12. Experimental investigation of design parameters on dry powder inhaler performance.

    PubMed

    Ngoc, Nguyen Thi Quynh; Chang, Lusi; Jia, Xinli; Lau, Raymond

    2013-11-30

    The study aims to investigate the impact of various design parameters of a dry powder inhaler on the turbulence intensities generated and on the performance of the inhaler. The flow fields and turbulence intensities in the dry powder inhaler are measured using particle image velocimetry (PIV). In vitro aerosolization and deposition of a blend of budesonide and lactose are measured using an Andersen Cascade Impactor. Design parameters such as the inhaler grid hole diameter, grid voidage and chamber length are considered. The experimental results reveal that the hole diameter of the grid has a negligible impact on the turbulence intensity generated in the chamber. On the other hand, hole diameters smaller than a critical size can lead to performance degradation due to excessive particle-grid collisions. An increase in grid voidage can improve the inhaler performance, but the effect diminishes at high grid voidage. An increase in the chamber length enhances the turbulence intensity generated but also increases powder adhesion on the inhaler wall.

  13. Optimization of delignification of two Pennisetum grass species by NaOH pretreatment using Taguchi and ANN statistical approach.

    PubMed

    Mohaptra, Sonali; Dash, Preeti Krishna; Behera, Sudhanshu Shekar; Thatoi, Hrudayanath

    2016-01-01

    In the bioconversion of lignocellulose to bioethanol, pretreatment appears to be the most important step; it improves the removal of lignin and hemicellulose and exposes cellulose to further hydrolysis. The present study discusses the application of statistical techniques, the Taguchi method and an artificial neural network (ANN), to the optimization of the alkali (sodium hydroxide) pretreatment of lignocellulosic biomasses such as Hybrid Napier grass (HNG) (Pennisetum purpureum) and Denanath grass (DG) (Pennisetum pedicellatum). Using the Taguchi method, a parameter combination at which both substrates can be efficiently pretreated was determined with a small number of experiments. The optimized parameters obtained from the L16 orthogonal array are a soaking time of 18 and 26 h, a temperature of 60°C and 55°C, and an alkali concentration of 1% for HNG and DG, respectively. High-performance liquid chromatography analysis of the optimally pretreated grass varieties confirmed the presence of glucan (47.94% and 46.50%), xylan (9.35% and 7.95%), arabinan (2.15% and 2.2%), and galactan/mannan (1.44% and 1.52%) for HNG and DG, respectively. Physicochemical characterization of the native and alkali-pretreated grasses was carried out by scanning electron microscopy and Fourier transform infrared spectroscopy, which revealed some morphological differences between the native and optimally pretreated samples. Model validation by ANN showed good agreement between the experimental results and the predicted responses.

  14. Numerical and experimental design of coaxial shallow geothermal energy systems

    NASA Astrophysics Data System (ADS)

    Raghavan, Niranjan

    Geothermal energy has emerged as one of the front runners in the energy race because of its performance efficiency, abundance and production competitiveness. Today, geothermal energy is used in many regions of the world as a sustainable solution for decreasing dependence on fossil fuels and reducing health hazards. However, projects related to geothermal energy have not received the recognition they deserve, owing to a lack of associated computational tools and to economic misconceptions about their installation and operation. This research focuses on numerical and experimental system design analysis of vertical shallow geothermal energy systems. The driving force is the temperature difference between a finite depth beneath the earth and its surface, which stimulates a continuous exchange of thermal energy from the subsurface to the surface (a geothermal gradient is set up). This heat gradient is captured by the circulating refrigerant, thus tapping the geothermal energy at shallow depths. Traditionally, U-bend systems, which consist of two one-inch pipes with a U-bend connector at the bottom, have been widely used in geothermal applications. Alternative systems include coaxial pipes (pipe-in-pipe), which are the main focus of this research. Studies have shown that coaxial pipes have significantly higher thermal performance characteristics than U-bend pipes, at comparable production and installation costs. This makes them a viable design upgrade to traditional piping systems. Analytical and numerical heat transfer analysis of the coaxial system is carried out with the help of the ABAQUS software. It is tested by varying independent parameters such as materials, soil conditions and the effect of thermal contact conductance on the heat transfer characteristics. With the above information, this research aims at formulating a preliminary theoretical design setup for an experimental study to quantify and compare the heat transfer characteristics of U-bend and coaxial

  15. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers and focused inside the thruster. The adopted physical model considers the two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes, operated at 150 and 300 kPa chamber pressure, were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 s. The heat loading on the walls of the calculated thrusters was also estimated and is comparable to the heat loading of conventional chemical rockets. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 s due to finite chemical reaction rates.

  16. Application of Taguchi technique coupled with grey relational analysis for multiple performance characteristics optimization of EDM parameters on ST 42 steel

    NASA Astrophysics Data System (ADS)

    Prayogo, Galang Sandy; Lusi, Nuraini

    2016-04-01

    An optimization technique for machining parameters that considers multiple performance characteristics of the non-conventional EDM machining process, using the Taguchi method combined with grey relational analysis (GRA), is presented in this study. ST 42 steel was chosen as the workpiece material and graphite as the electrode in this experiment. Performance characteristics such as material removal rate and overcut were selected to evaluate the effect of the machining parameters. Current, pulse-on time, pulse-off time and discharging time (Z down) were selected as machining parameters. The experiments were conducted by varying these machining parameters over three levels. Based on the Taguchi quality design concept, an L27 orthogonal array was chosen for the experiments. By combining GRA with the Taguchi method, the optimization of complicated multiple performance characteristics was transformed into the optimization of a single response performance index. Optimal levels of the machining parameters were identified using grey relational analysis, and analysis of variance was used to determine the relatively significant machining parameters. The result of the confirmation test indicated that the determined optimal combination of machining parameters effectively improves the performance characteristics of EDM machining of ST 42 steel.
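
    The grey relational step that collapses several responses into a single index can be sketched as follows. The EDM response values are invented for illustration, and ζ = 0.5 is the commonly used distinguishing coefficient rather than a value taken from this study.

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5):
    """Grey relational analysis: collapse several responses into one grade per run.

    responses        : (n_runs, n_responses) array of measured values
    larger_is_better : bool per response (True for MRR, False for overcut)
    """
    X = np.asarray(responses, dtype=float)
    norm = np.empty_like(X)
    for j, lb in enumerate(larger_is_better):
        lo, hi = X[:, j].min(), X[:, j].max()
        norm[:, j] = (X[:, j] - lo) / (hi - lo) if lb else (hi - X[:, j]) / (hi - lo)
    delta = 1.0 - norm                              # deviation from the ideal sequence (=1)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                       # grade = average coefficient over responses

# Hypothetical EDM data for 4 runs: [material removal rate, overcut] -- illustrative only.
data = [[12.1, 0.31], [15.4, 0.42], [9.8, 0.25], [14.0, 0.35]]
grades = grey_relational_grade(data, larger_is_better=[True, False])
print("grey relational grades:", np.round(grades, 3), "best run:", int(grades.argmax()) + 1)
```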

  17. Application of Taguchi approach to optimize the sol-gel process of the quaternary Cu2ZnSnS4 with good optical properties

    NASA Astrophysics Data System (ADS)

    Nkuissi Tchognia, Joël Hervé; Hartiti, Bouchaib; Ridah, Abderraouf; Ndjaka, Jean-Marie; Thevenin, Philippe

    2016-07-01

    The present research deals with the optimal deposition parameter configuration for the synthesis of Cu2ZnSnS4 (CZTS) thin films using the sol-gel method combined with spin coating on ordinary glass substrates, without sulfurization. A Taguchi design with an L9 (3^4) orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are used to optimize the performance characteristic (optical band gap) of the CZTS thin films. Four deposition parameters (factors) were chosen, namely the annealing temperature, the annealing time, and the Cu/(Zn + Sn) and Zn/Sn ratios. To conduct the tests using the Taguchi method, three levels were chosen for each factor. The effects of the deposition parameters on the structural and optical properties are studied, and the factors of the deposition process that most significantly affect the optical properties of the as-prepared films are determined. The results obtained by applying the Taguchi method showed that the significant parameters are the Zn/Sn ratio and the annealing temperature.
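
    A minimal sketch of the L9(3^4) response-table analysis is shown below: the mean response at each level of each factor and the range (delta) of those means give the usual Taguchi ranking of influence. The band-gap numbers are hypothetical and the factor-to-column assignment is assumed, not taken from the paper.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels coded 1..3); columns assigned to the
# four factors named in the abstract (assignment assumed for illustration).
L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])
factors = ["anneal_T", "anneal_time", "Cu/(Zn+Sn)", "Zn/Sn"]

# Hypothetical band-gap measurements (eV) for the nine runs -- illustrative only.
Eg = np.array([1.38, 1.42, 1.47, 1.51, 1.49, 1.45, 1.52, 1.48, 1.50])

# Response table: mean band gap at each level of each factor and the range (delta);
# a larger delta marks a more influential factor.
for j, name in enumerate(factors):
    level_means = [Eg[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    delta = max(level_means) - min(level_means)
    print(f"{name:>12s}  level means: {np.round(level_means, 3)}  delta = {delta:.3f}")
```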

  18. Development of a cell formation heuristic by considering realistic data using principal component analysis and Taguchi's method

    NASA Astrophysics Data System (ADS)

    Kumar, Shailendra; Sharma, Rajiv Kumar

    2015-12-01

    Over the last four decades numerous cell formation algorithms have been developed and tested, yet this research remains of interest to this day. Appropriate formation of manufacturing cells is the first step in designing a cellular manufacturing system. In cellular manufacturing, consideration of manufacturing flexibility and production-related data is vital for cell formation. Consideration of such realistic data makes the cell formation problem very complex and tedious, and has led to the invention and implementation of highly advanced and complex cell formation methods. In this paper an effort has been made to develop a simple and easy-to-understand and easy-to-implement manufacturing cell formation heuristic procedure that considers a number of production and manufacturing flexibility-related parameters. The heuristic minimizes inter-cellular movement cost/time. Further, the proposed heuristic is modified for the application of principal component analysis and Taguchi's method. A numerical example is explained to illustrate the approach. A refinement in the results is observed with the adoption of principal component analysis and Taguchi's method.

  19. Optimization of FS Welding Parameters for Improving Mechanical Behavior of AA2024-T351 Joints Based on Taguchi Method

    NASA Astrophysics Data System (ADS)

    Vidal, C.; Infante, V.

    2013-08-01

    In the present study, a design-of-experiments technique, the Taguchi method, has been used to optimize the friction stir welding (FSW) parameters for improving the mechanical behavior of AA2024-T351 joints. The parameters considered were the vertical downward forging force, the tool travel speed, and the probe length. An L9 (3^4) orthogonal array was used, and ANOVA analyses were carried out to identify the significant factors affecting tensile strength (Global Efficiency to Tensile Strength, GETS), bending strength (Global Efficiency to Bending, GEB), and the hardness field. The percentage contribution of each parameter was also determined. As a result of the Taguchi analysis in this study, the probe length is the most significant parameter for GETS, and the tool travel speed is the most important parameter affecting both GEB and the hardness field. An algebraic model for predicting the best mechanical performance, namely fatigue resistance, was developed, and the optimal FSW combination was determined using this model. The results obtained were validated by conducting confirmation tests, which verify the adequacy and effectiveness of this approach.
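
    The percentage contribution reported in such ANOVA tables is each factor's sum of squares expressed as a share of the total sum of squares. The sketch below reuses a standard L9 layout with only three columns assigned and hypothetical tensile-efficiency values, not the study's measurements; the remaining variation is left as error.

```python
import numpy as np

# Standard L9 columns for three factors (levels 1..3); response values are
# hypothetical tensile efficiencies (GETS, %) used purely for illustration.
L9 = np.array([[1,1,1],[1,2,2],[1,3,3],
               [2,1,2],[2,2,3],[2,3,1],
               [3,1,3],[3,2,1],[3,3,2]])
factors = ["forging_force", "travel_speed", "probe_length"]
y = np.array([78., 81., 85., 83., 88., 80., 90., 84., 86.])

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
for j, name in enumerate(factors):
    # Sum of squares for a 3-level column: each level appears in 3 of the 9 runs.
    ss = sum(3 * (y[L9[:, j] == lvl].mean() - grand) ** 2 for lvl in (1, 2, 3))
    print(f"{name:>14s}: SS = {ss:6.2f}  contribution = {100 * ss / ss_total:5.1f}%")
```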

  20. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and Brazil's National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full-disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for wavelength scanning near the Fe I 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and systems engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point for establishing a strong research program on solar physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  1. Exploring the Mahalanobis-Taguchi Approach to Extract Vehicle Prognostics and Diagnostics

    DTIC Science & Technology

    2014-06-01

    One component being developed within AELEIS is the incorporation of the Mahalanobis-Taguchi System (MTS) to assist with identification of impending fault conditions along with fault identification. This paper presents an ... single system-level performance metric using the Mahalanobis distance (MD) and generate fault clusters based on MD values. MD thresholds derived from ...
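
    The core MD computation behind the Mahalanobis-Taguchi System can be sketched as below. The sensor variables, reference data and threshold are invented; a full MTS would also screen variables with an orthogonal array and derive the threshold from the data, which is not shown.

```python
import numpy as np

def mahalanobis_distances(X, reference):
    """Scaled Mahalanobis distance of each row of X from a healthy reference set."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = X - mu
    k = reference.shape[1]                      # number of variables
    # MD^2 / k, the scaling commonly used in the Mahalanobis-Taguchi System
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff) / k

rng = np.random.default_rng(1)
# Hypothetical healthy baseline: e.g. temperature, vibration, current readings.
healthy = rng.normal([70.0, 2.0, 12.0], [3.0, 0.2, 1.0], size=(200, 3))
new_obs = np.array([[71.0, 2.1, 12.3],          # nominal observation
                    [85.0, 3.0, 15.5]])         # drifted observation -- candidate fault
md = mahalanobis_distances(new_obs, healthy)
threshold = 3.0                                  # illustrative; MTS derives this from data
print(np.round(md, 2), ["FAULT" if d > threshold else "ok" for d in md])
```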

  2. Using the Taguchi method for rapid quantitative PCR optimization with SYBR Green I.

    PubMed

    Thanakiatkrai, Phuvadol; Welch, Lindsey

    2012-01-01

    Here, we applied the Taguchi method, an engineering optimization process, to successfully determine the optimal conditions for three SYBR Green I-based quantitative PCR assays. This method balanced the effects of all factors and their associated levels by using an orthogonal array rather than a factorial array. Instead of running 27 experiments with the conventional factorial method, the Taguchi method achieved the same optimal conditions using only nine experiments, saving valuable resources.

  3. Plackett-Burman experimental design to facilitate syntactic foam development

    SciTech Connect

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
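
    The eight-run Plackett-Burman layout itself is easy to generate by cyclically rotating the standard generator row and appending an all-minus run; a dummy column then gives a rough noise benchmark for the main effects. The factor names and density responses below are illustrative placeholders, not the reported formulation data.

```python
import numpy as np

# Eight-run Plackett-Burman design: cyclic rotations of the standard N = 8
# generator row, plus a final row of all -1. Seven columns: six real factors
# and one dummy column used to gauge noise.
gen = np.array([+1, +1, +1, -1, +1, -1, -1])
design = np.vstack([np.roll(gen, i) for i in range(7)] + [-np.ones(7, dtype=int)])

factors = ["mix_speed", "mix_time", "accelerator", "sphere_density",
           "sphere_loading", "grade_blend", "dummy"]       # names are illustrative

# Hypothetical foam densities (g/cm^3) for the eight runs -- not the paper's data.
y = np.array([0.62, 0.58, 0.71, 0.65, 0.55, 0.60, 0.68, 0.52])

# Main effect of each column = mean(response at +1) - mean(response at -1);
# an effect no larger than the dummy column's is indistinguishable from noise.
for j, name in enumerate(factors):
    eff = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"{name:>15s}: {eff:+.4f}")
```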

  4. Experimental Designs for Testing Differences in Survival Among Salmonid Populations.

    SciTech Connect

    Hoffman, Annette; Busack, Craig; Knudsen, Craig

    1994-11-01

    The Yakima Fisheries Project (YFP) is a supplementation plan for enhancing salmon runs in the Yakima River basin. It is presumed that inadequate spawning and rearing habitat are limiting factors for the population abundance of spring chinook salmon (Oncorhynchus tshawytscha). Therefore, the supplementation effort for spring chinook salmon is focused on introducing hatchery-raised smolts into the basin to compensate for the lack of spawning habitat. However, based on empirical evidence in the Yakima basin, hatchery-reared salmon have survived poorly compared to wild salmon. Therefore, the YFP has proposed to alter the optimal conventional treatment (OCT), which is the state-of-the-art hatchery rearing method, to a new innovative treatment (NIT). The NIT is intended to produce hatchery fish that mimic wild fish and thereby to enhance their survival over that of OCT fish. A limited application of the NIT (LNIT) has also been proposed to reduce the cost of applying the new treatment while retaining the benefits of increased survival. This research was conducted to test whether the uncertainty associated with the experimental design was within the limits specified by the Planning Status Report (PSR).

  5. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGES

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; ...

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.

  6. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  7. Optimal experimental design with the sigma point method.

    PubMed

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

    Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty, expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by minimising the parameter variances iteratively. A frequently applied way to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit for OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model.
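
    A minimal sketch of the sigma point idea is shown below: a parameter mean and covariance are propagated through a nonlinear model output using Julier-Uhlmann weights. The two-parameter exponential model and its statistics are invented for illustration and are not the biological model used in the paper.

```python
import numpy as np

def unscented_propagate(f, mean, cov, kappa=None):
    """Propagate a parameter mean/covariance through f with the sigma point method.

    Returns the approximate mean and variance of the scalar output f(theta).
    Uses Julier-Uhlmann weights with lambda = kappa = 3 - n (a common default).
    """
    n = len(mean)
    lam = (3 - n) if kappa is None else kappa
    S = np.linalg.cholesky((n + lam) * cov)                 # matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.array([lam / (n + lam)] + [1.0 / (2 * (n + lam))] * (2 * n))
    ys = np.array([f(p) for p in sigma])
    y_mean = w @ ys
    y_var = w @ (ys - y_mean) ** 2
    return y_mean, y_var

# Illustrative output: predicted concentration at t = 2 for y = a * exp(-b * t);
# the parameter statistics below are made up.
f = lambda p: p[0] * np.exp(-p[1] * 2.0)
theta_mean = np.array([5.0, 0.8])
theta_cov = np.array([[0.25, 0.01],
                      [0.01, 0.04]])
m, v = unscented_propagate(f, theta_mean, theta_cov)
print(f"prediction mean ~ {m:.3f}, std ~ {np.sqrt(v):.3f}")
```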

  8. Validation of a buffet meal design in an experimental restaurant.

    PubMed

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B and skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by weighing the foods and by video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79) and for the meal mechanics parameters. Total energy, lipid and carbohydrate intakes were higher in FAST than in A and B. CEI was found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes.

  9. Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using Taguchi method.

    PubMed

    Chen, Zhengsuo; Deng, Hongbo; Chen, Can; Yang, Ying; Xu, Heng

    2014-03-12

    Dyes released into the environment pose a serious threat to natural ecosystems and aquatic life because they are stable to heat, light, chemicals and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is meaningful work for environmental protection and the maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The Taguchi method was used to reduce the number of experiments needed to determine the significance of factors and the optimum levels of experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions.
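
    A typical Taguchi analysis of this kind computes a larger-the-better signal-to-noise ratio for each run of an orthogonal array and then compares mean S/N values per factor level. The sketch below uses a hypothetical L4(2^3) array and made-up percent-removal data; it is not the array or data from this study.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio when higher responses (e.g. % removal) are better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical L4(2^3) orthogonal array: columns = dose, pH, initial MG concentration (levels 0/1).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical replicated percent-removal results for each run.
results = [[70.2, 71.5], [65.8, 66.9], [88.1, 89.6], [84.0, 85.2]]
sn = np.array([sn_larger_is_better(r) for r in results])

# Main effect of each factor = mean S/N at level 1 minus mean S/N at level 0.
for j, name in enumerate(["biosorbent dose", "pH", "initial concentration"]):
    effect = sn[L4[:, j] == 1].mean() - sn[L4[:, j] == 0].mean()
    print(f"{name}: delta S/N = {effect:+.2f} dB")
```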

  10. Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using Taguchi method

    PubMed Central

    2014-01-01

    Dyes released into the environment pose a serious threat to natural ecosystems and aquatic life because they are stable to heat, light, chemicals and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is meaningful work for environmental protection and the maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The Taguchi method was used to reduce the number of experiments needed to determine the significance of factors and the optimum levels of experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions. PMID:24620852

  11. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

    This research presents the development of a design of experiments technique for quality improvement in the automotive manufacturing industry. The quality of interest is the colour shade, one of the key exterior appearance features of the vehicles. With a low percentage of first-time quality, the manufacturer has spent heavily on rework as well as on longer production times. To resolve this problem permanently, the spraying conditions should be optimized precisely. Therefore, this work applies the full factorial design, multiple regression, the constrained response surface optimization method (CRSOM), and Taguchi's method to investigate the significant factors and to determine the optimum factor levels in order to improve the quality of the paint shop. Firstly, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solids at spraying of the atomizing spraying machine. The response values of colour shade at 15 and 45 degrees were measured using a spectrophotometer. Then the regression models of colour shade at both angles were developed from the significant factors affecting each response. Consequently, both regression models were placed into the form of a linear programme to maximize the colour shade subject to three main factors: the pigment, the additive solvent and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the target mean value of colour shade. The non-volatile solids factor was found to be one additional factor at this stage. Consequently, the proper factor levels from both experimental design methods were used to set up a confirmation experiment. It was found that the colour shades, measured at both the 15 and 45 degree angles of the spectrophotometer, were close to the target and the defective at
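
    The constrained response surface step can be sketched as a small linear programme: maximise the fitted colour-shade model over the coded factor ranges subject to a secondary constraint. The coefficients, the constraint, and the use of scipy.optimize.linprog below are illustrative assumptions, not the study's actual regression models.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear colour-shade model from the factorial experiment, in coded units (-1..+1):
# shade_15deg = b0 + b1*pigment + b2*solvent + b3*flow_rate
b0 = 23.5
b = np.array([0.8, -0.4, 0.6])

# Hypothetical secondary requirement kept as a linear inequality (e.g. on the 45-degree shade):
# 0.5*pigment + 0.3*flow_rate <= 0.6
A_ub = np.array([[0.5, 0.0, 0.3]])
b_ub = np.array([0.6])

# linprog minimises, so negate the objective coefficients to maximise the predicted shade.
res = linprog(c=-b, A_ub=A_ub, b_ub=b_ub, bounds=[(-1, 1)] * 3, method="highs")

print("optimal coded settings (pigment, solvent, flow rate):", res.x)
print("predicted colour shade at 15 degrees:", b0 + b @ res.x)
```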

  12. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, especially because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental designs, is described together with a lesser known approach which takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models for sediment concentration in seawater and sediment accretion on salt marshes of different complexity served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role for the accuracy; therefore, the approach with the least computational effort is recommended.
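
    The weighted least squares estimator at the core of such parameter optimization has a simple closed form for a linear (or linearised) model. The toolbox itself is written in MATLAB; the sketch below is a language-neutral illustration with hypothetical sediment measurements, not code from the toolbox.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """WLS estimate beta = (X^T W X)^{-1} X^T W y and its covariance (X^T W X)^{-1}."""
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)
    cov = np.linalg.inv(XtWX)   # asymptotic covariance when w_i = 1 / sigma_i^2
    return beta, cov

# Hypothetical sediment-concentration measurements y at depths d, weighted by 1/variance.
d = np.array([0.1, 0.3, 0.5, 0.8, 1.2])
y = np.array([4.9, 4.1, 3.6, 2.8, 2.0])
sigma = np.array([0.2, 0.2, 0.3, 0.3, 0.4])

X = np.column_stack([np.ones_like(d), d])   # simple linear model: a + b * depth
beta, cov = weighted_least_squares(X, y, 1.0 / sigma**2)
print("estimates:", beta, "standard errors:", np.sqrt(np.diag(cov)))
```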

  13. A Short Guide to Experimental Design and Analysis for Engineers

    DTIC Science & Technology

    2014-04-01

    This guide aims to help readers gain a basic understanding of design, measurement and statistical analysis to support military experiments. It covers designs including the simple experiment, matched-pairs, repeated-measures and single-group designs, and discusses the relevant statistical techniques to help identify key quantitative methods for data analysis.

  14. Normalization and experimental design for ChIP-chip data

    PubMed Central

    Peng, Shouyong; Alekseyenko, Artyom A; Larschan, Erica; Kuroda, Mitzi I; Park, Peter J

    2007-01-01

    Background Chromatin immunoprecipitation on tiling arrays (ChIP-chip) has been widely used to investigate the DNA binding sites for a variety of proteins on a genome-wide scale. However, several issues in the processing and analysis of ChIP-chip data have not been resolved fully, including the effect of background (mock control) subtraction and normalization within and across arrays. Results The binding profiles of Drosophila male-specific lethal (MSL) complex on a tiling array provide a unique opportunity for investigating these topics, as it is known to bind on the X chromosome but not on the autosomes. These large bound and control regions on the same array allow clear evaluation of analytical methods. We introduce a novel normalization scheme specifically designed for ChIP-chip data from dual-channel arrays and demonstrate that this step is critical for correcting systematic dye-bias that may exist in the data. Subtraction of the mock (non-specific antibody or no antibody) control data is generally needed to eliminate the bias, but appropriate normalization obviates the need for mock experiments and increases the correlation among replicates. The idea underlying the normalization can be used subsequently to estimate the background noise level in each array for normalization across arrays. We demonstrate the effectiveness of the methods with the MSL complex binding data and other publicly available data. Conclusion Proper normalization is essential for ChIP-chip experiments. The proposed normalization technique can correct systematic errors and compensate for the lack of mock control data, thus reducing the experimental cost and producing more accurate results. PMID:17592629

  15. Experimental Design on Laminated Veneer Lumber Fiber Composite: Surface Enhancement

    NASA Astrophysics Data System (ADS)

    Meekum, U.; Mingmongkol, Y.

    2010-06-01

    Thick laminated veneer lumber (LVL) fibre-reinforced composites were constructed from alternating, perpendicularly arrayed layers of peeled rubber wood. Woven glass was laid between the layers. Native golden teak veneers were used as faces. An in-house epoxy formulation was employed as the wood adhesive. The hand lay-up laminate was cured at 150°C for 45 min. The cut specimens were post-cured at 80°C for at least 5 hours. A 2^k factorial design of experiments (DOE) was used to evaluate the parameters. Three parameters were analysed: the silane content in the epoxy formulation (A), smoke treatment of the rubber wood surface (B) and anti-termite application on the wood surface (C). Both the low and high levels were further subcategorised into two sub-levels. Flexural properties were the main response obtained. ANOVA analysis with a Pareto chart was used, and the main effects plot was also examined. The results showed that the interaction between silane quantity and termite treatment has a negative effect at the high level (AC+). Conversely, the interaction between silane and smoke treatment has a significant positive effect at the high level (AB+). According to this work, the optimal settings to improve surface adhesion, and hence enhance flexural properties, were a high level of silane, 15% by weight, a high level of smoked wood layers, 8 out of 14 layers, and wood with a low level of anti-termite treatment. Further tests also revealed that the LVL composite had properties superior to the solid woods but was slightly inferior in flexibility. The screw-withdrawal strength of the LVL was higher than that of solid wood, and it showed better resistance to moisture and termite attack than the rubber wood.
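
    In a 2^k factorial analysis of this kind, each main effect and interaction (such as AB or AC) is estimated as the difference between the mean response at the +1 and -1 levels of the corresponding contrast column. The flexural-strength numbers in the sketch below are invented for illustration and are not the study's data.

```python
import itertools
import numpy as np

# Full 2^3 design in coded units for factors A (silane), B (smoke treatment), C (anti-termite).
runs = np.array(list(itertools.product([-1, 1], repeat=3)))

# Hypothetical flexural strength responses (MPa), one per run, in the same run order.
y = np.array([62.0, 58.5, 66.0, 70.5, 64.0, 59.0, 69.5, 75.0])

def effect(column):
    """Effect = mean response at +1 minus mean response at -1 for the given contrast column."""
    return y[column == 1].mean() - y[column == -1].mean()

A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
print("main effects:", {"A": effect(A), "B": effect(B), "C": effect(C)})
print("interactions:", {"AB": effect(A * B), "AC": effect(A * C), "BC": effect(B * C)})
```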

  16. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned for causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  17. Parametric optimization for tumour identification: bioheat equation using ANOVA and the Taguchi method.

    PubMed

    Sudharsan, N M; Ng, E Y

    2000-01-01

    Breast cancer is the number one killer disease among women. It is known that early detection of a tumour ensures better prognosis and a higher survival rate. In this paper an intelligent, inexpensive and non-invasive diagnostic tool is developed for aiding breast cancer detection objectively. This tool is based on thermographic scanning of the breast surface in conjunction with numerical simulation of the breast using the bioheat equation. The medical applications of thermographic scanning make use of the skin temperature as an indication of an underlying pathological process. The thermal pattern over a breast tumour reflects the vascular reaction to the abnormality. Hence an abnormal temperature pattern may be an indicator of an underlying tumour. Seven important parameters are identified and analysis of variance (ANOVA) is performed using a 2^n design (n = number of parameters = 7). The effect and importance of the various parameters are analysed. Based on this 2^7 design, the Taguchi method is used to optimize the parameters in order to ensure that the signal from the tumour is maximized relative to the noise from the other factors. The model predicts that the ideal setting for capturing the signal from the tumour is when the patient is at basal metabolic activity, with a correspondingly lower subcutaneous perfusion, in a low-temperature environment.

  18. Optimization of microchannel heat sink using genetic algorithm and Taguchi method

    NASA Astrophysics Data System (ADS)

    Singh, Bhanu Pratap; Garg, Harry; Lall, Arun K.

    2016-04-01

    Active cooling using microchannels is a challenging area. The optimization and miniaturization of devices is increasing heat loads and affecting the operating performance of the system. Microchannel-based cooling systems are widely used and overcome most of the limitations of existing solutions. Microchannels help in reducing dimensions and are therefore finding many important applications in the microfluidics domain. Microchannel performance is related to the geometry, material and flow conditions. Optimized selection of the controllable parameters is a key issue while designing a microchannel-based cooling system. The proposed work presents a simulation-based study following a Taguchi design of experiments, with Reynolds number, aspect ratio and plenum length as input parameters, to determine the S/N ratio. The objective of this study is to maximize the heat transfer. Mathematical models based on these parameters were developed, which enable global optimization using a genetic algorithm. The genetic algorithm was further employed to optimize the input parameters and generate global solution points for the proposed work. It was concluded that the optimized values for the heat transfer coefficient and Nusselt number were 2620.888 W/m2K and 3.4708, compared with the values obtained through the S/N-ratio-based parametric study, i.e. 2601.3687 W/m2K and 3.447, respectively. Hence differences of 0.744% and 0.68% were observed in the heat transfer coefficient and Nusselt number, respectively.
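
    The coupling of a fitted response model with a genetic algorithm can be sketched as follows. The surrogate h_model and its coefficients are hypothetical stand-ins (the paper's regression models are not reproduced), and the GA is a minimal tournament-selection implementation rather than the authors' specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate for the heat-transfer coefficient h(Re, aspect_ratio, plenum_length).
def h_model(x):
    re, ar, pl = x
    return 1500 + 0.9 * re - 300 * (ar - 1.2) ** 2 - 40 * (pl - 5.0) ** 2

# Assumed search ranges: Reynolds number, aspect ratio, plenum length (mm).
bounds = np.array([[200, 1200], [0.5, 2.0], [2.0, 8.0]])

def genetic_maximise(fitness, bounds, pop_size=40, generations=60, mut_sigma=0.1):
    dim = len(bounds)
    span = bounds[:, 1] - bounds[:, 0]
    pop = bounds[:, 0] + rng.random((pop_size, dim)) * span
    best_x, best_f = None, -np.inf
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        if fit.max() > best_f:
            best_f, best_x = fit.max(), pop[fit.argmax()].copy()
        # Binary tournament selection of parents.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] > fit[b], a, b)]
        # Uniform crossover with a shuffled copy of the parents, then Gaussian mutation.
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, mates) + rng.normal(0.0, mut_sigma * span, (pop_size, dim))
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
        pop[0] = best_x   # elitism: keep the best individual found so far
    return best_x, best_f

x_best, h_best = genetic_maximise(h_model, bounds)
print("optimal (Re, aspect ratio, plenum length):", x_best, "-> h =", round(h_best, 1))
```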

  19. Optimization of catalyst formation conditions for synthesis of carbon nanotubes using Taguchi method

    NASA Astrophysics Data System (ADS)

    Pander, Adam; Hatta, Akimitsu; Furuta, Hiroshi

    2016-05-01

    The growth of carbon nanotubes (CNTs) suffers from many difficulties in finding optimum growth parameters, reproducibility and mass production, related to the large number of parameters influencing the synthesis process. Choosing the proper parameters can be a time-consuming process and still may not give the optimal growth values. One possible solution to decrease the number of experiments is to apply optimization methods to the design of the experiment parameter matrix. In this work, the Taguchi method of designing experiments is applied to optimize the formation of the iron catalyst during the annealing process by analysing the average roughness and size of particles. The annealing parameters were: annealing time (tAN), hydrogen flow rate (fH2), temperature (TAN) and argon flow rate (fAr). Plots of signal-to-noise ratios showed that temperature and annealing time have the highest impact on the final results of the experiment. For a more detailed study of the influence of parameters, the interaction plots of the tested parameters were analysed. For the final evaluation, CNT forests were grown on silicon substrates with an AlOX/Fe catalyst by the thermal chemical vapor deposition method. Based on the obtained results, the average diameter of the CNTs was decreased by 67%, reduced from 9.1 nm (multi-walled CNTs) to 3.0 nm (single-walled CNTs).

  20. Optimizations Of Coat-Hanger Die, Using Constraint Optimization Algorithm And Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lebaal, Nadhir; Schmidt, Fabrice; Puissant, Stephan

    2007-05-01

    Polymer extrusion is one of the most important manufacturing methods used today. A flat die is commonly used to extrude thin thermoplastic sheets. If the channel geometry in a flat die is not designed properly, the velocity at the die exit may be perturbed, which can affect the thickness across the width of the die. The ultimate goal of this work is to optimize the die channel geometry in such a way that a uniform velocity distribution is obtained at the die exit. While optimizing the exit velocity distribution, we have coupled the three-dimensional extrusion simulation software Rem3D® with an automatic constrained optimization algorithm to control the maximum allowable pressure drop in the die; with this constraint we can control the pressure in the die (decrease the pressure while minimizing the velocity dispersion across the die exit). For this purpose, we investigate the effect of the design variables on the objective and constraint functions using the Taguchi method. In the second study we use the global response surface method with Kriging interpolation to optimize the flat die geometry. Two optimization results are presented according to the constraint imposed on the pressure. The optimum is obtained with very fast convergence (2 iterations). To respect the constraint while ensuring a homogeneous velocity distribution, the run with the less severe constraint offers the best minimum.

  1. Fatigue of NiTi SMA-pulley system using Taguchi and ANOVA

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Jaronie; Leary, Martin; Subic, Aleksandar

    2016-05-01

    Shape memory alloy (SMA) actuators can be integrated with a pulley system to provide mechanical advantage and to reduce packaging space; however, there appears to be no formal investigation of the effect of a pulley system on SMA structural or functional fatigue. In this work, cyclic testing was conducted on nickel-titanium (NiTi) SMA actuators on a pulley system and in a control experiment (without pulley). Both structural and functional fatigue were monitored until fracture, or until a maximum of 1E5 cycles was reached for each experimental condition. The Taguchi method and analysis of variance (ANOVA) were used to optimise the SMA-pulley system configurations. In general, one-way ANOVA at the 95% confidence level showed no significant difference between the structural or functional fatigue of SMA-pulley actuators and SMA actuators without a pulley. Within the sample of SMA-pulley actuators, the effect of activation duration had the greatest significance for both structural and functional fatigue, and the pulley configuration (angle of wrap and sheave diameter) had a greater statistical significance than load magnitude for functional fatigue. This work identified that the structural and functional fatigue performance of SMA-pulley systems is optimised by maximising sheave diameter and using an intermediate wrap angle, with minimal load and activation duration. However, these parameters may not be compatible with commercial imperatives. A test was completed for a commercially optimal SMA-pulley configuration. This novel observation will be applicable to many areas of SMA-pulley system applications development.
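
    A one-way ANOVA comparison of the kind described can be run directly with scipy.stats.f_oneway; the cycle counts below are invented for illustration and are not the study's measurements.

```python
from scipy.stats import f_oneway

# Hypothetical structural-fatigue lives (cycles to fracture) for SMA actuators
# with and without the pulley; the real data from the study are not reproduced.
with_pulley    = [41000, 38500, 45200, 39900, 43100]
without_pulley = [40200, 42800, 37900, 44500, 41600]

f_stat, p_value = f_oneway(with_pulley, without_pulley)
alpha = 0.05
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
print("significant difference" if p_value < alpha else "no significant difference at the 95% level")
```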

  2. Plasma arc cutting optimization parameters for aluminum alloy with two thickness by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Abdulnasser, B.; Bhuvenesh, R.

    2016-07-01

    Manufacturing companies judge the quality of a thermal cutting process by the dimensions and physical appearance of the cut surface. The surface roughness of the cut area and the material removal rate during the manual plasma arc cutting process were the main considerations. A plasma arc cutter, machine model PS-100, was used to cut specimens made from aluminium alloy 1100 manually, based on the selected parameter settings. Two different specimen thicknesses, 3 mm and 6 mm, were used. The material removal rate (MRR) was measured by determining the difference between the weight of the specimens before and after the cutting process. The surface roughness (Ra) was measured using a MITUTOYO CS-3100 machine and analysed to determine the average roughness (Ra) value. The Taguchi method was utilized as the experimental layout to obtain the MRR and Ra values. The results indicate that the current and cutting speed are the most significant parameters, followed by the arc gap, for both the material removal rate and the surface roughness.
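
    The two responses can be computed straightforwardly: MRR from the weight lost during the cut (converted to a volume with an assumed alloy density) and, for a Taguchi analysis, a smaller-the-better S/N ratio for the roughness values. The weights, cut time, and Ra values below are illustrative only.

```python
import numpy as np

def material_removal_rate(weight_before_g, weight_after_g, cut_time_s, density_g_cm3=2.71):
    """MRR in cm^3/min from the weight lost during the cut (density of AA1100 assumed)."""
    volume_cm3 = (weight_before_g - weight_after_g) / density_g_cm3
    return volume_cm3 / (cut_time_s / 60.0)

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for responses such as surface roughness, where lower is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

print(f"MRR = {material_removal_rate(152.4, 148.9, 42.0):.3f} cm^3/min")
print(f"S/N(Ra) = {sn_smaller_is_better([3.2, 3.5, 3.1]):.2f} dB")
```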

  3. Biodegradation of dye solution containing Malachite Green: optimization of effective parameters using Taguchi method.

    PubMed

    Daneshvar, N; Khataee, A R; Rasoulifard, M H; Pourhassan, M

    2007-05-08

    In this paper, optimization of the biological decolorization of a synthetic dye solution containing Malachite Green was investigated. The effects of temperature, initial pH of the solution, type of algae, dye concentration and reaction time were studied and optimized using the Taguchi method. Sixteen experiments were required to study the effect of the parameters on biodegradation of the dye. Each experiment was repeated three times to calculate the signal/noise (S/N) ratio. Our results showed that the initial pH of the solution was the most effective parameter in comparison with the others, and a basic pH was favorable. In this study, we also optimized the experimental parameters and chose the best condition by determining the effective factors. Based on the S/N ratio, the optimized conditions for dye removal were temperature 25 degrees C, initial pH 10, dye concentration 5 ppm, algae type Chlorella and time 2.5 h. The stability and efficiency of Chlorella sp. in long-term repetitive operations were also examined.

  4. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be the largest in the Arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by the metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal-polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine whether biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effect of biochar amendment and a 3°C warming on the Arctic meadow. Ten circular

  5. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  6. Analytical and experimental studies of the helical magnetohydrodynamic thruster design

    SciTech Connect

    Gilbert, J.B. II; Lin, T.F.

    1994-12-31

    This paper describes the results of analytical and experimental studies of a helical magnetohydrodynamic (MHD) seawater thruster using an 8-Tesla (T) solenoid magnet. The application of this work is in marine vehicle propulsion. Analytical models are developed to predict the performance of the helical MHD thruster in a closed-loop condition. The analytical results are compared with experimental data and good agreement is obtained.

  7. Optimizing conditions for production of high levels of soluble recombinant human growth hormone using Taguchi method.

    PubMed

    Savari, Marzieh; Zarkesh Esfahani, Sayyed Hamid; Edalati, Masoud; Biria, Davoud

    2015-10-01

    Human growth hormone (hGH) is synthesized and stored by somatotroph cells of the anterior pituitary gland and can affect body metabolism. This protein can be used to treat hGH deficiency, Prader-Willi syndrome and Turner syndrome. The limitations of current technology for soluble recombinant protein production, such as inclusion body formation, restrict its use for therapeutic purposes. To achieve high levels of the soluble form of recombinant human growth hormone (rhGH), we sought a suitable host strain, appropriate induction temperature, induction time and culture medium composition. For this purpose, 32 experiments were designed using the Taguchi method; the levels of produced protein in all 32 experiments were evaluated primarily by ELISA and dot blotting, and finally the purified rhGH protein products were assessed by SDS-PAGE and Western blotting. Our results indicate that the medium, bacterial strain, temperature and induction time have significant effects on the production of rhGH. A low cultivation temperature of 25°C, TB medium (with 3% ethanol and 0.6 M glycerol), the Origami strain and a 10-h induction time increased the solubility of human growth hormone.

  8. Laccase production by Coriolopsis caperata RCK2011: Optimization under solid state fermentation by Taguchi DOE methodology

    PubMed Central

    Nandal, Preeti; Ravella, Sreenivas Rao; Kuhad, Ramesh Chander

    2013-01-01

    Laccase production by Coriolopsis caperata RCK2011 under solid state fermentation was optimized following a Taguchi design of experiments. An orthogonal array layout of L18 (2^1 × 3^7) was constructed using Qualitek-4 software with the eight factors most influential on laccase production. At the individual level, pH contributed the greatest influence, whereas corn steep liquor (CSL) accounted for more than 50% of the severity index with biotin and KH2PO4 at the interactive level. The optimum conditions derived were: temperature 30°C, pH 5.0, wheat bran 5.0 g, inoculum size 0.5 ml (fungal cell mass = 0.015 g dry wt.), biotin 0.5% w/v, KH2PO4 0.013% w/v, CSL 0.1% v/v and 0.5 mM xylidine as an inducer. The validation experiments using the optimized conditions confirmed an improvement in enzyme production of 58.01%. The laccase production level of 1623.55 U gds−1 indicates that the fungus C. caperata RCK2011 has commercial potential for laccase. PMID:23463372

  9. Assessing accuracy of measurements for a Wingate Test using the Taguchi method.

    PubMed

    Franklin, Kathryn L; Gordon, Rae S; Davies, Bruce; Baker, Julien S

    2008-01-01

    The purpose of this study was to establish the effects of four variables on the results obtained for a Wingate Anaerobic Test (WAnT). The study used a 30-second WAnT and compared data collected and analysed in different ways in order to draw conclusions about the relative importance of the variables for the results. Data were collected simultaneously by a commercially available software correction system manufactured by Cranlea Ltd. (Birmingham, England) and by an alternative method of data collection involving the direct measurement of the flywheel velocity and the brake force. The data were compared using a design of experiments technique, the Taguchi method. Four variables were examined: flywheel speed, braking force, moment of inertia of the flywheel, and the time intervals over which the work and power were calculated. The choice of time interval was identified as the variable most influential on the results. While the other factors have an influence on the results, the decreased time interval over which the data are averaged gave a 9.8% increase in work done, a 40.75% increase in peak power and a 13.1% increase in mean power.
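
    On a friction-braked ergometer, instantaneous power can be estimated from the brake force and flywheel velocity plus a flywheel-inertia term, and peak power then depends on the interval over which that signal is averaged. The sketch below uses a made-up velocity profile and hypothetical ergometer constants; it does not reproduce either measurement system from the study.

```python
import numpy as np

def instantaneous_power(omega, brake_force, flywheel_radius, inertia, dt):
    """Power = braking power (F * r * omega) plus the flywheel inertia term I * omega * d(omega)/dt."""
    domega = np.gradient(omega, dt)
    return brake_force * flywheel_radius * omega + inertia * omega * domega

def peak_power(power, dt, window_s):
    """Peak of the power averaged over consecutive windows of length window_s."""
    n = max(1, int(round(window_s / dt)))
    trimmed = power[: len(power) // n * n].reshape(-1, n)
    return trimmed.mean(axis=1).max()

# Hypothetical 30 s test sampled at 50 Hz: flywheel speed spins up and then decays.
dt = 0.02
t = np.arange(0, 30, dt)
omega = 90 * np.exp(-t / 40) * (1 - np.exp(-t / 1.5))   # rad/s, made-up profile
power = instantaneous_power(omega, brake_force=40.0, flywheel_radius=0.26, inertia=0.9, dt=dt)

for window in (5.0, 1.0):   # conventional 5 s windows versus a shorter averaging interval
    print(f"peak power over {window:>3} s windows: {peak_power(power, dt, window):7.1f} W")
```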

  10. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  11. Multidisciplinary design optimization using response surface analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1992-01-01

    Aerospace conceptual vehicle design is a complex process which involves multidisciplinary studies of configuration and technology options considering many parameters at many values. NASA Langley's Vehicle Analysis Branch (VAB) has detailed computerized analysis capabilities in most of the key disciplines required by advanced vehicle design. Given a configuration, the capability exists to quickly determine its performance and lifecycle cost. The next step in vehicle design is to determine the best settings of design parameters that optimize the performance characteristics. The typical approach to design optimization is experience-based, trial-and-error variation of many parameters one at a time, where the possible combinations usually number in the thousands. However, this approach can either lead to a very long and expensive design process or to a premature termination of the design process due to budget and/or schedule pressures. Furthermore, a one-variable-at-a-time approach cannot account for the interactions that occur among parts of systems and among disciplines. As a result, the vehicle design may be far from optimal. Advanced multidisciplinary design optimization (MDO) methods are needed to direct the search in an efficient and intelligent manner in order to drastically reduce the number of candidate designs to be evaluated. The payoffs in terms of enhanced performance and reduced cost are significant. A literature review yields two such advanced MDO methods used in aerospace design optimization: Taguchi methods and response surface methods. Taguchi methods provide a systematic and efficient approach to design optimization for performance and cost. However, the response surface method (RSM) leads to a better, more accurate exploration of the parameter space and to estimated optimum conditions with a small expenditure on experimental data. These two methods are described.

  12. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    The experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land, to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These are to provide data sets of soil parameters, processes and functions which will be incorporated into the mathematical models. The field sites are 1) the BigLink field station, which is located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and is established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic, which is representative of productive soils managed for intensive forestry; 3) the Fuchsenbigl Field Station in Austria, which is an agricultural research site representative of productive soils managed as arable land; and 4) the Koiliaris Catchment in Crete, Greece, which represents degraded Mediterranean-region soils, heavily impacted by centuries of intensive grazing and farming, under severe risk of desertification.

  13. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    ERIC Educational Resources Information Center

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  14. Findings in Experimental Psychology as Functioning Principles of Theatrical Design.

    ERIC Educational Resources Information Center

    Caldwell, George

    A gestalt approach to theatrical design seems to provide some ready and stable explanations for a number of issues in the scenic arts. Gestalt serves as the theoretical base for a number of experiments in psychology whose findings appear to delineate the principles of art to be used in scene design. The fundamental notion of gestalt theory…

  15. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made the effective sludge management increasingly a critical issue for the industry due to high landfill and transportation costs, and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to the conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. The key variables that were identified in the continuous

  16. Music and video iconicity: theory and experimental design.

    PubMed

    Kendall, Roger A

    2005-01-01

    Experimental studies on the relationship between quasi-musical patterns and visual movement have largely focused on either referential, associative aspects or syntactical, accent-oriented alignments. Both of these are very important; however, between the referential and the areferential lies a domain where visual pattern perceptually connects to musical pattern; this is iconicity. The temporal syntax of accent structures in iconicity is hypothesized to be important. Beyond that, a multidimensional visual space connects to musical patterning through the mapping of visual time/space to musical time/magnitudes. Experimental visual and musical correlates are presented and comparisons to previous research are provided.

  17. Better Than a Petaflop: The Power of Efficient Experimental Design

    DTIC Science & Technology

    2011-12-01

  18. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  19. Improved production of tannase by Klebsiella pneumoniae using Indian gooseberry leaves under submerged fermentation using Taguchi approach.

    PubMed

    Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar

    2016-12-01

    Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an inducible, largely extracellular enzyme that causes the hydrolysis of ester and depside bonds present in various substrates. Large-scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost-effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin-rich agro-residues such as Indian gooseberry leaves (Phyllanthus emblica), black plum leaves (Syzygium cumini), eucalyptus leaves (Eucalyptus globulus) and babul leaves (Acacia nilotica). Among all the agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. A sequential optimization approach using Taguchi orthogonal array screening and response surface methodology was adopted to optimize the fermentation variables in order to enhance enzyme production. Eleven medium components were screened first by a Taguchi orthogonal array design to identify the factors contributing most to enzyme production. The four most significant variables affecting tannase production were found to be pH (23.62%), tannin extract (20.70%), temperature (20.33%) and incubation time (14.99%). These factors were further optimized with a central composite design using response surface methodology. Maximum tannase production was observed at pH 5.52, a temperature of 39.72°C, an incubation time of 91.82 h and 2.17% tannin content. The enzyme activity was enhanced 1.26-fold under these optimized conditions. The present study emphasizes the use of agro-residues as a potential substrate with the aim of lowering the input costs for tannase production so that the enzyme can be used proficiently for commercial purposes.
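
    The response-surface step can be sketched as fitting a second-order polynomial to central composite design data and locating its optimum. The coded design points, activity values, and two-factor restriction below are hypothetical; they are not the study's data or fitted model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical coded CCD points for two of the factors (pH, temperature) and tannase activity.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])
y = np.array([41.0, 47.5, 44.0, 46.0, 39.5, 48.0, 43.0, 45.5, 52.0, 51.4, 52.3])

def design_matrix(X):
    """Second-order model terms: intercept, linear, interaction, and quadratic columns."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

predict = lambda x: design_matrix(np.atleast_2d(x)) @ beta
res = minimize(lambda x: -predict(x)[0], x0=[0.0, 0.0], bounds=[(-1.414, 1.414)] * 2)
print("optimum (coded units):", res.x, "predicted maximum activity:", round(-res.fun, 1))
```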

  20. 78 FR 79622 - Endangered and Threatened Species: Designation of a Nonessential Experimental Population of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-31

    ... Threatened Species: Designation of a Nonessential Experimental Population of Central Valley Spring-Run...), designate a nonessential experimental population of Central Valley spring-run Chinook salmon (Oncorhynchus... Valley spring-run Chinook salmon (hereafter, CV spring-run Chinook salmon) to the San Joaquin...

  1. Web-Based Learning Support for Experimental Design in Molecular Biology: A Top-Down Approach

    ERIC Educational Resources Information Center

    Aegerter-Wilmsen, Tinri; Hartog, Rob; Bisseling, Ton

    2003-01-01

    An important learning goal of a molecular biology curriculum is the attainment of a certain competence level in experimental design. Currently, undergraduate students are confronted with experimental approaches in textbooks, lectures and laboratory courses. However, most students do not reach a satisfactory level of competence in the designing of…

  2. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  3. Mission-based Scenario Research: Experimental Design And Analysis

    DTIC Science & Technology

    2012-01-01

    One of the pre-recorded voices, played over loudspeakers, was a simulated tactical operating commander (TOC) who provided the mission directives a Commander would expect on a patrol mission. One of the experimenters also operated a soundboard with controls to activate pre-recorded TOC responses, facilitating simulated interactions between the TOC and Commander; for example, one button allowed the TOC to respond "Roger" when the Commander called in mission

  4. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers

    PubMed Central

    Eriksson, Tobias J. R.; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N.; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single shot signal-to-noise ratio (SNR)≃15 dB in transmit–receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  5. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, faster moving fragments. We have experimentally demonstrated one shrapnel mitigation technique, showing that it is possible to direct fast-moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  6. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    DOE PAGES

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  7. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    SciTech Connect

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  8. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The paper summarizes the results obtained in an exploratory evaluation of ceramics for automobile thermal reactors. Candidate ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting 1100 hours in engine dynamometer tests and more than 38,600 kilometers (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  9. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The results obtained in an exploratory evaluation of ceramics for automobile thermal reactors are summarized. Candidate ceramic materials were evaluated in several reactor designs by using both engine-dynamometer and vehicle road tests. Silicon carbide contained in a corrugated-metal support structure exhibited the best performance, lasting 1100 hr in engine-dynamometer tests and more than 38,600 km (24000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as those containing silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  10. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed…

  11. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  12. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  13. EXPERIMENTAL DESIGN OF A FLUID-CONTROLLED HOT GAS VALVE

    DTIC Science & Technology

    Effort is described toward development of a hot gas jet reaction valve utilizing boundary layer techniques to control a high-pressure, high-temperature gas stream. The result has been the successful design of a hot gas valve in a reaction control system utilizing fluid-controlled bi-stable

  14. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    ERIC Educational Resources Information Center

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  15. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design.

  16. Tocorime Apicu: design and validation of an experimental search engine

    NASA Astrophysics Data System (ADS)

    Walker, Reginald L.

    2001-07-01

    In the development of an integrated, experimental search engine, Tocorime Apicu, the incorporation and emulation of the evolutionary aspects of the chosen biological model (honeybees) and the field of high-performance knowledge discovery in databases results in the coupling of diverse fields of research: evolutionary computations, biological modeling, machine learning, statistical methods, information retrieval systems, active networks, and data visualization. The use of computer systems provides inherent sources of self-similarity traffic that result from the interaction of file transmission, caching mechanisms, and user-related processes. These user-related processes are initiated by the user, application programs, or the operating system (OS) for the user's benefit. The effect of Web transmission patterns, coupled with these inherent sources of self-similarity associated with the above file system characteristics, provide an environment for studying network traffic. The goal of the study was client-based, but with no user interaction. New methodologies and approaches were needed as network packet traffic increased in the LAN, LAN+WAN, and WAN. Statistical tools and methods for analyzing datasets were used to organize data captured at the packet level for network traffic between individual source/destination pairs. Emulation of the evolutionary aspects of the biological model equips the experimental search engine with an adaptive system model which will eventually have the capability to evolve with an ever- changing World Wide Web environment. The results were generated using a LINUX OS.

  17. Design and experimental demonstration of optomechanical paddle nanocavities

    NASA Astrophysics Data System (ADS)

    Healey, Chris; Kaviani, Hamidreza; Wu, Marcelo; Khanaliloo, Behzad; Mitchell, Matthew; Hryciw, Aaron C.; Barclay, Paul E.

    2015-12-01

    We present the design, fabrication, and initial characterization of a paddle nanocavity consisting of a suspended sub-picogram nanomechanical resonator optomechanically coupled to a photonic crystal nanocavity. The optical and mechanical properties of the paddle nanocavity can be systematically designed and optimized, and the key characteristics including mechanical frequency can be easily tailored. Measurements under ambient conditions of a silicon paddle nanocavity demonstrate an optical mode with a quality factor Q_o ~ 6000 near 1550 nm and optomechanical coupling to several mechanical resonances with frequencies ω_m/2π ~ 12-64 MHz, effective masses m_eff ~ 350-650 fg, and mechanical quality factors Q_m ~ 44-327. Paddle nanocavities are promising for optomechanical sensing and nonlinear optomechanics experiments.

  18. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X. A.

    2011-12-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters, such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations of the source and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has been proved to be an efficient technique for examining a wide range of possible surveys and selecting the one that gives superior resolution. Each particular design needs to be quantified. Different quantities have been used to estimate the "goodness" of a model, most of them being sensitive to the eigenvalues of the corresponding inversion problem. Here we show a comparison of results obtained using different objective functions. We then simulate a CSEM survey with a realistic 1D structure and discuss the optimum recording parameters determined by our method.
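
    The survey-optimization idea above can be illustrated with a minimal genetic algorithm over candidate receiver offsets. This is a generic sketch, not the authors' algorithm: the design_quality objective below is a hypothetical stand-in for the eigenvalue-based measures mentioned in the abstract, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def design_quality(receiver_offsets):
    """Hypothetical objective: a stand-in for an eigenvalue-based measure of
    how well a receiver layout resolves a 1D CSEM target."""
    d = np.sort(receiver_offsets)
    return np.min(np.diff(d)) + 0.1 * np.ptp(d)   # reward even, wide coverage

def evolve(pop_size=40, n_receivers=6, n_gen=100,
           offset_range=(0.5, 10.0), mutation_sigma=0.3):
    lo, hi = offset_range
    pop = rng.uniform(lo, hi, size=(pop_size, n_receivers))
    for _ in range(n_gen):
        fitness = np.array([design_quality(ind) for ind in pop])
        # Tournament selection: keep the better of two random candidates.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover between consecutive parents, then Gaussian mutation.
        mask = rng.random((pop_size, n_receivers)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        children = children + rng.normal(0.0, mutation_sigma, children.shape)
        pop = np.clip(children, lo, hi)
    best = max(pop, key=design_quality)
    return np.sort(best)

print("suggested receiver offsets (km):", evolve().round(2))
```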

  19. High-power CMUTs: design and experimental verification.

    PubMed

    Yamaner, F Yalçin; Olçum, Selim; Oğuz, H Kağan; Bozkurt, Ayhan; Köymen, Hayrettin; Atalar, Abdullah

    2012-06-01

    Capacitive micromachined ultrasonic transducers (CMUTs) have great potential to compete with piezoelectric transducers in high-power applications. As the output pressures increase, the nonlinearity of the CMUT must be reconsidered and optimization is required to reduce harmonic distortions. In this paper, we describe a design approach in which uncollapsed CMUT array elements are sized so as to operate at the maximum radiation impedance and have gap heights such that the generated electrostatic force can sustain a plate displacement with full swing at the given drive amplitude. The proposed design enables high output pressures and low harmonic distortions at the output. An equivalent circuit model of the array is used that accurately simulates the uncollapsed mode of operation. The model facilitates the design of CMUT parameters for high-pressure output without the need for computationally intensive FEM tools. The optimized design requires a relatively thick plate compared with a conventional CMUT plate. Thus, we used a silicon wafer as the CMUT plate. The fabrication process involves an anodic bonding process for bonding the silicon plate with the glass substrate. To eliminate the bias voltage, which may cause charging problems, the CMUT array is driven with large continuous wave signals at half of the resonant frequency. The fabricated arrays are tested in an oil tank by applying a 125-V peak 5-cycle burst sinusoidal signal at 1.44 MHz. The applied voltage is increased until the plate is about to touch the bottom electrode to get the maximum peak displacement. The observed pressure is about 1.8 MPa with -28 dBc second harmonic at the surface of the array.

  20. Gladstone-Dale constant for CF4. [experimental design

    NASA Technical Reports Server (NTRS)

    Burner, A. W., Jr.; Goad, W. K.

    1980-01-01

    The Gladstone-Dale constant, which relates the refractive index to density, was measured for CF4 by counting fringes of a two-beam interferometer, one beam of which passes through a cell containing the test gas. The experimental approach and sources of systematic and imprecision errors are discussed. The constant for CF4 was measured at several wavelengths in the visible region of the spectrum. A value of 0.122 cu cm/g with an uncertainty of plus or minus 0.001 cu cm/g was determined for use in the visible region. A procedure for noting the departure of the gas density from the ideal-gas law is discussed.
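
    For reference, the Gladstone-Dale relation that the measured constant enters can be written out explicitly; the density value in the worked example below is hypothetical and serves only to illustrate the arithmetic with the reported constant.

```latex
% Gladstone-Dale relation: refractive index n versus gas density rho,
% with K the Gladstone-Dale constant (0.122 cm^3/g reported for CF4).
% The density 0.010 g/cm^3 is an illustrative value, not from the report.
\[
  n - 1 = K\rho
  \qquad\Rightarrow\qquad
  n = 1 + \left(0.122\ \tfrac{\mathrm{cm}^3}{\mathrm{g}}\right)
          \left(0.010\ \tfrac{\mathrm{g}}{\mathrm{cm}^3}\right) = 1.00122
\]
```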

  1. Quasi-experimental study designs series - Paper 7: assessing the assumptions.

    PubMed

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-03-29

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research - in particular for the evaluation of healthcare practice, programs and policy - because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions.

  2. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981, Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs, by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making…

  3. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  4. Numerical simulation and experimental assessment for cold cylindrical deep drawing without blank-holder

    NASA Astrophysics Data System (ADS)

    Chiorescu, D.; Chiorescu, E.; Filipov, F.

    2016-08-01

    Metal forming through plastic deformation, exemplified by deep drawing, is an extremely broad research field. In this article we analyse the influence of the die-punch clearance, the average velocity in the active phase, and the lubrication on deep drawing quality, as revealed by the thickness evenness of the finished product surface. For thorough coverage while minimizing the number of experimental trials, a fractional factorial design of the Taguchi type was developed on an orthogonal array, allowing the contribution of the three parameters to the quality of cylindrical deep drawing without a blank holder to be analysed. To compare against the experimental results, a conceptual 3D model of the punch-blank-die system was built that fully respects the geometry of the active elements and of the blank but approximates the material properties of the blank. Using these simulations, we can investigate the variation of the deformation parameters throughout the drawing process, from the initial blank to the final drawn part. The numerical simulation of the drawing of cylindrical cups was performed with the ANSYS V14 program, Explicit Dynamics module. Using the signal-to-noise ratio suggested by Taguchi, we determined the influence of each of the three parameters on deep drawing quality, as well as their optimal values.
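
    A minimal sketch of the Taguchi signal-to-noise calculation referred to above, assuming a smaller-the-better quality characteristic for thickness variation; the trial values and the factor column are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical thickness-variation values (mm) for 9 Taguchi trials with
# 3 replicate cups each; smaller variation means better drawing quality.
y = np.array([
    [0.21, 0.19, 0.22], [0.15, 0.17, 0.16], [0.25, 0.24, 0.27],
    [0.18, 0.20, 0.19], [0.12, 0.14, 0.13], [0.23, 0.22, 0.25],
    [0.16, 0.15, 0.18], [0.11, 0.12, 0.10], [0.20, 0.21, 0.19],
])

# Smaller-the-better signal-to-noise ratio: S/N = -10*log10(mean(y**2)).
sn = -10.0 * np.log10((y ** 2).mean(axis=1))

# Level (0/1/2) of one factor, e.g. die-punch clearance, in each trial,
# laid out as in one column of an L9 orthogonal array.
clearance_level = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])

# Main effect: mean S/N at each level; the level with the highest mean S/N
# is preferred in a Taguchi analysis.
for level in range(3):
    mean_sn = sn[clearance_level == level].mean()
    print(f"clearance level {level}: mean S/N = {mean_sn:.2f} dB")
```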

  5. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  6. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  7. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design, the Ifakara Experimental Huts, and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  8. Bearing diagnosis based on Mahalanobis-Taguchi-Gram-Schmidt method

    NASA Astrophysics Data System (ADS)

    Shakya, Piyush; Kulkarni, Makarand S.; Darpe, Ashish K.

    2015-02-01

    A methodology is developed for defect type identification in rolling element bearings using the integrated Mahalanobis-Taguchi-Gram-Schmidt (MTGS) method. Vibration data recorded from bearings with seeded defects on the outer race, inner race and balls are processed in the time, frequency, and time-frequency domains. Eleven damage identification parameters (RMS, peak, crest factor, and kurtosis in the time domain; amplitudes of the outer race, inner race, and ball defect frequencies in the FFT and HFRT spectra in the frequency domain; and the peak of the HHT spectrum in the time-frequency domain) are computed. Using MTGS, these damage identification parameters (DIPs) are fused into a single DIP, the Mahalanobis distance (MD), and gain values for the presence of all DIPs are calculated. The gain value is used to identify the usefulness of each DIP, and the DIPs with positive gain are again fused into an MD by using the Gram-Schmidt orthogonalization process (GSP) in order to calculate Gram-Schmidt vectors (GSVs). Among the remaining DIPs, the sign of the GSVs of the frequency-domain DIPs is checked to classify the probable defect. The approach thus uses the MTGS method to combine the damage parameters and, in conjunction with the GSVs, classifies the defect. A Defect Occurrence Index (DOI) is proposed to rank the probability of existence of a type of bearing damage (ball defect/inner race defect/outer race defect/other anomalies). The methodology is successfully validated on vibration data from a different machine, bearing type and shape/configuration of the defect. The proposed methodology is also applied to vibration data acquired from an accelerated life test on the bearings, which established the applicability of the method to naturally induced and naturally progressed defects. It is observed that the methodology successfully identifies the correct type of bearing defect. The proposed methodology is also useful in identifying the time of initiation of a defect and has potential for implementation in a real-time environment.
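
    The core MTGS quantity, the Mahalanobis distance of a vector of damage identification parameters from a healthy baseline, can be sketched as follows. The DIP values are invented, and the scaling by the number of DIPs follows common Mahalanobis-Taguchi practice rather than details given in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical baseline: damage identification parameters (DIPs) measured on
# healthy bearings; columns might be RMS, peak, crest factor and kurtosis.
healthy = rng.normal(loc=[0.5, 2.0, 4.0, 3.0],
                     scale=[0.05, 0.2, 0.3, 0.2], size=(50, 4))

mean = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis_distance(x):
    """Scaled Mahalanobis distance of a DIP vector from the healthy space.
    Dividing by the number of DIPs keeps healthy samples near MD ~ 1, as is
    usual in Mahalanobis-Taguchi practice."""
    d = x - mean
    return float(d @ cov_inv @ d) / len(x)

# A test vector with elevated peak and kurtosis, as a defect might produce.
test = np.array([0.55, 3.5, 4.2, 6.0])
print(f"MD(test) = {mahalanobis_distance(test):.2f}")
```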

  9. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus…

  10. Design and experimental evaluation of flexible manipulator control algorithms

    SciTech Connect

    Kwon, D.S.; Hwang, D.H.; Babcock, S.M.; Kress, R.L.; Lew, J.Y.; Evans, M.S.

    1995-04-01

    Within the Environmental Restoration and Waste Management Program of the US Department of Energy, the remediation of single-shell radioactive waste storage tanks is one of the areas that challenge state-of-the-art equipment and methods. The use of long-reach manipulators is being seriously considered for this task. Because of high payload capacity and high length-to-cross-section ratio requirements, these long-reach manipulator systems are expected to use hydraulic actuators and to exhibit significant structural flexibility. The controller has been designed to compensate for the hydraulic actuator dynamics by using a load-compensated velocity feedforward loop and to increase the bandwidth by using an inner pressure feedback loop. Shaping filter techniques have been applied as feedforward controllers to avoid structural vibrations during operation. Various types of shaping filter methods have been investigated. Among them, a new approach, referred to as a "feedforward simulation filter" that uses embedded simulation, has been presented.

  11. Improved field experimental designs and quantitative evaluation of aquatic ecosystems

    SciTech Connect

    McKenzie, D.H.; Thomas, J.M.

    1984-05-01

    The paired-station concept and a log transformed analysis of variance were used as methods to evaluate zooplankton density data collected during five years at an electrical generation station on Lake Michigan. To discuss the example and the field design necessary for a valid statistical analysis, considerable background is provided on the questions of selecting (1) sampling station pairs, (2) experimentwise error rates for multi-species analyses, (3) levels of Type I and II error rates, (4) procedures for conducting the field monitoring program, and (5) a discussion of the consequences of violating statistical assumptions. Details for estimating sample sizes necessary to detect changes of a specified magnitude are included. Both statistical and biological problems with monitoring programs (as now conducted) are addressed; serial correlation of successive observations in the time series obtained was identified as one principal statistical difficulty. The procedure reduces this problem to a level where statistical methods can be used confidently. 27 references, 4 figures, 2 tables.

  12. A passive exoskeleton with artificial tendons: design and experimental evaluation.

    PubMed

    van Dijk, Wietse; van der Kooij, Herman; Hekman, Edsko

    2011-01-01

    We developed a passive exoskeleton that was designed to minimize joint work during walking. The exoskeleton makes use of passive structures, called artificial tendons, acting in parallel with the leg. Artificial tendons are elastic elements that are able to store and redistribute energy over the human leg joints. The elastic characteristics of the tendons have been optimized to minimize the mechanical work of the human leg joints. In simulation the maximal reduction was 40 percent. The performance of the exoskeleton was evaluated in an experiment in which nine subjects participated. Energy expenditure and muscle activation were measured during three conditions: Normal walking, walking with the exoskeleton without artificial tendons, and walking with the exoskeleton with the artificial tendons. Normal walking was the most energy efficient. While walking with the exoskeleton, the artificial tendons only resulted in a negligibly small decrease in energy expenditure.

  13. Statistical evaluation of SAGE libraries: consequences for experimental design.

    PubMed

    Ruijter, Jan M; Van Kampen, Antoine H C; Baas, Frank

    2002-10-29

    Since the introduction of serial analysis of gene expression (SAGE) as a method to quantitatively analyze the differential expression of genes, several statistical tests have been published for the pairwise comparison of SAGE libraries. Testing the difference between the number of specific tags found in two SAGE libraries is hampered by the fact that each SAGE library is only one measurement: the necessary information on biological variation or experimental precision is not available. In the currently available tests, a measure of this variance is obtained from simulation or based on the properties of the tag distribution. To help the user of SAGE to decide between these tests, five different pairwise tests have been compared by determining the critical values, that is, the lowest number of tags that, given an observed number of tags in one library, needs to be found in the other library to result in a significant P value. The five tests included in this comparison are SAGE300, the tests described by Madden et al. (Oncogene 15: 1079-1085, 1997) and by Audic and Claverie (Genome Res 7: 986-995, 1997), Fisher's Exact test, and the Z test, which is equivalent to the chi-squared test. The comparison showed that, for SAGE libraries of equal as well as different size, SAGE300, Fisher's Exact test, Z test, and the Audic and Claverie test have critical values within 1.5% of each other. This indicates that these four tests will give essentially the same results when applied to SAGE libraries. The Madden test, which can only be used for libraries of similar size, is, with 25% higher critical values, more conservative, probably because the variance measure in its test statistic is not appropriate for hypothesis testing. The consequences for the choice of SAGE library sizes are discussed.
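
    A small sketch of the kind of pairwise comparison discussed above, applying Fisher's Exact test and the chi-squared test (asymptotically equivalent to the Z test) to hypothetical tag counts from two libraries of different sizes:

```python
from scipy.stats import fisher_exact, chi2_contingency

# Hypothetical tag counts for one transcript in two SAGE libraries.
tags_a, total_a = 18, 50_000     # library A
tags_b, total_b = 42, 60_000     # library B

table = [[tags_a, total_a - tags_a],
         [tags_b, total_b - tags_b]]

# Fisher's exact test on the 2x2 table of tag vs. non-tag counts.
_, p_fisher = fisher_exact(table)

# Chi-squared test without continuity correction (equivalent to the Z test).
_, p_chi2, _, _ = chi2_contingency(table, correction=False)

print(f"Fisher exact p = {p_fisher:.4f}, chi-squared p = {p_chi2:.4f}")
```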

  14. Experimental design for assessing the effectiveness of autonomous countermine systems

    NASA Astrophysics Data System (ADS)

    Chappell, Isaac; May, Michael; Moses, Franklin L.

    2010-04-01

    The countermine mission (CM) is a compelling example of what autonomous systems must address to reduce risks that Soldiers take routinely. The list of requirements is formidable and includes autonomous navigation, autonomous sensor scanning, platform mobility and stability, mobile manipulation, automatic target recognition (ATR), and systematic integration and control of components. This paper compares and contrasts how the CM is done today against the challenges of achieving comparable performance using autonomous systems. The Soldier sets a high standard with, for example, over 90% probability of detection (Pd) of metallic and low-metal mines and a false alarm rate (FAR) as low as 0.05/m². In this paper, we suggest a simplification of the semi-autonomous CM by breaking it into three components: sensor head maneuver, robot navigation, and kill-chain prosecution. We also discuss the measurements required to map the system's physical and state attributes to performance specifications and note that current Army countermine metrics are insufficient to guide the design of a semi-autonomous countermine system.

  15. Experimental design for dynamics identification of cellular processes.

    PubMed

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data are used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance under this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value, under this distribution, of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
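
    The MINE selection rule, picking the next measurement where the model output variance over the current parameter distribution is largest, can be illustrated with a toy model; the synthesis/degradation model and the parameter distributions below are assumptions for illustration, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(t, k_syn, k_deg):
    """Toy cellular process: protein level under constant synthesis and
    first-order degradation, starting from zero."""
    return (k_syn / k_deg) * (1.0 - np.exp(-k_deg * t))

# Parameter samples standing in for the distribution defined by existing data
# (here simply drawn from assumed log-normal priors).
k_syn = rng.lognormal(mean=0.0, sigma=0.3, size=500)
k_deg = rng.lognormal(mean=-1.0, sigma=0.3, size=500)

# Candidate measurement times and model output for every parameter sample.
times = np.linspace(0.1, 20.0, 100)
outputs = model(times[None, :], k_syn[:, None], k_deg[:, None])

# MINE-style rule: measure next where the output variance across parameter
# samples is largest.
variance = outputs.var(axis=0)
print(f"suggested next measurement time: t = {times[np.argmax(variance)]:.2f}")
```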

  16. Experimental design and study of Free Rotor River Turbine

    SciTech Connect

    Nepali, D.B.

    1987-01-01

    Terrace irrigation along the rivers of Nepal is a vital problem for farmers in remote villages. The existing turbines and irrigation systems are not feasible without civil structures and suffer from a lack of resources and from financial problems. A simple and inexpensive underwater Free Rotor River Turbine (FRRT), which extracts power ranging from a fraction of a HP up to 25 HP from the velocity of running water in a river or stream, was developed. The power obtained from the turbine can be used to run a pump to lift water for drinking purposes and for irrigation along the river banks during the dry season and the early part of the wet season. Various model designs have been tested in the laboratory to find the optimum pitch angle, blade shape and size, and optimum number of blades in order to achieve the cheapest, simplest, and most efficient turbine. The effects of turbine diameter and water velocity, and the torque produced by the turbines, were studied, and the effect of a simple linear twist on the blades is discussed.

  17. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to ensure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  18. Developing an Optimum Protocol for Thermoluminescence Dosimetry with GR-200 Chips using Taguchi Method.

    PubMed

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2016-11-24

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. Optimum annealing, storage and reading protocols strongly affect the accuracy of the TLD response. The purpose of this study is to obtain an optimum protocol for GR-200 (LiF: Mg, Cu, P) by optimizing the effective parameters, to increase the reliability of the TLD response, using the Taguchi method. The Taguchi method has been used in this study for optimization of the annealing, storage and reading protocols of the TLDs. A total of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by the Taguchi method. By comparing the signal-to-noise ratios, the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), annealing time (s), annealing-to-exposure time (d), exposure-to-readout time (d), pre-heat temperature (°C), pre-heat time (s), heating rate (°C/s), maximum readout temperature (°C), readout time (s) and storage temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signals can be achieved and the dosimetry can be performed with great accuracy.

  19. Bayesian experimental design of a multichannel interferometer for Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hirsch, M.; Kornejew, P.

    2008-10-01

    Bayesian experimental design (BED) is a framework for the optimization of diagnostics based on probability theory. In this work it is applied to the design of a multichannel interferometer at the Wendelstein 7-X stellarator experiment. BED makes it possible to compare diverse designs quantitatively, which is shown for beam-line designs resulting from different plasma configurations. The applicability of the method is discussed with respect to its computational effort.

  20. Factorial analysis of variables influencing mechanical characteristics of a single tooth implant placed in the maxilla using finite element analysis and the statistics-based Taguchi method.

    PubMed

    Lin, Chun-Li; Chang, Shih-Hao; Chang, Wen-Jen; Kuo, Yu-Chan

    2007-10-01

    The aim of this study was to determine the relative contribution of changes (design factors) in implant system, position, bone classification, and loading condition on the biomechanical response of a single-unit implant-supported restoration. Non-linear finite-element analysis was used to simulate the mechanical responses in an implant placed in the maxillary posterior region. The Taguchi method was employed to identify the significance of each design factor in controlling the strain/stress. Increased strain values were noted in the cortical bone with lateral force and an implant with a retaining-screw connection. Cancellous bone strain was affected primarily by bone type and increased with decreasing bone density. Implant stress was influenced mainly by implant type and position. The combined use of finite-element analysis and the Taguchi method facilitated effective evaluation of the mechanical characteristics of a single-unit implant-supported restoration. Implants placed along the axis of loading exhibit improved stress/strain distribution. The reduction of lateral stress through implant placement and selective occlusal adjustment is recommended. An implant with a tapered interference fit connection performed better as a force-transmission mechanism than other configurations.

  1. The Optimizing Conditions by Taguchi Method for Fabricating Semi-Solid Al-Zn-Mg Alloy Slurry by Cooling Plate Method

    NASA Astrophysics Data System (ADS)

    Shim, Sung-Yong; Park, Hyung-Won; Jeong, In-Sang; Lim, Su-Gun

    In order to optimize the conditions for semi-solid Al-Zn-Mg aluminium alloy fabricated by the cooling plate method, a Taguchi design was used. The cooling plate method, which effectively separates the grains formed at the mold wall, can be used to produce a semi-solid material by flowing molten metal over an inclined Cu plate and casting it in a mold for a near-net-shape component. In Taguchi's design method, the higher the signal-to-noise (S/N) ratio, the better. Therefore, the manufacturing conditions were arranged as an orthogonal array (L9(3^4)), and the influence of two factors, pouring temperature and cooling plate angle, was examined. From the observed microstructures, the grain size and aspect ratio were measured with an image analyzer. The results indicated that the pouring temperature exerts the main effect on the spherical microstructures, since its S/N ratio, a measure of sensitivity to the surrounding environment, was the highest. The optimum condition for the Al-Zn-Mg alloy was a cooling plate angle of 40° and a pouring temperature of 680°C. The resulting grain size and aspect ratio were 70 μm and 1.3, respectively.
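
    For readers unfamiliar with the L9(3^4) array mentioned above, the sketch below constructs one standard form of it and checks pairwise orthogonality; this is a generic construction, not the specific factor assignment used in the study.

```python
import numpy as np
from itertools import product

def l9_orthogonal_array():
    """One standard form of the L9(3^4) array: 9 runs, 4 three-level columns.
    Columns 3 and 4 are built from the first two by modular sums."""
    return np.array([[a, b, (a + b) % 3, (a + 2 * b) % 3]
                     for a, b in product(range(3), repeat=2)])

L9 = l9_orthogonal_array()
print(L9)

# Orthogonality check: every pair of columns contains each of the nine
# level combinations exactly once.
for i in range(4):
    for j in range(i + 1, 4):
        pairs = {(int(x), int(y)) for x, y in zip(L9[:, i], L9[:, j])}
        assert len(pairs) == 9
```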

  2. Design and experimental tests of free electron laser wire scanners

    NASA Astrophysics Data System (ADS)

    Orlandi, G. L.; Heimgartner, P.; Ischebeck, R.; Loch, C. Ozkan; Trovati, S.; Valitutti, P.; Schlott, V.; Ferianis, M.; Penco, G.

    2016-09-01

    SwissFEL is an x-ray free electron laser (FEL) driven by a 5.8 GeV linac under construction at Paul Scherrer Institut. In SwissFEL, wire scanners (WSCs) will be complementary to view-screens for emittance measurements and routinely used to monitor the transverse profile of the electron beam during FEL operations. The SwissFEL WSC is composed of an in-vacuum beam probe, motorized by a stepper motor, and an out-vacuum pick-up of the wire signal. The mechanical stability of the WSC in-vacuum hardware has been characterized on a test bench. In particular, the motor-induced vibrations of the wire have been measured and mapped for different motor speeds. Electron-beam tests of the entire WSC setup together with different wire materials have been carried out at the 250 MeV SwissFEL Injector Test Facility (SITF, Paul Scherrer Institut, CH) and at FERMI (Elettra-Sincrotrone Trieste, Italy). In particular, a comparative study of the relative measurement accuracy and the radiation-dose release of Al(99):Si(1) and tungsten (W) wires has been carried out. On the basis of the outcome of the bench and electron-beam tests, the SwissFEL WSC can be qualified as a high-resolution and machine-saving diagnostic tool, in consideration of the mechanical stability of the scanning wire at the micrometer level and a choice of wire material that ensures a drastic reduction of the radiation-dose release with respect to conventional metallic wires. The main aspects of the design, laboratory characterization and electron beam tests of the SwissFEL WSCs are presented.

  3. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    SciTech Connect

    Dobler, Gregory; Fassnacht, Christopher D.; Rumbaugh, Nicholas; Treu, Tommaso; Liao, Kai; Marshall, Phil; Hojjati, Alireza; Linder, Eric

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~10^3 strongly lensed quasars. In an effort to assess the present capabilities of the community to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  4. Design and experimental tests of a novel neutron spin analyzer for wide angle spin echo spectrometers

    SciTech Connect

    Fouquet, Peter; Farago, Bela; Andersen, Ken H.; Bentley, Phillip M.; Pastrello, Gilles; Sutton, Iain; Thaveron, Eric; Thomas, Frederic; Moskvin, Evgeny; Pappas, Catherine

    2009-09-15

    This paper describes the design and experimental tests of a novel neutron spin analyzer optimized for wide angle spin echo spectrometers. The new design is based on nonremanent magnetic supermirrors, which are magnetized by vertical magnetic fields created by NdFeB high field permanent magnets. The solution presented here gives stable performance at moderate costs in contrast to designs invoking remanent supermirrors. In the experimental part of this paper we demonstrate that the new design performs well in terms of polarization, transmission, and that high quality neutron spin echo spectra can be measured.

  5. Some Occupational and Organizational Implications for Designing an Experimental Program in Educational Administration

    ERIC Educational Resources Information Center

    Evan, William M.

    1973-01-01

    Attempts to design an experimental program for the training of a new generation of educational administrators, with the rationale being based on selected concepts and propositions of occupational sociology, organizational theory, and systems theory. (Author)

  6. Understanding Complexity and Self-Organization in a Defense Program Management Organization (Experimental Design)

    DTIC Science & Technology

    2016-03-18

    SPONSORED REPORT SERIES: Understanding Complexity and Self-Organization in a Defense Program Management Organization (Experimental Design), 18 March 2016. Raymond Jones, Lecturer, Graduate School of Business. …ID Program Manager and has had multiple operational and acquisition-related tours. He is a 1995 graduate of the U.S. Naval Test Pilot School with…

  7. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.

    2013-11-01

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c² mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  8. 78 FR 3381 - Endangered and Threatened Species: Designation of a Nonessential Experimental Population of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-16

    ... Species: Designation of a Nonessential Experimental Population of Central Valley Spring-Run Chinook Salmon... experimental population of Central Valley spring-run Chinook salmon (Oncorhynchus tshawytscha) under section 10... Restoration Goal shall include the reintroduction of Central Valley spring-run Chinook salmon (hereafter,...

  9. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  10. Investigation and Parameter Optimization of a Hydraulic Ram Pump Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sarma, Dhrupad; Das, Monotosh; Brahma, Bipul; Pandwar, Deepak; Rongphar, Sermirlong; Rahman, Mafidur

    2016-10-01

    The main objective of this research work is to investigate the effect of Waste Valve height and Pressure Chamber height on the output flow rate of a hydraulic ram pump. The second objective is to optimize them for a hydraulic ram pump delivering water up to a height of 3.81 m (12.5 ft) from the ground with a drive head (inlet head) of 1.86 m (6.11 ft). Two one-factor-at-a-time experiments were conducted to decide the levels of the selected input parameters. After deciding the input parameters, an experiment was designed using Taguchi's L9 orthogonal array with three repetitions. Analysis of variance (ANOVA) was carried out to verify the significance of the effect of the factors on the output flow rate of the pump. Results show that the Waste Valve height and the Pressure Chamber height have a significant effect on the outlet flow of the pump. For a pump with a drive pipe (inlet pipe) diameter of 31.75 mm (1.25 in.) and a delivery pipe diameter of 12.7 mm (0.5 in.), the optimum setting was found to be a Waste Valve height of 114.3 mm (4.5 in.) and a Pressure Chamber height of 406.4 mm (16 in.), providing a delivery flow rate of 93.14 L/h. For the same pump, the estimated range of output flow rate is 90.65-94.97 L/h.
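
    A rough sketch of the kind of factor screening described above, using hypothetical flow-rate data for two three-level factors; note that the simple per-factor one-way ANOVA shown here is an approximation of the formal Taguchi ANOVA on the orthogonal array, and all numbers are invented.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

# Level (0/1/2) of each factor in the 9 trials of the experiment.
waste_valve = np.repeat([0, 1, 2], 3)
pressure_chamber = np.tile([0, 1, 2], 3)

# Hypothetical delivery flow rates (L/h): 9 trials x 3 repetitions.
flow = 80.0 + 4.0 * waste_valve + 3.0 * pressure_chamber
flow = flow[:, None] + rng.normal(0.0, 1.0, size=(9, 3))

# One-way ANOVA for each factor: pool all 27 observations by factor level.
for name, levels in [("waste valve height", waste_valve),
                     ("pressure chamber height", pressure_chamber)]:
    groups = [flow[levels == lv].ravel() for lv in range(3)]
    F, p = f_oneway(*groups)
    print(f"{name}: F = {F:.1f}, p = {p:.4g}")
```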

  11. Optical design and multiobjective optimization of miniature zoom optics with liquid lens element.

    PubMed

    Sun, Jung-Hung; Hsueh, Bo-Ren; Fang, Yi-Chin; MacDonald, John; Hu, Chao-Chang

    2009-03-20

    We propose an optical design for miniature 2.5x zoom fold optics with liquid elements. First, we reduce the volumetric size of the system. Second, the newly developed design significantly reduces the number of moving groups for this 2.5x miniature zoom optics (only two moving groups compared with the four or five of a traditional zoom lens system), thanks in particular to the liquid lens elements. For the extended optimization of the zoom optics, relative illuminance (RI) and the modulation transfer function (MTF) are considered, because the more rays that pass through the edge of the image, the lower the MTF becomes, particularly at high spatial frequencies. The extended optimization integrates the Taguchi method with the robust multiple criterion optimization (RMCO) approach. In this approach, a Pareto-optimal robust design solution is obtained with the aid of a designed experimental set, using analysis of variance results to quantify the relative dominance and significance of the design factors. It is concluded that the combined Taguchi and RMCO approach succeeds in optimizing the RI and MTF values of the fold 2.5x zoom lens system and yields better and more balanced performance, which is very difficult to achieve with the traditional damped least squares method.

  12. Creating A Data Base For Design Of An Impeller

    NASA Technical Reports Server (NTRS)

    Prueger, George H.; Chen, Wei-Chung

    1993-01-01

    Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.

  13. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  14. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  15. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  16. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  17. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  18. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  19. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  20. Exploiting Distance Technology to Foster Experimental Design as a Neglected Learning Objective in Labwork in Chemistry

    ERIC Educational Resources Information Center

    d'Ham, Cedric; de Vries, Erica; Girault, Isabelle; Marzin, Patricia

    2004-01-01

    This paper deals with the design process of a remote laboratory for labwork in chemistry. In particular, it focuses on the mutual dependency of theoretical conjectures about learning in the experimental sciences and technological opportunities in creating learning environments. The design process involves a detailed analysis of the expert task and…

  1. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  2. A Cross-Over Experimental Design for Testing Audiovisual Training Materials.

    ERIC Educational Resources Information Center

    Stolovitch, Harold D.; Bordeleau, Pierre

    This paper contains a description of the cross-over type of experimental design as well as a case study of its use in field testing audiovisual materials related to teaching handicapped children. Increased efficiency is an advantage of the cross-over design, while difficulty in selecting similar format audiovisual materials for field testing is a…

  3. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  4. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  5. Experimental Control and Threats to Internal Validity of Concurrent and Nonconcurrent Multiple Baseline Designs

    ERIC Educational Resources Information Center

    Christ, Theodore J.

    2007-01-01

    Single-case research designs are often applied within school psychology. This article provides a critical review of the scientific merit of both concurrent and nonconcurrent multiple baseline (MB) designs, relative to their capacity to assess threats of internal validity and establish experimental control. Distinctions are established between AB…

  6. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    PubMed

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
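
    As a concrete example of one of the designs reviewed above, a two-way ANOVA in randomised blocks can be run on hypothetical visual-acuity data with treatment as the factor of interest and subject as the blocking factor (a minimal sketch using pandas and statsmodels; the variable names and effect sizes are invented).

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Hypothetical data: three treatments measured on each of eight subjects,
# with subjects acting as blocks (a randomised-block layout).
treatments = ["lens_A", "lens_B", "lens_C"]
rows = []
for subject in range(8):
    subject_effect = rng.normal(0.0, 0.5)          # block effect
    for i, treatment in enumerate(treatments):
        rows.append({"subject": subject,
                     "treatment": treatment,
                     "acuity": 1.0 + 0.2 * i + subject_effect
                               + rng.normal(0.0, 0.1)})
df = pd.DataFrame(rows)

# Two-way ANOVA in randomised blocks: treatment is the factor of interest,
# subject is the blocking factor (both treated as categorical).
model = ols("acuity ~ C(treatment) + C(subject)", data=df).fit()
print(anova_lm(model, typ=2))
```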

  7. How experimental design can improve the validation process. Studies in pharmaceutical analysis.

    PubMed

    Furlanetto, S; Orlandini, S; Mura, P; Sergent, M; Pinzauti, S

    2003-11-01

    A critical discussion about the possibility of improving the method validation process by means of experimental design is presented. The reported multivariate strategies concern the evaluation of the performance parameters robustness and intermediate precision, and the optimisation of bias and repeatability. In particular, accuracy and precision improvement constitutes a special subset of experimental design in which the bias and the relative standard deviation of the assay are optimised. D-optimal design was used in order to plan experiments for this aim. The analytical methods considered were capillary electrophoresis, HPLC, adsorptive stripping voltammetry and differential pulse polarography. All methods were applied to real pharmaceutical analysis problems.

  8. The User-Assisted Automated Experimental (TEST) Design Program (AED): Version II.

    DTIC Science & Technology

    1983-01-01

    System Development Corporation, 4134 Linden Avenue, Suite 305. …procedures which maximize information return while minimizing the number of observations (tests) required. The overall experimental design… E. Taylor, SDC Colorado Springs, CO, for his work on the Central Composite Design; Mr. Edwin G. Meyer, who developed many of the algorithms…

  9. An Automated Tool for Developing Experimental Designs: The Computer-Aided Design Reference for Experiments (CADRE)

    DTIC Science & Technology

    2009-01-01

    …survey procedures, and cognitive task analysis), system design methods (e.g., focus groups, design guidelines, specifications, and requirements), and…

  10. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of required measurements and subjects to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem so far prevents widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation.
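
    A rough sketch of the D-optimality idea described above, assuming a four-parameter log-logistic mean function and illustrative prior parameter values (not those of the paper): the Fisher information of a candidate design is accumulated from the gradient of the mean function at each dose, and designs are ranked by its determinant.

    ```python
    # Sketch of the D-optimality criterion for a four-parameter log-logistic
    # dose-response model (illustrative parameter values and dose grids).
    import numpy as np

    def loglogistic(dose, theta):
        # theta = (lower, upper, ed50, slope); mean response at a given dose
        lower, upper, ed50, slope = theta
        return lower + (upper - lower) / (1.0 + (dose / ed50) ** slope)

    def gradient(dose, theta, eps=1e-6):
        # Numerical gradient of the mean function with respect to the parameters.
        g = np.zeros(len(theta))
        for i in range(len(theta)):
            tp, tm = list(theta), list(theta)
            tp[i] += eps
            tm[i] -= eps
            g[i] = (loglogistic(dose, tp) - loglogistic(dose, tm)) / (2 * eps)
        return g

    def d_criterion(doses, weights, theta):
        # Normalized Fisher information M = sum_i w_i g(x_i) g(x_i)^T; return det(M).
        m = np.zeros((len(theta), len(theta)))
        for d, w in zip(doses, weights):
            g = gradient(d, theta)
            m += w * np.outer(g, g)
        return np.linalg.det(m)

    theta_guess = (0.1, 1.0, 10.0, 2.0)                      # prior parameter estimates
    design_a = ([0.1, 5.0, 10.0, 20.0, 100.0], [0.2] * 5)    # conventional spread of doses
    design_b = ([0.1, 6.0, 17.0, 100.0], [0.25] * 4)         # control plus three doses

    for name, (doses, w) in {"A": design_a, "B": design_b}.items():
        print(name, d_criterion(doses, w, theta_guess))
    # The design with the larger determinant is preferred (D-optimality); note
    # that the ranking depends on the assumed parameter values.
    ```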

  11. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  12. Thermal-hydraulic design issues and analysis for the ITER (International Thermonuclear Experimental Reactor) divertor

    SciTech Connect

    Koski, J.A.; Watson, R.D.; Hassanien, A.M.; Goranson, P.L. (Fusion Engineering Design Center); Salmonson, J.C. (Special Projects)

    1990-01-01

    Critical Heat Flux (CHF), also called burnout, is one of the major design limits for water-cooled divertors in tokamaks. Another important design issue is the correct thermal modeling of the divertor plate geometry where heat is applied to only one side of the plate and highly subcooled flow boiling in internal passages is used for heat removal. This paper discusses analytical techniques developed to address these design issues, and the experimental evidence gathered in support of the approach. Typical water-cooled divertor designs for the International Thermonuclear Experimental Reactor (ITER) are analyzed, and design margins estimated. Peaking of the heat flux at the tube-water boundary is shown to be an important issue, and design concerns which could lead to imposing large design safety margins are identified. The use of flow enhancement techniques such as internal twisted tapes and fins are discussed, and some estimates of the gains in the design margin are presented. Finally, unresolved issues and concerns regarding hydraulic design of divertors are summarized, and some experiments which could help the ITER final design process identified. 23 refs., 10 figs.

  13. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.

  14. Development of the Neuron Assessment for Measuring Biology Students’ Use of Experimental Design Concepts and Representations

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students’ competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new “experimentation assessments,” 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  15. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy J

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new "experimentation assessments," 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines.

  16. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors: a formulation factor x₁ (the amount of magnesium stearate) and a processing factor x₂ (mixing time). Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The optimal settings for gelation were 0.46 g of magnesium stearate with a 2.76 min mixing time for a 100 tablet batch and 1.54 g with 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g with 7.99 min for a 100 tablet batch and 1.54 g with 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could thus be selected according to the desired hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence for obtaining optimum formulations in a systematic and reliable manner.

  17. Statistical evaluation and experimental design of a psoriasis xenograft transplantation model treated with cyclosporin A.

    PubMed

    Stenderup, Karin; Rosada, Cecilia; Alifrangis, Lene; Andersen, Søren; Dam, Tomas Norman

    2011-05-01

    Psoriasis xenograft transplantation models where human skin is transplanted onto immune-deficient mice are generally accepted in psoriasis research. Over the last decade, they have been widely employed to screen for new therapeutics with a potential anti-psoriatic effect. However, experimental designs differ in several parameters. Especially, the number of donors and grafts per experimental design varies greatly; numbers that are directly related to the probability of detecting statistically significant drug effects. In this study, we performed a statistical evaluation of the effect of cyclosporine A, a recognized anti-psoriatic drug, to generate a statistical model employable to simulate different scenarios of experimental designs and to calculate the associated statistical study power, defined as the probability of detecting a statistically significant anti-psoriatic drug treatment effect. Results showed that to achieve a study power of 0.8, at least 20 grafts per treatment group and a minimum of five donors should be included in the chosen experimental setting. To our knowledge, this is the first time that study power calculations have been performed to evaluate treatment effects in a psoriasis xenograft transplantation model. This study was based on a defined experimental protocol, thus other parameters such as drug potency, treatment protocol, mouse strain and graft size should, also, be taken into account when designing an experiment. We propose that the results obtained in this study may lend a more quantitative support to the validity of results obtained when exploring new potential anti-psoriatic drug effects.

  18. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel

    PubMed Central

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-01-01

    In this Data in Brief paper, a central composite experimental design was planned to collect the surface roughness of an end milling operation on AISI 1045 steel. The surface roughness values are expected to vary due to the action of several factors. The main objective here was to present a multivariate experimental design and data collection, including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explore the influence of noise factors on process performance. PMID:26909374
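
    As a minimal sketch of the general type of design used here (not the authors' actual factor levels), the coded runs of a two-factor central composite design can be assembled from factorial, axial and center points as follows.

    ```python
    # Construct coded levels for a two-factor central composite design:
    # 2^2 factorial points, axial (star) points at +/- alpha, and center runs.
    import itertools
    import numpy as np

    n_factors = 2
    alpha = np.sqrt(n_factors)        # rotatable choice of alpha for two factors
    n_center = 3                      # number of center-point replicates (assumed)

    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
    axial = np.vstack([a * np.eye(n_factors) for a in (alpha, -alpha)])
    center = np.zeros((n_center, n_factors))

    design = np.vstack([factorial, axial, center])
    print(design)   # each row is one run in coded units (e.g., feed rate, depth of cut)
    ```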

  19. The effectiveness of family planning programs evaluated with true experimental designs.

    PubMed Central

    Bauman, K E

    1997-01-01

    OBJECTIVES: This paper describes the magnitude of effects for family planning programs evaluated with true experimental designs. METHODS: Studies that used true experimental designs to evaluate family planning programs were identified and their results subjected to meta-analysis. RESULTS: For the 14 studies with the information needed to calculate effect size, the Pearson r between program and effect variables ranged from -.08 to .09 and averaged .08. CONCLUSIONS: The programs evaluated in the studies considered have had, on average, smaller effects than many would assume and desire. PMID:9146451

  20. Design of Experimental Data Publishing Software for Neutral Beam Injector on EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Zhang, Xiaodan; Wu, Deyun

    2015-02-01

    Neutral Beam Injection (NBI) is one of the most effective means of plasma heating. Experimental Data Publishing Software (EDPS) was developed to publish experimental data so that the NBI system can be monitored remotely. In this paper, the architecture and implementation of EDPS, including the design of the communication module and the web page display module, are presented. EDPS is based on the Browser/Server (B/S) model and runs under the Linux operating system. Using the data source and communication mechanism of the NBI Control System (NBICS), EDPS publishes experimental data on the Internet.

  1. Film analysis of activated sludge microbial discs by the Taguchi method and grey relational analysis.

    PubMed

    Chen, M Y; Syu, M J

    2003-12-01

    A biofilm model with substrate inhibition is proposed for the activated sludge growing discs of rotating biological contactor (RBC); this model is different from the steady-state biofilm model based on the Monod assumption. Both deep and shallow types of biofilms are examined and discussed. The biofilm models based on both Monod and substrate inhibition (Haldane) assumptions are compared. In addition, the relationships between substrate utilization rate, biofilm thickness, and liquid phase substrate concentration are discussed. The influence order of the factors that affect the biofilm thickness is studied and discussed by combining the Taguchi method and grey relational analysis. In this work, a Taguchi orthogonal table is used to construct the series that is needed for grey relational analysis to determine the influence priority of the four parameters S(B), kX(f), K(s), and K(i).
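
    A small sketch contrasting the two rate laws the competing biofilm models rest on, the Monod form and the substrate-inhibition (Haldane) form; the kinetic constants below are placeholders rather than the paper's fitted values.

    ```python
    # Specific utilization rate under the Monod and Haldane (substrate
    # inhibition) assumptions; parameter values are placeholders.
    import numpy as np

    def monod(s, mu_max, k_s):
        return mu_max * s / (k_s + s)

    def haldane(s, mu_max, k_s, k_i):
        return mu_max * s / (k_s + s + s ** 2 / k_i)

    s = np.linspace(0.0, 50.0, 6)          # liquid-phase substrate concentration
    print("S      :", s)
    print("Monod  :", np.round(monod(s, mu_max=1.0, k_s=5.0), 3))
    print("Haldane:", np.round(haldane(s, mu_max=1.0, k_s=5.0, k_i=20.0), 3))
    # The Monod rate rises monotonically toward mu_max, whereas the Haldane form
    # peaks and then declines as inhibition dominates at high substrate levels.
    ```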

  2. Taguchi optimization: Case study of gold recovery from amalgamation tailing by using froth flotation method

    NASA Astrophysics Data System (ADS)

    Sudibyo, Aji, B. B.; Sumardi, S.; Mufakir, F. R.; Junaidi, A.; Nurjaman, F.; Karna, Aziza, Aulia

    2017-01-01

    The gold amalgamation process has been widely used to treat gold ore. This process produces a tailing, or amalgamation solid waste, which still contains gold at 8-9 ppm. Froth flotation is one of the promising methods to beneficiate gold from this tailing. However, the process requires optimal conditions, which depend on the type of raw material. In this study, the Taguchi method was used to determine the optimum conditions of the froth flotation process. The Taguchi optimization shows that gold recovery was most strongly influenced by particle size, with the best result at 150 mesh, followed by the potassium amyl xanthate concentration, pH and pine oil concentration at 1133.98, 4535.92 and 68.04 gr/ton amalgamation tailing, respectively.
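
    A minimal sketch of the larger-is-better signal-to-noise analysis commonly paired with a Taguchi L9(3^4) array for a recovery-type response such as this one; the orthogonal array is the standard L9, while the recovery values and the factor-to-column assignment are made up for illustration.

    ```python
    # Larger-is-better S/N analysis for a Taguchi L9(3^4) array.
    # The orthogonal array is standard; the recovery values are illustrative only.
    import numpy as np

    # Standard L9 array: 9 runs x 4 factors, levels coded 0, 1, 2.
    L9 = np.array([
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 2, 2, 2],
        [1, 0, 1, 2],
        [1, 1, 2, 0],
        [1, 2, 0, 1],
        [2, 0, 2, 1],
        [2, 1, 0, 2],
        [2, 2, 1, 0],
    ])
    recovery = np.array([55.0, 61.0, 58.0, 70.0, 74.0, 68.0, 81.0, 79.0, 84.0])  # % (made up)

    # Larger-is-better signal-to-noise ratio for a single replicate per run.
    sn = -10.0 * np.log10(1.0 / recovery ** 2)

    for factor in range(L9.shape[1]):
        means = [sn[L9[:, factor] == level].mean() for level in (0, 1, 2)]
        best = int(np.argmax(means))
        print(f"Factor {factor}: mean S/N by level = {np.round(means, 2)}, best level = {best}")
    # The factor with the largest spread between level means has the strongest
    # influence; the best level of each factor defines the predicted optimum.
    ```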

  3. Optimization of Physical Working Environment Setting to Improve Productivity and Minimize Error by Taguchi and VIKOR Methods

    NASA Astrophysics Data System (ADS)

    Ilma Rahmillah, Fety

    2016-01-01

    The working environment is one factor that contributes to worker performance, especially for continuous and monotonous work. An L9 Taguchi design is used as the inner array for the experiment, which was carried out in a laboratory, whereas an L4 design is used as the outer array. Four control variables, each with three levels, are used to find the optimal combination of working environment settings, and four responses are measured to assess the effects of the four control factors. ANOVA results show that the effects of illumination, temperature, and instrumental music on the number of outputs, the number of errors, and the rating of perceived discomfort are significant, with total variance explained of 54.67%, 60.67%, and 75.22%, respectively. The VIKOR method yields the optimal combination in experiment 66, with the setting condition A3-B2-C1-D3: illumination of 325-350 lux, temperature of 24-26°C, the fast category of instrumental music, and a music intensity of 70-80 dB.

  4. Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm.

    PubMed

    Tsai, Jinn-Tsong; Chou, Jyh-Horng; Liu, Tung-Kuan

    2006-01-01

    In this paper, a hybrid Taguchi-genetic algorithm (HTGA) is applied to solve the problem of tuning both network structure and parameters of a feedforward neural network. The HTGA approach is a method of combining the traditional genetic algorithm (TGA), which has a powerful global exploration capability, with the Taguchi method, which can exploit the optimum offspring. The Taguchi method is inserted between crossover and mutation operations of a TGA. Then, the systematic reasoning ability of the Taguchi method is incorporated in the crossover operations to select the better genes to achieve crossover, and consequently enhance the genetic algorithms. Therefore, the HTGA approach can be more robust, statistically sound, and quickly convergent. First, the authors evaluate the performance of the presented HTGA approach by studying some global numerical optimization problems. Then, the presented HTGA approach is effectively applied to solve three examples on forecasting the sunspot numbers, tuning the associative memory, and solving the XOR problem. The numbers of hidden nodes and the links of the feedforward neural network are chosen by increasing them from small numbers until the learning performance is good enough. As a result, a partially connected feedforward neural network can be obtained after tuning. This implies that the cost of implementation of the neural network can be reduced. In these studied problems of tuning both network structure and parameters of a feedforward neural network, there are many parameters and numerous local optima so that these studied problems are challenging enough for evaluating the performances of any proposed GA-based approaches. The computational experiments show that the presented HTGA approach can obtain better results than the existing method reported recently in the literature.
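
    A hedged sketch of a Taguchi-style step of the kind described above, inserted between crossover and mutation: each gene position of two parents is treated as a two-level factor in the standard L8(2^7) orthogonal array, and the level with the better mean signal-to-noise ratio is kept in the offspring. The chromosome length and the sphere-function objective are illustrative assumptions, not the paper's test problems.

    ```python
    # Sketch of a Taguchi-style crossover for two parent chromosomes of length 7,
    # using the standard L8(2^7) orthogonal array. The sphere function (to be
    # minimized) stands in for any problem-specific objective.
    import numpy as np

    L8 = np.array([          # standard two-level L8 array, levels coded 0 and 1
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 1, 1, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [0, 1, 1, 1, 1, 0, 0],
        [1, 0, 1, 0, 1, 0, 1],
        [1, 0, 1, 1, 0, 1, 0],
        [1, 1, 0, 0, 1, 1, 0],
        [1, 1, 0, 1, 0, 0, 1],
    ])

    def fitness(x):
        return np.sum(x ** 2)               # smaller is better

    def taguchi_crossover(parent1, parent2):
        parents = np.vstack([parent1, parent2])
        # Evaluate the trial chromosomes defined by the orthogonal array rows.
        trials = np.array([[parents[L8[r, g], g] for g in range(7)] for r in range(8)])
        y = np.array([fitness(t) for t in trials])
        sn = -10.0 * np.log10(y ** 2 + 1e-12)   # smaller-is-better S/N ratio
        # For each gene, keep the level (parent) with the larger mean S/N.
        child = np.empty(7)
        for g in range(7):
            sn0 = sn[L8[:, g] == 0].mean()
            sn1 = sn[L8[:, g] == 1].mean()
            child[g] = parents[0, g] if sn0 >= sn1 else parents[1, g]
        return child

    p1 = np.array([0.9, -0.2, 0.5, 1.4, -0.7, 0.3, 0.8])
    p2 = np.array([0.1, 0.6, -0.9, 0.2, 0.4, -1.1, 0.05])
    child = taguchi_crossover(p1, p2)
    print(child, fitness(child))   # the child is at least as fit as a random mix
    ```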

  5. Chemometric experimental design based optimization techniques in capillary electrophoresis: a critical review of modern applications.

    PubMed

    Hanrahan, Grady; Montes, Ruthy; Gomez, Frank A

    2008-01-01

    A critical review of recent developments in the use of chemometric experimental design based optimization techniques in capillary electrophoresis applications is presented. Current advances have led to enhanced separation capabilities of a wide range of analytes in such areas as biological, environmental, food technology, pharmaceutical, and medical analysis. Significant developments in design, detection methodology and applications from the last 5 years (2002-2007) are reported. Furthermore, future perspectives in the use of chemometric methodology in capillary electrophoresis are considered.

  6. Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot

    DTIC Science & Technology

    2015-04-01

    …A multi-segment magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative motion connect front and rear driving modules, so the robot can twist and turn. This permits its navigation over surface discontinuities while…

  7. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
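
    A small sketch of the kind of comparison such a program makes: for a straight-line calibration with homoscedastic errors, the half-width of the confidence band for the fitted response follows from (X'X)^-1, so two allocations of the same number of standards can be compared directly. The concentration levels and error level below are illustrative.

    ```python
    # Compare the precision of two straight-line calibration designs by the
    # half-width of the confidence band for the fitted response (illustrative
    # concentration levels; same number of standards in each design).
    import numpy as np
    from scipy import stats

    def band_halfwidth(conc_levels, x_eval, sigma=1.0, alpha=0.05):
        X = np.column_stack([np.ones(len(conc_levels)), conc_levels])
        xtx_inv = np.linalg.inv(X.T @ X)
        t = stats.t.ppf(1 - alpha / 2, df=len(conc_levels) - 2)
        x0 = np.column_stack([np.ones(len(x_eval)), x_eval])
        var = np.einsum("ij,jk,ik->i", x0, xtx_inv, x0) * sigma ** 2
        return t * np.sqrt(var)

    x_eval = np.linspace(0, 10, 5)
    design_even = np.repeat(np.linspace(0, 10, 6), 2)      # 12 standards, evenly spaced
    design_ends = np.repeat([0.0, 5.0, 10.0], 4)           # 12 standards at 3 levels

    print("evenly spaced:", np.round(band_halfwidth(design_even, x_eval), 3))
    print("three levels :", np.round(band_halfwidth(design_ends, x_eval), 3))
    # Concentrating standards toward the extremes narrows the band for a purely
    # linear model, at the cost of less ability to check for curvature.
    ```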

  8. Using R in experimental design with BIBD: An application in health sciences

    NASA Astrophysics Data System (ADS)

    Oliveira, Teresa A.; Francisco, Carla; Oliveira, Amílcar; Ferreira, Agostinho

    2016-06-01

    When implementing an experimental design in any field, the experimenter must pay particular attention to, and look for the best strategies in, the following steps: planning the design selection, conducting the experiments, collecting the observed data, and proceeding to analysis and interpretation of results. The focus is on providing both a deep understanding of the problem under research and a powerful experimental process at reduced cost. Mainly because it allows separation of variation sources, the importance of experimental design in the health sciences has long been strongly recognized. Particular attention has been devoted to block designs and, more precisely, to balanced incomplete block designs; in this case the relevance stems from the fact that these designs allow simultaneous testing of a number of treatments larger than the block size. Our example refers to a possible study of inter-rater reliability in Parkinson's disease, using the UPDRS (Unified Parkinson's Disease Rating Scale) to test whether there are significant differences between the specialists who evaluate the patients' performances. Statistical studies on this disease were described, for example, in Richards et al. (1994), where the authors investigate the inter-rater reliability of the Unified Parkinson's Disease Rating Scale Motor Examination. We consider a simulation of a practical situation in which the patients were observed by different specialists and the UPDRS assessment of the impact of Parkinson's disease on the patients was recorded. Assigning treatments to the subjects following a particular BIBD(9,24,8,3,2) structure, we illustrate that BIB designs can be used as a powerful tool to solve emerging problems in this area. Since a structure with repeated blocks allows some block contrasts to have minimum variance (see Oliveira et al., 2006), the design with cardinality 12 was selected for the example. R software was used for the computations.
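
    Although the computations above were done in R, the necessary parameter identities for the quoted design can be checked in a few lines (Python is used here for consistency with the other sketches in this listing); the conventional parameter order BIBD(v, b, r, k, λ) is assumed.

    ```python
    # Check the necessary parameter identities for the BIBD(v, b, r, k, lambda)
    # quoted above, BIBD(9, 24, 8, 3, 2): every treatment appears r times in
    # blocks of size k, and every pair of treatments appears together lambda times.
    v, b, r, k, lam = 9, 24, 8, 3, 2

    assert b * k == v * r, "each of the b blocks holds k plots; each of v treatments is replicated r times"
    assert lam * (v - 1) == r * (k - 1), "pairwise balance condition"
    print(f"BIBD({v},{b},{r},{k},{lam}) satisfies b*k = v*r = {b*k} and "
          f"lambda*(v-1) = r*(k-1) = {lam*(v-1)}")
    ```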

  9. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-15

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.

  10. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    PubMed Central

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-01-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765

  11. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.; Crum, Jarrod V.

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer

  12. Optimization of parameters by Taguchi method for controlling purity of carbon nanotubes in chemical vapour deposition technique.

    PubMed

    Dasgupta, K; Sen, D; Mazumder, S; Basak, C B; Joshi, J B; Banerjee, S

    2010-06-01

    The process parameters (viz. temperature of synthesis, type of catalyst, concentration of catalyst and type of catalyst-support material) for controlling the purity of carbon nanotubes synthesized by catalytic chemical vapour deposition of acetylene have been optimized by analyzing the experimental results using the Taguchi method. It has been observed that the catalyst-support material has the maximum effect (59.4%) and the temperature of synthesis the minimum effect (2.1%) on the purity of the nanotubes. At the optimum condition (15% ferrocene supported on carbon black at a synthesis temperature of 700 degrees C) the purity of the nanotubes was found to be 96.2% with a yield of 1900%. Thermogravimetry has been used to assess the purity of the nanotubes. These nanotubes have been further characterized by scanning electron microscopy, transmission electron microscopy and Raman spectroscopy. Small angle neutron scattering has been used to determine their average inner and outer diameters using an appropriate model. The nanotubes are well crystallized but with a wide range of diameters, varying between 20 and 150 nm.

  13. Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    Schlueter, Mark A.; D'Costa, Allison R.

    2013-01-01

    Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the…

  14. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  15. A Course on Experimental Design for Different University Specialties: Experiences and Changes over a Decade

    ERIC Educational Resources Information Center

    Martinez Luaces, Victor; Velazquez, Blanca; Dee, Valerie

    2009-01-01

    We analyse the origin and development of an Experimental Design course which has been taught in several faculties of the Universidad de la Republica and other institutions in Uruguay, over a 10-year period. At the end of the course, students were assessed by carrying out individual work projects on real-life problems, which was innovative for…

  16. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates, for the first time, the combined application of mixture experimental design and artificial neural networks (ANNs) in solid dispersion (SD) development. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using a d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, with the ANN model exhibiting better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged.

  17. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  18. Whither Instructional Design and Teacher Training? The Need for Experimental Research

    ERIC Educational Resources Information Center

    Gropper, George L.

    2015-01-01

    This article takes a contrarian position: an "instructional design" or "teacher training" model, because of the sheer number of its interconnected parameters, is too complex to assess or to compare with other models. Models may not be the way to go just yet. This article recommends instead prior experimental research on limited…

  19. Teaching in Cyberspace: Online versus Traditional Instruction Using a Waiting-List Experimental Design

    ERIC Educational Resources Information Center

    Poirier, Christopher R.; Feldman, Robert S.

    2004-01-01

    To test the effectiveness of an online introductory psychology course, we randomly assigned students to a large, traditional course or to an online course from a population of students who indicated that either course type was acceptable using a "waiting list" experimental design. Students in the online course performed better on exams and equally…

  20. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  1. Guided Inquiry in a Biochemistry Laboratory Course Improves Experimental Design Ability

    ERIC Educational Resources Information Center

    Goodey, Nina M.; Talgar, Cigdem P.

    2016-01-01

    Many biochemistry laboratory courses expose students to laboratory techniques through pre-determined experiments in which students follow stepwise protocols provided by the instructor. This approach fails to provide students with sufficient opportunities to practice experimental design and critical thinking. Ten inquiry modules were created for a…

  2. Experimental design applied to the formulation of lipsticks with particular features.

    PubMed

    Zanotti, F; Masiello, S; Bader, S; Guarneri, M; Vojnovic, D

    1998-08-01

    In our work, a non-classical experimental design was applied to obtain lipsticks endowed with particular characteristics. Our aim was to formulate lipsticks that leave a brilliant and shiny colour on application and have a transparent look. The emollient substances and the waxes (consistency factors) were identified as the main variables of the system. A two-phase experimental strategy was devised: the optimal quantities of consistency factors were selected using a Doehlert experimental matrix, whereas the correct mixtures of emollients were determined using a Scheffé simplex-centroid design. These two designs were combined and a set of 49 experiments was obtained. The experiments carried out allowed the definition of a two-phase zone in which the objectives were attained: the correct types and appropriate quantities of emollients and waxes were determined. To find a possible correlation between some mixtures and the lipsticks' sensorial behaviour, differential scanning calorimetry was used. These results, in addition to those obtained using the experimental design, allowed us to select the best lipstick formula.
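
    As a brief sketch of one half of this strategy, a Scheffé simplex-centroid design for q mixture components consists of all blends in which 1, 2, …, q components appear in equal proportions; for the three-component case it yields seven blends. The component labels are hypothetical.

    ```python
    # Generate a Scheffe simplex-centroid design for q = 3 mixture components
    # (e.g., three emollients; labels are hypothetical). The design contains all
    # blends in which 1, 2, ..., q components are present in equal proportions.
    from itertools import combinations

    def simplex_centroid(q):
        points = []
        for size in range(1, q + 1):
            for subset in combinations(range(q), size):
                point = [0.0] * q
                for idx in subset:
                    point[idx] = 1.0 / size
                points.append(tuple(point))
        return points

    for blend in simplex_centroid(3):
        print(blend)   # 7 blends: 3 pure components, 3 binary mixtures, 1 centroid
    ```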

  3. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    USGS Publications Warehouse

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish-per container when estimating tagging mortality. We also investigated effects of control fish survival and how among container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among container variability in control fish mortality exceeded 10% of the mean. We provided programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
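
    A hedged sketch of the kind of trade-off simulation described above (not the authors' published code): the same total number of fish per group is split either into many containers with few fish or few containers with many fish, container-level mortality is allowed to vary, and the power to detect an additive treatment mortality is estimated. All rates, variability and sample sizes are illustrative.

    ```python
    # Simulate power to detect additive post-release (treatment) mortality in a
    # containment study, for two allocations of the same total number of fish.
    # Rates, variability, and sample sizes are illustrative only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def simulate_power(n_containers, fish_per, p_control=0.10, additive=0.10,
                       container_sd=0.03, n_sims=2000, alpha=0.05):
        rejections = 0
        for _ in range(n_sims):
            # Container-level mortality probabilities (among-container variability).
            p_c = np.clip(rng.normal(p_control, container_sd, n_containers), 0, 1)
            p_t = np.clip(rng.normal(p_control + additive, container_sd, n_containers), 0, 1)
            dead_c = rng.binomial(fish_per, p_c) / fish_per     # per-container proportions
            dead_t = rng.binomial(fish_per, p_t) / fish_per
            _, p_val = stats.ttest_ind(dead_t, dead_c)
            rejections += p_val < alpha
        return rejections / n_sims

    # Same total of 200 fish per group, allocated two different ways.
    print("20 containers x 10 fish:", simulate_power(20, 10))
    print(" 5 containers x 40 fish:", simulate_power(5, 40))
    ```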

  4. EXPERIMENTAL PROGRAM IN ENGINEERING AND DESIGN DATA PROCESSING TECHNOLOGY. FINAL REPORT.

    ERIC Educational Resources Information Center

    KOHR, RICHARD L.; WOLFE, GEORGE P.

    AN EXPERIMENTAL PROGRAM IN ENGINEERING AND DESIGN DATA PROCESSING TECHNOLOGY WAS UNDERTAKEN TO DEVELOP A PROPOSED CURRICULUM OUTLINE AND ADMISSION STANDARDS FOR OTHER INSTITUTIONS IN THE PLANNING OF PROGRAMS TO TRAIN COMPUTER PROGRAMMERS. OF THE FIRST CLASS OF 26 STUDENTS, 17 COMPLETED THE PROGRAM AND 12 (INCLUDING ONE WHO DID NOT GRADUATE) WERE…

  5. Building upon the Experimental Design in Media Violence Research: The Importance of Including Receiver Interpretations.

    ERIC Educational Resources Information Center

    Potter, W. James; Tomasello, Tami K.

    2003-01-01

    Argues that the inclusion of viewer interpretation variables in experimental design and analysis procedures can greatly increase the methodology's ability to explain variance. Focuses attention on the between-group differences, while an analysis of how individual participants interpret the cues in the stimulus material focused attention on the…

  6. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  7. SELF-INSTRUCTIONAL SUPPLEMENTS FOR A TELEVISED PHYSICS COURSE, STUDY PLAN AND EXPERIMENTAL DESIGN.

    ERIC Educational Resources Information Center

    KLAUS, DAVID J.; LUMSDAINE, ARTHUR A.

    THE INITIAL PHASES OF A STUDY OF SELF-INSTRUCTIONAL AIDS FOR A TELEVISED PHYSICS COURSE WERE DESCRIBED. THE APPROACH, EXPERIMENTAL DESIGN, PROCEDURE, AND TECHNICAL ASPECTS OF THE STUDY PLAN WERE INCLUDED. THE MATERIALS WERE PREPARED TO SUPPLEMENT THE SECOND SEMESTER OF HIGH SCHOOL PHYSICS. THE MATERIAL COVERED STATIC AND CURRENT ELECTRICITY,…

  8. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not…

  9. Quiet Clean Short-haul Experimental Engine (QCSEE) Over The Wing (OTW) design report

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The design, fabrication, and testing of two experimental high bypass geared turbofan engines and propulsion systems for short haul passenger aircraft are described. The propulsion technology required for future externally blown flap aircraft with engines located both under the wing and over the wing is demonstrated. Composite structures and digital engine controls are among the topics included.

  10. An Experimental Two-Way Video Teletraining System: Design, Development and Evaluation.

    ERIC Educational Resources Information Center

    Simpson, Henry; And Others

    1991-01-01

    Describes the design, development, and evaluation of an experimental two-way video teletraining (VTT) system by the Navy that consisted of two classrooms linked by a land line to enable two-way audio/video communication. Trends in communication and computer technology for training are described, and a cost analysis is included. (12 references)…

  11. Reduction of Error by Matching Subjects in the Two-Group Experimental Design.

    ERIC Educational Resources Information Center

    Swank, Paul; Schmid, John

    1978-01-01

    McNemar's discussion of error reduction in a mixed two-group design fails to provide an algebraic presentation of this error reduction. This paper presents not only the algebraic development of the error term but shows that under certain conditions mixing may increase experimental error. (Author)

  12. Bayesian experimental design for identification of model propositions and conceptual model uncertainty reduction

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2015-09-01

    The lack of hydrogeological data and knowledge often results in different propositions (or alternatives) to represent uncertain model components and creates many candidate groundwater models using the same data. Uncertainty of groundwater head prediction may become unnecessarily high. This study introduces an experimental design to identify propositions in each uncertain model component and decrease the prediction uncertainty by reducing conceptual model uncertainty. A discrimination criterion is developed based on posterior model probability that directly uses data to evaluate model importance. Bayesian model averaging (BMA) is used to predict future observation data. The experimental design aims to find the optimal number and location of future observations and the number of sampling rounds such that the desired discrimination criterion is met. Hierarchical Bayesian model averaging (HBMA) is adopted to assess if highly probable propositions can be identified and the conceptual model uncertainty can be reduced by the experimental design. The experimental design is implemented to a groundwater study in the Baton Rouge area, Louisiana. We design a new groundwater head observation network based on existing USGS observation wells. The sources of uncertainty that create multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. All possible design solutions are enumerated using a multi-core supercomputer. Several design solutions are found to achieve an 80%-identifiable groundwater model in 5 years by using six or more existing USGS wells. The HBMA result shows that each highly probable proposition can be identified for each uncertain model component once the discrimination criterion is achieved. The variances of groundwater head predictions are significantly decreased by reducing posterior model probabilities of unimportant propositions.
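
    A minimal sketch of the Bayesian model averaging step that the discrimination criterion builds on: posterior model probabilities follow from prior probabilities and the likelihood of the observed data under each candidate model, and the BMA prediction combines the models' predictions with those weights, splitting the predictive variance into within- and between-model parts. The numbers below are purely illustrative.

    ```python
    # Posterior model probabilities and a BMA prediction for three hypothetical
    # candidate groundwater models; likelihood values and predictions are made up.
    import numpy as np

    priors = np.array([1 / 3, 1 / 3, 1 / 3])            # equal prior model probabilities
    likelihoods = np.array([2.0e-4, 5.0e-5, 8.0e-4])    # p(data | model), illustrative
    posterior = priors * likelihoods
    posterior /= posterior.sum()

    pred_mean = np.array([12.3, 11.8, 12.9])            # each model's head prediction (m)
    pred_var = np.array([0.20, 0.35, 0.15])             # each model's prediction variance

    bma_mean = np.sum(posterior * pred_mean)
    within = np.sum(posterior * pred_var)
    between = np.sum(posterior * (pred_mean - bma_mean) ** 2)

    print("posterior model probabilities:", np.round(posterior, 3))
    print(f"BMA mean = {bma_mean:.2f} m, variance = {within + between:.3f} "
          "(within-model + between-model)")
    # Reducing the posterior probability of unimportant propositions shrinks the
    # between-model term, which is how the design reduces prediction uncertainty.
    ```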

  13. Intermediate experimental vehicle, ESA program aerodynamics-aerothermodynamics key technologies for spacecraft design and successful flight

    NASA Astrophysics Data System (ADS)

    Dutheil, Sylvain; Pibarot, Julien; Tran, Dac; Vallee, Jean-Jacques; Tribot, Jean-Pierre

    2016-07-01

    With the aim of placing Europe among the world's space players in the strategic area of atmospheric re-entry, several studies on experimental vehicle concepts and improvements of critical re-entry technologies have paved the way for the flight of an experimental spacecraft. The successful flight of the Intermediate eXperimental Vehicle (IXV), under ESA's Future Launchers Preparatory Programme (FLPP), is definitively a significant step forward from the Atmospheric Reentry Demonstrator flight (1998), establishing Europe as a key player in this field. The IXV project objectives were the design, development, manufacture and ground and flight verification of an autonomous European lifting and aerodynamically controlled reentry system, which is highly flexible and maneuverable. The paper presents the role of aerodynamics and aerothermodynamics as part of the key technologies for designing an atmospheric re-entry spacecraft and securing a successful flight.

  14. Design and structural verification of locomotive bogies using combined analytical and experimental methods

    NASA Astrophysics Data System (ADS)

    Manea, I.; Popa, G.; Girnita, I.; Prenta, G.

    2015-11-01

    The paper presents a practical methodology for the design and structural verification of locomotive bogie frames using a modern software package for design, structural verification and validation through combined analytical and experimental methods. In the initial stage, the bogie geometry is imported from a CAD program into a finite element analysis program such as Ansys. The analytical model is validated by experimental modal analysis carried out on a finished bogie frame. The natural frequencies and mode shapes of the bogie frame are determined by both experimental and analytical methods, and a correlation analysis of the two types of models is performed. If the results are unsatisfactory, structural optimization should be performed. If the results are satisfactory, qualification proceeds with static and fatigue tests carried out in a laboratory with international accreditation in the field. This paper presents an application to bogie frames for the 6000 kW LEMA electric locomotive.

  15. Experimental system design for the integration of trapped-ion and superconducting qubit systems

    NASA Astrophysics Data System (ADS)

    De Motte, D.; Grounds, A. R.; Rehák, M.; Rodriguez Blanco, A.; Lekitsch, B.; Giri, G. S.; Neilinger, P.; Oelsner, G.; Il'ichev, E.; Grajcar, M.; Hensinger, W. K.

    2016-12-01

    We present a design for the experimental integration of ion trapping and superconducting qubit systems as a step towards the realization of a quantum hybrid system. The scheme addresses two key difficulties in realizing such a system: a combined microfabricated ion trap and superconducting qubit architecture, and the experimental infrastructure to facilitate both technologies. Developing upon work by Kielpinski et al. (Phys Rev Lett 108(13):130504, 2012. doi: 10.1103/PhysRevLett.108.130504), we describe the design, simulation and fabrication process for a microfabricated ion trap capable of coupling an ion to a superconducting microwave LC circuit with a coupling strength in the tens of kHz. We also describe existing difficulties in combining the experimental infrastructure of an ion trapping set-up into a dilution refrigerator with superconducting qubits and present solutions that can be immediately implemented using current technology.

  16. Conceptual design of a fast-ion D-alpha diagnostic on experimental advanced superconducting tokamak

    SciTech Connect

    Huang, J.; Wan, B.; Hu, L.; Hu, C.; Heidbrink, W. W.; Zhu, Y.; Hellermann, M. G. von; Gao, W.; Wu, C.; Li, Y.; Fu, J.; Lyu, B.; Yu, Y.; Ye, M.; Shi, Y.

    2014-11-15

    To investigate the fast ion behavior, a fast ion D-alpha (FIDA) diagnostic system has been planned and is presently under development on Experimental Advanced Superconducting Tokamak. The greatest challenges for the design of a FIDA diagnostic are its extremely low intensity levels, which are usually significantly below the continuum radiation level and several orders of magnitude below the bulk-ion thermal charge-exchange feature. Moreover, an overlaying Motional Stark Effect (MSE) feature in exactly the same wavelength range can interfere. The simulation of spectra code is used here to guide the design and evaluate the diagnostic performance. The details for the parameters of design and hardware are presented.

  17. A multi-purpose SAIL demonstrator design and its principle experimental verification

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Yan, Aimin; Xu, Nan; Wang, Lijuan; Luan, Zhu; Sun, Jianfeng; Liu, Liren

    2009-08-01

    A fully 2-D synthetic aperture imaging ladar (SAIL) demonstrator is designed and being fabricated to experimentally investigate and theoretically analyze the beam diffraction properties, antenna function, imaging resolution and signal processing algorithm of SAIL. The design details of the multi-purpose SAIL demonstrator are given and, as the first phase, a laboratory-scaled SAIL system based on bulk optical elements has been built to verify the principle of design, which is similar in construction to the demonstrator but without the major antenna telescope. The system has the aperture diameter of about 1mm and the target distance of 3.2m.

  18. Marginal biotin deficiency can be induced experimentally in humans using a cost-effective outpatient design.

    PubMed

    Stratton, Shawna L; Henrich, Cindy L; Matthews, Nell I; Bogusiewicz, Anna; Dawson, Amanda M; Horvath, Thomas D; Owen, Suzanne N; Boysen, Gunnar; Moran, Jeffery H; Mock, Donald M

    2012-01-01

    To date, marginal, asymptomatic biotin deficiency has been successfully induced experimentally by the use of labor-intensive inpatient designs requiring rigorous dietary control. We sought to determine if marginal biotin deficiency could be induced in humans in a less expensive outpatient design incorporating a self-selected, mixed general diet. We sought to examine the efficacy of three outpatient study designs: two based on oral avidin dosing and one based on a diet high in undenatured egg white for a period of 28 d. In study design 1, participants (n = 4; 3 women) received avidin in capsules with a biotin binding capacity of 7 times the estimated dietary biotin intake of a typical self-selected diet. In study design 2, participants (n = 2; 2 women) received double the amount of avidin capsules (14 times the estimated dietary biotin intake). In study design 3, participants (n = 5; 3 women) consumed egg-white beverages containing avidin with a biotin binding capacity of 7 times the estimated dietary biotin intake. Established indices of biotin status [lymphocyte propionyl-CoA carboxylase activity; urinary excretion of 3-hydroxyisovaleric acid, 3-hydroxyisovaleryl carnitine (3HIA-carnitine), and biotin; and plasma concentration of 3HIA-carnitine] indicated that study designs 1 and 2 were not effective in inducing marginal biotin deficiency, but study design 3 was as effective as previous inpatient study designs that induced deficiency by egg-white beverage. Marginal biotin deficiency can be induced experimentally by using a cost-effective outpatient design by avidin delivery in egg-white beverages. This design should be useful to the broader nutritional research community.

  19. Designing specific protein-protein interactions using computation, experimental library screening, or integrated methods.

    PubMed

    Chen, T Scott; Keating, Amy E

    2012-07-01

    Given the importance of protein-protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity.

  20. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    NASA Astrophysics Data System (ADS)

    Girault, Isabelle; d'Ham, Cédric

    2014-08-01

    When solving a scientific problem through experimentation, students may have the responsibility of designing the experiment. When students work in a conventional setting, with paper and pencil, the procedures they design stay at a very general level. Additional scaffolds are needed to help students perform this complex task. We propose a computer environment (copex-chimie) with embedded scaffolds to help students design an experimental procedure. Pre-structuring the procedure, by requiring students to choose the actions of their procedure from pre-defined actions and to specify their parameters, forces students to face the complexity of the design. However, this is not sufficient for them to succeed; they look for feedback to improve their procedure and eventually abandon the task. In another condition, the students were provided with individualized feedback, generated by an artificial tutor, on the errors detected in their procedures. This feedback proved necessary for accompanying the students throughout their experimental design without their becoming discouraged. With this kind of scaffold, students worked longer and succeeded at the task at a higher rate than all the other students.

  1. Experimental validation of systematically designed acoustic hyperbolic meta material slab exhibiting negative refraction

    NASA Astrophysics Data System (ADS)

    Christiansen, Rasmus E.; Sigmund, Ole

    2016-09-01

    This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from the boundary layer effects—insight which can be utilized in the further design of the metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting the negative refractive behavior across a wide angular range provides lensing and collimating capabilities.

  2. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.

  3. Fertilizer Response Curves for Commercial Southern Forest Species Defined with an Un-Replicated Experimental Design.

    SciTech Connect

    Coleman, Mark; Aubrey, Doug; Coyle, David R.; Daniels, Richard F.

    2005-11-01

    There has been recent interest in the use of non-replicated regression experimental designs in forestry, as the need for replication in experimental design is burdensome on limited research budgets. We wanted to determine the interacting effects of soil moisture and nutrient availability on the production of various southeastern forest trees (two clones of Populus deltoides, open-pollinated Platanus occidentalis, Liquidambar styraciflua, and Pinus taeda). Additionally, we required an understanding of the fertilizer response curve. To accomplish both objectives we developed a composite design that includes a core ANOVA approach to consider treatment interactions, with the addition of non-replicated regression plots receiving a range of fertilizer levels for the primary irrigation treatment.

  4. Comment: Spurious Correlation and Other Observations on Experimental Design for Engineering Dimensional Analysis

    SciTech Connect

    Piepel, Gregory F.

    2013-08-01

    This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models. The paper proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. The concept of spurious correlation is also raised and discussed. Spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.

  5. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  6. Study and design of cryogenic propellant acquisition systems. Volume 2: Supporting experimental program

    NASA Technical Reports Server (NTRS)

    Burge, G. W.; Blackmon, J. B.

    1973-01-01

    Areas of cryogenic fuel systems were identified where critical experimental information was needed either to define design criteria or to establish the feasibility of a design concept or of a critical aspect of a particular design. Such data requirements fell into three broad categories: (1) basic surface tension screen characteristics; (2) screen acquisition device fabrication problems; and (3) screen surface tension device operational failure modes. To explore these problems and to establish design criteria where possible, extensive laboratory or bench-scale experiments were conducted. In general, these proved to be quite successful and, in many instances, the test results were directly used in the system design analyses and development. In some cases, particularly those relating to operational-type problems, areas requiring future research were identified, especially screen heat transfer and vibrational effects.

  7. A method of fast, sequential experimental design for linearized geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Coles, Darrell A.; Morgan, Frank Dale

    2009-07-01

    An algorithm for linear(ized) experimental design is developed for a determinant-based design objective function. This objective function is common in design theory and is used to design experiments that minimize the model entropy, a measure of posterior model uncertainty. Of primary significance in design problems is computational expediency. Several earlier papers have focused attention on posing design objective functions and opted to use global search methods for finding the critical points of these functions, but these algorithms are too slow to be practical. The proposed technique is distinguished primarily for its computational efficiency, which derives partly from a greedy optimization approach, termed sequential design. Computational efficiency is further enhanced through formulae for updating determinants and matrix inverses without need for direct calculation. The design approach is orders of magnitude faster than a genetic algorithm applied to the same design problem. However, greedy optimization often trades global optimality for increased computational speed; the ramifications of this tradeoff are discussed. The design methodology is demonstrated on a simple, single-borehole DC electrical resistivity problem. Designed surveys are compared with random and standard surveys, both with and without prior information. All surveys were compared with respect to a `relative quality' measure, the post-inversion model per cent rms error. The issue of design for inherently ill-posed inverse problems is considered and an approach for circumventing such problems is proposed. The design algorithm is also applied in an adaptive manner, with excellent results suggesting that smart, compact experiments can be designed in real time.
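
    A minimal sketch of the greedy, determinant-based selection the abstract describes, using the rank-one determinant update det(A + g g^T) = det(A) (1 + g^T A^-1 g) and the Sherman-Morrison inverse update so nothing is refactored at each step. The candidate Jacobian and all numbers are invented; this illustrates the technique, not the paper's code.

        import numpy as np

        def greedy_d_optimal(J_candidates, n_select, ridge=1e-8):
            """Greedily pick candidate observations (rows of the Jacobian) that maximize
            det(J_s^T J_s), using rank-one determinant and inverse updates."""
            n_cand, n_par = J_candidates.shape
            A_inv = np.eye(n_par) / ridge        # inverse of the regularized information matrix
            log_det = n_par * np.log(ridge)      # log-determinant of that matrix
            chosen, available = [], set(range(n_cand))
            for _ in range(n_select):
                # det(A + g g^T) = det(A) * (1 + g^T A^-1 g), so compare the log gains
                gains = {i: np.log1p(J_candidates[i] @ A_inv @ J_candidates[i]) for i in available}
                best = max(gains, key=gains.get)
                g = J_candidates[best]
                Ag = A_inv @ g
                A_inv -= np.outer(Ag, Ag) / (1.0 + g @ Ag)    # Sherman-Morrison inverse update
                log_det += gains[best]
                chosen.append(best)
                available.remove(best)
            return chosen, log_det

        # toy usage: 200 candidate measurements of a 4-parameter linearized model
        rng = np.random.default_rng(0)
        J = rng.normal(size=(200, 4))
        rows, ld = greedy_d_optimal(J, n_select=12)
        print(rows, ld)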

  8. Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Haller, Harold S.

    2009-01-01

    It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, for which a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four or five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
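
    As a rough illustration of how a large CFD data set and a small experimental subset can be merged into one predictive surface, the sketch below fits a quadratic response surface to the CFD cases and a low-order correction to the CFD-experiment discrepancy at the paired cases. The additive-correction form, the data and all names are assumptions for illustration, not the procedure or data of the study.

        import numpy as np

        def quad_features(x):
            """Quadratic response-surface basis [1, x_i, x_i*x_j] for design points x of shape (n, d)."""
            n, d = x.shape
            cols = [np.ones(n)] + [x[:, i] for i in range(d)]
            cols += [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
            return np.column_stack(cols)

        rng = np.random.default_rng(1)
        x_cfd = rng.uniform(-1, 1, size=(15, 2))                      # 15 hypothetical CFD cases
        y_cfd = 1.0 + 0.8 * x_cfd[:, 0] - 0.5 * x_cfd[:, 1] ** 2 + 0.05 * rng.normal(size=15)
        paired = np.arange(5)                                         # 5 cases also tested experimentally
        y_exp = y_cfd[paired] + 0.2 + 0.1 * x_cfd[paired, 0]          # systematic CFD/experiment offset

        beta_cfd, *_ = np.linalg.lstsq(quad_features(x_cfd), y_cfd, rcond=None)    # CFD response surface
        X_corr = np.column_stack([np.ones(len(paired)), x_cfd[paired]])            # linear correction terms
        resid = y_exp - quad_features(x_cfd[paired]) @ beta_cfd
        delta, *_ = np.linalg.lstsq(X_corr, resid, rcond=None)

        def predict(x_new):
            """CFD surface plus fitted correction: an 'experiment-like' prediction."""
            x_new = np.atleast_2d(x_new)
            return quad_features(x_new) @ beta_cfd + np.column_stack([np.ones(len(x_new)), x_new]) @ delta

        print(predict([[0.3, -0.2]]))      # prediction at a condition where no test data were taken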

  9. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands that designs impose on operators, to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments run during this phase to test its capabilities as a predictive task-demand modeling tool. Specifically, it includes discussions of the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and part of a complete human operator simulation, and a brief introduction to the TLM software design.

  10. Optical design and multi-objective optimization with fuzzy method for miniature zoom optics

    NASA Astrophysics Data System (ADS)

    Sun, Jung-Hung; Hsueh, Bo-Ren

    2011-07-01

    In this paper a thin L-type zoom lens design is proposed that exploits reflecting and refracting surfaces connected by a prism. In L-type designs, however, the modulation transfer function (MTF) is comparatively low relative to coaxial counterparts, and increasing the MTF tends to degrade the relative illuminance (RI). We propose a combination of the Taguchi method and a fuzzy approach to improve both the RI and the MTF in L-type zoom systems. The experimental values from the L9 orthogonal array of the Taguchi method were used as inputs to the fuzzy approach to obtain the MPCI value. The MPCI value was then subjected to analysis of variance, revealing that the two most significant factors were (1) the surface-7-to-image length and (2) the semi-aperture of the front element. With appropriate weighting of MTF and RI as inputs to the fuzzy controllers, the proposed method increased the MTF by 3.74% at the wide-angle end of the system while reducing the RI by only 0.13%.

  11. 2-[(Hydroxymethyl)amino]ethanol in water as a preservative: Study of formaldehyde released by Taguchi's method

    NASA Astrophysics Data System (ADS)

    Wisessirikul, W.; Loykulnant, S.; Montha, S.; Fhulua, T.; Prapainainar, P.

    2016-06-01

    This research studied the quantity of free formaldehyde released from 2-[(hydroxymethyl)amino]ethanol (HAE) in a DI water and natural rubber latex mixture, using high-performance liquid chromatography (HPLC). The quantity of formaldehyde retained in the solution was cross-checked by titration. The investigated factors were the concentration of the preservative (HAE), pH, and temperature. Taguchi's method was used to design the experiments: an orthogonal array (3 factors, 4 levels per factor) reduced the number of experiments from all possible combinations to 16. The Minitab program was used for the statistical calculations and for finding a suitable condition for the preservative system. The HPLC studies showed that higher temperature and higher preservative concentration increase the amount of formaldehyde released. The conditions under which the least formaldehyde was released were 1.6% w/v HAE, 4 to 40 °C, and the original pH. Nevertheless, the pH value of NR latex should be above 10 (a pH of 13 was found to be suitable). This preservative can replace current preservative systems and can maintain the quality of latex during long-term storage, and its use was also shown to reduce the environmental toxicity impact.
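
    A minimal sketch of the smaller-is-better signal-to-noise analysis that typically accompanies such an orthogonal-array experiment. The 16-run array below is a valid 3-factor, 4-level layout, but the level coding and the response values are invented for illustration and are not the study's data.

        import numpy as np

        # hypothetical 3-column excerpt of a 16-run, 4-level orthogonal array:
        # columns = HAE concentration level, pH level, temperature level
        runs = np.array([
            [0, 0, 0], [0, 1, 1], [0, 2, 2], [0, 3, 3],
            [1, 0, 1], [1, 1, 0], [1, 2, 3], [1, 3, 2],
            [2, 0, 2], [2, 1, 3], [2, 2, 0], [2, 3, 1],
            [3, 0, 3], [3, 1, 2], [3, 2, 1], [3, 3, 0],
        ])
        # invented responses: formaldehyde released per run (lower is better)
        y = np.array([1.2, 1.5, 2.1, 2.6, 1.4, 1.3, 2.9, 2.4,
                      1.9, 2.8, 1.6, 2.0, 2.7, 2.5, 2.2, 1.8])

        # smaller-is-better S/N ratio: -10 log10(mean(y^2)); one observation per run here
        sn = -10.0 * np.log10(y ** 2)

        # main effect of each factor = mean S/N at each of its levels; pick the level with the highest S/N
        for f, name in enumerate(["HAE %w/v", "pH", "temperature"]):
            means = [sn[runs[:, f] == lvl].mean() for lvl in range(4)]
            print(name, np.round(means, 2), "-> best level:", int(np.argmax(means)))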

  12. Aspects of experimental design for plant metabolomics experiments and guidelines for growth of plant material.

    PubMed

    Gibon, Yves; Rolin, Dominique

    2012-01-01

    Experiments involve the deliberate variation of one or more factors in order to provoke responses, the identification of which then provides the first step towards functional knowledge. Because environmental, biological, and/or technical noise is unavoidable, biological experiments usually need to be designed. Thus, once the major sources of experimental noise have been identified, individual samples can be grouped, randomised, and/or pooled. Like other 'omics approaches, metabolomics is characterised by the number of analytes largely exceeding the number of samples. While this unprecedented situation in biology dramatically increases false discovery, experimental error can nevertheless be decreased in plant metabolomics experiments. For this, each step from plant cultivation to data acquisition needs to be evaluated in order to identify the major sources of error, and then an appropriate design can be produced, as with any other experimental approach. The choice of technology, the time at which tissues are harvested, and the way metabolism is quenched also need to be taken into consideration, as they determine which metabolites can be studied. A further recommendation is to document data and metadata in a machine-readable way; the metadata should describe every aspect of the experiment. This should provide valuable hints for future experimental design and ultimately give metabolomic data a second life. To facilitate the identification of critical steps, a list of items to be considered before embarking on time-consuming and costly metabolomic experiments is proposed.

  13. Taking evolutionary circuit design from experimentation to implementation: some useful techniques and a silicon demonstration

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Guo, X.; Keymeulen, D.; Ferguson, M. I.; Duong, V.

    2004-01-01

    Current techniques in evolutionary synthesis of analogue and digital circuits designed at transistor level have focused on achieving the desired functional response, without paying sufficient attention to issues needed for a practical implementation of the resulting solution. No silicon fabrication of circuits with topologies designed by evolution had been done before, leaving open questions on the feasibility of the evolutionary circuit design approach, as well as on how high-performance, robust, or portable such designs could be when implemented in hardware. It is argued that moving from evolutionary 'design-for-experimentation' to 'design-for-implementation' requires, beyond including in the fitness function measures of circuit evaluation factors such as power consumption and robustness to temperature variations, the addition of certain evaluation techniques that are not common in conventional design. Several such techniques that were found to be useful in evolving designs for implementation are presented; some are general, and some are particular to the problem domain of transistor-level logic design, used here as a target application. The example used here is a multifunction NAND/NOR logic gate circuit, for which evolution obtained a creative circuit topology more compact than what has been achieved by multiplexing a NAND and a NOR gate. The circuit was fabricated in a 0.5 μm CMOS technology, and silicon tests showed good correspondence with the simulations.

  14. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  15. RNA-seq Data: Challenges in and Recommendations for Experimental Design and Analysis

    PubMed Central

    Williams, Alexander G.; Thomas, Sean; Wyman, Stacia K.; Holloway, Alisha K.

    2014-01-01

    RNA-seq is widely used to determine differential expression of genes or transcripts as well as identify novel transcripts, identify allele-specific expression, and precisely measure translation of transcripts. Thoughtful experimental design and choice of analysis tools are critical to ensure high quality data and interpretable results. Important considerations for experimental design include number of replicates, whether to collect paired-end or single-end reads, sequence length, and sequencing depth. Common analysis steps in all RNA-seq experiments include quality control, read alignment, assigning reads to genes or transcripts, and estimating gene or transcript abundance. Our aims are two-fold: to make recommendations for common components of experimental design and assess tool capabilities for each of these steps. We also test tools designed to detect differential expression since this is the most widespread use of RNA-seq. We hope these analyses will help guide those who are new to RNA-seq and will generate discussion about remaining needs for tool improvement and development. PMID:25271838

  16. A Bayesian active learning strategy for sequential experimental design in systems biology.

    PubMed

    Pauwels, Edouard; Lajaunie, Christian; Vert, Jean-Philippe

    2014-09-26

    Background: Dynamical models used in systems biology involve unknown kinetic parameters. Setting these parameters is a bottleneck in many modeling projects, which motivates their estimation from empirical data. However, this estimation problem has its own difficulties, the most important one being strong ill-conditioning. In this context, optimizing the experiments to be conducted in order to better estimate a system's parameters provides a promising direction for alleviating the difficulty of the task. Results: Borrowing ideas from Bayesian experimental design and active learning, we propose a new strategy for optimal experimental design in the context of kinetic parameter estimation in systems biology. We describe algorithmic choices that allow this method to be implemented in a computationally tractable and fully automatic way. Based on simulations, we show that it outperforms alternative baseline strategies, and we demonstrate the benefit of considering multiple posterior modes of the likelihood landscape, as opposed to traditional schemes based on local and Gaussian approximations. Conclusion: This analysis demonstrates that our new, fully automatic Bayesian optimal experimental design strategy has the potential to support the design of experiments for kinetic parameter estimation in systems biology.
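
    One common active-learning heuristic in this spirit, sketched under assumptions: keep an ensemble of parameter sets drawn from (or approximating) the posterior, possibly spanning several modes, and measure next where the ensemble's model predictions disagree most. The toy kinetic model and all numbers are invented; this is not the authors' algorithm.

        import numpy as np

        def simulate(theta, t):
            """Toy kinetic model: first-order decay with unknown amplitude and rate."""
            a, k = theta
            return a * np.exp(-k * t)

        def next_experiment(posterior_samples, candidate_times):
            """Pick the measurement time where posterior predictions disagree most
            (maximum predictive variance across the parameter ensemble)."""
            preds = np.array([[simulate(th, t) for t in candidate_times]
                              for th in posterior_samples])      # (n_samples, n_candidates)
            return candidate_times[int(np.argmax(preds.var(axis=0)))]

        # hypothetical posterior ensemble with two modes (e.g. from multimodal sampling)
        rng = np.random.default_rng(2)
        samples = np.vstack([rng.normal([1.0, 0.5], 0.05, size=(50, 2)),
                             rng.normal([1.2, 0.2], 0.05, size=(50, 2))])

        t_next = next_experiment(samples, candidate_times=np.linspace(0.1, 10.0, 50))
        print("measure next at t =", round(float(t_next), 2))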

  17. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
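
    A hedged sketch of the sensitivity-based idea: build the matrix of temperature derivatives with respect to the unknown properties by finite differences on a forward model, then rank candidate experiment settings by det(X^T X), which shrinks the joint confidence region. The lumped-slab forward model, parameter values and candidate durations below are placeholders, not the study's finite element model or designs.

        import numpy as np

        def forward_temperature(k, rho_c, times, q=1000.0, L=0.01):
            """Placeholder 1-D lumped model of a flux-heated slab (not the study's model),
            chosen only because it is differentiable in the two unknown properties."""
            tau = rho_c * L ** 2 / k
            return (q * L / k) * (1.0 - np.exp(-times / tau))

        def sensitivity_matrix(theta, times, rel_step=1e-5):
            """Finite-difference derivatives of the temperature history w.r.t. each property."""
            base = forward_temperature(*theta, times)
            cols = []
            for i in range(len(theta)):
                pert = list(theta)
                pert[i] *= 1.0 + rel_step
                cols.append((forward_temperature(*pert, times) - base) / (theta[i] * rel_step))
            return np.column_stack(cols)

        theta_nominal = (0.6, 1.6e6)   # conductivity [W/(m K)], volumetric heat capacity [J/(m^3 K)]
        times = np.linspace(1.0, 600.0, 60)
        candidates = [60.0, 120.0, 300.0, 600.0]      # candidate experiment durations [s]

        scores = []
        for t_h in candidates:
            X = sensitivity_matrix(theta_nominal, times[times <= t_h])
            scores.append(np.linalg.det(X.T @ X))     # D-optimality: larger det -> tighter confidence region
        print("best duration:", candidates[int(np.argmax(scores))], "s")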

  18. Intuitive web-based experimental design for high-throughput biomedical data.

    PubMed

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

    Big-data bioinformatics aims to draw biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data are accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be of interest for failure analysis or further experiments in the future. In addition to the management of this information, researchers urgently need means for an integrated design and interfaces for structured data annotation. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and meta-information for data-generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.
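
    A minimal sketch of the core of a factor-based design: cross all factor levels and replicates into a run list and write a spreadsheet-style sample sheet with identifiers. The factor names, levels and ID scheme are invented and do not reflect the system's actual format.

        import csv, itertools

        factors = {                       # hypothetical study factors and levels
            "genotype": ["WT", "KO"],
            "treatment": ["control", "drug"],
            "timepoint_h": [0, 6, 24],
        }
        replicates = 3

        with open("sample_sheet.csv", "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["sample_id"] + list(factors) + ["replicate"])
            for i, (combo, rep) in enumerate(
                    itertools.product(itertools.product(*factors.values()),
                                      range(1, replicates + 1)), start=1):
                writer.writerow([f"S{i:04d}", *combo, rep])
        # 2 x 2 x 3 conditions x 3 replicates = 36 rows, each with a unique identifier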

  19. Design and experimental results for a flapped natural-laminar-flow airfoil for general aviation applications

    NASA Technical Reports Server (NTRS)

    Somers, D. M.

    1981-01-01

    A flapped natural laminar flow airfoil for general aviation applications, the NLF(1)-0215F, has been designed and analyzed theoretically and verified experimentally in the Langley Low Turbulence Pressure Tunnel. The basic objective of combining the high maximum lift of the NASA low speed airfoils with the low cruise drag of the NACA 6 series airfoils has been achieved. The safety requirement that the maximum lift coefficient not be significantly affected with transition fixed near the leading edge has also been met. Comparisons of the theoretical and experimental results show generally good agreement.

  20. Design and Experimental Results for a Natural-Laminar-Flow Airfoil for General Aviation Applications

    NASA Technical Reports Server (NTRS)

    Somers, D. M.

    1981-01-01

    A natural-laminar-flow airfoil for general aviation applications, the NLF(1)-0416, was designed and analyzed theoretically and verified experimentally in the Langley Low-Turbulence Pressure Tunnel. The basic objective of combining the high maximum lift of the NASA low-speed airfoils with the low cruise drag of the NACA 6-series airfoils was achieved. The safety requirement that the maximum lift coefficient not be significantly affected with transition fixed near the leading edge was also met. Comparisons of the theoretical and experimental results show excellent agreement. Comparisons with other airfoils, both laminar flow and turbulent flow, confirm the achievement of the basic objective.

  1. Analytical and experimental investigation of liquid double drop dynamics: Preliminary design for space shuttle experiments

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The preliminary grant assessed the use of laboratory experiments for simulating low-g liquid drop experiments in the space shuttle environment. Investigations were begun into appropriate immiscible liquid systems, the design of the experimental apparatus, and the analyses. The current grant continued these topics, completed construction and preliminary testing of the experimental apparatus, and performed experiments on single and compound liquid drops. A continuing assessment of laboratory capabilities and of the interests of project personnel and available collaborators led, after consultations with NASA personnel, to a research emphasis specializing in compound drops consisting of hollow plastic or elastic spheroids filled with liquids.

  2. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high-strain composite wing structures (ε_ult = 6000 micro-in/in) for future Navy aircraft. These issues concerned primarily the structural integrity and durability of the innovative design concepts and manufacturing techniques which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system), as well as damage tolerance and survivability requirements. An extensive test effort, consisting of a progressive series of coupon and major element tests, was an integral part of this development effort and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low-energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live-fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer and emerging composite materials with new or previously developed high-strain wing designs to maximize structural efficiency and reduce fabrication costs were the subject of a subsequent preliminary design and experimental evaluation effort.

  3. Engineering at SLAC: Designing and constructing experimental devices for the Stanford Synchrotron Radiation Lightsource - Final Paper

    SciTech Connect

    Djang, Austin

    2015-08-22

    Thanks to the versatility of the beam lines at SSRL, research there is varied and benefits multiple fields. Each experiment requires a particular set of equipment, which in turn requires its own particular assembly. As such, new engineering challenges arise from each new experiment. My role as an engineering intern has been to help solve these challenges by designing and assembling experimental devices. My first project was to design a heated sample holder, which will be used to investigate the effect of temperature on a sample's x-ray diffraction pattern. My second project was to help set up an imaging test, which involved designing a cooled grating holder and assembling multiple positioning stages. My third project was designing a 3D-printed pencil holder for the SSRL workstations.

  4. Experimental investigation of undesired stable equilibria in pumpkin shape super-pressure balloon designs

    NASA Astrophysics Data System (ADS)

    Schur, W. W.

    2004-01-01

    Excess skin material in a pneumatic envelope beyond what is required for minimum enclosure of the gas bubble is a necessary, but by no means sufficient, condition for the existence of multiple equilibrium configurations of that envelope. The very design of structurally efficient super-pressure balloons of the pumpkin-shape type requires such excess. Undesired stable equilibria have been observed on experimental pumpkin-shape balloons. These configurations contain regions with stress levels far higher than those predicted for the cyclically symmetric design configuration under maximum pressurization. Successful designs of pumpkin-shape super-pressure balloons do not allow such undesired stable equilibria under full pressurization. This work documents efforts made so far, and describes efforts still underway, by the National Aeronautics and Space Administration's Balloon Program Office to arrive at guidance on the design of pumpkin-shape super-pressure balloons that guarantees full and proper deployment.

  5. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1987-10-31

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis.

  6. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...

  7. An experimental investigation of two 15 percent-scale wind tunnel fan-blade designs

    NASA Technical Reports Server (NTRS)

    Signor, David B.

    1988-01-01

    An experimental 3-D investigation of two fan-blade designs was conducted. The fan blades tested were 15 percent-scale models of blades to be used in the fan drive of the National Full-Scale Aerodynamic Complex at NASA Ames Research Center. NACA 65- and modified NACA 65-series sections incorporated increased thickness on the upper surface, between the leading edge and the one-half-chord position. Twist and taper were the same for both blade designs. The fan blades with modified 65-series sections were found to have an increased stall margin when they were compared with the unmodified blades.

  8. Designation and Implementation of Microcomputer Principle and Interface Technology Virtual Experimental Platform Website

    NASA Astrophysics Data System (ADS)

    Gao, JinYue; Tang, Yin

    This paper discusses the design and implementation approach for the Microcomputer Principle and Interface Technology virtual experimental platform website. The instructional design of this platform mainly follows student-oriented constructivist learning theory, and the overall structure is shaped by the features of the teaching aims, teaching contents and interactive methods. The production and development of a virtual experiment platform should fully take the characteristics of network operation into consideration and adopt relevant technologies to improve the effectiveness and speed of the web-based software.

  9. Active vibration absorber for the CSI evolutionary model - Design and experimental results. [Controls Structures Interaction

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control technology for large flexible structures must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility has been developed to study the practical implementation of new control technologies under realistic conditions. The paper discusses the design of a second-order, acceleration feedback controller which acts as an active vibration absorber. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. Experimental results in the presence of these factors are presented and discussed. The robustness of this design under model uncertainty is demonstrated.

  10. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  11. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems, already established as rainwater infiltration techniques, which reduce runoff formation and diffuse pollution in cities. The present research focuses on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to allow the level and quality of the stored water to be measured over time. Six different pervious surfaces are combined with four different geotextiles in order to test which combinations of materials best preserve the quality of the stored rainwater over time under the specific weather conditions of the north of Spain. The aim of this research was to achieve pervious pavements that simultaneously provide a useful urban service and harvest rainwater of sufficient quality for non-potable uses.

  12. Theoretical and Experimental Investigation of Mufflers with Comments on Engine-Exhaust Muffler Design

    NASA Technical Reports Server (NTRS)

    Davis, Don D., Jr.; Stokes, George M.; Moore, Dewey; Stevens, George L., Jr.

    1954-01-01

    Equations are presented for the attenuation characteristics of single-chamber and multiple-chamber mufflers of both the expansion-chamber and resonator types, for tuned side-branch tubes, and for the combination of an expansion chamber with a resonator. Experimental curves of attenuation plotted against frequency are presented for 77 different mufflers with a reflection-free tailpipe termination. The experiments were made at room temperature without flow; the sound source was a loud-speaker. A method is given for including the tailpipe reflections in the calculations. Experimental attenuation curves are presented for four different muffler-tailpipe combinations, and the results are compared with the theory. The application of the theory to the design of engine-exhaust mufflers is discussed, and charts are included for the assistance of the designer.
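
    For orientation, the classic plane-wave transmission-loss expression for a single expansion chamber (the textbook form of the kind of attenuation equations tabulated in such reports, assuming no flow and a reflection-free termination; not necessarily the report's exact notation) is

        \mathrm{TL} = 10\,\log_{10}\!\left[\,1 + \tfrac{1}{4}\left(m - \frac{1}{m}\right)^{2}\sin^{2}(kL)\right],
        \qquad m = \frac{S_{2}}{S_{1}}, \qquad k = \frac{2\pi f}{c},

    where m is the ratio of chamber to inlet-pipe cross-sectional area, L the chamber length, f the frequency and c the speed of sound. Attenuation peaks when kL is an odd multiple of pi/2 and vanishes whenever kL is a multiple of pi, which is one reason chambers are staggered in length or combined with resonators in practical muffler designs.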

  13. Design and Experimental Results for the S825 Airfoil; Period of Performance: 1998-1999

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A 17%-thick, natural-laminar-flow airfoil, the S825, for the 75% blade radial station of 20- to 40-meter, variable-speed and variable-pitch (toward feather), horizontal-axis wind turbines has been designed and analyzed theoretically and verified experimentally in the NASA Langley Low-Turbulence Pressure Tunnel. The two primary objectives of high maximum lift, relatively insensitive to roughness, and low profile drag have been achieved. The airfoil exhibits a rapid trailing-edge stall, which does not meet the design goal of a docile stall. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results generally show good agreement.

  14. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    USGS Publications Warehouse

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  15. Experimental evaluation of the Battelle accelerated test design for the solar array at Mead, Nebraska

    NASA Technical Reports Server (NTRS)

    Frickland, P. O.; Repar, J.

    1982-01-01

    A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.

  16. Development of a Finite State Machine for a Small Unmanned Aircraft System Using Experimental Design

    DTIC Science & Technology

    2015-03-26

    The APM:Plane firmware has more than 300 configurable parameters. (Report no. AFIT-ENS-MS-15-M-146; only fragments of the abstract are recoverable from the source.)

  17. Survey of the quality of experimental design, statistical analysis and reporting of research using animals.

    PubMed

    Kilkenny, Carol; Parsons, Nick; Kadyszewski, Ed; Festing, Michael F W; Cuthill, Innes C; Fry, Derek; Hutton, Jane; Altman, Douglas G

    2009-11-30

    For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results, and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications, about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%), to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals. Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and

  18. Experimental design and analysis for accelerated degradation tests with Li-ion cells.

    SciTech Connect

    Doughty, Daniel Harvey; Thomas, Edward Victor; Jungst, Rudolph George; Roth, Emanuel Peter

    2003-08-01

    This document describes a general protocol (involving both experimental and data analytic aspects) that is designed to be a roadmap for rapidly obtaining a useful assessment of the average lifetime (at some specified use conditions) that might be expected from cells of a particular design. The proposed experimental protocol involves a series of accelerated degradation experiments. Through the acquisition of degradation data over time specified by the experimental protocol, an unambiguous assessment of the effects of accelerating factors (e.g., temperature and state of charge) on various measures of the health of a cell (e.g., power fade and capacity fade) will result. In order to assess cell lifetime, it is necessary to develop a model that accurately predicts degradation over a range of the experimental factors. In general, it is difficult to specify an appropriate model form without some preliminary analysis of the data. Nevertheless, assuming that the aging phenomenon relates to a chemical reaction with simple first-order rate kinetics, a data analysis protocol is also provided to construct a useful model that relates performance degradation to the levels of the accelerating factors. This model can then be used to make an accurate assessment of the average cell lifetime. The proposed experimental and data analysis protocols are illustrated with a case study involving the effects of accelerated aging on the power output from Gen-2 cells. For this case study, inadequacies of the simple first-order kinetics model were observed. However, a more complex model allowing for the effects of two concurrent mechanisms provided an accurate representation of the experimental data.
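
    A small sketch of the kind of first-order-kinetics acceleration model the protocol assumes, with an Arrhenius temperature dependence, fitted here by a crude grid search to invented power-fade data (not the Gen-2 results). A real analysis would follow the protocol's full statistical treatment.

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def power_fade(t_weeks, temp_K, k0, Ea):
            """First-order-kinetics fade: fraction of power lost after t weeks at temperature T."""
            k = k0 * np.exp(-Ea / (R * temp_K))      # Arrhenius rate constant
            return 1.0 - np.exp(-k * t_weeks)

        # invented accelerated-aging observations: (weeks, temperature K, fractional power fade)
        data = np.array([
            [4, 298.15, 0.010], [12, 298.15, 0.028], [4, 318.15, 0.035],
            [12, 318.15, 0.090], [4, 333.15, 0.080], [12, 333.15, 0.200],
        ])

        def sse(params):
            k0, Ea = params
            return float(np.sum((power_fade(data[:, 0], data[:, 1], k0, Ea) - data[:, 2]) ** 2))

        # crude grid search so the sketch has no optimizer dependency
        grid = [(k0, Ea) for k0 in np.logspace(2, 8, 61) for Ea in np.linspace(2e4, 9e4, 71)]
        k0_hat, Ea_hat = min(grid, key=sse)
        print("fitted k0:", k0_hat, " Ea (J/mol):", Ea_hat)

        # extrapolate: weeks to reach 20% power fade at 25 C under the fitted model
        k_use = k0_hat * np.exp(-Ea_hat / (R * 298.15))
        print("weeks to 20% fade at 25 C:", round(-np.log(1.0 - 0.20) / k_use, 1))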

  19. Life on rock. Scaling down biological weathering in a new experimental design at Biosphere-2

    NASA Astrophysics Data System (ADS)

    Zaharescu, D. G.; Dontsova, K.; Burghelea, C. I.; Chorover, J.; Maier, R.; Perdrial, J. N.

    2012-12-01

    Biological colonization and weathering of bedrock on Earth is a major driver of landscape and ecosystem development, and its effects reach into other major systems such as climate and the geochemical cycles of elements. In order to understand how microbe-plant-mycorrhizae communities interact with bedrock in the first phases of mineral weathering, we developed a novel experimental design in the Desert Biome at Biosphere-2, University of Arizona (U.S.A.). This presentation will focus on the development of the experimental setup. Briefly, six enclosed modules were designed to hold 288 experimental columns that will accommodate 4 rock types and 6 biological treatments. Each module is developed on 3 levels. A lower volume, able to withstand the weight of both the rock material and the rest of the structure, accommodates the sampling elements. A middle volume houses the experimental columns in a dark chamber. A clear upper section forms the habitat exposed to sunlight. This volume is completely sealed from the exterior, allowing complete control of its air and water parameters. All modules are connected in parallel to a double air-purification system that delivers a permanent air flow. This setup is expected to provide a model experiment able to test important processes in the rock-life interaction at grain-to-molecular scales.

  20. A Resampling Based Approach to Optimal Experimental Design for Computer Analysis of a Complex System

    SciTech Connect

    Rutherford, Brian

    1999-08-04

    The investigation of a complex system is often performed using computer-generated response data supplemented by system and component test results where possible. Analysts rely on an efficient use of limited experimental resources to test the physical system, evaluate the models, and assure (to the extent possible) that the models accurately simulate the system under investigation. The general problem considered here is one where only a restricted number of system simulations (or physical tests) can be performed to provide the additional data necessary to accomplish the project objectives. The levels of variables used for defining input scenarios, for setting system parameters and for initializing other experimental options must be selected in an efficient way. The use of computer algorithms to support experimental design in complex problems has been a topic of recent research in the areas of statistics and engineering. This paper describes a resampling-based approach to formulating this design. An example is provided illustrating in two dimensions how the algorithm works and indicating its potential on larger problems. The results show that the proposed approach has the characteristics desirable of an algorithmic approach on these simple examples. Further experimentation is needed to evaluate its performance on larger problems.

  1. A computer program for enzyme kinetics that combines model discrimination, parameter refinement and sequential experimental design.

    PubMed Central

    Franco, R; Gavaldà, M T; Canela, E I

    1986-01-01

    A method of model discrimination and parameter estimation in enzyme kinetics is proposed. The experimental design and analysis of the model are carried out simultaneously and the stopping rule for experimentation is deduced by the experimenter when the probabilities a posteriori indicate that one model is clearly superior to the rest. A FORTRAN77 program specifically developed for joint designs is given. The method is very powerful, as indicated by its usefulness in the discrimination between models. For example, it has been successfully applied to three cases of enzyme kinetics (a single-substrate Michaelian reaction with product inhibition, a single-substrate complex reaction and a two-substrate reaction). By using this method the most probable model and the estimates of the parameters can be obtained in one experimental session. The FORTRAN77 program is deposited as Supplementary Publication SUP 50134 (19 pages) at the British Library (Lending Division), Boston Spa, Wetherby, West Yorkshire LS23 7BQ, U.K., from whom copies can be obtained on the terms indicated in Biochem. J. (1986) 233, 5. PMID:3800965
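
    A rough sketch of the discrimination-plus-design idea (not the FORTRAN77 program): fit the competing rate laws to the data collected so far, turn the fits into Akaike-type model weights as stand-ins for the a posteriori probabilities, and propose the next substrate concentration where the rival models disagree most. The models, data and weighting scheme are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(S, Vmax, Km):
            return Vmax * S / (Km + S)

        def substrate_inhibition(S, Vmax, Km, Ki):
            return Vmax * S / (Km + S + S ** 2 / Ki)

        # invented data gathered so far: substrate concentrations and initial rates
        S_obs = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
        v_obs = np.array([0.31, 0.48, 0.66, 0.78, 0.74])

        models = [michaelis_menten, substrate_inhibition]
        fits, aics = [], []
        for f, p0 in zip(models, [(1.0, 1.0), (1.0, 1.0, 20.0)]):
            popt, _ = curve_fit(f, S_obs, v_obs, p0=p0, maxfev=10000)
            rss = float(np.sum((f(S_obs, *popt) - v_obs) ** 2))
            fits.append(popt)
            aics.append(len(S_obs) * np.log(rss / len(S_obs)) + 2 * len(popt))
        aics = np.array(aics)
        weights = np.exp(-0.5 * (aics - aics.min()))
        weights /= weights.sum()                 # Akaike weights as rough model probabilities

        # next design point: substrate level where the weighted models disagree most
        S_cand = np.linspace(0.2, 50.0, 200)
        preds = [f(S_cand, *p) for f, p in zip(models, fits)]
        spread = np.abs(preds[0] - preds[1]) * np.sqrt(weights[0] * weights[1])
        print("model probabilities:", np.round(weights, 3))
        print("measure next at [S] =", round(float(S_cand[int(np.argmax(spread))]), 2))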

  2. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model. PMID:27923060

  3. Numerical and experimental hydrodynamic analysis of suction cup bio-logging tag designs for marine mammals

    NASA Astrophysics Data System (ADS)

    Murray, Mark; Shorter, Alex; Howle, Laurens; Johnson, Mark; Moore, Michael

    2012-11-01

    The improvement and miniaturization of sensing technologies has made bio-logging tags, used for the study of marine mammal behavior, more practical. These sophisticated sensing packages require a housing which protects the electronics from the environment and provides a means of attachment to the animal. The hydrodynamic forces on these housings can inadvertently remove the tag or adversely affect the behavior or energetics of the animal. A modification of the original suction cup bio-logging tag housing was therefore desired to minimize these adverse forces. In this work, the hydrodynamic loading of two suction cup tag designs, the original and the modified design, was analyzed using computational fluid dynamics (CFD) models and validated experimentally. Overall, the simulation and experimental results demonstrated that a tag housing that minimized geometric disruptions to the flow reduced drag forces, and that a tag housing with a small frontal cross-sectional area close to the attachment surface reduced lift forces. Preliminary results from experimental work with a common dolphin cadaver indicate that the suction cups used to attach the tags to the animal provide sufficient attachment force to resist failure at the drag and lift forces predicted for a 10 m/s flow.

  4. Experimental Design for Groundwater Pumping Estimation Using a Genetic Algorithm (GA) and Proper Orthogonal Decomposition (POD)

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Cheng, W.; Yeh, W. W.

    2010-12-01

    This study optimizes observation well locations and sampling frequencies for the purpose of estimating unknown groundwater extraction in an aquifer system. Proper orthogonal decomposition (POD) is used to reduce the groundwater flow model, thus reducing the computation burden and data storage space associated with solving this problem for heavily discretized models. This reduced model can store a significant amount of system information in a much smaller reduced state vector. Along with the sensitivity equation method, the proposed approach can efficiently compute the Jacobian matrix that forms the information matrix associated with the experimental design. The criterion adopted for experimental design is the maximization of the trace of the weighted information matrix. Under certain conditions, this is equivalent to the classical A-optimality criterion established in experimental design. A genetic algorithm (GA) is used to optimize the observation well locations and sampling frequencies for maximizing the collected information from the hydraulic head sampling at the observation wells. We applied the proposed approach to a hypothetical 30,000-node groundwater aquifer system. We studied the relationship among the number of observation wells, observation well locations, sampling frequencies, and the collected information for estimating unknown groundwater extraction.
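
    A tiny sketch of the scoring step only (no POD reduction, no GA): given the sensitivities of candidate head observations to the unknown pumping rates, a proposed observation set is scored by the trace of the weighted information matrix, the A-optimality-type criterion mentioned in the abstract. Exhaustive search over a toy candidate set stands in for the genetic algorithm, and all numbers are invented.

        import numpy as np
        from itertools import combinations

        def information_trace(J, selected, weights=None):
            """trace(J_s^T W J_s) for the sensitivity rows of the selected observations."""
            Js = J[list(selected)]
            W = np.eye(len(Js)) if weights is None else np.diag(weights[list(selected)])
            return float(np.trace(Js.T @ W @ Js))

        # hypothetical sensitivities: 30 candidate (well, time) observations x 3 unknown pumping rates
        rng = np.random.default_rng(3)
        J = rng.normal(size=(30, 3)) * rng.uniform(0.1, 1.0, size=(30, 1))

        # exhaustive search over 4-observation subsets stands in for the genetic algorithm here
        best = max(combinations(range(30), 4), key=lambda s: information_trace(J, s))
        print("best 4 observations:", best, "score:", round(information_trace(J, best), 2))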

  5. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  6. Experimental investigation of damage behavior of RC frame members including non-seismically designed columns

    NASA Astrophysics Data System (ADS)

    Chen, Linzhi; Lu, Xilin; Jiang, Huanjun; Zheng, Jianbo

    2009-06-01

    Reinforced concrete (RC) frame structures are one of the most commonly used structural systems, and their seismic performance is largely determined by the performance of columns and beams. This paper describes horizontal cyclic loading tests of ten column and three beam specimens, some of which were designed according to the current seismic design code and others according to the early non-seismic Chinese design code, with the aim of explaining the behavior of the damaged or collapsed RC frame structures observed during the Wenchuan earthquake. The effects of axial load ratio, shear span ratio, and transverse and longitudinal reinforcement ratio on hysteresis behavior, ductility and damage progression were incorporated in the experimental study. Test results indicate that the non-seismically designed columns show premature shear failure, and exhibit larger maximum residual crack widths and more concrete spalling than the seismically designed columns. In addition, the longitudinal reinforcement bars were severely buckled. The axial load ratio and shear span ratio proved to be the most important factors affecting ductility, crack opening width and crack closing ability, while the longitudinal reinforcement ratio had only a minor effect on column ductility but exhibited more influence on beam ductility. Finally, the transverse reinforcement ratio did not influence the maximum residual crack width or the crack closing ability of the seismically designed columns.

  7. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we obtain about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration, which motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. To improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics and its potential for reservoir monitoring.
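
    The multi-objective selection step can be illustrated with a small non-dominated-sorting sketch in the spirit of NSGA-II (this is not the authors' implementation); candidate survey designs are scored on two objective functions and the Pareto-optimal set is extracted. The objective values are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two objective functions to MINIMIZE for each candidate survey design,
# e.g. negative information about the target layer and survey cost.
objectives = rng.random(size=(30, 2))

def pareto_front(F):
    """Return indices of non-dominated points (minimization)."""
    n = len(F)
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dominated[i] = True
                break
    return np.flatnonzero(~dominated)

front = pareto_front(objectives)
print("Pareto-optimal designs:", front)
print(objectives[front])
```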

  8. Beyond the bucket: testing the effect of experimental design on rate and sequence of decay

    NASA Astrophysics Data System (ADS)

    Gabbott, Sarah; Murdock, Duncan; Purnell, Mark

    2016-04-01

    Experimental decay has revealed the potential for profound biases in our interpretations of exceptionally preserved fossils, with non-random sequences of character loss distorting the position of fossil taxa in phylogenetic trees. By characterising these sequences we can rewind this distortion and make better-informed interpretations of the affinity of enigmatic fossil taxa. Equally, the rate of character loss is crucial for estimating the preservation potential of phylogenetically informative characters, and for revealing the mechanisms of preservation themselves. However, experimental decay has been criticised for poorly modelling 'real' conditions, and dismissed as unsophisticated 'bucket science'. Here we test the effect of differing experimental parameters on the rate and sequence of decay. By doing so, we test the assumption that the results of decay experiments are applicable to informing interpretations of exceptionally preserved fossils from diverse preservational settings. The results of our experiments demonstrate the validity of using the sequence of character loss as a phylogenetic tool, and shed light on the extent to which the environment must be considered before making decay-informed interpretations or reconstructing taphonomic pathways. With careful consideration of experimental design, driven by testable hypotheses, decay experiments are robust and informative - experimental taphonomy needn't kick the bucket just yet.

  9. Advanced computational tools for PEM fuel cell design. Part 2. Detailed experimental validation and parametric study

    NASA Astrophysics Data System (ADS)

    Sui, P. C.; Kumar, S.; Djilali, N.

    This paper reports on the systematic experimental validation of a comprehensive 3D CFD-based computational model presented and documented in Part 1. Simulations for unit cells with straight channels, similar to the Ballard Mk902 hardware, are performed and analyzed in conjunction with detailed current mapping measurements and water mass distributions in the membrane-electrode assembly. The experiments were designed to display the sensitivity of the cell over a range of operating parameters, including current density, humidification, and coolant temperature, making the data particularly well suited for systematic validation. Based on the validation and analysis of the predictions, values of model parameters, including the electro-osmotic drag coefficient, capillary diffusion coefficient, and catalyst specific surface area, are adjusted to fit the experimental data of current density and MEA water content. The predicted net water flux out of the anode (normalized by the total water generated) increases as the anode humidification water flow rate is increased, in agreement with experimental results. A modification of the constitutive equation for the capillary diffusivity of water in the porous electrodes that attempts to incorporate the experimentally observed immobile (or irreducible) saturation yields a better fit of the predicted MEA water mass to the experimental data. The specific surface area parameter used in the catalyst layer model is found to be effective in tuning the simulations to predict the correct cell voltage over a range of stoichiometries.

  10. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment

  11. Development of a Model for Measuring Scientific Processing Skills Based on Brain-Imaging Technology: Focused on the Experimental Design Process

    ERIC Educational Resources Information Center

    Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju

    2014-01-01

    The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…

  12. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins by Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins, using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxin production, whereas FeSO₄ and MnSO₄ showed the opposite effect. The developed model, based on Bayesian techniques, can automatically learn models emerging from the data and serve to predict delta-endotoxin concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with Bayesian network modelling could be used to identify the variables affecting delta-endotoxin variation.
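
    A sketch of the screening design used here (not the authors' code): an 8-run, two-level design for the seven medium components can be generated from a Hadamard matrix, and main effects estimated as contrasts. The response values are placeholders.

```python
import numpy as np
from scipy.linalg import hadamard

factors = ["soybean meal", "starch", "KH2PO4", "K2HPO4", "FeSO4", "MnSO4", "MgSO4"]

# Columns 1..7 of an order-8 Hadamard matrix give an 8-run, 7-factor
# two-level screening design (Plackett-Burman-type).
H = hadamard(8)
design = H[:, 1:8]                       # entries are +1 / -1 factor levels

# Hypothetical delta-endotoxin responses for the 8 runs (mg/mL).
y = np.array([2.1, 3.4, 1.8, 2.9, 3.6, 2.2, 1.5, 3.0])

# Main effect of each factor: mean response at +1 minus mean at -1.
effects = (design.T @ y) / (len(y) / 2)
for name, e in sorted(zip(factors, effects), key=lambda p: -abs(p[1])):
    print(f"{name:12s} effect = {e:+.3f}")
```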

  13. Two-dimensional dielectric collimator design and its experimental verification for microwave beam focusing

    NASA Astrophysics Data System (ADS)

    Kim, H.; Park, J.; Seo, I.; Yoo, J.

    2016-10-01

    A collimator is an electromagnetic device that focuses or aligns the direction of wave propagation to achieve a narrow, intense beam. In this study, we propose a two-dimensional dielectric collimator for microwave beam focusing. This is something that is difficult to achieve using theoretical- or intuition-based approaches. We therefore used a systematic design process, which is referred to as the phase field design method, to obtain an optimal topological configuration for the collimator. The phase field parameter determines the optimal configuration of the dielectric material and, as a consequence, it determines the relative permittivity of the component. To verify the design results, we fabricated a prototype via three-dimensional printing and performed an experimental verification using an electric field scanner to measure the near field distributions of the designed collimator positioned parallel to an incident wave. We also performed angle dependent experiments for which the collimator position was offset at various angles. We confirmed that the experimental results are consistent with the simulation results.

  14. Experimental design of an optimal phase duration control strategy used in batch biological wastewater treatment.

    PubMed

    Pavgelj, N B; Hvala, N; Kocijan, J; Ros, M; Subelj, M; Music, G; Strmcnik, S

    2001-01-01

    The paper presents the design of an algorithm used in the control of a sequencing batch reactor (SBR) for wastewater treatment. The algorithm is used for the on-line optimization of the durations of the batch phases, which is required because of the variable input wastewater. Compared to operation with fixed batch phase times, this kind of control strategy improves the treatment quality and reduces energy consumption. The designed control algorithm is based on following the course of some simple indirect process variables (i.e. redox potential, dissolved oxygen concentration and pH) and on the automatic recognition of characteristic patterns in their time profiles. The algorithm acts on filtered on-line signals and is based on heuristic rules. The control strategy was developed and tested on a laboratory pilot plant. To facilitate the experimentation, the pilot plant was supplemented with a computer-supported experimental environment that enabled: (i) easy access to all data (on-line signals, laboratory measurements, batch parameters) needed for the design of the algorithm, and (ii) the immediate application of the algorithm designed off-line in the Matlab package to real-time control. When tested on the pilot plant, the control strategy demonstrated good agreement between the proposed completion times and the actual terminations of the desired biodegradation processes.

  15. Using an Animal Group Vigilance Practical Session to Give Learners a "Heads-Up" to Problems in Experimental Design

    ERIC Educational Resources Information Center

    Rands, Sean A.

    2011-01-01

    The design of experimental ecological fieldwork is difficult to teach to classes, particularly when protocols for data collection are normally carefully controlled by the class organiser. Normally, reinforcement of some of the problems of experimental design, such as the avoidance of pseudoreplication and appropriate sampling techniques, does not occur…

  16. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data.
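
    A minimal sketch, under assumptions rather than the authors' exact specification, of one common design-matrix choice for a single-case AB phase design: an intercept, a baseline time trend, a level-change dummy, and a slope-change term, so that the regression coefficients map directly onto the effect sizes of interest.

```python
import numpy as np
import pandas as pd

# Hypothetical single-subject data: 10 baseline (A) and 10 treatment (B) sessions.
n_a, n_b = 10, 10
time = np.arange(n_a + n_b)                 # session index
phase = np.r_[np.zeros(n_a), np.ones(n_b)]  # 0 = baseline, 1 = treatment

X = pd.DataFrame({
    "intercept": 1.0,
    "time": time,                           # baseline trend
    "phase": phase,                         # immediate level change at intervention
    "time_since_B": phase * (time - n_a),   # change in slope after intervention
})

# Simulated outcome with a level change of 3 and a slope change of 0.5.
rng = np.random.default_rng(2)
y = 2 + 0.1 * time + 3 * phase + 0.5 * phase * (time - n_a) + rng.normal(0, 0.5, len(time))

beta, *_ = np.linalg.lstsq(X.values, y, rcond=None)
print(dict(zip(X.columns, np.round(beta, 2))))
```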

  17. Design and performance of a piezoelectric actuated precise rotary positioner

    NASA Astrophysics Data System (ADS)

    Wang, Y. C.; Chang, S. H.

    2006-10-01

    Industries including semiconductor, biotechnology, and nanotechnology are seeking compact and reliable nanometer-resolution positioning techniques. To address this demand, this article presents a friction-drive rotary stage driven by a piezoelectric transducer (PZT) actuator. This stage includes a multilayer PZT actuator, the Scott-Russell mechanism, an actuation stage, a preload spring, and an output shaft. Its rotary positioning is accomplished by the stick-slip effect between the wire electro-discharge-machined rotary stage and the output shaft. Finite element analysis and the Taguchi optimization method were used extensively to analyze the displacement, stress, and vibration behavior for optimum design. As shown by the experimental results, the stage achieved a resolution of 0.13 μrad and a speed of 0.15°/h by tuning of the preload spring.

  18. An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and Taguchi Orthogonal Array

    NASA Astrophysics Data System (ADS)

    Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh

    2016-06-01

    P-91 is a modified 9Cr-1Mo steel. Fabricated structures and components of P-91 have many applications in the power and chemical industries owing to excellent properties such as high-temperature stress corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by tungsten inert gas (TIG) welding. However, the process has its limitations regarding weld penetration. The success of a welding process lies in fabricating with a combination of parameters that gives maximum weld penetration and minimum weld width. To investigate the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on P91 plates of 6 mm thickness in accordance with a Taguchi L9 design. Welding current, welding speed and gas flow rate were the three control variables in the investigation. After autogenous TIG welding, the dimensions of the weld width, weld penetration and weld area were successfully measured by an image analysis technique developed for the study. The maximum error for the dimensions measured with the developed image analysis technique was only 2 % compared to the measurements of the Leica-Q-Win-V3 software installed in an optical microscope. The measurements with the developed software, unlike measurements under a microscope, required minimal human intervention. An analysis of variance (ANOVA) confirms the significance of the selected parameters. Thereafter, Taguchi's method was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.
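
    The analysis step can be sketched as follows with placeholder data (not the paper's measurements): a standard L9(3^4) orthogonal array assigns the three welding parameters, and the larger-the-better signal-to-noise ratio is computed for penetration so that factor levels can be ranked.

```python
import numpy as np

# Standard Taguchi L9 orthogonal array (levels coded 1..3); columns used for
# welding current, welding speed and gas flow rate (fourth column unused).
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Hypothetical weld penetration (mm) for the nine trials.
penetration = np.array([2.1, 2.6, 3.0, 2.8, 3.3, 2.4, 3.6, 2.9, 3.1])

# Larger-the-better S/N ratio: -10 * log10(mean(1 / y^2)); one replicate here.
sn = -10 * np.log10(1.0 / penetration**2)

for col, name in enumerate(["current", "speed", "gas flow"]):
    means = [sn[L9[:, col] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"{name:9s} mean S/N by level: {np.round(means, 2)}  (delta = {max(means) - min(means):.2f})")
```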

  19. Randomized block experimental designs can increase the power and reproducibility of laboratory animal experiments.

    PubMed

    Festing, Michael F W

    2014-01-01

    Randomized block experimental designs have been widely used in agricultural and industrial research for many decades. Usually they are more powerful, have higher external validity, are less subject to bias, and produce more reproducible results than the completely randomized designs typically used in research involving laboratory animals. Reproducibility can be further increased by using time as a blocking factor. These benefits can be achieved at no extra cost. A small experiment investigating the effect of an antioxidant on the activity of a liver enzyme in four inbred mouse strains, which had two replications (blocks) separated by a period of two months, illustrates this approach. The widespread failure to use these designs more widely in research involving laboratory animals has probably led to a substantial waste of animals, money, and scientific resources and slowed down the development of new treatments for human and animal diseases.
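
    A small sketch of the kind of design advocated here, using made-up data: four inbred strains crossed with treatment, run in two blocks separated in time, and analysed with block included as a factor (statsmodels is assumed to be available).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
strains = ["A", "B", "C", "D"]
rows = []
for block in (1, 2):                       # two replications separated in time
    for strain in strains:
        for treat in ("control", "antioxidant"):
            activity = (10 + strains.index(strain)          # strain effect
                        + (1.5 if treat == "antioxidant" else 0)
                        + 0.8 * block                        # block (time) effect
                        + rng.normal(0, 1))
            rows.append(dict(block=block, strain=strain, treatment=treat, y=activity))

df = pd.DataFrame(rows)
model = smf.ols("y ~ C(block) + C(strain) + C(treatment)", data=df).fit()
print(anova_lm(model, typ=2))
```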

  20. The Langley Research Center CSI phase-0 evolutionary model testbed-design and experimental results

    NASA Technical Reports Server (NTRS)

    Belvin, W. K.; Horta, Lucas G.; Elliott, K. B.

    1991-01-01

    A testbed for the development of Controls Structures Interaction (CSI) technology is described. The design philosophy, capabilities, and early experimental results are presented to introduce some of the ongoing CSI research at NASA-Langley. The testbed, referred to as the Phase 0 version of the CSI Evolutionary model (CEM), is the first stage of model complexity designed to show the benefits of CSI technology and to identify weaknesses in current capabilities. Early closed loop test results have shown non-model based controllers can provide an order of magnitude increase in damping in the first few flexible vibration modes. Model based controllers for higher performance will need to be robust to model uncertainty as verified by System ID tests. Data are presented that show finite element model predictions of frequency differ from those obtained from tests. Plans are also presented for evolution of the CEM to study integrated controller and structure design as well as multiple payload dynamics.

  1. Active vibration absorber for CSI evolutionary model: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control technology for large flexible structures must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility was developed to study the practical implementation of new control technologies under realistic conditions. The design of a second-order, acceleration feedback controller, which acts as an active vibration absorber, is discussed. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. The primary performance objective considered is damping augmentation of the first nine structural modes. A comparison of experimental and predicted closed-loop damping is presented, including test and simulation time histories for open- and closed-loop cases. Although the simulation and test results are not in full agreement, the robustness of this design under model uncertainty is demonstrated. The basic advantage of this second-order controller design is that the stability of the controller is model independent.

  2. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture, however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $ 120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi
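
    A minimal sketch of the linear (D-criterion) screening step described above, with hypothetical sensitivity matrices: each candidate tracer mixture yields a Jacobian of the measured labeling data with respect to the free fluxes, and the mixture maximizing the log-determinant of the Fisher information is preferred. None of the numbers come from the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(4)
n_measurements, n_fluxes = 25, 6

def d_criterion(J, sigma=0.01):
    """log-determinant of the Fisher information for Gaussian measurement noise."""
    fim = J.T @ J / sigma**2
    sign, logdet = np.linalg.slogdet(fim)
    return logdet if sign > 0 else -np.inf

# Hypothetical Jacobians d(labeling)/d(flux) for three candidate tracer mixtures.
candidates = {
    "100% U-13C glucose":        rng.normal(scale=0.5, size=(n_measurements, n_fluxes)),
    "80% 1,2-13C2 + 20% U-13C":  rng.normal(scale=0.9, size=(n_measurements, n_fluxes)),
    "50% 1-13C + 50% unlabeled": rng.normal(scale=0.3, size=(n_measurements, n_fluxes)),
}

scores = {name: d_criterion(J) for name, J in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("preferred mixture by D-criterion:", best)
```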

  3. Using the Taguchi method to obtain more finesse to the biodegradable fibers.

    PubMed

    Ellä, Ville; Rajala, Anne; Tukiainen, Mikko; Kellomäki, Minna

    2012-01-01

    The Taguchi method together with Minitab software was used to optimize the melt-spun PLLA multifilament fiber finesse. The aim was to minimize the number of spinning experiments needed to find optimal processing conditions and to maximize the quality of the fibers (thickness, strength, and smoothness). The optimization was performed in two parts: first, the melt spinning process was optimized taking into account the subsequent drawing, and in the second step the drawing itself was optimized. Fine (15 μm) fibers with strength properties feasible for further processing (730 MPa) were produced with the aid of the Minitab software.

  4. Use of experimental data in testing methods for design against uncertainty

    NASA Astrophysics Data System (ADS)

    Rosca, Raluca Ioana

    Modern methods of design take into consideration the fact that uncertainty is present in everyday life, whether in the form of variable loads (the strongest wind that would affect a building), the material properties of an alloy, or future demand for a product or the cost of labor. Moreover, the Japanese example showed that it may be more cost-effective to design while taking the existence of uncertainty into account rather than to plan to eliminate or greatly reduce it. The dissertation starts by comparing the theoretical bases of two methods for design against uncertainty, namely probability theory and possibility theory. A two-variable design problem is then used to show the differences. It is concluded that for design problems with two or more failure cases of very different magnitude (such as a car stopping because it runs out of gas versus because of engine failure), probability theory divides the available resources in a more intuitive way than possibility theory. The dissertation continues with the description of simple experiments (building towers of dominoes) and then presents a methodology to increase the amount of information that can be drawn from a given data set. The methodology is demonstrated on the Bidder-Challenger problem, a simulation of the problem faced by a microchip company in setting a target speed for its next microchip. The simulations use the domino experimental data. It is demonstrated that important insights into methods of probability- and possibility-based design can be gained from experiments.

  5. Experimental Validation of an Electromagnet Thermal Design Methodology for Magnetized Dusty Plasma Research

    NASA Astrophysics Data System (ADS)

    Birmingham, W. J.; Bates, E. M.; Romero-Talamás, C. A.; Rivera, W. F.

    2016-10-01

    An analytic thermal design method developed to aid in the engineering design of Bitter-type magnets, as well as finite element calculations of heat transfer, are compared against experimental measurements of temperature evolution in a prototype magnet designed to operate continuously at 1 T fields while dissipating 9 kW of heat. The analytic thermal design method is used to explore a variety of configurations of cooling holes in the Bitter plates, including their geometry and radial placement. The prototype has diagnostic ports that can accommodate thermocouples, pressure sensors, and optical access to measure the water flow. We present temperature and pressure sensor data from the prototype compared to the analytic thermal model and finite element calculations. The data is being used to guide the design of a 10 T Bitter magnet capable of sustained fields of up to 10 T for at least 10 seconds, which will be used in dusty plasma experiments at the University of Maryland Baltimore County. Preliminary design plans and progress towards the construction of the 10 T electromagnet are also presented.

  6. Design Considerations and Experimental Verification of a Rail Brake Armature Based on Linear Induction Motor Technology

    NASA Astrophysics Data System (ADS)

    Sakamoto, Yasuaki; Kashiwagi, Takayuki; Hasegawa, Hitoshi; Sasakawa, Takashi; Fujii, Nobuo

    This paper describes the design considerations and experimental verification of an LIM rail brake armature. In order to generate power and maximize the braking force density despite the limited area between the armature and the rail and the limited space available for installation, we studied a design method that is suitable for designing an LIM rail brake armature; we considered adoption of a ring winding structure. To examine the validity of the proposed design method, we developed a prototype ring winding armature for the rail brakes and examined its electromagnetic characteristics in a dynamic test system with roller rigs. By repeating various tests, we confirmed that unnecessary magnetic field components, which were expected to be present under high speed running condition or when a ring winding armature was used, were not present. Further, the necessary magnetic field component and braking force attained the desired values. These studies have helped us to develop a basic design method that is suitable for designing the LIM rail brake armatures.

  7. Facility for Advanced Accelerator Experimental Tests at SLAC (FACET) Conceptual Design Report

    SciTech Connect

    Amann, J.; Bane, K.; /SLAC

    2009-10-30

    This Conceptual Design Report (CDR) describes the design of FACET. It will be updated to stay current with the developing design of the facility. This CDR begins as the baseline conceptual design and will evolve into an 'as-built' manual for the completed facility. The Executive Summary, Chapter 1, gives an introduction to the FACET project and describes the salient features of its design. Chapter 2 gives an overview of FACET. It describes the general parameters of the machine and the basic approaches to implementation. The FACET project does not include the implementation of specific scientific experiments, either for plasma wakefield acceleration or for other applications. Nonetheless, enough work has been done to define potential experiments to assure that the facility can meet the requirements of the experimental community. Chapter 3, Scientific Case, describes the planned plasma wakefield and other experiments. Chapter 4, Technical Description of FACET, describes the parameters and design of all technical systems of FACET. FACET uses the first two thirds of the existing SLAC linac to accelerate the beam to about 20 GeV and compress it with the aid of two chicanes, located in Sector 10 and Sector 20. The Sector 20 area will include a focusing system, the generic experimental area and the beam dump. Chapter 5, Management of Scientific Program, describes the management of the scientific program at FACET. Chapter 6, Environment, Safety and Health and Quality Assurance, describes the existing programs at SLAC and their application to the FACET project. It includes a preliminary analysis of safety hazards and the planned mitigation. Chapter 7, Work Breakdown Structure, describes the structure used for developing the cost estimates, which will also be used to manage the project. The chapter defines the scope of work of each element down to level 3.

  8. β-galactosidase Production by Aspergillus niger ATCC 9142 Using Inexpensive Substrates in Solid-State Fermentation: Optimization by Orthogonal Arrays Design

    PubMed Central

    Kazemi, Samaneh; Khayati, Gholam; Faezi-Ghasemi, Mohammad

    2016-01-01

    Background: Enzymatic hydrolysis of lactose is one of the most important biotechnological processes in the food industry; it is accomplished by the enzyme β-galactosidase (β-gal, β-D-galactoside galactohydrolase, EC 3.2.1.23), trivially called lactase. Orthogonal array design is an appropriate option for the optimization of biotechnological processes for the production of microbial enzymes. Methods: Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was employed to screen the most significant levels of the parameters, including the solid substrate (wheat straw, rice straw, and peanut pod), the carbon/nitrogen (C/N) ratio, the incubation time, and the inducer. The level of β-gal production was measured by a photometric enzyme activity assay using the artificial substrate ortho-nitrophenyl-β-D-galactopyranoside. Results: The results showed that a C/N ratio of 0.2% (w/v), an incubation time of 144 hours, and wheat straw as the solid substrate were the best conditions determined by the design of experiments using the Taguchi approach. Conclusion: Our findings showed that the use of rice straw and peanut pod as solid-state substrates led to a 2.041-fold increase in the production of the enzyme, as compared to rice straw. In addition, the presence of an inducer did not have any significant impact on enzyme production levels. PMID:27721510

  9. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

    PubMed Central

    Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991
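
    One of the simulated threats, range restriction, can be illustrated outside the SEM machinery with a few lines (illustrative only): restricting the range of the selection variable attenuates the observed association between the pre- and post-test scores.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
x = rng.normal(size=n)                          # selection / pre-test score
y = 0.6 * x + rng.normal(scale=0.8, size=n)     # post-test score

full_r = np.corrcoef(x, y)[0, 1]

# Range restriction: keep only cases with x above its 60th percentile.
keep = x > np.quantile(x, 0.60)
restricted_r = np.corrcoef(x[keep], y[keep])[0, 1]

print(f"correlation, full sample: {full_r:.3f}")
print(f"correlation, range-restricted sample: {restricted_r:.3f}")
```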

  11. Pathobiology of aging mice and GEM: background strains and experimental design.

    PubMed

    Brayton, C F; Treuting, P M; Ward, J M

    2012-01-01

    The use of induced and spontaneous mutant mice and genetically engineered mice (and combinations thereof) to study cancers and other aging phenotypes to advance improved functional human life spans will involve studies of aging mice. Genetic background contributes to pathology phenotypes and to causes of death as well as to longevity. Increased recognition of expected phenotypes, experimental variables that influence phenotypes and research outcomes, and experimental design options and rationales can maximize the utility of genetically engineered mice (GEM) models to translational research on aging. This review aims to provide resources to enhance the design and practice of chronic and longevity studies involving GEM. C57BL6, 129, and FVB/N strains are emphasized because of their widespread use in the generation of knockout, transgenic, and conditional mutant GEM. Resources are included also for pathology of other inbred strain families, including A, AKR, BALB/c, C3H, C57L, C58, CBA, DBA, GR, NOD.scid, SAMP, and SJL/J, and non-inbred mice, including 4WC, AB6F1, Ames dwarf, B6, 129, B6C3F1, BALB/c,129, Het3, nude, SENCAR, and several Swiss stocks. Experimental strategies for long-term cross-sectional and longitudinal studies to assess causes of or contributors to death, disease burden, spectrum of pathology phenotypes, longevity, and functional healthy life spans (health spans) are compared and discussed.

  12. Design and Experimental Demonstration of Cherenkov Radiation Source Based on Metallic Photonic Crystal Slow Wave Structure

    NASA Astrophysics Data System (ADS)

    Fu, Tao; Yang, Zi-Qiang; Ouyang, Zheng-Biao

    2016-11-01

    This paper presents a Cherenkov radiation source based on a metallic photonic crystal (MPC) slow-wave structure (SWS) cavity. The Cherenkov source designed by linear theory works at 34.7 GHz when the cathode voltage is 550 kV. A three-dimensional particle-in-cell (PIC) simulation of the SWS shows an operating frequency of 35.56 GHz with a single TM01 mode, basically consistent with the theoretical value under the same parameters. An experiment was carried out to verify the results of the theory and the PIC simulation. The experimental system includes a cathode emitting unit, the SWS, a magnetic system, an output antenna, and detectors. Experimental results show that the operating frequency, obtained by detecting the retarded time of wave propagation in waveguides, is around 35.5 GHz with a single TM01 mode and an output power reaching 54 MW. This indicates that the MPC structure can reduce mode competition. The purpose of the paper is to show, in theory and in a preliminary experiment, that an SWS with a photonic band gap can produce microwaves in the TM01 mode, and it provides a good experimental and theoretical foundation for designing high-power microwave devices.

  13. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    SciTech Connect

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling

  14. Experimental characterization and multidisciplinary conceptual design optimization of a bendable load stiffened unmanned air vehicle wing

    NASA Astrophysics Data System (ADS)

    Jagdale, Vijay Narayan

    Demand for deployable MAVs and UAVs with wings designed to reduce aircraft storage volume led to the development of a bendable wing concept at the University of Florida (UF). The wing shows an ability to load stiffen in the flight load direction while remaining compliant in the opposite direction, enabling UAV storage inside smaller packing volumes. From the design perspective, when the wing shape parameters are treated as design variables, the performance requirements of high aerodynamic efficiency, structural stability under aggressive flight loads, and the desired compliant nature to prevent breaking while stored generally conflict with each other. Creep deformation induced by long-term storage and its effect on the wing flight characteristics are additional considerations. Experimental characterization of candidate bendable UAV wings is performed in order to demonstrate and understand the aerodynamic and structural behavior of the bendable load-stiffened wing under flight loads and while the wings are stored inside a canister for long durations, in the process identifying some important wing shape parameters. A multidisciplinary, multiobjective design optimization approach is utilized for the conceptual design of a 24 inch span and 7 inch root chord bendable wing. Aerodynamic performance of the wing is studied using the extended vortex lattice method based Athena Vortex Lattice (AVL) program. An arc-length-method-based nonlinear FEA routine in ABAQUS is used to evaluate the structural performance of the wing and to determine the maximum flying velocity that the wing can withstand without buckling or failing under aggressive flight loads. An analytical approach is used to study the stresses developed in the composite wing during storage, and the Tsai-Wu criterion is used to check failure of the composite wing due to the rolling stresses and to determine the minimum safe storage diameter. Multidisciplinary wing shape and layup optimization is performed using an elitist non-dominated sorting

  15. Experimental Evaluation of the Failure of a Seismic Design Category - B Precast Concrete Beam-Column Connection System

    DTIC Science & Technology

    2014-12-01

    ERDC TR-14-12, December 2014. Experimental Evaluation of the Failure of a Seismic Design Category B Precast Concrete Beam-Column Connection System. ... experiment to test a precast concrete beam-column system to failure. This experiment was designed to evaluate the performance of precast frame ...

  16. Computational simulations of frictional losses in pipe networks confirmed in experimental apparatuses designed by honors students

    NASA Astrophysics Data System (ADS)

    Pohlman, Nicholas A.; Hynes, Eric; Kutz, April

    2015-11-01

    Lectures in introductory fluid mechanics at NIU enroll a combination of students with standard enrollment and students seeking honors credit for an enriching experience. Most honors students dread the additional homework problems or the extra paper assigned by the instructor. During the past three years, the honors students in my class have instead collaborated to design wet-lab experiments in which their peers predict variable volume flow rates of open reservoirs driven by gravity. Rather than doing extra work, the honors students learn the Bernoulli head-loss equation earlier in order to design appropriate systems for an experimental wet lab. Prior designs incorporated minor-loss features such as a sudden contraction or multiple unions and valves. The honors students from Spring 2015 expanded the repertoire of available options by developing large-scale set-ups with multiple pipe networks that could be combined to test the flexibility of the student teams' computational programs. The engagement of bridging theory with practice was appreciated by all of the students, and multiple teams were able to predict performance within 4% accuracy. The challenges, schedules, and cost estimates of incorporating the experimental lab into an introductory fluid mechanics course will be reported.
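
    The kind of prediction the student teams automated can be sketched from the Bernoulli head-loss balance: for a reservoir draining through a pipe with friction and minor losses, the exit velocity follows v = sqrt(2gh / (1 + fL/D + ΣK)). The numerical values below are placeholders, not the course apparatus.

```python
import math

# Gravity-driven drainage of an open reservoir through a pipe (illustrative values).
g = 9.81                    # m/s^2
h = 1.2                     # m, free-surface height above the pipe exit
L, D = 2.0, 0.02            # pipe length and diameter, m
f = 0.025                   # Darcy friction factor (assumed constant here)
K_minor = 1.5 + 0.5         # e.g. a valve plus a sharp entrance (assumed)

# Energy balance between the free surface and the exit:
#   g*h = (v^2 / 2) * (1 + f*L/D + sum(K))
v = math.sqrt(2 * g * h / (1 + f * L / D + K_minor))
Q = v * math.pi * D**2 / 4  # volumetric flow rate, m^3/s
print(f"exit velocity ~{v:.2f} m/s, flow rate ~{Q * 1000:.2f} L/s")
```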

  17. Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions

    PubMed Central

    Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    Objective This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463
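
    A toy simulation, not taken from the paper, of the core idea: the intervention component is randomized at every decision point, and the proximal effect is estimated by regressing the proximal outcome on the centered treatment indicator among available decision points.

```python
import numpy as np

rng = np.random.default_rng(6)
n_people, n_decisions, p_treat = 50, 200, 0.5

# Micro-randomization: an independent coin flip at every decision point.
A = rng.binomial(1, p_treat, size=(n_people, n_decisions))
availability = rng.binomial(1, 0.8, size=A.shape)   # a person is not always available

# Proximal outcome: baseline + true proximal effect of 0.3 when treated and available.
Y = 1.0 + 0.3 * A * availability + rng.normal(scale=1.0, size=A.shape)

# Simple proximal-effect estimate among available decision points: regress Y on the
# centered treatment indicator (approximately a treated-vs-untreated mean difference).
mask = availability == 1
a_c = A[mask] - p_treat
beta = np.sum(a_c * Y[mask]) / np.sum(a_c**2)
print(f"estimated proximal effect: {beta:.3f} (true 0.3)")
```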

  18. Experimental research of the synthetic jet generator designs based on actuation of diaphragm with piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Rimasauskiene, R.; Matejka, M.; Ostachowicz, W.; Kurowski, M.; Malinowski, P.; Wandowski, T.; Rimasauskas, M.

    2015-01-01

    Experimental analyses of four in-house synthetic jet generator designs are presented in this paper. The main task of this work was to find the most appropriate design of the synthetic jet generator. Dynamic characteristics of the synthetic jet generator's diaphragm with piezoelectric material were measured using non-contact measuring equipment, a Polytec®PSV 400 laser vibrometer. Temperatures of the piezoelectric diaphragms working at resonance frequency were measured with a fiber Bragg grating (FBG) sensor. Experimental analysis of the synthetic jet generator amplitude-frequency characteristics was performed using CTA (hot-wire anemometry) measuring techniques. A piezoelectric diaphragm 27 mm in diameter was excited by a sinusoidal voltage signal and fixed tightly inside the chamber of the synthetic jet generator. The number of synthetic jet generator orifices (1 or 3) and the cavity volume (cavity height varying from 0.5 mm to 1.5 mm) were changed. The highest synthetic jet velocity, 25 m/s, was obtained with the synthetic jet generator that had a 0.5 mm cavity and one orifice (resonance frequency of the piezoelectric diaphragm: 2.8 kHz). It can be concluded that this type of design is preferred in order to obtain the peak velocity of the synthetic jet.

  19. Design and experimental characterization of a NiTi-based, high-frequency, centripetal peristaltic actuator

    NASA Astrophysics Data System (ADS)

    Borlandelli, E.; Scarselli, D.; Nespoli, A.; Rigamonti, D.; Bettini, P.; Morandini, M.; Villa, E.; Sala, G.; Quadrio, M.

    2015-03-01

    Development and experimental testing of a peristaltic device actuated by a single shape-memory NiTi wire are described. The actuator is designed to radially shrink a compliant silicone pipe, and must work on a sustained basis at an actuation frequency that is higher than those typical of NiTi actuators. Four rigid, aluminum-made circular sectors are sitting along the pipe circumference and provide the required NiTi wire housing. The aluminum assembly acts as geometrical amplifier of the wire contraction and as heat sink required to dissipate the thermal energy of the wire during the cooling phase. We present and discuss the full experimental investigation of the actuator performance, measured in terms of its ability to reduce the pipe diameter, at a sustained frequency of 1.5 Hz. Moreover, we investigate how the diameter contraction is affected by various design parameters as well as actuation frequencies up to 4 Hz. We manage to make the NiTi wire work at 3% in strain, cyclically providing the designed pipe wall displacement. The actuator performance is found to decay approximately linearly with actuation frequencies up to 4 Hz. Also, the interface between the wire and the aluminum parts is found to be essential in defining the functional performance of the actuator.

  20. Design concept and preliminary experimental demonstration of MEMS gyroscopes with 4-DOF master-slave architecture

    NASA Astrophysics Data System (ADS)

    Acar, Cenk; Shkel, Andrei M.

    2002-07-01

    This paper reports a design concept for MEMS gyroscopes that shifts the complexity of the design from the control architecture to the system dynamics, utilizing the passive disturbance rejection capability of a 4-DOF dynamical system. Specifically, a novel wide-bandwidth micromachined gyroscope design approach based on increasing the degrees of freedom of the oscillatory system by the use of two independently oscillating interconnected proof masses is presented, along with a preliminary experimental demonstration of implementation feasibility. With the concept of using a 4-DOF system, inherent disturbance rejection is achieved due to the wide operating frequency range of the dynamic system, providing reduced sensitivity to structural and thermal parameter fluctuations. Thus, less demanding active control strategies are required for operation in the presence of perturbations. The fabricated prototype dual-mass gyroscopes successfully demonstrated a dramatically wide driving frequency range within which the drive-direction oscillation amplitude varies insignificantly without any active control, in contrast to conventional gyroscopes where the mass has to be sustained in constant-amplitude oscillation in a very narrow frequency band. Mechanical amplification of the driven mass oscillation by the sensing element was also experimentally demonstrated, providing large oscillation amplitudes, which is crucial for sensor performance.