Pant, Apourv; Rai, J P N
2018-04-15
A two-phase bioreactor was designed, constructed and developed to evaluate chlorpyrifos remediation. Six biotic and abiotic factors (substrate-loading rate, slurry-phase pH, slurry-phase dissolved oxygen (DO), soil-water ratio, temperature and soil microflora load) were evaluated by design-of-experiments (DOE) methodology employing Taguchi's orthogonal array (OA). The six selected factors were considered at two levels in an L-8 array (2^7, 15 experiments) in the experimental design. The optimum operating conditions obtained from the methodology enhanced chlorpyrifos degradation from 283.86 µg/g to 955.364 µg/g, an overall enhancement of 70.34%. In the present study, with the help of a few well-defined experimental parameters, a mathematical model was constructed to understand the complex bioremediation process and optimize the relevant parameters with high accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
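As a rough illustration of the Taguchi analysis summarized above, the sketch below builds the standard L8 (2^7) orthogonal array and ranks factors by a larger-the-better signal-to-noise effect. The factor assignment and degradation values are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Standard Taguchi L8 (2^7) orthogonal array: 8 runs x 7 two-level columns,
# coded here as 0/1. Six factors occupy six of the seven columns.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

# Hypothetical chlorpyrifos degradation (ug/g) for each of the 8 runs.
y = np.array([310.0, 420.0, 510.0, 640.0, 580.0, 700.0, 820.0, 930.0])

# Larger-the-better S/N ratio per run: -10*log10(1/y^2).
sn = -10.0 * np.log10(1.0 / y**2)

# Main effect of each factor = mean S/N at level 1 minus mean S/N at level 0.
for col in range(6):
    effect = sn[L8[:, col] == 1].mean() - sn[L8[:, col] == 0].mean()
    print(f"factor {col + 1}: S/N effect = {effect:+.2f} dB")
```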
Experimental Learning Enhancing Improvisation Skills
ERIC Educational Resources Information Center
Pereira Christopoulos, Tania; Wilner, Adriana; Trindade Bestetti, Maria Luisa
2016-01-01
Purpose: This study aims to present improvisation training and experimentation as an alternative method to deal with unexpected events in which structured processes do not seem to work. Design/Methodology/Approach: Based on the literature of sensemaking and improvisation, the study designs a framework and process model of experimental learning…
2017-08-01
…of metallic additive manufacturing processes, and show that combining experimental data with modelling and advanced data processing and analytics methods will accelerate that development. For complex geometries, we develop a methodology that couples experimental data and modelling to convert the scan paths into spatially resolved local thermal histories.
ERIC Educational Resources Information Center
Maseda, F. J.; Martija, I.; Martija, I.
2012-01-01
This paper describes a novel Electrical Machine and Power Electronic Training Tool (EM&PE[subscript TT]), a methodology for using it, and associated experimental educational activities. The training tool is implemented by recreating a whole power electronics system, divided into modular blocks. This process is similar to that applied when…
Workload-Based Automated Interface Mode Selection
2012-03-22
[Table-of-contents snippet: 3.5.10 Agent Reward Function; 3.5.11 Accelerated Learning Strategies; 4. Experimental Methodology; 4.1 System Engineering Methodology; listed figures include the agent state function and the agent reward function.]
ERIC Educational Resources Information Center
Lin, Wen-Chuan
2012-01-01
Traditional, cognitive-oriented theories of English language acquisition tend to employ experimental modes of inquiry and neglect social, cultural and historical contexts. In this paper, I review the theoretical debate over methodology by examining ontological, epistemological and methodological controversies around cognitive-oriented theories. I…
NASA Astrophysics Data System (ADS)
Srivastava, Y.; Srivastava, S.; Boriwal, L.
2016-09-01
Mechanical alloying is a novel solid-state process that has received considerable attention due to many advantages over other conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from premixed basic powders of cobalt (Co), iron (Fe) and aluminium (Al) in the stoichiometric ratio 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometer (VSM). A two-factor, five-level design matrix was applied to the experimental process, and the experimental results were used for response surface methodology. The interaction between the input process parameters and the response was established with the help of regression analysis. Further, the analysis-of-variance technique was applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but lay within the same range. Using response surface methodology, the process parameters were optimized to obtain improved magnetic properties, and the optimum process parameters were identified using numerical and graphical optimization techniques.
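To make the response-surface step concrete, here is a minimal sketch that fits a second-order model to a hypothetical two-factor, five-level design and locates the optimum on a grid. The factor coding, responses and noise model are all assumptions for illustration, not the paper's data.

```python
import numpy as np
from itertools import product

# Hypothetical two-factor, five-level design (coded levels -2..2);
# responses are placeholder magnetic-property values.
levels = [-2, -1, 0, 1, 2]
X = np.array(list(product(levels, levels)), dtype=float)  # 25 runs
rng = np.random.default_rng(0)
y = 5.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] - 0.5 * X[:, 0]**2 + rng.normal(0, 0.1, len(X))

# Second-order response surface model: 1, x1, x2, x1*x2, x1^2, x2^2.
def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Locate the optimum response on a fine grid of coded factor settings.
grid = np.array(list(product(np.linspace(-2, 2, 81), repeat=2)))
pred = design_matrix(grid) @ beta
print("optimum (coded):", grid[pred.argmax()], "predicted response:", pred.max())
```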
Options for Hardening FinFETS with Flowable Oxide Between Fins
2017-03-01
…thus hardening by process is needed. Using the methodology of CV measurements on inexpensive experimental blanket oxides, we have determined options. Abstract: A methodology using radiation-induced charge measurements by CV techniques on blanket oxides is shown to aid in the choice of process options for hardening FinFETs. Net positive charge in flowable oxides was reduced by 50% using a simple non-intrusive process change.
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
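A minimal sketch of the ABC rejection step that underlies the utility evaluation, using a toy one-parameter epidemic-like model in place of the paper's Markov process models; the prior, summary statistic and tolerance are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pre-computed bank of model simulations: parameter draws from the prior and
# a summary statistic of the simulated data (toy one-parameter model).
theta = rng.uniform(0.0, 2.0, 50_000)   # prior draws of, e.g., an infection rate
sims = rng.poisson(lam=10.0 * theta)    # simulated summary statistic per draw

def abc_posterior_precision(observed, tol):
    """ABC rejection: keep prior draws whose simulated summary lies within
    `tol` of the observed one; score the design by posterior precision."""
    accepted = theta[np.abs(sims - observed) <= tol]
    return 1.0 / accepted.var() if accepted.size > 1 else 0.0

# The design utility would average this precision over plausible data sets.
print(abc_posterior_precision(observed=12, tol=2))
```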
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.
1994-01-01
This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires over 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN, which provides a unified environment for structural and control design.
Parallel processing in a host plus multiple array processor system for radar
NASA Technical Reports Server (NTRS)
Barkan, B. Z.
1983-01-01
Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.
Bayesian truthing as experimental verification of C4ISR sensors
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Forrester, Thomas; Romanov, Volodymyr; Wang, Wenjian; Nielsen, Thomas; Kostrzewski, Andrew
2015-05-01
In this paper, a general methodology for experimental verification/validation of the performance of C4ISR and other sensors is presented, based on Bayesian inference in general and binary sensors in particular. This methodology, called Bayesian Truthing, defines performance metrics for binary sensors in physics, optics, electronics, medicine, law enforcement, C3ISR, QC, ATR (Automatic Target Recognition), terrorism-related events, and many other areas. For Bayesian Truthing, the sensing medium itself is not what is truly important; it is how the decision process is affected.
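As a hedged illustration of why the decision process, rather than the sensing medium, dominates: the snippet below applies Bayes' rule to a binary sensor's operating point. The sensitivity, specificity and prevalence figures are invented for the example.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule for a binary sensor: probability that a positive alarm
    is a true event, given the sensor's operating point and event prior."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A nominally good sensor (95%/95%) looking for a rare event (0.1% prior):
print(positive_predictive_value(0.95, 0.95, 0.001))  # ~0.019 -- mostly false alarms
```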
ERIC Educational Resources Information Center
Kraaijvanger, Richard G.; Veldkamp, Tom
2017-01-01
Purpose: This paper analyses research strategies followed by farmer groups in Tigray that were involved in participatory experimentation. Understanding the choices made by farmers in such experimentation processes is important for understanding why farmers in Tigray often hesitated to adopt recommended practices. Design/Methodology/Approach: A…
NASA Astrophysics Data System (ADS)
Schmidt, S.; Heyns, P. S.; de Villiers, J. P.
2018-02-01
In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal, which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
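A minimal sketch of the healthy-reference idea, assuming the hmmlearn package and synthetic stand-in features (the paper's order-tracked features and two-model combination are not reproduced): a hidden Markov model is trained on healthy-gearbox data only, and the negative log-likelihood of new windows serves as the discrepancy signal.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency

# Stand-ins for vibration feature windows, shape (n_windows, n_features).
rng = np.random.default_rng(2)
healthy_features = rng.normal(0.0, 1.0, (500, 4))
monitored_features = rng.normal(0.5, 1.2, (200, 4))

# Train a reference HMM on features from the *healthy* gearbox only.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(healthy_features)

# Discrepancy signal: negative log-likelihood of each incoming window under
# the healthy-condition model; sustained growth flags a developing fault.
discrepancy = np.array([-model.score(w[None, :]) for w in monitored_features])
alarm = discrepancy > discrepancy[:50].mean() + 3 * discrepancy[:50].std()
print(f"{alarm.sum()} of {alarm.size} windows exceed the healthy baseline")
```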
The Methodology of Calculation of Cutting Forces When Machining Composite Materials
NASA Astrophysics Data System (ADS)
Rychkov, D. A.; Yanyushkin, A. S.
2016-08-01
Cutting of composite materials has specific features and differs from the processing of metals, a characteristic example being the intense wear of the cutting tool. An important criterion in selecting process parameters for composite processing is the value of the cutting forces, which depends on many factors and is usually determined experimentally, which is not always appropriate. The study developed a method for determining the cutting forces when machining composite materials, together with a comparative evaluation of the calculated and actual values of the cutting forces. The methodology for calculating cutting forces takes into account the specific features and extent of wear of the cutting tool, the strength properties of the processed material, and the cutting conditions. Experimental studies were conducted by milling fiberglass with a cutter equipped with VK3M hard-metal elements. The discrepancy between the estimated and actual values of the cutting force is not more than 10%.
Policy capturing as a method of quantifying the determinants of landscape preference
Dennis B. Propst
1979-01-01
Policy Capturing, a potential methodology for evaluating landscape preference, was described and tested. This methodology results in a mathematical model that theoretically represents the human decision-making process. Under experimental conditions, judges were asked to express their preferences for scenes of the Blue Ridge Parkway. An equation which "captures,...
Yemets, Anatoliy V; Donchenko, Viktoriya I; Scrinick, Eugenia O
2018-01-01
Introduction: The experimental work is aimed at introducing theoretical and methodological foundations for the professional training of the future doctor. The aim: To identify the dynamics of quantitative and qualitative indicators of the readiness of a specialist in medicine. Materials and methods: The article presents the course and results of experimental work on the conditions for forming the readiness of future specialists in medicine. Results: Methodical bases for studying the disciplines of the general-practice and specialized professional stages of experimental training of future physicians have been worked out. Conclusions: The training materials are developed taking into account the peculiarities of future physician training at the various stages of experimental implementation in the educational process of higher medical educational institutions.
Experimental equipment for measuring of rotary air motors parameters
NASA Astrophysics Data System (ADS)
Dvořák, Lukáš; Fojtášek, Kamil; Řeháček, Vojtěch
In the article, the construction of an experimental device for measuring the parameters of small rotary air motors is described. Furthermore, a measurement methodology and the processing of the measured data are described. At the end of the article, characteristics of the chosen air motor are presented.
Experimental Methodology for Measuring Combustion and Injection-Coupled Responses
NASA Technical Reports Server (NTRS)
Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.
2006-01-01
A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.
NASA Astrophysics Data System (ADS)
Patole, Pralhad B.; Kulkarni, Vivek V.
2018-06-01
This paper presents an investigation into the minimum quantity lubrication mode with nanofluid during turning of alloy steel AISI 4340 workpiece material, with the objective of building an experimental model to predict surface roughness and cutting force and to analyze the effect of process parameters on machinability. A full factorial design matrix was used for the experimental plan. According to the design of experiments, surface roughness and cutting force were measured. The relationship between the response variables and the process parameters is determined through response surface methodology, using a quadratic regression model. Results show that surface roughness is mainly influenced by feed rate and cutting speed, while the depth of cut exhibits the maximum influence on the cutting force components as compared to feed rate and cutting speed. The values predicted from the model and the experimental values are very close to each other.
Hirschi, Jennifer S.; Takeya, Tetsuya; Hang, Chao; Singleton, Daniel A.
2009-01-01
We suggest here and evaluate a methodology for the measurement of specific interatomic distances from a combination of theoretical calculations and experimentally measured 13C kinetic isotope effects. This process takes advantage of a broad diversity of transition structures available for the epoxidation of 2-methyl-2-butene with oxaziridines. From the isotope effects calculated for these transition structures, a theory-independent relationship between the C-O bond distances of the newly forming bonds and the isotope effects is established. Within the precision of the measurement, this relationship in combination with the experimental isotope effects provides a highly accurate picture of the C-O bonds forming at the transition state. The diversity of transition structures also allows an evaluation of the Schramm process for defining transition state geometries based on calculations at non-stationary points, and the methodology is found to be reasonably accurate. PMID:19146405
Making Mentoring Stick: A Case Study
ERIC Educational Resources Information Center
Karallis, Takis; Sandelands, Eric
2009-01-01
Purpose: This paper seeks to provide a case study of the mentoring process within Kentz Engineers & Constructors. Design/methodology/approach: The paper reflects the experiences of those leading the mentoring process within Kentz with insights extracted from a process of action, reflection and live experimentation. Findings: The paper…
Robust parameter design for automatically controlled systems and nanostructure synthesis
NASA Astrophysics Data System (ADS)
Dasgupta, Tirthankar
2007-12-01
This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, at large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a Multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte-Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring best process conditions for synthesis of nanowires. SMED is a novel approach to generate sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
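The idea behind a minimum-energy design can be sketched as follows: visited process conditions act as charged particles, with charge set by how poor the observed yield was, and the next experiment is placed where the total potential energy is lowest. This is a minimal sketch under those assumptions, not the author's implementation; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def next_smed_point(existing, charges, n_candidates=5000):
    """Pick the candidate minimizing total 'electrostatic' potential energy
    against existing design points; low-yield regions carry high charge
    and therefore repel future experiments."""
    candidates = rng.uniform(0.0, 1.0, (n_candidates, existing.shape[1]))
    dists = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=2)
    energy = (charges[None, :] / np.clip(dists, 1e-9, None)).sum(axis=1)
    return candidates[energy.argmin()]

# Two process variables scaled to [0, 1]; charge ~ how poor the observed
# nanostructure yield was at each visited condition (hypothetical values).
design = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
charge = np.array([5.0, 1.0, 3.0])
print("next run at:", next_smed_point(design, charge))
```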
Surface laser marking optimization using an experimental design approach
NASA Astrophysics Data System (ADS)
Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.
2017-04-01
Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τ_pulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using a design of experiments (DOE) method: Taguchi methodology and response surface methodology (RSM). A design is first created using the MINITAB program, and then the laser marking process is performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.
Prediction of Shrinkage Porosity Defect in Sand Casting Process of LM25
NASA Astrophysics Data System (ADS)
Rathod, Hardik; Dhulia, Jay K.; Maniar, Nirav P.
2017-08-01
In the present competitive global environment, foundry enterprises need to perform productively, with the least number of rejections, and to produce castings in the shortest lead time. It has become extremely difficult for foundry industries to meet demands for defect-free castings and strict delivery schedules. The process of casting solidification is complex in nature. Prediction of shrinkage defects in metal casting is one of the critical concerns in foundries and one of the potential research areas in casting. Due to increasing pressure to improve quality and to reduce cost, it is essential to upgrade the current methodology used in foundries. In the present research work, a methodology for predicting shrinkage porosity defects in the sand casting process of LM25 using experimentation and ANSYS is proposed. The objectives successfully achieved are the prediction of shrinkage porosity distribution in Al-Si casting and the determination of the effectiveness of the investigated function for predicting shrinkage porosity by correlating the results of simulation studies with those obtained experimentally. The practical relevance of the research is reflected in the fact that experimentation was performed on 9 different Y-junctions at a foundry, and the practical data obtained from experimentation were used for simulation.
A prototype software methodology for the rapid evaluation of biomanufacturing process options.
Chhatre, Sunil; Francis, Richard; O'Donovan, Kieran; Titchener-Hooker, Nigel J; Newcombe, Anthony R; Keshavarz-Moore, Eli
2007-10-01
A three-layered simulation methodology is described that rapidly evaluates biomanufacturing process options. In each layer, inferior options are screened out, while more promising candidates are evaluated further in the subsequent, more refined layer, which uses more rigorous models that require more data from time-consuming experimentation. Screening ensures laboratory studies are focused only on options showing the greatest potential. To simplify the screening, outputs of production level, cost and time are combined into a single value using multi-attribute-decision-making techniques. The methodology was illustrated by evaluating alternatives to an FDA (U.S. Food and Drug Administration)-approved process manufacturing rattlesnake antivenom. Currently, antivenom antibodies are recovered from ovine serum by precipitation/centrifugation and proteolyzed before chromatographic purification. Alternatives included increasing the feed volume, replacing centrifugation with microfiltration and replacing precipitation/centrifugation with a Protein G column. The best alternative used a higher feed volume and a Protein G step. By rapidly evaluating the attractiveness of options, the methodology facilitates efficient and cost-effective process development.
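A minimal sketch of the screening layer's multi-attribute scoring: each option's normalised attributes are collapsed into a single weighted score and the options ranked. The option names, attribute values and weights are placeholders, not figures from the antivenom study.

```python
# Combine production level, cost and time into one score per option via
# simple weighted multi-attribute decision-making (illustrative values).
options = {
    "higher feed volume + Protein G": {"yield": 0.9, "cost": 0.4, "time": 0.5},
    "microfiltration":                {"yield": 0.7, "cost": 0.6, "time": 0.6},
    "current process":                {"yield": 0.6, "cost": 0.7, "time": 0.7},
}
weights = {"yield": 0.5, "cost": 0.3, "time": 0.2}  # yield valued most

def score(attrs):
    # Attributes are pre-normalised to [0, 1] with 1 = best.
    return sum(weights[k] * v for k, v in attrs.items())

# Rank candidates; inferior options would be screened out before the next,
# more data-hungry modelling layer.
for name in sorted(options, key=lambda o: score(options[o]), reverse=True):
    print(f"{score(options[name]):.2f}  {name}")
```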
NASA Astrophysics Data System (ADS)
Aguilar Cisneros, Jorge; Vargas Martinez, Hector; Pedroza Melendez, Alejandro; Alonso Arevalo, Miguel
2013-09-01
Mexico is a country where experience in building software for satellite applications is just beginning. This is a delicate situation, because in the near future we will need to develop software for SATEX-II (Mexican Experimental Satellite), a project of SOMECyTA (the Mexican Society of Aerospace Science and Technology). We have experience applying software development methodologies, like TSP (Team Software Process) and SCRUM, in other areas. We analyzed these methodologies and concluded that they can be applied to develop software for SATEX-II, supported by the ESA PSS-05-0 Standard, in particular ESA PSS-05-11. Our analysis focused on the main characteristics of each methodology and how these methodologies could be used together with the ESA PSS-05-0 Standard. Our outcomes may, in general, be used by teams who need to build small satellites; in particular, they will be used when we build the on-board software applications for SATEX-II.
NASA Astrophysics Data System (ADS)
Ribeiro, José B.; Silva, Cristóvão; Mendes, Ricardo; Plaksin, I.; Campos, Jose
2012-03-01
The use of emulsion explosives [EEx] for processing materials (compaction, welding and forming) requires the ability to perform detailed simulations of their detonation process [DP]. Detailed numerical simulations of the DP of this kind of explosive, characterized by having a finite reaction zone thickness, are thought to be suitably performed using the Lee-Tarver reactive flow model. In this work, a real-coded genetic algorithm methodology was used to estimate the 15 parameters of the reaction rate equation [RRE] of that model for a particular EEx. This methodology allows, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to search for the 15 parameters of the RRE that fit the numerical to the experimental results. Mass averaging and the Plate-Gap Model have been used for the determination of the shock data used in the unreacted explosive JWL EoS assessment, and the thermochemical code THOR retrieved the data used in the detonation products JWL EoS assessment. The obtained parameters allow a reasonable description of the experimental data.
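To make the estimation loop concrete, here is a minimal real-coded genetic algorithm of the kind described, with blend crossover, Gaussian mutation and elitism. The 15-parameter objective is a stand-in for running the hydrocode and comparing simulated against experimental records, so every constant here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
N_PARAMS, POP, GENS = 15, 60, 200
lo, hi = np.zeros(N_PARAMS), np.ones(N_PARAMS)  # normalised parameter bounds

def misfit(params):
    # Placeholder for running a detonation simulation with a candidate
    # reaction-rate equation and scoring it against the experimental record.
    return np.sum((params - 0.3)**2)

pop = rng.uniform(lo, hi, (POP, N_PARAMS))
for _ in range(GENS):
    fit = np.apply_along_axis(misfit, 1, pop)
    order = np.argsort(fit)
    parents = pop[order[:POP // 2]]                 # truncation selection
    a, b = parents[rng.integers(0, len(parents), (2, POP))]
    w = rng.uniform(size=(POP, N_PARAMS))
    children = w * a + (1 - w) * b                  # blend crossover
    mutate = rng.random((POP, N_PARAMS)) < 0.05     # Gaussian mutation
    children[mutate] += rng.normal(0, 0.1, mutate.sum())
    pop = np.clip(children, lo, hi)
    pop[0] = parents[0]                             # elitism: keep the best
print("best misfit:", misfit(pop[0]))
```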
Matha, Denis; Sandner, Frank; Molins, Climent; Campos, Alexis; Cheng, Po Wen
2015-01-01
The current key challenge in the floating offshore wind turbine industry and research is on designing economic floating systems that can compete with fixed-bottom offshore turbines in terms of levelized cost of energy. The preliminary platform design, as well as early experimental design assessments, are critical elements in the overall design process. In this contribution, a brief review of current floating offshore wind turbine platform pre-design and scaled testing methodologies is provided, with a focus on their ability to accommodate the coupled dynamic behaviour of floating offshore wind systems. The exemplary design and testing methodology for a monolithic concrete spar platform as performed within the European KIC AFOSP project is presented. Results from the experimental tests compared to numerical simulations are presented and analysed and show very good agreement for relevant basic dynamic platform properties. Extreme and fatigue loads and cost analysis of the AFOSP system confirm the viability of the presented design process. In summary, the exemplary application of the reduced design and testing methodology for AFOSP confirms that it represents a viable procedure during pre-design of floating offshore wind turbine platforms. PMID:25583870
Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection.
Caiazzo, Fabrizia; Caggiano, Alessandra
2018-04-20
Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio, compared to other traditional mechanical joining techniques. In this research work, an investigation on laser welding of Ti-6Al-4V alloy plates is carried out through an experimental testing campaign, under different process conditions, in order to perform a characterization of the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of the suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data.
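The correlation-learning step could look like the following sketch, which trains a small feed-forward network to map (laser power, welding speed, defocusing distance) to one bead-geometry feature. The network size and all training values are invented for illustration; the paper's own architecture and data are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training records: [laser power (W), welding speed (mm/s),
# defocusing distance (mm)] -> weld bead depth (mm).
rng = np.random.default_rng(5)
X = rng.uniform([1000, 10, -2], [3000, 50, 2], (120, 3))
depth = 0.002 * X[:, 0] - 0.05 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 120)

# Scale inputs, then fit a small multilayer perceptron.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0))
net.fit(X, depth)

# Query the trained network for a candidate parameter set.
print("predicted depth:", net.predict([[2000.0, 30.0, 0.5]])[0], "mm")
```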
Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection
2018-01-01
Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio, compared to other traditional mechanical joining techniques. In this research work, an investigation on laser welding of Ti–6Al–4V alloy plates is carried out through an experimental testing campaign, under different process conditions, in order to perform a characterization of the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of the suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data. PMID:29677114
GilPavas, Edison; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Ángel
2012-01-01
The response surface methodology (RSM) was applied as a tool for the optimization of the operational conditions of the photo-degradation of highly concentrated PY12 wastewater resulting from a textile industry located in the suburbs of Medellin (Colombia). The Box-Behnken experimental design (BBD) was chosen for the purpose of response optimization. The photo-Fenton process was carried out in a laboratory-scale batch photo-reactor. A multifactorial experimental design was proposed, including the following variables: the initial dyestuff concentration, the H2O2 and Fe2+ concentrations, and the UV wavelength radiation. The photo-Fenton process performed at the optimized conditions resulted in ca. 100% dyestuff decolorization, 92% COD and 82% TOC degradation. A kinetic study was carried out, including the identification of some intermediate compounds generated during the oxidation process. The water biodegradability reached a final BOD5/COD value of 0.86.
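For reference, a Box-Behnken design in coded units can be generated as in this sketch; the four-factor call mirrors the variable count in this study, but the construction (no blocking, three centre replicates) is a generic illustration rather than the authors' exact design.

```python
import numpy as np
from itertools import combinations

def box_behnken(n_factors):
    """Coded Box-Behnken design: a +/-1 two-level square for each factor
    pair, remaining factors held at 0, plus centre-point replicates."""
    rows = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                rows.append(row)
    rows += [[0] * n_factors] * 3  # centre replicates
    return np.array(rows)

# Four factors: dye concentration, H2O2, Fe2+ and UV wavelength.
design = box_behnken(4)
print(design.shape)  # (27, 4): 24 edge-midpoint runs + 3 centre points
```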
Performance Optimization Control of ECH using Fuzzy Inference Application
NASA Astrophysics Data System (ADS)
Dubey, Abhay Kumar
Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing processes, provides controlled generation of functional surfaces and fast material removal capabilities in a single operation. Multi-performance optimization of processes has become vital for utilizing the full potential of manufacturing processes to meet the challenging requirements being placed on the surface quality, size, tolerances and production rate of engineering components in this globally competitive scenario. This paper presents a strategy that integrates the Taguchi matrix experimental design, analysis of variances and a fuzzy inference system (FIS) to formulate a robust, practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using genetic-algorithm tuning of the FIS (GA-tuned FIS) and another using an adaptive network-based fuzzy inference system (ANFIS), have been evaluated for a multi-performance optimization case study of ECH. The actual experimental results confirm their potential for a wide range of machining conditions employed in ECH.
ADM1-based methodology for the characterisation of the influent sludge in anaerobic reactors.
Huete, E; de Gracia, M; Ayesa, E; Garcia-Heras, J L
2006-01-01
This paper presents a systematic methodology to characterise the influent sludge in terms of the ADM1 components from the experimental measurements traditionally used in wastewater engineering. For this purpose, a complete characterisation of the model components in their elemental mass fractions and charge has been used, making a rigorous mass balance for all the process transformations and enabling the future connection with other unit-process models. It also makes possible the application of mathematical algorithms for the optimal characterisation of several components poorly defined in the ADM1 report. Additionally, decay and disintegration have been necessarily uncoupled so that the decay proceeds directly to hydrolysis instead of producing intermediate composites. The proposed methodology has been applied to the particular experimental work of a pilot-scale CSTR treating real sewage sludge, a mixture of primary and secondary sludge. The results obtained have shown a good characterisation of the influent reflected in good model predictions. However, its limitations for an appropriate prediction of alkalinity and carbon percentages in biogas suggest the convenience of including the elemental characterisation of the process in terms of carbon in the analytical program.
González-Sáiz, J M; Esteban-Díez, I; Rodríguez-Tecedor, S; Pérez-Del-Notario, N; Arenzana-Rámila, I; Pizarro, C
2014-12-15
The aim of the present work was to evaluate the effect of the main factors conditioning accelerated ageing processes (oxygen dose, chip dose, wood origin, toasting degree and maceration time) on the phenolic and chromatic profiles of red wines by using a multivariate strategy based on experimental design methodology. The results obtained revealed that the concentrations of monomeric anthocyanins and flavan-3-ols could be modified through the application of particular experimental conditions. This fact was particularly remarkable since changes in phenolic profile were closely linked to changes observed in chromatic parameters. The main strength of this study lies in the possibility of using its conclusions as a basis to make wines with specific colour properties based on quality criteria. To our knowledge, the influence of such a large number of alternative ageing parameters on wine phenolic composition and chromatic attributes has not been studied previously using a comprehensive experimental design methodology. Copyright © 2014 Elsevier Ltd. All rights reserved.
Aviation Human-in-the-Loop Simulation Studies: Experimental Planning, Design, and Data Management
2014-01-01
Kevin W. Williams, Bonny Christopher, Gena… (January 2014). The report describes the process by which we designed our human-in-the-loop (HITL) simulation study and the methodology used to collect and analyze the results.
Proof test methodology for composites
NASA Technical Reports Server (NTRS)
Wu, Edward M.; Bell, David K.
1992-01-01
The special requirements for proof test of composites are identified based on the underlying failure process of composites. Two proof test methods are developed to eliminate the inevitable weak fiber sites without also causing flaw clustering which weakens the post-proof-test composite. Significant reliability enhancement by these proof test methods has been experimentally demonstrated for composite strength and composite life in tension. This basic proof test methodology is relevant to the certification and acceptance of critical composite structures. It can also be applied to the manufacturing process development to achieve zero-reject for very large composite structures.
Computer-Aided Sensor Development Focused on Security Issues.
Bialas, Andrzej
2016-05-26
The paper examines intelligent sensor and sensor system development according to the Common Criteria methodology, which is the basic security assurance methodology for IT products and systems. The paper presents how the development process can be supported by software tools, design patterns and knowledge engineering. The automation of this process brings cost-, quality-, and time-related advantages, because the most difficult and most laborious activities are software-supported and the design reusability is growing. The paper includes a short introduction to the Common Criteria methodology and its sensor-related applications. In the experimental section the computer-supported and patterns-based IT security development process is presented using the example of an intelligent methane detection sensor. This process is supported by an ontology-based tool for security modeling and analyses. The verified and justified models are transferred straight to the security target specification representing security requirements for the IT product. The novelty of the paper is to provide a patterns-based and computer-aided methodology for the sensors development with a view to achieving their IT security assurance. The paper summarizes the validation experiment focused on this methodology adapted for the sensors system development, and presents directions of future research.
Computer-Aided Sensor Development Focused on Security Issues
Bialas, Andrzej
2016-01-01
The paper examines intelligent sensor and sensor system development according to the Common Criteria methodology, which is the basic security assurance methodology for IT products and systems. The paper presents how the development process can be supported by software tools, design patterns and knowledge engineering. The automation of this process brings cost-, quality-, and time-related advantages, because the most difficult and most laborious activities are software-supported and the design reusability is growing. The paper includes a short introduction to the Common Criteria methodology and its sensor-related applications. In the experimental section the computer-supported and patterns-based IT security development process is presented using the example of an intelligent methane detection sensor. This process is supported by an ontology-based tool for security modeling and analyses. The verified and justified models are transferred straight to the security target specification representing security requirements for the IT product. The novelty of the paper is to provide a patterns-based and computer-aided methodology for the sensors development with a view to achieving their IT security assurance. The paper summarizes the validation experiment focused on this methodology adapted for the sensors system development, and presents directions of future research. PMID:27240360
1981-01-01
…per-rev, ring weighting factor, etc.) and with compression system design. A detailed description of the SAE methodology is provided in Ref. 1, which offers insights into the practical application of experimental aeromechanical procedures and establishes the process of valid design assessment, avoiding… considerations given to the total engine system. Design verification in the experimental laboratory: certain key parameters influence the design of modern…
Using Processing Instruction for the Acquisition of English Present Perfect of Filipinos
ERIC Educational Resources Information Center
Erfe, Jonathan P.; Lintao, Rachelle B.
2012-01-01
This is an experimental study on the relative effects of Van Patten's Processing Instruction (PI) (1996, 2002), a "psycholinguistically-motivated" intervention in teaching second-language (L2) grammar, on young-adult Filipino learners of English. A growing body of research on this methodological alternative, which establishes…
NASA Astrophysics Data System (ADS)
Navarro, Manuel
2014-05-01
This paper presents a model of how children generate concrete concepts from perception through processes of differentiation and integration. The model informs the design of a novel methodology (evolutionary maps or emaps), whose implementation on certain domains unfolds the web of itineraries that children may follow in the construction of concrete conceptual knowledge and pinpoints, for each conception, the architecture of the conceptual change that leads to the scientific concept. Remarkably, the generative character of its syntax yields conceptions that, if unknown, amount to predictions that can be tested experimentally. Its application to the diurnal cycle (including the sun's trajectory in the sky) indicates that the model is correct and the methodology works (in some domains). Specifically, said emap predicts a number of exotic trajectories of the sun in the sky that, in the experimental work, were drawn spontaneously both on paper and a dome. Additionally, the application of the emaps theoretical framework in clinical interviews has provided new insight into other cognitive processes. The field of validity of the methodology and its possible applications to science education are discussed.
Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat
2013-01-01
Lipase-catalyzed production of a triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling and statistical evaluation of the findings. Five independent process variables, including enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the given conditions designed by the Design Expert software. Experimental data were examined with a normality test before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate for predicting the optimum conversion of product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole has proven to be adequate for the design and optimization of the enzymatic process.
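A circumscribed central composite design in coded units can be constructed as in this sketch; the five-factor call matches the number of variables above, while the rotatable alpha and six centre replicates are generic defaults, not necessarily the software's settings.

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None, n_center=6):
    """Coded circumscribed CCD: 2^k factorial cube, 2k axial (star) points
    at +/-alpha, and replicated centre points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable choice
    cube = np.array(list(product([-1.0, 1.0], repeat=k)))
    star = np.zeros((2 * k, k))
    for i in range(k):
        star[2 * i, i], star[2 * i + 1, i] = -alpha, alpha
    centre = np.zeros((n_center, k))
    return np.vstack([cube, star, centre])

# Five variables: enzyme amount, time, temperature, molar ratio, agitation.
design = central_composite(5)
print(design.shape)  # (48, 5): 32 cube + 10 star + 6 centre runs
```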
NASA Astrophysics Data System (ADS)
Ribeiro, Jose; Silva, Cristovao; Mendes, Ricardo; Plaksin, Igor; Campos, Jose
2011-06-01
The use of emulsion explosives [EEx] for processing materials (compaction, welding and forming) requires the ability to perform detailed simulations of their detonation process [DP]. Detailed numerical simulations of the DP of this kind of explosive, characterized by having a finite reaction zone thickness, are thought to be suitably performed using the Lee-Tarver reactive flow model. In this work, a real-coded genetic algorithm methodology was used to estimate the 15 parameters of the reaction rate equation [RRE] of that model for a particular EEx. This methodology allows, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to search for the 15 parameters of the RRE that fit the numerical to the experimental results. Mass averaging and the Plate-Gap Model have been used for the determination of the shock data used in the unreacted explosive JWL EoS assessment, and the thermochemical code THOR retrieved the data used in the detonation products JWL EoS assessment. The obtained parameters allow a good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
Matha, Denis; Sandner, Frank; Molins, Climent; Campos, Alexis; Cheng, Po Wen
2015-02-28
The current key challenge in the floating offshore wind turbine industry and research is on designing economic floating systems that can compete with fixed-bottom offshore turbines in terms of levelized cost of energy. The preliminary platform design, as well as early experimental design assessments, are critical elements in the overall design process. In this contribution, a brief review of current floating offshore wind turbine platform pre-design and scaled testing methodologies is provided, with a focus on their ability to accommodate the coupled dynamic behaviour of floating offshore wind systems. The exemplary design and testing methodology for a monolithic concrete spar platform as performed within the European KIC AFOSP project is presented. Results from the experimental tests compared to numerical simulations are presented and analysed and show very good agreement for relevant basic dynamic platform properties. Extreme and fatigue loads and cost analysis of the AFOSP system confirm the viability of the presented design process. In summary, the exemplary application of the reduced design and testing methodology for AFOSP confirms that it represents a viable procedure during pre-design of floating offshore wind turbine platforms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir
2016-07-15
Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used for optimization of the drying process in terms of physical (moisture content, water activity, total color change, firmness and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins and ascorbic acid content and antioxidant activity) properties of dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where regression analysis and analysis of variance were used to determine model fitness and optimal drying conditions. The optimal conditions of the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar and a drying time of 8 h. It could be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample. Copyright © 2016 Elsevier Ltd. All rights reserved.
Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N
2005-06-20
The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). For optimizing the process, as well as to evaluate the influence of different factors on the process, the uncontrollable (noise) factors have been considered. The Taguchi methodology adopting a dynamic approach is the first of its kind for studying anaerobic process evaluation and process optimization. The designed experimental methodology consisted of four phases--planning, conducting, analysis, and validation--connected in sequence to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, at two levels were considered for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effect of individual factors, the interaction between individual factors, and signal-to-noise (S/N) ratio analysis. Attempts were also made to achieve optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In multiple-factor interaction studies, biodegradability with other factors, such as temperature, pH, and sulfate, showed maximum influence over the process performance. The optimum conditions for efficient performance of the anaerobic system in treating complex wastewater, obtained by considering dynamic (noise) factors, are a higher organic loading rate of 3.5 Kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), a mesophilic temperature range (40 degrees C), and low sulfate concentration (700 mg/L). The optimization resulted in enhanced anaerobic performance (56.7% enhancement), from a substrate degradation rate (SDR) of 1.99 to 3.13 Kg COD/m3 day. Considering the obtained optimum factors, further validation experiments were carried out, which showed enhanced process performance (3.04 Kg COD/m3 day from 1.99 Kg COD/m3 day), accounting for a 52.13% improvement with the optimized process conditions. The proposed method facilitated a systematic mathematical approach to understanding the complex multi-species anaerobic process treating complex chemical wastewater by considering the uncontrollable factors. Copyright (c) 2005 Wiley Periodicals, Inc.
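One way to picture the dynamic (noise-factor) analysis: each inner-array run is evaluated across outer-array noise conditions, and the larger-the-better S/N ratio rewards runs that perform well and stay insensitive to VFA/alkalinity variation. The matrix below is invented for illustration; Qualitek-4's actual computations are not reproduced.

```python
import numpy as np

# Hypothetical results: rows = inner-array runs (control-factor settings),
# columns = outer-array noise conditions (VFA/alkalinity combinations).
# Entries are substrate degradation rates (Kg COD/m3 day).
sdr = np.array([
    [1.9, 1.7, 2.1, 1.8],
    [2.8, 2.5, 3.0, 2.7],
    [2.2, 1.6, 2.9, 1.9],
])

# Larger-the-better S/N ratio over the noise replicates of each run:
# robust settings give a high mean *and* low sensitivity to noise.
sn = -10.0 * np.log10(np.mean(1.0 / sdr**2, axis=1))
print("S/N per run (dB):", np.round(sn, 2), "-> best run:", sn.argmax())
```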
Metaphorical Salience in Artistic Text Processing: Evidence From Eye Movement.
Novikova, Eleonora G; Janyan, Armina; Tsaregorodtseva, Oksana V
2015-01-01
The study aimed to explore processing differences between a literal phrase and a metaphoric one. Unlike the artificially created stimuli used in most experimental research, an artistic text with an ambiguous binary metaphoric phrase was used, and eye-tracking methodology was applied. Results suggested differences between the two types of phrases in both early and late processing measures. © The Author(s) 2015.
Experimental Evaluation Methodology for Spacecraft Proximity Maneuvers in a Dynamic Environment
2017-06-01
Naval Postgraduate School, Monterey, California. Dissertation covering the period September 29, 2014 – June 16, 2017. Approved for public release; distribution is unlimited.
ERIC Educational Resources Information Center
Ausburn, Lynna J.; Ausburn, Floyd B.; Kroutter, Paul
2010-01-01
Virtual reality (VR) technology has demonstrated effectiveness in a variety of technical learning situations, yet little is known about its differential effects on learners with different levels of visual processing skill. This small-scale exploratory study tested VR through quasi-experimental methodology and a theoretical/conceptual framework…
ERIC Educational Resources Information Center
Zimman, Richard N.
Using ethnographic case study methodology (involving open-ended interviews, participant observation, and document analysis), theories of administrative organization, processes, and behavior were tested during a three-week observation of a model comprehensive (experimental) high school. Although the study is limited in its general application, it…
ERIC Educational Resources Information Center
Akpo, Essegbemon; Crane, Todd A.; Vissoh, Pierre V.; Tossou, Rigobert C.
2015-01-01
Purpose: Changing research design and methodologies regarding how researchers articulate with end-users of technology is an important consideration in developing sustainable agricultural practices. This paper analyzes a joint experiment as a multi-stakeholder process and contributes to understand how the way of organizing social learning affects…
Evaluation of STD/AIDS prevention programs: a review of approaches and methodologies.
da Cruz, Marly Marques; dos Santos, Elizabeth Moreira; Monteiro, Simone
2007-05-01
The article presents a review of approaches and methodologies in the evaluation of STD/AIDS prevention programs, searching for theoretical and methodological support for the institutionalization of evaluation and decision-making. The review included the MEDLINE, SciELO, and ISI Web of Science databases and other sources like textbooks and congress abstracts from 1990 to 2005, with the key words: "evaluation", "programs", "prevention", "STD/AIDS", and similar terms. The papers showed a predominance of quantitative outcome or impact evaluative studies with an experimental or quasi-experimental design. The main use of evaluation is accountability, although knowledge output and program improvement were also identified in the studies. Only a few evaluative studies contemplate process evaluation and its relationship to the contexts. The review aimed to contribute to the debate on STD/AIDS, which requires more effective, consistent, and sustainable decisions in the field of prevention.
NASA Astrophysics Data System (ADS)
Torregrosa, A. J.; Broatch, A.; Margot, X.; García-Tíscar, J.
2016-08-01
An experimental methodology is proposed to assess the noise emission of centrifugal turbocompressors like those of automotive turbochargers. A step-by-step procedure is detailed, starting from the theoretical considerations of sound measurement in flow ducts and examining specific experimental setup guidelines and signal processing routines. Special care is taken regarding some limiting factors that adversely affect the measuring of sound intensity in ducts, namely calibration, sensor placement and frequency ranges and restrictions. In order to provide illustrative examples of the proposed techniques and results, the methodology has been applied to the acoustic evaluation of a small automotive turbocharger in a flow bench. Samples of raw pressure spectra, decomposed pressure waves, calibration results, accurate surge characterization and final compressor noise maps and estimated spectrograms are provided. The analysis of selected frequency bands successfully shows how different, known noise phenomena of particular interest such as mid-frequency "whoosh noise" and low-frequency surge onset are correlated with operating conditions of the turbocharger. Comparison against external inlet orifice intensity measurements shows good correlation and improvement with respect to alternative wave decomposition techniques.
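As a sketch of the wave-decomposition stage, the snippet below applies the standard two-sensor frequency-domain split into forward- and backward-travelling duct waves, discarding the ill-conditioned bins near sin(ks) = 0 that motivate the frequency-range restrictions mentioned above. The spacing, band and spectra are synthetic placeholders, not this paper's measurement chain.

```python
import numpy as np

c, s = 343.0, 0.05               # speed of sound (m/s), sensor spacing (m)
f = np.linspace(100, 3000, 512)  # analysis band (Hz)
k = 2 * np.pi * f / c            # acoustic wavenumber (no-flow approximation)

# P1, P2: complex pressure spectra at two duct sensors (synthetic stand-ins;
# P2 is essentially a delayed copy of P1 plus noise).
rng = np.random.default_rng(6)
P1 = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
P2 = P1 * np.exp(-1j * k * s) + 0.1 * rng.normal(size=f.size)

# Plane-wave model p(x) = P+ e^{-jkx} + P- e^{+jkx}, sensors at x=0 and x=s.
valid = np.abs(np.sin(k * s)) > 0.1   # drop singular bins near k*s = n*pi
ks = k[valid] * s
Pplus = (P1[valid] * np.exp(1j * ks) - P2[valid]) / (2j * np.sin(ks))
Pminus = (P2[valid] - P1[valid] * np.exp(-1j * ks)) / (2j * np.sin(ks))

# Net acoustic power per bin is proportional to |P+|^2 - |P-|^2.
print("mean |P+|^2 - |P-|^2:", np.mean(np.abs(Pplus)**2 - np.abs(Pminus)**2))
```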
Study of the Effect of Swelling on Irradiation Assisted Stress Corrosion Cracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teysseyre, Sebastien Paul
2016-09-01
This report describes the methodology used to study the effect of swelling on the crack growth rate of an irradiation-assisted stress corrosion crack propagating in highly irradiated stainless steel 304 irradiated to 33 dpa in the Experimental Breeder Reactor-II. The material selection, specimen design, experimental apparatus and processes are described, and the results of the current test are presented.
The First Static and Dynamic Analysis of 3-D Printed Sintered Ceramics for Body Armor Applications
2016-09-01
…evaluate sintered alumina tiles produced by 3-D printing methodology. This report examines the static and quasi-static parameters (including density…). [Front matter, table of contents and figure list omitted; listed figures include "Experimental setup for recording fracture" and "Rod projectile".]
Considerations for the Systematic Analysis and Use of Single-Case Research
ERIC Educational Resources Information Center
Horner, Robert H.; Swaminathan, Hariharan; Sugai, George; Smolkowski, Keith
2012-01-01
Single-case research designs provide a rigorous research methodology for documenting experimental control. If single-case methods are to gain wider application, however, a need exists to define more clearly (a) the logic of single-case designs, (b) the process and decision rules for visual analysis, and (c) an accepted process for integrating…
Life Design Counseling Group Intervention with Portuguese Adolescents: A Process and Outcome Study
ERIC Educational Resources Information Center
Cardoso, Paulo; Janeiro, Isabel Nunes; Duarte, Maria Eduarda
2018-01-01
This article examines the process and outcome of a life design counseling group intervention with students in Grades 9 and 12. First, we applied a quasi-experimental methodology to analyze the intervention's effectiveness in promoting career certainty, career decision-making, self-efficacy, and career adaptability in a sample of 236 students.…
NASA Astrophysics Data System (ADS)
Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker
2017-08-01
Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling of the process is essential, and such modelling requires input data about material properties and friction. In batch-production rolling with newer materials, it may be difficult to determine the input parameters offline. In view of this, in the present work a methodology to determine these parameters online, by measurement of exit temperature and slip, is verified experimentally. It is observed that the inverse prediction of input parameters could be done with reasonable accuracy. It was also assessed experimentally that there is a correlation between micro-hardness and flow stress of the material; however, the correlation between surface roughness and reduction is not as obvious.
Methodological convergence of program evaluation designs.
Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2014-01-01
Nowadays, the dichotomous confrontation between experimental/quasi-experimental and non-experimental/ethnographic studies still exists but, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed based on experimental and quasi-experimental studies. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is continually changing and is less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we carry out a review of the literature in order to analyze the convergence of design elements of methodological quality in primary studies in systematic reviews and ethnographic research. We specify the relevant design elements that should be taken into account in order to improve validity and generalization in program evaluation practice across different methodologies, from a practical, methodological and complementary view. We recommend ways to improve design elements so as to enhance validity and generalization in program evaluation practice.
Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J
2017-07-01
A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.
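As a rough illustration of the BIC step described in this abstract, the sketch below fits a few candidate parametric distributions to strut-level data and ranks them by Bayesian Information Criterion. The data, the candidate set, and all parameter values are hypothetical placeholders, not the study's micro-CT measurements.

```python
# Minimal sketch (not the authors' code): choosing a parametric distribution
# for strut-level property data by Bayesian Information Criterion (BIC).
# The "measured" strut diameters here are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
diameters = rng.lognormal(mean=np.log(0.5), sigma=0.08, size=200)  # mm, hypothetical

candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
              "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(diameters)
    loglik = np.sum(dist.logpdf(diameters, *params))
    k = len(params)                              # number of fitted parameters
    bic = k * np.log(len(diameters)) - 2.0 * loglik
    print(f"{name:10s} BIC = {bic:8.1f}")        # lowest BIC wins
```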
Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.
Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P
2018-02-01
This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Huang, Dao-sheng; Shi, Wei; Han, Lei; Sun, Ke; Chen, Guang-bo; Wu Jian-xiong; Xu, Gui-hong; Bi, Yu-an; Wang, Zhen-zhong; Xiao, Wei
2015-06-01
To optimize the belt drying process conditions for Gardeniae Fructus extract from Reduning injection by Box-Behnken design-response surface methodology, a three-factor, three-level Box-Behnken experimental design was employed on the basis of single-factor experiments. With drying temperature, drying time and feeding speed as independent variables and the content of geniposide as the dependent variable, the experimental data were fitted to a second-order polynomial equation, establishing the mathematical relationship between the content of geniposide and the respective variables. With the experimental data analyzed by Design-Expert 8.0.6, the optimal drying parameters were as follows: drying temperature 98.5 °C, drying time 89 min, and feeding speed 99.8 r·min(-1). Three verification experiments were conducted under these conditions, and the measured average content of geniposide was 564.108 mg·g(-1), close to the model prediction of 563.307 mg·g(-1). According to the verification test, the Gardeniae Fructus belt drying process is steady and feasible. Thus, single-factor experiments combined with response surface methodology (RSM) can be used to optimize the drying technology of Reduning injection Gardenia extract.
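For readers unfamiliar with the design, the following hedged sketch builds a three-factor Box-Behnken matrix in coded units and fits the second-order polynomial the abstract refers to; the response values are synthetic stand-ins, not the published geniposide data.

```python
# Hedged sketch: a 3-factor Box-Behnken design in coded units and a
# second-order polynomial fit, analogous to the drying-process model.
import itertools
import numpy as np

pairs = [(0, 1), (0, 2), (1, 2)]
rows = []
for i, j in pairs:                       # edge midpoints: pairs at +/-1, third at 0
    for a, b in itertools.product((-1, 1), repeat=2):
        x = [0.0, 0.0, 0.0]
        x[i], x[j] = a, b
        rows.append(x)
rows += [[0.0, 0.0, 0.0]] * 3            # center points
X = np.array(rows)                       # 15 runs x 3 coded factors

def quad_terms(X):
    """Columns: 1, x1..x3, x1^2..x3^2, x1x2, x1x3, x2x3."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

y = 560 - 5*X[:, 0]**2 - 3*X[:, 1]**2 + 2*X[:, 2]   # hypothetical response
beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
print(beta.round(2))                     # fitted second-order coefficients
```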
Mainstreaming Caenorhabditis elegans in experimental evolution.
Gray, Jeremy C; Cutter, Asher D
2014-03-07
Experimental evolution provides a powerful manipulative tool for probing evolutionary process and mechanism. As this approach to hypothesis testing has gained purchase in biology, so too has grown the number of experimental systems that use it, each with its own unique strengths and weaknesses. The depth of biological knowledge about Caenorhabditis nematodes, combined with their laboratory tractability, positions them well for exploiting experimental evolution in animal systems to understand deep questions in evolution and ecology, as well as in molecular genetics and systems biology. To date, Caenorhabditis elegans and related species have proved themselves in experimental evolution studies of the process of mutation, host-pathogen coevolution, mating system evolution and life-history theory. Yet these organisms are not broadly recognized for their utility for evolution experiments and remain underexploited. Here, we outline the experimental evolution work undertaken so far in Caenorhabditis, detail simple methodological tricks that can be exploited and identify research areas that are ripe for future discovery.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Bast, Callie C.
1992-01-01
The research included ongoing development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with linear regression of those data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program PROMISC. Actual experimental materials data were obtained from the open literature for materials typically of interest to those studying aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.
Experimental Designs in Sentence Processing Research: A Methodological Review and User's Guide
ERIC Educational Resources Information Center
Keating, Gregory D.; Jegerski, Jill
2015-01-01
Since the publication of Clahsen and Felser's (2006) keynote article on grammatical processing in language learners, the online study of sentence comprehension in adult second language (L2) learners has quickly grown into a vibrant and prolific subfield of SLA. As online methods begin to establish a foothold in SLA research, it is important…
Modeling and Characterization of Near-Crack-Tip Plasticity from Micro- to Nano-Scales
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Hochhalter, Jacob; Smith, Stephen W.; Ransom, Jonathan B.; Yamakov, Vesselin; Gupta, Vipul
2010-01-01
Methodologies for understanding the plastic deformation mechanisms related to crack propagation at the nano-, meso- and micro-length scales are being developed. These efforts include the development and application of several computational methods including atomistic simulation, discrete dislocation plasticity, strain gradient plasticity and crystal plasticity; and experimental methods including electron backscattered diffraction and video image correlation. Additionally, methodologies for multi-scale modeling and characterization that can be used to bridge the relevant length scales from nanometers to millimeters are being developed. The paper focuses on the discussion of newly developed methodologies in these areas and their application to understanding damage processes in aluminum and its alloys.
NASA Astrophysics Data System (ADS)
Castro, Luz Angelica; Hoyos, Mauricio
2016-04-01
We propose an experimental methodology to determine the secondary Bjerknes force between rigid particles. Measurements made for different particle sizes showed acoustic inter-particle interactions. We use and extend the methodology presented in a previous work. The determination of this force will lead us to a better understanding of the aggregation process in acoustic resonators. We report in this work the results of two parabolic flight campaigns performed aboard the Airbus A300 ZERO-G (Novespace, France).
Using experimental design to define boundary manikins.
Bertilsson, Erik; Högberg, Dan; Hanson, Lars
2012-01-01
When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition of that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. In the literature there are differences in how many and in what way these manikins should be defined. A field similar to the boundary case method is the use of experimental design, in which relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibility of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adopted to be used together with a confidence region and its axes. The results from the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups of manikins depends heavily on the number of key measurements but also on the type of experimental design chosen.
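The following sketch illustrates the general boundary-case idea the abstract builds on: manikins placed where the principal axes of a confidence ellipsoid meet its surface. The bivariate stature/weight sample, the correlation, and the coverage factor are assumptions for illustration, not the paper's data or exact procedure.

```python
# Illustrative sketch (assumptions, not the paper's code): derive boundary
# manikins as the points where the axes of a confidence ellipsoid pierce
# its surface, from a bivariate stature/weight sample.
import numpy as np

rng = np.random.default_rng(1)
# hypothetical anthropometric sample: stature (mm), weight (kg)
data = rng.multivariate_normal([1755, 75],
                               [[60**2, 0.7*60*12],
                                [0.7*60*12, 12**2]], 500)

mean = data.mean(axis=0)
cov = np.cov(data.T)
eigval, eigvec = np.linalg.eigh(cov)     # principal axes of the ellipsoid

k = 2.45                                 # ~95% region for 2 dims (chi-square approx.)
manikins = [mean + s * k * np.sqrt(v) * u
            for v, u in zip(eigval, eigvec.T) for s in (-1, 1)]
for m in manikins:
    print(f"stature {m[0]:7.1f} mm, weight {m[1]:5.1f} kg")
```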
Accelerated numerical processing of electronically recorded holograms with reduced speckle noise.
Trujillo, Carlos; Garcia-Sucerquia, Jorge
2013-09-01
The numerical reconstruction of digitally recorded holograms suffers from speckle noise. An accelerated method that uses general-purpose computing in graphics processing units to reduce that noise is shown. The proposed methodology utilizes parallelized algorithms to record, reconstruct, and superimpose multiple uncorrelated holograms of a static scene. For the best tradeoff between reduction of the speckle noise and processing time, the method records, reconstructs, and superimposes six holograms of 1024 × 1024 pixels in 68 ms; for this case, the methodology reduces the speckle noise by 58% compared with that exhibited by a single hologram. The fully parallelized method running on a commodity graphics processing unit is one order of magnitude faster than the same technique implemented on a regular CPU using its multithreading capabilities. Experimental results are shown to validate the proposal.
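The principle behind the reported noise reduction can be demonstrated in a few lines: averaging N uncorrelated speckled reconstructions of a static scene lowers speckle contrast roughly as 1/sqrt(N). This toy sketch uses a multiplicative exponential speckle model on synthetic data; it is not the GPU implementation described above.

```python
# Conceptual sketch of the noise-reduction principle (not the GPU code):
# averaging N uncorrelated speckled reconstructions of the same static
# scene lowers speckle contrast roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(2)
scene = np.ones((1024, 1024))                       # static amplitude object

def speckled(img):
    # multiplicative exponential speckle, a common fully developed model
    return img * rng.exponential(1.0, img.shape)

single = speckled(scene)
stack = np.mean([speckled(scene) for _ in range(6)], axis=0)

contrast = lambda x: x.std() / x.mean()
print(f"1 hologram : C = {contrast(single):.2f}")   # ~1.0
print(f"6 holograms: C = {contrast(stack):.2f}")    # ~1/sqrt(6), about 0.41
```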
Wundt, Völkerpsychologie, and experimental social psychology.
Greenwood, John D
2003-02-01
Wilhelm Wundt distinguished between "experimental psychology" and Völkerpsychologie. It is often claimed that Wundt maintained that social psychological phenomena, the subject matter of Völkerpsychologie, could not be investigated experimentally but must be explored via comparative-historical methods. In this article it is argued that it is doubtful whether many of the passages usually cited as evidence that Wundt held such a view actually express such a view. It is also argued that if Wundt did hold such a view, it was inconsistent with his own general theoretical position and methodological practice. It is suggested that it is anachronistic to attribute such a view to Wundt, because he appears to have had little interest in the experimental analysis of the synchronic social dynamics of psychological processes. Most of Wundt's arguments about the inappropriateness of experimentation were directed against the introspective analysis of diachronic historical processes.
NASA Astrophysics Data System (ADS)
Hidayanti, Nur; Suryanto, A.; Qadariyah, L.; Prihatini, P.; Mahfud, Mahfud
2015-12-01
A simple batch process was designed for the transesterification of coconut oil to alkyl esters using a microwave-assisted method. The product with an alkyl ester yield above 93.225% is called biodiesel fuel. Response surface methodology was used to design the experiment and obtain the maximum possible yield of biodiesel in the microwave-assisted reaction from coconut oil with KOH as the catalyst. The results showed that reaction time and KOH catalyst concentration have significant effects on the yield of alkyl ester. Based on the response surface methodology using the selected operating conditions, the reaction time and KOH catalyst concentration in the transesterification process were 150 seconds and 0.25% w/w, respectively. The largest predicted and experimental yields of alkyl esters (biodiesel) under the optimal conditions are 101.385% and 93.225%, respectively. Our findings confirmed the successful development of a process for the transesterification of coconut oil by microwave-assisted heating, which is effective and time-saving for alkyl ester production.
2014-01-01
Background: In this research, the removal of natural organic matter from aqueous solutions using advanced oxidation processes (UV/H2O2) was evaluated. The response surface methodology and a Box-Behnken design matrix were employed to design the experiments and to determine the optimal conditions. The effects of various parameters such as initial concentration of H2O2 (100–180 mg/L), pH (3–11), time (10–30 min) and initial total organic carbon (TOC) concentration (4–10 mg/L) were studied. Results: Analysis of variance (ANOVA) revealed a good agreement between experimental data and the proposed quadratic polynomial model (R2 = 0.98). Experimental results showed that TOC removal efficiency increased with increasing H2O2 concentration and time and with decreasing initial TOC concentration. Neutral and nearly acidic pH values also improved TOC removal. Accordingly, a TOC removal efficiency of 78.02% was obtained at the optimized values of the independent variables: H2O2 concentration (100 mg/L), pH (6.12), time (22.42 min) and initial TOC concentration (4 mg/L). Further confirmation tests under optimal conditions showed 76.50% TOC removal and confirmed that the model is in accordance with the experiments. In addition, TOC removal for natural water at the response-surface-methodology optimum condition was 62.15%. Conclusions: This study showed that response surface methodology based on the Box-Behnken method is a useful tool for optimizing the operating parameters for TOC removal using the UV/H2O2 process. PMID:24735555
NASA Astrophysics Data System (ADS)
Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In manufacturing a variety of parts, plastic injection moulding is widely used. The injection moulding process parameters play an important role in the product's quality and productivity. Many approaches to minimising warpage and shrinkage, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches, have been addressed. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin-shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, a part of an electronic night lamp is chosen as the model. Firstly, an experimental design was used to determine the effect of the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés
2015-09-28
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
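A toy version of the optimization loop may help fix ideas: a minimal genetic algorithm searching coefficients of a small basis of chirped pulse functions. The basis, the target, and the fitness function below are stand-ins for the quantum-dynamics yield, not the authors' code.

```python
# Toy sketch of the optimization loop (assumed details, not the authors'
# code): a genetic algorithm searches coefficients of a fixed basis of
# chirped pulses; the fitness function is a placeholder for the yield.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(-1, 1, 400)
basis = np.array([np.cos(2*np.pi*(5 + c*t)*t) * np.exp(-4*t**2)
                  for c in (0, 2, 4, 6)])          # linearly chirped pulses

def fitness(coef):                                 # stand-in for channel yield
    pulse = coef @ basis
    return -np.sum((pulse - np.sin(8*t)*np.exp(-4*t**2))**2)

pop = rng.normal(size=(40, 4))
for gen in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # selection
    kids = parents[rng.integers(0, 20, 20)] \
           + rng.normal(0, 0.1, (20, 4))                       # mutation
    pop = np.vstack([parents, kids])
print("best fitness:", max(fitness(c) for c in pop))
```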
Evaluation of complex community-based childhood obesity prevention interventions.
Karacabeyli, D; Allender, S; Pinkney, S; Amed, S
2018-05-16
Multi-setting, multi-component community-based interventions have shown promise in preventing childhood obesity; however, evaluation of these complex interventions remains a challenge. The objective of the study is to systematically review published methodological approaches to outcome evaluation for multi-setting community-based childhood obesity prevention interventions and synthesize a set of pragmatic recommendations. MEDLINE, CINAHL and PsycINFO were searched from inception to 6 July 2017. Papers were included if the intervention targeted children ≤18 years, engaged at least two community sectors and described their outcome evaluation methodology. A single reviewer conducted title and abstract scans, full article review and data abstraction. Directed content analysis was performed by three reviewers to identify prevailing themes. Thirty-three studies were included, and of these, 26 employed a quasi-experimental design; the remaining were randomized controlled trials. Body mass index was the most commonly measured outcome, followed by health behaviour change and psychosocial outcomes. Six themes emerged, highlighting advantages and disadvantages of active vs. passive consent, quasi-experimental vs. randomized controlled trials, longitudinal vs. repeat cross-sectional designs and the roles of process evaluation and methodological flexibility in evaluating complex interventions. Selection of study designs and outcome measures compatible with community infrastructure, accompanied by process evaluation, may facilitate successful outcome evaluation. © 2018 World Obesity Federation.
2016-06-01
characteristics, experimental design techniques, and analysis methodologies that distinguish each phase of the MBSE MEASA. To ensure consistency... methodology. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Figure 27 segments the MBSE MEASA... rounding has the potential to increase the correlation between columns of the experimental design matrix. The design methodology presented in Vieira
Furedy, John J
2003-11-01
The differential/experimental distinction that Cronbach specified is important because any adequate account of psychological phenomena requires the recognition of the validity of both approaches, and a meaningful melding of the two. This paper suggests that Pavlov's work in psychology, based on earlier traditions of inquiry that can be traced back to the pre-Socratics, provides a potential way of achieving this melding, although such features as systematic rather than anecdotal methods of observation need to be added. Pavlov's methodological behaviorist approach is contrasted with metaphysical behaviorism (as exemplified explicitly in Watson and Skinner, and implicitly in the computer-metaphorical, information-processing explanations employed by current "cognitive" psychology). A common feature of the metaphysical approach is that individual-differences variables like sex are essentially ignored, or relegated to ideological categories such as the treatment of sex as merely a "social construction." Examples of research both before and after the "cognitive revolution" are presented where experimental and differential methods are melded, and individual differences are treated as phenomena worthy of investigation rather than as nuisance factors that merely add to experimental error.
Can Reflection Boost Competences Development in Organizations?
ERIC Educational Resources Information Center
Nansubuga, Florence; Munene, John C.; Ntayi, Joseph M.
2015-01-01
Purpose: The purpose of this paper is to examine the gaps in some existing competence frameworks and investigate the power of reflection on one's behavior to improve the process of the competences development. Design/methodology/approach: The authors used a correlational design and a quasi-experimental non-equivalent group design involving a…
Essentials of multiangle data-processing methodology for smoke polluted atmospheres
Vladimir Kovalev; A. Petkov; Cyle Wold; Shawn Urbanski; WeiMin Hao
2011-01-01
Essentials for investigating smoke plume characteristics with scanning lidar are discussed. Particularly, we outline basic principles for determining dynamics, heights, and optical properties of smoke plumes and layers in wildfire-polluted atmospheres. Both simulated and experimental data obtained in vicinities of wildfires with a two-wavelength scanning lidar are...
Infrared Algorithm Development for Ocean Observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1997-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.
Fan, HuiYin; Dumont, Marie-Josée; Simpson, Benjamin K
2017-11-01
Gelatin with high-molecular-weight protein chains (α-chains) was extracted from salmon (Salmo salar) skin using a trypsin-aided process. Response surface methodology was used to optimise the extraction parameters. Yield, hydroxyproline content and the protein electrophoretic profile of the gelatin, via sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis, were used as responses in the optimisation study. The optimum conditions were determined as: trypsin concentration of 1.49 U/g, extraction temperature of 45 °C, and extraction time of 6 h 16 min. The response-surface-optimised model was significant and produced an experimental value (202.04 ± 8.64%) in good agreement with the predicted value (204.19%). Twofold higher yields of gelatin with high-molecular-weight protein chains were achieved in the optimised process with trypsin treatment compared to the process without trypsin.
López, Alejandro; Coll, Andrea; Lescano, Maia; Zalazar, Cristina
2017-05-05
In this work, the suitability of the UV/H2O2 process for the degradation of a commercial herbicide mixture was studied. Glyphosate, the most widely used herbicide in the world, was mixed with other herbicides that have residual activity, such as 2,4-D and atrazine. Modeling of the process response related to specific operating conditions, namely initial pH and the initial H2O2 to total organic carbon molar ratio, was assessed by response surface methodology (RSM). Results showed that a second-order polynomial regression model could describe and predict the system behavior well within the tested experimental region. It also correctly explained the variability in the experimental data. Experimental values were in good agreement with the modeled ones, confirming the significance of the model and highlighting the success of RSM for UV/H2O2 process modeling. Phytotoxicity evolution throughout the photolytic degradation process was checked through germination tests, indicating that the phytotoxicity of the herbicide mixture was significantly reduced after the treatment. The end point for the treatment at the operating conditions for maximum TOC conversion was also identified.
Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN
NASA Astrophysics Data System (ADS)
Peter, Josephine; Doloi, B.; Bhattacharyya, B.
2011-01-01
The present research paper deals with artificial neural network (ANN) and response surface methodology (RSM) based mathematical modeling and an optimization analysis of marking characteristics on alumina ceramic. The experiments have been planned and carried out based on Design of Experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The output of the RSM optimal data is validated through experimentation and an ANN predictive model. A good agreement is observed between the results based on the ANN predictive model and actual experimental observations.
NASA Technical Reports Server (NTRS)
Ankenman, Bruce; Ermer, Donald; Clum, James A.
1994-01-01
Modules dealing with statistical experimental design (SED), process modeling and improvement, and response surface methods have been developed and tested in two laboratory courses. One course was a manufacturing processes course in Mechanical Engineering and the other course was a materials processing course in Materials Science and Engineering. Each module is used as an 'experiment' in the course with the intent that subsequent course experiments will use SED methods for analysis and interpretation of data. Evaluation of the modules' effectiveness has been done by both survey questionnaires and inclusion of the module methodology in course examination questions. Results of the evaluation have been very positive. Those evaluation results and details of the modules' content and implementation are presented. The modules represent an important component for updating laboratory instruction and to provide training in quality for improved engineering practice.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
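For orientation, a minimal sketch of the top-level fusion rule under the linear minimum variance principle is given below, for the simplified case of uncorrelated local estimates; the paper's N-filter derivation is more general, and the numbers are hypothetical.

```python
# Minimal sketch of the top-level fusion rule under the linear minimum
# variance principle, assuming uncorrelated local estimates (the paper's
# N-filter derivation is more general).
import numpy as np

def fuse(estimates, covariances):
    """Fuse N local state estimates x_i with covariances P_i."""
    P_inv = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(P_inv))
    x_fused = P_fused @ sum(Pi @ x for Pi, x in zip(P_inv, estimates))
    return x_fused, P_fused

x1, P1 = np.array([10.2, 0.9]), np.diag([0.04, 0.01])   # e.g. GNSS-aided filter
x2, P2 = np.array([10.0, 1.1]), np.diag([0.09, 0.02])   # e.g. CNS-aided filter
x, P = fuse([x1, x2], [P1, P2])
print(x, np.diag(P))    # fused estimate has smaller variance than either input
```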
Experimental Methods for Trapping Ions Using Microfabricated Surface Ion Traps
Hong, Seokjun; Lee, Minjae; Kwon, Yeong-Dae; Cho, Dong-il "Dan"; Kim, Taehyun
2017-01-01
Ions trapped in a quadrupole Paul trap have been considered one of the strong physical candidates to implement quantum information processing. This is due to their long coherence time and their capability to manipulate and detect individual quantum bits (qubits). In more recent years, microfabricated surface ion traps have received more attention for large-scale integrated qubit platforms. This paper presents a microfabrication methodology for ion traps using micro-electro-mechanical system (MEMS) technology, including the fabrication method for a 14 µm-thick dielectric layer and metal overhang structures atop the dielectric layer. In addition, an experimental procedure for trapping ytterbium (Yb) ions of isotope 174 (174Yb+) using 369.5 nm, 399 nm, and 935 nm diode lasers is described. These methodologies and procedures involve many scientific and engineering disciplines, and this paper first presents the detailed experimental procedures. The methods discussed in this paper can easily be extended to the trapping of Yb ions of isotope 171 (171Yb+) and to the manipulation of qubits. PMID:28872137
Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations
NASA Technical Reports Server (NTRS)
Chanchio, Kasidit; Sun, Xian-He
1996-01-01
This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of migration point as well as migration point analysis and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points; whereas, necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.
CO2 laser cutting of MDF. 1. Determination of process parameter settings
NASA Astrophysics Data System (ADS)
Lum, K. C. P.; Ng, S. L.; Black, I.
2000-02-01
This paper details an investigation into the laser processing of medium-density fibreboard (MDF). Part 1 reports on the determination of process parameter settings for the effective cutting of MDF by CO2 laser, using an established experimental methodology developed to study the interrelationship between and effects of varying laser set-up parameters. Results are presented for both continuous wave (CW) and pulse mode (PM) cutting, and the associated cut quality effects have been commented on.
Morales-Pérez, Ariadna A; Maravilla, Pablo; Solís-López, Myriam; Schouwenaars, Rafael; Durán-Moreno, Alfonso; Ramírez-Zamora, Rosa-María
2016-01-01
An experimental design methodology was used to optimize the synthesis of an iron-supported nanocatalyst as well as the inactivation process of Ascaris eggs (Ae) using this material. A factor screening design was used to identify the significant experimental factors for the nanocatalyst support (supported Fe (% w/w), calcination temperature and time) and for the inactivation process, a heterogeneous Fenton-like reaction (H2O2 dose, Fe/H2O2 mass ratio, pH and reaction time). The optimization of the significant factors was carried out using a face-centered central composite design. The optimal operating conditions for both processes were estimated with a statistical model and implemented experimentally with five replicates. The predicted value of the Ae inactivation rate was close to the laboratory results. At the optimal operating conditions of the nanocatalyst production and Ae inactivation process, the Ascaris ova showed genomic damage to the point that no cell repair was possible, showing that this advanced oxidation process was highly efficient for inactivating this pathogen.
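As a generic illustration of the screening step (not the study's actual design or data), a two-level full factorial matrix over the three synthesis factors and a simple main-effect contrast can be written as follows.

```python
# Hedged illustration: a two-level full factorial screening matrix for the
# three synthesis factors named in the abstract (coded -1/+1 levels and
# responses are placeholders, not the study's ranges or results).
import itertools
import numpy as np

factors = ["Fe loading (% w/w)", "calcination T", "calcination time"]
design = np.array(list(itertools.product((-1, 1), repeat=3)))   # 8 runs
for run, row in enumerate(design, 1):
    print(run, dict(zip(factors, row)))

# Main effect of factor j: contrast of mean response at +1 vs -1 runs
y = np.array([62, 70, 64, 75, 66, 81, 69, 88.])   # hypothetical % inactivation
effects = [(y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean())
           for j in range(3)]
print("main effects:", np.round(effects, 2))
```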
Neural mechanisms of rhythm perception: current findings and future perspectives.
Grahn, Jessica A
2012-10-01
Perception of temporal patterns is fundamental to normal hearing, speech, motor control, and music. Certain types of pattern understanding are unique to humans, such as musical rhythm. Although human responses to musical rhythm are universal, there is much we do not understand about how rhythm is processed in the brain. Here, I consider findings from research into basic timing mechanisms and models through to the neuroscience of rhythm and meter. A network of neural areas, including motor regions, is regularly implicated in basic timing as well as processing of musical rhythm. However, fractionating the specific roles of individual areas in this network has remained a challenge. Distinctions in activity patterns appear between "automatic" and "cognitively controlled" timing processes, but the perception of musical rhythm requires features of both automatic and controlled processes. In addition, many experimental manipulations rely on participants directing their attention toward or away from certain stimulus features, and measuring corresponding differences in neural activity. Many temporal features, however, are implicitly processed whether attended to or not, making it difficult to create controlled baseline conditions for experimental comparisons. The variety of stimuli, paradigms, and definitions can further complicate comparisons across domains or methodologies. Despite these challenges, the high level of interest and multitude of methodological approaches from different cognitive domains (including music, language, and motor learning) have yielded new insights and hold promise for future progress. Copyright © 2012 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik
2017-08-01
Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and Shainin Systems (SS), are compared and analyzed in this study through their implementation in a plastic injection moulding unit. Experiments were performed at a perfume bottle cap manufacturing company (caps made of acrylic) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments brought the rejection rate down from approximately 40% during trial runs to 8.57%, representing successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Also, Taguchi Methods require more experiments and consume more time than the Shainin System. The Shainin System is less complicated and easy to implement, whereas Taguchi Methods are statistically more reliable for optimization of process parameters. Finally, the experimentation implied that DoE methods are robust and reliable in implementation as organizations attempt to improve quality through optimization.
Experimental Validation of an Integrated Controls-Structures Design Methodology
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.
1996-01-01
The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.
Eye Movement Correlates of Acquired Central Dyslexia
ERIC Educational Resources Information Center
Schattka, Kerstin I.; Radach, Ralph; Huber, Walter
2010-01-01
Based on recent progress in theory and measurement techniques, the analysis of eye movements has become one of the major methodological tools in experimental reading research. Our work uses this approach to advance the understanding of impaired information processing in acquired central dyslexia of stroke patients with aphasia. Up to now there has…
Shirodkar, Priyanka V; Muraleedharan, Usha Devi
2017-11-26
Amylases are a group of enzymes with a wide variety of industrial applications. Enhancement of α-amylase production from the marine protists, thraustochytrids, has been attempted for the first time by applying statistically based experimental designs using response surface methodology (RSM) and a genetic algorithm (GA) for optimization of the most influential process variables. A full factorial central composite experimental design was used to study the cumulative interactive effect of the nutritional components, viz., glucose, corn starch, and yeast extract. RSM was performed on two objectives, that is, growth of Ulkenia sp. AH-2 (ATCC® PRA-296) and α-amylase activity. When GA was conducted for maximization of the enzyme activity, the optimal α-amylase activity was found to be 71.20 U/mL, which was close to that obtained by RSM (71.93 U/mL), both of which were in agreement with the predicted value of 72.37 U/mL. Optimal growth at the optimized process variables was found to be 1.89 (A660nm).
Critical considerations when planning experimental in vivo studies in dental traumatology.
Andreasen, Jens O; Andersson, Lars
2011-08-01
In vivo studies are sometimes needed to understand healing processes after trauma. For several reasons, not the least ethical, such studies have to be carefully planned and important considerations have to be taken into account about suitability of the experimental model, sample size and optimizing the accuracy of the analysis. Several manuscripts of in vivo studies are submitted for publication to Dental Traumatology and rejected because of inadequate design, methodology or insufficient documentation of the results. The authors have substantial experience in experimental in vivo studies of tissue healing in dental traumatology and share their knowledge regarding critical considerations when planning experimental in vivo studies. © 2011 John Wiley & Sons A/S.
Kubelka, Jan
2009-04-01
Many important biochemical processes occur on the time-scales of nanoseconds and microseconds. The introduction of the laser temperature-jump (T-jump) to biophysics more than a decade ago opened these previously inaccessible time regimes up to direct experimental observation. Since then, laser T-jump methodology has evolved into one of the most versatile and generally applicable methods for studying fast biomolecular kinetics. This perspective is a review of the principles and applications of the laser T-jump technique in biophysics. A brief overview of the T-jump relaxation kinetics and the historical development of laser T-jump methodology is presented. The physical principles and practical experimental considerations that are important for the design of the laser T-jump experiments are summarized. These include the Raman conversion for generating heating pulses, considerations of size, duration and uniformity of the temperature jump, as well as potential adverse effects due to photo-acoustic waves, cavitation and thermal lensing, and their elimination. The laser T-jump apparatus developed at the NIH Laboratory of Chemical Physics is described in detail along with a brief survey of other laser T-jump designs in use today. Finally, applications of the laser T-jump in biophysics are reviewed, with an emphasis on the broad range of problems where the laser T-jump methodology has provided important new results and insights into the dynamics of the biomolecular processes.
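A typical analysis step downstream of such an instrument is fitting the relaxation trace; for a two-state system the observed rate is k_obs = k_fold + k_unfold. The sketch below fits a synthetic single-exponential trace and is only a generic illustration, not code from the review.

```python
# Illustrative analysis step (not from the review): after a T-jump, a
# two-state system relaxes exponentially; fitting the trace yields the
# observed rate k_obs = k_fold + k_unfold. Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 20e-6, 500)                    # s
k_true = 2.5e5                                    # 1/s, i.e. tau = 4 us
rng = np.random.default_rng(4)
signal = 0.8 * np.exp(-k_true * t) + 0.1 + rng.normal(0, 0.01, t.size)

model = lambda t, A, k, c: A * np.exp(-k * t) + c
(A, k_obs, c), _ = curve_fit(model, t, signal, p0=(1.0, 1e5, 0.0))
print(f"k_obs = {k_obs:.3e} 1/s, tau = {1/k_obs*1e6:.2f} us")
```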
Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.
Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A
2015-02-01
This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.
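For readers who want the flavor of the response-time statistic involved, here is a hedged sketch of Townsend and Nozawa's workload capacity coefficient, C(t) = H_AB(t) / [H_A(t) + H_B(t)], computed from integrated hazards H = -log S estimated on synthetic response-time samples; values below 1 indicate limited capacity.

```python
# Sketch of the workload-capacity statistic from the response-time side
# (Townsend & Nozawa's C(t)), using integrated hazards estimated from
# synthetic RT samples; C(t) < 1 suggests limited capacity.
import numpy as np

rng = np.random.default_rng(5)
rt_A = rng.gamma(4, 60, 2000)          # single-target RTs (ms), hypothetical
rt_B = rng.gamma(4, 65, 2000)
rt_AB = rng.gamma(4, 55, 2000)         # redundant-target RTs

def cum_hazard(sample, t):
    S = np.array([(sample > ti).mean() for ti in t])   # empirical survivor fn
    return -np.log(np.clip(S, 1e-9, 1.0))

t = np.linspace(150, 500, 8)
C = cum_hazard(rt_AB, t) / (cum_hazard(rt_A, t) + cum_hazard(rt_B, t))
print(np.round(C, 2))
```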
A CWT-based methodology for piston slap experimental characterization
NASA Astrophysics Data System (ADS)
Buzzoni, M.; Mucchi, E.; Dalpiaz, G.
2017-03-01
Noise and vibration control in mechanical systems has become ever more significant for the automotive industry, where the comfort of the passenger compartment represents a challenging issue for car manufacturers. The reduction of piston slap noise is pivotal for a good design of IC engines. In this scenario, a methodology has been developed for the vibro-acoustic assessment of IC diesel engines by means of design changes in piston-to-cylinder bore clearance. Vibration signals have been analysed by means of advanced signal processing techniques taking advantage of cyclostationarity theory. The procedure starts from the analysis of the Continuous Wavelet Transform (CWT) in order to identify a frequency band representative of the piston slap phenomenon. Such a frequency band has been exploited as the input data in the further signal processing analysis, which involves envelope analysis of the second-order cyclostationary component of the signal. The second-order harmonic component has been used as the benchmark parameter of piston slap noise. An experimental procedure of vibrational benchmarking is proposed and verified at different operational conditions in real IC engines actually equipped on cars. This study clearly underlines the crucial role of transducer positioning when differences among real piston-to-cylinder clearances are considered. In particular, the proposed methodology is effective for sensors placed on the outer cylinder wall in all the tested conditions.
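A minimal sketch of the signal path just described, under assumed parameters: band-limit the vibration signal to a CWT-identified band, then take the Hilbert envelope and its spectrum. The sampling rate, band edges, and the synthetic modulated-noise signal are placeholders, not the engine data.

```python
# Hedged sketch of the described signal path: band-limit the vibration
# signal to the CWT-identified band, then take the Hilbert envelope for
# second-order (cyclostationary) analysis. All parameters are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 25600                                    # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1/fs)
rng = np.random.default_rng(6)
# stand-in signal: broadband noise amplitude-modulated at 25 Hz, mimicking
# slap events locked to the engine cycle
raw = (1 + 0.8*np.cos(2*np.pi*25*t)) * rng.normal(size=t.size)

b, a = butter(4, [2000, 5000], btype="bandpass", fs=fs)
band = filtfilt(b, a, raw)                    # e.g. a CWT-selected 2-5 kHz band
envelope = np.abs(hilbert(band))              # amplitude modulation carrier
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1/fs)
print("dominant envelope line near", freqs[spectrum.argmax()], "Hz")  # ~25 Hz
```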
The integrative review: updated methodology.
Whittemore, Robin; Knafl, Kathleen
2005-12-01
The aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process. Recent evidence-based practice initiatives have increased the need for and the production of all types of reviews of the literature (integrative reviews, systematic reviews, meta-analyses, and qualitative reviews). The integrative review method is the only approach that allows for the combination of diverse methodologies (for example, experimental and non-experimental research), and has the potential to play a greater role in evidence-based practice for nursing. With respect to the integrative review method, strategies to enhance data collection and extraction have been developed; however, methods of analysis, synthesis, and conclusion drawing remain poorly formulated. A modified framework for research reviews is presented to address issues specific to the integrative review method. Issues related to specifying the review purpose, searching the literature, evaluating data from primary sources, analysing data, and presenting the results are discussed. Data analysis methods of qualitative research are proposed as strategies that enhance the rigour of combining diverse methodologies as well as empirical and theoretical sources in an integrative review. An updated integrative review method has the potential to allow for diverse primary research methods to become a greater part of evidence-based practice initiatives.
Solar energy program evaluation: an introduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
deLeon, P.
The Program Evaluation Methodology provides an overview of the practice and methodology of program evaluation and defines more precisely the evaluation techniques and methodologies that would be most appropriate to government organizations which are actively involved in the research, development, and commercialization of solar energy systems. Formal evaluation cannot be treated as a single methodological approach for assessing a program. There are four basic types of evaluation designs: the pre-experimental design; the quasi-experimental design based on time series; the quasi-experimental design based on comparison groups; and the true experimental design. This report is organized to first introduce the role and issues of evaluation. This is to provide a set of issues to organize the subsequent sections detailing the national solar energy programs. Then, these two themes are integrated by examining the evaluation strategies and methodologies tailored to fit the particular needs of the various individual solar energy programs. (MCW)
Passive and semi-active heave compensator: Project design methodology and control strategies.
Cuellar Sanchez, William Humberto; Linhares, Tássio Melo; Neto, André Benine; Fortaleza, Eugênio Libório Feitosa
2017-01-01
Heave compensator is a system that mitigates transmission of heave movement from a vessel to the equipment on the vessel. In the drilling industry, a heave compensator enables drilling in offshore environments. The heave compensator attenuates movement transmitted from the vessel to the drill string and drill bit, ensuring security and efficiency of the offshore drilling process. Common types of heave compensators are passive, active and semi-active compensators. This article presents four main points. First, a bulk modulus analysis obtains a simple condition to determine whether the bulk modulus can be neglected in the design of a hydropneumatic passive heave compensator. Second, a methodology to design passive heave compensators with the desired frequency response is presented. Third, four control methodologies for a semi-active heave compensator are tested and compared numerically. Lastly, we show experimental results obtained from a prototype built with the methodology developed to design passive heave compensators.
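As a back-of-the-envelope companion to the frequency-response design step, a passive compensator idealized as a base-excited mass-spring-damper has the classical transmissibility |X/Y| = sqrt((k^2 + (cw)^2) / ((k - m w^2)^2 + (cw)^2)); the sketch below evaluates it over a typical wave band with hypothetical parameters, not the prototype's values.

```python
# Minimal sketch, under assumed parameters: a passive heave compensator
# idealized as a base-excited mass-spring-damper; transmissibility shows
# how much vessel heave reaches the drill string at each wave frequency.
import numpy as np

m, k, c = 2.0e5, 1.0e5, 2.0e4             # kg, N/m, N*s/m -- hypothetical
w = np.linspace(0.05, 2.0, 5) * 2*np.pi   # rad/s, typical wave band

# |X/Y| for base excitation of a damped single-DOF system
T = np.sqrt((k**2 + (c*w)**2) / ((k - m*w**2)**2 + (c*w)**2))
for wi, Ti in zip(w, T):
    print(f"f = {wi/2/np.pi:4.2f} Hz  transmissibility = {Ti:5.2f}")
```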
Probabilistic simulation of multi-scale composite behavior
NASA Technical Reports Server (NTRS)
Liaw, D. G.; Shiao, M. C.; Singhal, S. N.; Chamis, Christos C.
1993-01-01
A methodology is developed to computationally assess the probabilistic composite material properties at all composite scale levels due to the uncertainties in the constituent (fiber and matrix) properties and in the fabrication process variables. The methodology is computationally efficient for simulating the probability distributions of material properties. The sensitivity of the probabilistic composite material property to each random variable is determined. This information can be used to reduce undesirable uncertainties in material properties at the macro scale of the composite by reducing the uncertainties in the most influential random variables at the micro scale. This methodology was implemented into the computer code PICAN (Probabilistic Integrated Composite ANalyzer). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in the material properties of a typical laminate and comparing the results with the Monte Carlo simulation method. The experimental data of composite material properties at all scales fall within the scatters predicted by PICAN.
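The probabilistic idea can be miniaturized as follows (PICAN itself is a dedicated code; this is only a conceptual sketch): propagate assumed scatter in fiber modulus, matrix modulus and fiber volume fraction through a rule-of-mixtures micromechanics relation by Monte Carlo sampling.

```python
# Conceptual sketch of the probabilistic idea (PICAN is a dedicated code):
# propagate assumed scatter in fiber/matrix moduli and fiber volume
# fraction through a rule-of-mixtures relation by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
Ef = rng.normal(230e9, 12e9, n)      # fiber modulus (Pa), assumed scatter
Em = rng.normal(3.5e9, 0.3e9, n)     # matrix modulus (Pa)
Vf = rng.normal(0.60, 0.02, n)       # fiber volume fraction

E11 = Vf * Ef + (1 - Vf) * Em        # longitudinal ply modulus
print(f"mean E11 = {E11.mean()/1e9:.1f} GPa, "
      f"95% range = [{np.percentile(E11, 2.5)/1e9:.1f}, "
      f"{np.percentile(E11, 97.5)/1e9:.1f}] GPa")
```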
Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.
de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf
2014-01-11
With computational resources becoming more efficient, more powerful and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be done routinely but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able either to reproduce experimental findings, e.g., spectroscopic parameters and rate constants, accurately, or to make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.
Optimum surface roughness prediction for titanium alloy by adopting response surface methodology
NASA Astrophysics Data System (ADS)
Yang, Aimin; Han, Yang; Pan, Yuhang; Xing, Hongwei; Li, Jinze
Titanium alloy has been widely applied in industrial engineering products due to its advantages of great corrosion resistance and high specific strength. This paper investigates the processing parameters for the finish turning of titanium alloy TC11. Firstly, a three-factor central composite design of experiment, considering the cutting speed, feed rate and depth of cut, is conducted on titanium alloy TC11 and the corresponding surface roughness values are obtained. Then a mathematical model is constructed by response surface methodology to fit the relationship between the process parameters and the surface roughness. The prediction accuracy is verified by one-way ANOVA. Finally, the contour lines of the surface roughness under different combinations of process parameters are obtained and used for optimum surface roughness prediction. Verification experiments demonstrated that the material removal rate (MRR) at the obtained optimum can be significantly improved without sacrificing surface roughness.
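For readers unfamiliar with the mechanics behind such fits, the sketch below builds a coded three-factor central composite design and fits a full second-order model by least squares. The roughness values come from an invented surrogate function, not from the paper's TC11 data; only the design construction and model form follow standard RSM practice.

```python
import numpy as np
from itertools import product

# Coded central composite design: 2^3 factorial + 6 axial + 3 centre runs
alpha = 1.682  # rotatable axial distance for k = 3
factorial = np.array(list(product([-1.0, 1.0], repeat=3)))
axial = np.vstack([alpha * np.eye(3), -alpha * np.eye(3)])
X = np.vstack([factorial, axial, np.zeros((3, 3))])

# Hypothetical roughness response: quadratic trend plus noise (illustrative)
rng = np.random.default_rng(0)
Ra = (1.2 + 0.3 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 2]
      + rng.normal(0, 0.02, len(X)))

def quadratic_terms(X):
    """Intercept, linear, two-factor interaction and pure quadratic terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

D = quadratic_terms(X)
beta, *_ = np.linalg.lstsq(D, Ra, rcond=None)   # fitted model coefficients
r2 = 1 - np.sum((Ra - D @ beta) ** 2) / np.sum((Ra - Ra.mean()) ** 2)
print(beta.round(3), round(r2, 3))
```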
Numerical and experimental modelling of the radial compressor stage
NASA Astrophysics Data System (ADS)
Syka, Tomáš; Matas, Richard; Luňáček, Ondřej
2016-06-01
This article describes the numerical and experimental model of a new compressor stage designed for process centrifugal compressors. It is the first member of a new family of stages developed to achieve state-of-the-art thermodynamic parameters. This stage (named RTK01) is designed for a high flow coefficient with 3D-shaped impeller blades. Some interesting findings were gained during its development. The article focuses mainly on interesting aspects of the development methodology and the improvement of the numerical simulations, not on the specific stage properties. The conditions and experimental equipment, the measured results and their comparison with ANSYS CFX and NUMECA FINE/Turbo CFD simulations are described.
Improved Processes to Remove Naphthenic Acids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aihua Zhang; Qisheng Ma; Kangshi Wang
2005-12-09
In the past three years, we followed the work plan suggested in the proposal and made every effort to fulfill the project objectives. Based on a large amount of creative and productive work, covering both experimental and theoretical aspects, we achieved an important technical breakthrough in the naphthenic acid removal process and obtained deep insight into catalytic decarboxylation chemistry. In detail, we established an integrated methodology to support all of the experimental and theoretical work. Our experimental investigation resulted in the discovery of four types of effective catalysts for the decarboxylation of model carboxylic acid compounds. The adsorption experiments revealed the effectiveness of several solid materials, either natural minerals or synthesized materials, for naphthenic acid adsorption and acidity reduction of crude oil. Tests with crude oil also yielded promising results, which can potentially be developed into a practical process for the oil industry. The theoretical work predicted several possible catalytic decarboxylation mechanisms that would govern the decarboxylation pathways depending on the type of catalyst being used. The calculated reaction activation energies were in good agreement with our experimental measurements.
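For context on the last point: activation energies are routinely extracted from temperature-dependent rate constants via the Arrhenius relation ln k = ln A - Ea/(R T). The snippet below is a minimal sketch of that fit; the temperatures and rate constants are invented for illustration and are not data from this project.

```python
import numpy as np

# Hypothetical decarboxylation rate constants k (1/s) at temperatures T (K)
T = np.array([550.0, 575.0, 600.0, 625.0, 650.0])
k = np.array([1.2e-5, 4.1e-5, 1.3e-4, 3.6e-4, 9.4e-4])

# Arrhenius: ln k = ln A - Ea/(R*T), so ln k is linear in 1/T
R = 8.314  # J/(mol*K)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R / 1000.0  # activation energy in kJ/mol
print(f"Ea ~ {Ea:.1f} kJ/mol, A ~ {np.exp(intercept):.2e} 1/s")
```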
The influence of surface-active agents in gas mixture on the intensity of jet condensation
NASA Astrophysics Data System (ADS)
Yezhov, YV; Okhotin, VS
2017-11-01
The report presents a methodology for calculating the contact condensation of steam from a steam-gas mixture into a stream of water, taking into account the mass flow of steam through the phase boundary, the change in turbulent transport properties near the interface, and their connection to interface perturbations due to the surface tension of the mixture. It also presents a method for calculating the surface tension at the interface between water and a mixture of fluorocarbon vapour and water, based on previously established analytical methods for calculating the surface tension of simple one-component liquid-vapour systems. The analytical relation obtained for the surface tension of the mixture is a function of temperature and of the volume concentration of the fluorocarbon gas in the mixture, and holds for all sizes of gas molecules. On a newly created experimental stand, verification experiments were performed to determine the surface tension of the pure substances (water, steam, liquid C3F8 and C3F8 vapour), and the first experimental data were produced on the surface tension at the interface between water and a mixture of water vapour and fluorocarbon C3F8. The experimental data obtained allow the values of the two constants used in the calculation model of the surface tension of the mixture to be refined. An experimental study of jet condensation was carried out with different gases flowing into the condensation zone. The condensation process was monitored by measuring the flow rate of water discharged from the nozzle and the amount of condensate formed. When C3F8 was supplied, a noticeable intensification of the condensation process was observed compared with the condensation of pure water vapour. The calculation results are in satisfactory agreement with the experimental data on the surface tension of the mixture and on steam condensation from the steam-gas mixture. Analysis of the calculation results shows that the presence of surfactants in the condensation zone affects both the partial vapour pressure at the interfacial surface and the thermal conductivity of the liquid jet. The first circumstance degrades the condensation process, while the second intensifies it. There is evidently an optimum concentration of surfactant additive in the vapour at which condensation is most intense. The developed design methodology for contact condensation makes it possible to evaluate these optimum conditions and their practical effect in field studies.
Optimization of palm fruit sterilization by microwave irradiation using response surface methodology
NASA Astrophysics Data System (ADS)
Sarah, M.; Madinah, I.; Salamah, S.
2018-02-01
This study reports the optimization of the palm fruit sterilization process by microwave irradiation. The results of fractional factorial experiments showed no significant external factors affecting the temperature of microwave sterilization (MS). Response surface methodology (RSM) was employed and a model equation for the MS of palm fruit was built. Response surface plots and their corresponding contour plots were analyzed, and the model equation was solved. The optimum process parameters for lipase reduction were obtained from the MS of 1 kg of palm fruit at a microwave power of 486 W and a heating time of 14 minutes. The experimental results showed a reduction of lipase activity under the MS treatment. The adequacy of the model equation for predicting the optimum response value was verified by validation data (P>0.15).
Current status and future prospects for enabling chemistry technology in the drug discovery process.
Djuric, Stevan W; Hutchins, Charles W; Talaty, Nari N
2016-01-01
This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of "dangerous" reagents. Also featured are advances in the "computer-assisted drug design" area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities.
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2014-01-01
Extraction forms the very first step in research on natural products for drug discovery, and a poorly optimised and planned extraction methodology can jeopardise the entire mission. The objective was to provide a vivid picture of different chemometric tools and of planning for process optimisation and method development in the extraction of botanical material, with emphasis on microwave-assisted extraction (MAE). A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to discover the significant extraction factors. To optimise a response by fine-tuning those factors, experimental design or statistical design of experiments (DoE), a core area of study in chemometrics, was then used for statistical analysis and interpretation. In this review, a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE, is presented. A detailed study of the various factors and responses involved in the optimisation is also presented. This article will assist in obtaining a better insight into the chemometric strategies of process optimisation and method development, which will in turn improve the decision-making process in selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.
Mazaheri, Hossein; Lee, Keat Teong; Bhatia, Subhash; Mohamed, Abdul Rahman
2010-12-01
Thermal decomposition of oil palm fruit press fiber (FPF) into a liquid product (LP) was achieved using subcritical water treatment in the presence of sodium hydroxide in a high-pressure batch reactor. This study uses experimental design and process optimisation tools to maximise the LP yield using response surface methodology (RSM) with a central composite rotatable design (CCRD). The independent variables were temperature, residence time, particle size, specimen loading and additive loading. The mathematical model that was developed fit the experimental results well for all of the response variables studied. The optimal conditions were found to be a temperature of 551 K, a residence time of 40 min, a particle size of 710-1000 microm, a specimen loading of 5 g and an additive loading of 9 wt.%, achieving an LP yield of 76.16%. 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Abdelaziz, Chebboubi; Grégoire, Kessedjian; Olivier, Serot; Sylvain, Julien-Laferriere; Christophe, Sage; Florence, Martin; Olivier, Méplan; David, Bernard; Olivier, Litaize; Aurélien, Blanc; Herbert, Faust; Paolo, Mutti; Ulli, Köster; Alain, Letourneau; Thomas, Materna; Michal, Rapala
2017-09-01
The study of fission yields has a major impact on the characterization and understanding of the fission process and is mandatory for reactor applications. In the past, with the LOHENGRIN spectrometer of the ILL, priority was given to studies in the light fission fragment mass range. The LPSC, in collaboration with the ILL and CEA, has developed a measurement program on symmetric and heavy-mass fission fragment distributions. The combination of measurements with an ionisation chamber and Ge detectors is necessary to describe precisely the heavy fission fragment region in mass and charge. Recently, new measurements of fission yields and kinetic energy distributions have been made on the 233U(nth,f) reaction. The focus of this work has been on the new optical and statistical methodology and on the self-normalization of the data to provide new absolute measurements, independent of any libraries, together with the associated experimental covariance matrix.
Integration, warehousing, and analysis strategies of Omics data.
Gedela, Srinubabu
2011-01-01
"-Omics" is a current suffix for numerous types of large-scale biological data generation procedures, which naturally demand the development of novel algorithms for data storage and analysis. With next generation genome sequencing burgeoning, it is pivotal to decipher a coding site on the genome, a gene's function, and information on transcripts next to the pure availability of sequence information. To explore a genome and downstream molecular processes, we need umpteen results at the various levels of cellular organization by utilizing different experimental designs, data analysis strategies and methodologies. Here comes the need for controlled vocabularies and data integration to annotate, store, and update the flow of experimental data. This chapter explores key methodologies to merge Omics data by semantic data carriers, discusses controlled vocabularies as eXtensible Markup Languages (XML), and provides practical guidance, databases, and software links supporting the integration of Omics data.
NASA Astrophysics Data System (ADS)
Nath, Nayani Kishore
2017-08-01
Throat back-up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The liners are made from E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. Four control factors (machine speed, roller pressure, tape tension and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back-up liners. The quality characteristic identified was the back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed and successfully used to achieve the minimum back wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests. The influence of the back wall temperature on the performance of the throat back-up liners was verified by a ground firing test.
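To make the smaller-the-better analysis concrete, the sketch below evaluates the Taguchi signal-to-noise ratio S/N = -10 log10(mean(y^2)) for each run of a standard L9 (3^4) array and averages it per level of one factor; the back-wall temperatures are hypothetical stand-ins, not the paper's measurements.

```python
import numpy as np

# Standard L9 (3^4) orthogonal array (levels coded 1..3)
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])

def sn_smaller_is_better(y):
    """Taguchi S/N for a smaller-the-better response; larger S/N is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical back-wall temperatures (deg C), two repeats per run
temps = np.array([[148, 151], [145, 147], [150, 153],
                  [140, 143], [146, 148], [152, 150],
                  [139, 141], [144, 146], [149, 151]], dtype=float)
sn = np.array([sn_smaller_is_better(row) for row in temps])

# Mean S/N per level of factor 1 (e.g., machine speed); pick the largest
for level in (1, 2, 3):
    print(level, round(sn[L9[:, 0] == level].mean(), 3))
```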
Buratti, C; Barbanera, M; Lascaro, E; Cotana, F
2018-03-01
The aim of the present study is to analyze the influence of independent process variables such as temperature, residence time, and heating rate on the torrefaction process of coffee chaff (CC) and spent coffee grounds (SCGs). Response surface methodology and a three-factor, three-level Box-Behnken design were used in order to evaluate the effects of the process variables on the weight loss (WL) and the Higher Heating Value (HHV) of the torrefied materials. Results showed that the effects of the three factors on both responses were sequenced as follows: temperature > residence time > heating rate. Data obtained from the experiments were analyzed by analysis of variance (ANOVA) and fitted to second-order polynomial models by using multiple regression analysis. Predictive models were determined, able to obtain satisfactory fittings of the experimental data, with coefficient of determination (R^2) values higher than 0.95. An optimization study using Derringer's desired function methodology was also carried out and the optimal torrefaction conditions were found: temperature 271.7°C, residence time 20 min, heating rate 5°C/min for CC and 256.0°C, 20 min, 25°C/min for SCGs. The experimental values closely agree with the corresponding predicted values. Copyright © 2017 Elsevier Ltd. All rights reserved.
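As a pointer to how Derringer's desirability approach combines several responses, the sketch below defines larger-the-better and smaller-the-better desirability functions and aggregates them with a geometric mean; the target ranges and response values are invented, not the study's.

```python
import numpy as np

def d_larger(y, lo, hi, s=1.0):
    """Larger-the-better desirability (Derringer): 0 below lo, 1 above hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def d_smaller(y, lo, hi, s=1.0):
    """Smaller-the-better desirability: 1 below lo, 0 above hi."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s

# Hypothetical responses at one candidate setting: maximise HHV, limit WL
hhv, wl = 21.4, 32.0  # MJ/kg, %
D = (d_larger(hhv, 18.0, 24.0) * d_smaller(wl, 20.0, 50.0)) ** 0.5
print(round(D, 3))  # overall desirability in [0, 1]; maximise over settings
```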
Nasri Nasrabadi, Mohammad Reza; Razavi, Seyed Hadi
2010-04-01
In this work, we applied statistical experimental design to a fed-batch process for the optimization of tricarboxylic acid (TCA) cycle intermediates in order to achieve high-level production of canthaxanthin from Dietzia natronolimnaea HS-1 cultured in beet molasses. A fractional factorial design (screening test) was first conducted on five TCA cycle intermediates. Out of the five TCA cycle intermediates investigated via the screening tests, alpha-ketoglutarate, oxaloacetate and succinate were selected based on their statistically significant (P<0.05) and positive effects on canthaxanthin production. These significant factors were optimized by means of response surface methodology (RSM) in order to achieve high-level production of canthaxanthin. The experimental results of the RSM were fitted with a second-order polynomial equation by means of a multiple regression technique to identify the relationship between canthaxanthin production and the three TCA cycle intermediates. By means of this statistical design under a fed-batch process, the optimum conditions required to achieve the highest level of canthaxanthin (13172 ± 25 µg l-1) were determined as follows: alpha-ketoglutarate, 9.69 mM; oxaloacetate, 8.68 mM; succinate, 8.51 mM. Copyright 2009 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Encapsulation Processing and Manufacturing Yield Analysis
NASA Technical Reports Server (NTRS)
Willis, P.
1985-01-01
An evaluation of the ethyl vinyl acetate (EVA) encapsulation system is presented. This work is part of the materials baseline needed to demonstrate a 30-year module lifetime capability. Process and compound variables are both being studied, along with various module materials. Results have shown that EVA should be stored rolled up and enclosed in a plastic bag to retard the loss of peroxide curing agents. The TBEC curing agent has superior shelf life and processing characteristics compared with the earlier Lupersol-101 curing agent. Analytical methods were developed to test for peroxide content, and experimental methodologies were formalized.
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the smallest number of experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled a rapid generation of a process model.
Hu, Shengyang; Wen, Libai; Wang, Yun; Zheng, Xinsheng; Han, Heyou
2012-11-01
A continuous-flow integration process was developed for biodiesel production using rapeseed oil as feedstock, based on the countercurrent contact reaction between gas and liquid, on-line separation of glycerol and cyclic utilization of methanol. Orthogonal experimental design and response surface methodology were adopted to optimize the technological parameters. A second-order polynomial model for the biodiesel yield was established and validated experimentally. The high determination coefficient (R^2 = 98.98%) and the low probability value (Pr < 0.0001) proved that the model matched the experimental data and had a high predictive ability. The optimal technological parameters were: 81.5°C reaction temperature, 51.7 cm fill height of catalyst KF/CaO and 105.98 kPa system pressure. Under these conditions, the average yield of triplicate experiments was 93.7%, indicating that the continuous-flow process has good potential in the manufacture of biodiesel. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ito, Vanessa Mayumi; Batistella, César Benedito; Maciel, Maria Regina Wolf; Maciel Filho, Rubens
2007-04-01
Soybean oil deodorized distillate is a product derived from the refining process and is rich in high value-added products. The recovery of these unsaponifiable fractions is of great commercial interest because, in many cases, the "valuable products" have vitamin activities, such as tocopherols (vitamin E), or anticarcinogenic properties, such as sterols. Molecular distillation has large potential for concentrating tocopherols, as it uses very low temperatures owing to the high vacuum, requires only a short operating time for separation, and does not use solvents. It can therefore be used to separate and purify thermosensitive materials such as vitamins. In this work, the molecular distillation process was applied for tocopherol concentration, and response surface methodology was used to optimize free fatty acid (FFA) elimination and tocopherol concentration in the residue and distillate streams, both of which are products of the molecular distiller. The independent variables studied were feed flow rate (F) and evaporator temperature (T), as they are the most important process variables according to previous experience. The experimental range was 4-12 mL/min for F and 130-200 degrees C for T. Feed flow rate and evaporator temperature proved to be important operating variables in FFA elimination. To decrease the loss of FFA in the residue stream, the operating range should be shifted towards higher evaporator temperature and lower feed flow rate; the D/F ratio increases with increasing evaporator temperature and decreasing feed flow rate. A high concentration of tocopherols was obtained in the residue stream at low feed flow rate and high evaporator temperature. These results were obtained from experiments based on the experimental design.
Wang, Xiao-Yan; Ren, Hui
2018-03-21
Ginseng stems and leaves (GSAL) are abundant in ginsenoside compounds. For efficient utilization of GSAL and the enhancement of total ginsenoside (TG) yields from GSAL, TG were extracted using a dynamic microwave-assisted extraction coupled with enzymatic hydrolysis (DMAE-EH) method. The extraction process was simulated and its main influencing factors, namely ethanol concentration, microwave temperature, microwave time and pump flow rate, were optimized by response surface methodology coupled with a Box-Behnken design (BBD). The experimental results indicated that the optimal conditions for extracting TG from GSAL were as follows: ethanol concentration of 75%, microwave temperature of 60°C, microwave time of 20 min and pump flow rate of 38 r/min. After experimental verification, the experimental yield of TG was 60.62 ± 0.85 mg g-1, in good agreement with the value predicted by the model. Overall, the present results demonstrated that the DMAE-EH method can be successfully used to extract total ginsenosides from GSAL.
Overcoming an obstacle in expanding a UMLS semantic type extent.
Chen, Yan; Gu, Huanying; Perl, Yehoshua; Geller, James
2012-02-01
This paper strives to overcome a major problem encountered by a previous expansion methodology for discovering concepts highly likely to be missing a specific semantic type assignment in the UMLS. This methodology is the basis for an algorithm that presents the discovered concepts to a human auditor for review and possible correction. We analyzed the problem of the previous expansion methodology and discovered that it was due to an obstacle constituted by one or more concepts assigned the UMLS Semantic Network semantic type Classification. A new methodology was designed that bypasses such an obstacle without a combinatorial explosion in the number of concepts presented to the human auditor for review. The new expansion methodology with obstacle avoidance was tested with the semantic type Experimental Model of Disease and found over 500 concepts missed by the previous methodology that are in need of this semantic type assignment. Furthermore, other semantic types suffering from the same major problem were discovered, indicating that the methodology is of more general applicability. The algorithmic discovery of concepts that are likely missing a semantic type assignment is possible even in the face of obstacles, without an explosion in the number of processed concepts. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza
2012-06-01
It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. It is arguably true, nonetheless, that decision-level fusion is equally beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of the decision-fusion methodologies is a crucial step for the successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performances are analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their applications are extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods and their performance is tested on decisions generated from synthetic data and from experimental data. Also in this paper, a modeling methodology, i.e., the cloud model, for generating synthetic decisions is presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide a fair performance comparison of the selected decision-fusion algorithms. For verification purposes, implementation results of the developed fusion algorithms on structural health monitoring data collected from experimental tests are reported in this paper.
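To illustrate the evidential strand of such decision fusion, the following sketch implements Dempster's rule of combination for two mass functions over a small frame of discernment; the sensor masses are hypothetical and serve only to show how conflicting evidence is renormalised.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical damage-assessment sensors over {damaged, healthy}
D, H = frozenset({"damaged"}), frozenset({"healthy"})
theta = D | H  # the full frame (ignorance)
m1 = {D: 0.7, theta: 0.3}
m2 = {D: 0.6, H: 0.1, theta: 0.3}
print(dempster_combine(m1, m2))  # fused belief masses
```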
1986-01-01
Chapter 3, Research Methodology: this chapter describes the methodology and the experimental design used for this research. (Recovered front-matter fragments: sections Experimental Design, Task/Treatment, Task Design; figures 3.3 Interface Experiment Elements, 3.4 Experimental Design, 3.5 Subject Assignment.)
Bollen, Jessica; Trick, Leanne; Llewellyn, David; Dickens, Chris
2017-03-01
The cognitive neuropsychological model of depression proposes that negative biases in the processing of emotionally salient information have a central role in the development and maintenance of depression. We have conducted a systematic review to determine whether acute experimental inflammation is associated with changes to cognitive and emotional processing that are thought to cause and maintain depression. We identified experimental studies in which healthy individuals were administered an acute inflammatory challenge (bacterial endotoxin/vaccination) and standardised tests of cognitive function were performed. Fourteen references were identified, reporting findings from 12 independent studies on 345 participants. Methodological quality was rated strong or moderate for 11 studies. Acute experimental inflammation was triggered using a variety of agents (including endotoxin from E. coli, S. typhi, S. abortus Equi and Hepatitis B vaccine) and cognition was assessed over hours to months, using cognitive tests of i) attention/executive functioning, ii) memory and iii) social/emotional processing. Studies found mixed evidence that acute experimental inflammation caused changes to attention/executive functioning (2 of 6 studies showed improvements in attention/executive function compared to control), changes in memory (3 of 5 studies; improved reaction time; reduced memory for object proximity; poorer immediate and delayed memory) and changes to social/emotional processing (4 of 5 studies; reduced perception of emotions, increased avoidance of punishment/loss experiences, and increased social disconnectedness). Acute experimental inflammation causes negative biases in social and emotional processing that could explain observed associations between inflammation and depression. Copyright © 2017 Elsevier Inc. All rights reserved.
Range pattern matching with layer operations and continuous refinements
NASA Astrophysics Data System (ADS)
Tseng, I.-Lun; Lee, Zhao Chuan; Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Ong, Jonathan Yoong Seang
2018-03-01
At advanced and mainstream process nodes (e.g., the 7nm, 14nm, 22nm, and 55nm process nodes), lithography hotspots can exist in layouts of integrated circuits even if the layouts pass design rule checking (DRC). The existence of lithography hotspots in a layout can cause manufacturability issues, which can result in yield losses of manufactured integrated circuits. In order to detect lithography hotspots in physical layouts, pattern matching (PM) algorithms and commercial PM tools have been developed. However, there is still a need to use DRC tools to perform PM operations. In this paper, we propose a PM synthesis methodology that uses a continuous refinement technique to automatically synthesize a given lithography hotspot pattern into a DRC deck consisting of layer operation commands, so that an equivalent PM operation can be performed by executing the synthesized deck with a DRC tool. Note that the proposed methodology can deal not only with exact patterns but also with range patterns. Also, lithography hotspot patterns containing multiple layers can be processed. Experimental results show that the proposed methodology can accurately and efficiently detect lithography hotspots in physical layouts.
Enzymatic catalysis treatment method of meat industry wastewater using lacasse.
Thirugnanasambandham, K; Sivakumar, V
2015-01-01
Meat processing produces a large amount of wastewater that contains high levels of colour and chemical oxygen demand (COD), so the wastewater must be pretreated before discharge into the ecological system. In this paper, enzymatic catalysis (EC) was adopted to treat the meat wastewater. A Box-Behnken design (BBD), an experimental design for response surface methodology (RSM), was used to create the set of 29 experimental runs needed to optimize the operating conditions. Quadratic regression models with estimated coefficients were developed to describe the colour and COD removals. The experimental results show that EC could effectively reduce colour (95%) and COD (86%) at the optimum conditions of an enzyme dose of 110 U/L, an incubation time of 100 min, a pH of 7 and a temperature of 40°C. RSM could be effectively adopted to optimize the multiple operating factors in the complex EC process.
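For reference, a coded Box-Behnken design of exactly this size (four factors, 29 runs) can be generated from its defining structure: every pair of factors at ±1 with the remaining factors at the centre, plus centre runs. A minimal sketch:

```python
import numpy as np
from itertools import combinations, product

def box_behnken(k, n_center=5):
    """Coded Box-Behnken design: +/-1 over each factor pair with the other
    factors at 0, plus centre runs (4*C(k,2) + n_center rows in total)."""
    rows = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            rows.append(row)
    rows += [[0] * k for _ in range(n_center)]
    return np.array(rows, dtype=float)

X = box_behnken(4)  # e.g., enzyme dose, incubation time, pH, temperature
print(X.shape)      # (29, 4)
```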
NASA Astrophysics Data System (ADS)
Rudrapati, R.; Sahoo, P.; Bandyopadhyay, A.
2016-09-01
The main aim of the present work is to analyse the significance of the turning parameters for surface roughness in a computer numerically controlled (CNC) turning operation while machining an aluminium alloy. Spindle speed, feed rate and depth of cut have been considered as the machining parameters. Experimental runs have been conducted as per the Box-Behnken design method. After experimentation, the surface roughness is measured using a stylus profilometer. Factor effects have been studied through analysis of variance. Mathematical modelling has been done by response surface methodology to establish relationships between the input parameters and the output response. Finally, process optimization has been carried out with the teaching-learning-based optimization (TLBO) algorithm. The predicted turning condition has been validated through a confirmatory experiment.
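TLBO itself is compact enough to sketch: a teacher phase pulls the population toward the current best solution, and a learner phase lets randomly paired solutions learn from each other. The implementation below minimises an invented roughness surrogate in coded variables; it is illustrative, not the paper's model.

```python
import numpy as np

def tlbo(f, lb, ub, pop=20, iters=100, seed=0):
    """Minimal teaching-learning-based optimization (minimisation)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = lb + rng.random((pop, lb.size)) * (ub - lb)
    y = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # Teacher phase: move the class toward the best solution
        teacher = X[y.argmin()]
        Tf = rng.integers(1, 3)  # teaching factor, 1 or 2
        Xn = np.clip(X + rng.random(X.shape) * (teacher - Tf * X.mean(0)), lb, ub)
        yn = np.apply_along_axis(f, 1, Xn)
        better = yn < y
        X[better], y[better] = Xn[better], yn[better]
        # Learner phase: each learner moves relative to a random peer
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[i] - X[j]) if y[i] < y[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(lb.size) * step, lb, ub)
            yc = f(cand)
            if yc < y[i]:
                X[i], y[i] = cand, yc
    return X[y.argmin()], y.min()

# Hypothetical roughness surrogate in coded speed/feed/depth variables
best_x, best_y = tlbo(lambda v: (v ** 2).sum() + 0.5 * v[0] * v[1],
                      [-2] * 3, [2] * 3)
print(best_x.round(3), round(best_y, 4))
```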
Catelani, Tiago A; Santos, João Rodrigo; Páscoa, Ricardo N M J; Pezza, Leonardo; Pezza, Helena R; Lopes, João A
2018-03-01
This work proposes the use of near infrared (NIR) spectroscopy in diffuse reflectance mode and multivariate statistical process control (MSPC) based on principal component analysis (PCA) for real-time monitoring of the coffee roasting process. The main objective was the development of an MSPC methodology able to detect disturbances to the roasting process early, based on real-time acquisition of NIR spectra. A total of fifteen roasting batches were defined according to an experimental design to develop the MSPC models. The methodology was tested on a set of five batches in which disturbances of different natures were imposed to simulate real faulty situations. Some of these batches were used to optimize the model, while the remainder were used to test the methodology. A modelling strategy based on a sliding time window provided the best results in terms of distinguishing batches with and without disturbances, using typical MSPC charts: Hotelling's T^2 and squared prediction error statistics. A PCA model encompassing a time window of four minutes with three principal components was able to efficiently detect all the disturbances assayed. NIR spectroscopy combined with the MSPC approach proved to be an adequate auxiliary tool for coffee roasters to detect faults in a conventional roasting process in real time. Copyright © 2017 Elsevier B.V. All rights reserved.
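The two control statistics named here are straightforward to compute once a PCA model has been built on normal-operation data: Hotelling's T^2 monitors variation inside the retained subspace, and the squared prediction error (SPE) monitors the residual. A minimal numpy sketch using simulated stand-in spectra (not the paper's NIR data):

```python
import numpy as np

# Simulated normal-operating-condition training "spectra" (stand-ins)
rng = np.random.default_rng(1)
X_train = rng.normal(size=(60, 50))
mu, sd = X_train.mean(0), X_train.std(0)
Z = (X_train - mu) / sd

# PCA via SVD; retain 3 components as in a small MSPC model
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3
P = Vt[:k].T                           # loadings
lam = (S[:k] ** 2) / (Z.shape[0] - 1)  # variance of each score

def t2_spe(x):
    """Hotelling's T^2 and squared prediction error for one new sample."""
    z = (x - mu) / sd
    t = z @ P                          # scores in the retained subspace
    t2 = np.sum(t ** 2 / lam)
    residual = z - t @ P.T             # part not explained by the model
    return t2, residual @ residual

print(t2_spe(rng.normal(size=50)))     # compare against control limits
```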
Inferring Molecular Processes Heterogeneity from Transcriptional Data.
Gogolewski, Krzysztof; Wronowska, Weronika; Lech, Agnieszka; Lesyng, Bogdan; Gambin, Anna
2017-01-01
RNA microarrays and RNA-seq are nowadays standard technologies to study the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information on gene up- and downregulation is evaluated by analyzing the behaviour of a relatively large population of cells and averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportions between subpopulations of cells that manifest various functional behaviours in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability under specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as to other molecular processes. Moreover, it complements standard tools for indicating the most important networks from transcriptomic data and, in particular, could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs. PMID:29362714
Asadzadeh, Farrokh; Maleki-Kaklar, Mahdi; Soiltanalinejad, Nooshin; Shabani, Farzin
2018-02-08
Citric acid (CA) was evaluated in terms of its efficiency as a biodegradable chelating agent for removing zinc (Zn) from heavily contaminated soil using a soil washing process. To determine preliminary ranges of the variables in the washing process, single-factor experiments were carried out with different CA concentrations, pH levels and washing times. Optimization of the batch washing conditions followed, using a response surface methodology (RSM) approach based on a central composite design (CCD). CCD-predicted values and experimental results showed strong agreement, with an R^2 value of 0.966. Maximum removal of 92.8% occurred with a CA concentration of 167.6 mM, a pH of 4.43 and a washing time of 30 min as the optimal variable values. A leaching column experiment followed, to examine the efficiency of the optimum conditions established by the CCD model. A comparison of the two soil washing techniques indicated that the removal efficiency of the column experiment (85.8%) closely matched that of the batch experiment (92.8%). The methodology supporting the research experimentation for optimizing Zn removal may be useful in the design of protocols for practical engineering soil decontamination applications.
Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
Warpage often occurs during the injection moulding of thin-shell parts, depending on the process conditions. Statistical design-of-experiments methods integrating finite element (FE) analysis, Moldflow analysis and response surface methodology (RSM) were the stages used to minimize the warpage values in x, y and z on the thin-shell plastic parts investigated. The battery cover of a remote controller is a thin-shell plastic part produced by the injection moulding process. The optimum process parameters were determined so as to minimize the warpage. Packing pressure, cooling time, melt temperature and mould temperature are the four parameters considered in this study. A two-level full factorial experimental design was conducted in Design-Expert for the RSM analysis to combine the parameters under study. The FE analysis results, processed with the analysis of variance (ANOVA) method, identified the process parameters that most influenced warpage. Using RSM, a predictive response surface model for the warpage data is presented.
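A two-level full factorial design in four factors has 16 runs, and main effects fall out as simple differences of means between the high and low settings. The sketch below shows the construction in coded units; the warpage values are invented stand-ins for simulation output.

```python
import numpy as np
from itertools import product

# 2-level full factorial in coded units for packing pressure, cooling
# time, melt temperature and mould temperature (2^4 = 16 runs)
X = np.array(list(product([-1, 1], repeat=4)), dtype=float)

# Hypothetical warpage responses (mm), one per run
y = np.array([0.42, 0.39, 0.45, 0.41, 0.37, 0.35, 0.40, 0.36,
              0.48, 0.44, 0.50, 0.46, 0.41, 0.38, 0.44, 0.40])

# Main effect of each factor: mean(y at +1) - mean(y at -1)
effects = [y[X[:, i] == 1].mean() - y[X[:, i] == -1].mean() for i in range(4)]
print(np.round(effects, 4))
```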
Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation
NASA Astrophysics Data System (ADS)
Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr
2017-12-01
Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. The particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. To study each modality separately, and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup is therefore of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing repetitive stimulation in three different modalities (visual, tactile and auditory) with precise control of the stimulus parameters. Results from a pilot study with stimulation in a single modality and in two modalities simultaneously prove the feasibility of the device for studying the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for the precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. The stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with an advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. At the same time, the described setup, along with the presented methodology, is a considerable improvement on and extension of methods constituting the state of the art in the related field. The flexibility of the device, together with the developed analysis methodology, can lead to further development of diagnostic methods and provide deeper insight into information processing in the human brain.
NASA Astrophysics Data System (ADS)
Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid
2017-05-01
Continuous monitoring for damage detection in structural assessment involves the implementation of low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility for use in continuous damage assessment. Specifically, an algorithm based on a data-driven approach is discussed, using principal component analysis and pre-processing of the acquired signals by means of cross-correlation functions. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damage occurs. Two types of damage cases are studied: a crack and a leak, for each structure respectively. Experimental results show that the methodology is promising for the continuous monitoring of real structures.
NASA Astrophysics Data System (ADS)
Roosta, M.; Ghaedi, M.; Daneshfar, A.; Sahraei, R.
2014-03-01
In this research, the adsorption rate of safranine O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by different techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as the sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by a central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions, set as 4 min of sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L-1 SO, made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g-1). Good agreement between the experimental and predicted data was observed in this study. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Temkin and Dubinin-Radushkevich models shows that the Langmuir model is a good and suitable model for describing the actual adsorption behaviour. Kinetic evaluation of the experimental data showed that the adsorption processes followed the pseudo-second-order and intraparticle diffusion models well.
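Since the Langmuir model is singled out, a worked fit may help: qe = qm*KL*Ce/(1 + KL*Ce), estimated by nonlinear least squares. The equilibrium data below are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data: Ce (mg/L) vs qe (mg/g)
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 18.0])
qe = np.array([12.0, 20.5, 31.0, 40.2, 46.1, 48.0, 49.5])

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[50.0, 0.5])
print(f"qm = {qm:.2f} mg/g, KL = {KL:.3f} L/mg")
```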
Plot-scale field experiment of surface hydrologic processes with EOS implications
NASA Technical Reports Server (NTRS)
Laymon, Charles A.; Macari, Emir J.; Costes, Nicholas C.
1992-01-01
Plot-scale hydrologic field studies were initiated at NASA Marshall Space Flight Center to a) investigate the spatial and temporal variability of surface and subsurface hydrologic processes, particularly as affected by vegetation, and b) develop experimental techniques and associated instrumentation methodology to study hydrologic processes at increasingly large spatial scales. About 150 instruments, most of which are remotely operated, have been installed at the field site to monitor ground atmospheric conditions, precipitation, interception, soil-water status, and energy flux. This paper describes the nature of the field experiment, instrumentation and sampling rationale, and presents preliminary findings.
An enhanced methodology for spacecraft correlation activity using virtual testing tools
NASA Astrophysics Data System (ADS)
Remedia, Marcello; Aglietti, Guglielmo S.; Appolloni, Matteo; Cozzani, Alessandro; Kiley, Andrew
2017-11-01
Test planning and post-test correlation activity have been issues of growing importance in the last few decades, and many methodologies have been developed to either quantify or improve the correlation between computational and experimental results. In this article the methodologies established so far are enhanced with the implementation of a recently developed procedure called Virtual Testing. In the context of fixed-base sinusoidal tests (commonly used in the space sector for correlation), several factors in the test campaign affect the behaviour of the satellite and are not normally taken into account when performing analyses: the different boundary conditions created by the shaker's own dynamics, an imperfect control system, signal delays, etc. All these factors are the core of the Virtual Testing implementation, which is thoroughly explained in this article and applied to the specific case of the Bepi-Colombo spacecraft tested on the ESA QUAD shaker. Correlation activity is performed at the various stages of the process, showing important improvements after applying the final complete methodology.
Statistical and engineering methods for model enhancement
NASA Astrophysics Data System (ADS)
Chang, Chia-Jung
Models which describe the performance of physical processes are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce a discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely based on the data generated from the process; however, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based models and statistical models to mitigate their individual drawbacks and to provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named “Minimal Adjustment”, which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model with a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Rather than enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are demonstrated through two applications. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission electron microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, quantitatively representing the nanomaterial quality presented through image data. The model parameters are estimated through the Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes.
In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities deliver engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.
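One common way to realise the data-driven stream described above is to model the discrepancy between a physics-based prediction and observations with a Gaussian process, in the spirit of GP calibration. This is a generic sketch, not the dissertation's exact Minimal Adjustment procedure; the physics model and the data below are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical setting: a simple physics model misses part of the trend
def physics_model(x):
    return 2.0 * x

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 25)[:, None]
y_obs = 2.0 * x.ravel() + 0.8 * np.sin(x.ravel()) + rng.normal(0, 0.05, 25)

# Fit a GP to the residual (observation - physics prediction)
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01), normalize_y=True)
gp.fit(x, y_obs - physics_model(x.ravel()))

# Enhanced prediction = physics model + learned bias, with uncertainty
x_new = np.array([[2.5]])
bias, sd = gp.predict(x_new, return_std=True)
print(physics_model(2.5) + bias[0], sd[0])
```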
Identification of nonlinear normal modes of engineering structures under broadband forcing
NASA Astrophysics Data System (ADS)
Noël, Jean-Philippe; Renson, L.; Grappasonni, C.; Kerschen, G.
2016-06-01
The objective of the present paper is to develop a two-step methodology integrating system identification and numerical continuation for the experimental extraction of nonlinear normal modes (NNMs) under broadband forcing. The first step processes acquired input and output data to derive an experimental state-space model of the structure. The second step converts this state-space model into a model in modal space from which NNMs are computed using shooting and pseudo-arclength continuation. The method is demonstrated using noisy synthetic data simulated on a cantilever beam with a hardening-softening nonlinearity at its free end.
Conceptual and Preliminary Design of a Low-Cost Precision Aerial Delivery System
2016-06-01
test results. It includes an analysis of the failure modes encountered during flight experimentation, methodology used for conducting coordinate...and experimentation. Additionally, the current and desired end state of the research is addressed. Finally, this chapter outlines the methodology ...preliminary design phases are utilized to investigate and develop a potentially low-cost alternative to existing systems. Using an Agile methodology
NASA Astrophysics Data System (ADS)
Gautam, Girish Dutt; Pandey, Arun Kumar
2018-03-01
Kevlar is the most popular aramid fiber and is commonly used in different technologically advanced industries for various applications. However, the precise cutting of Kevlar composite laminates is a difficult task. Conventional cutting methods suffer from various defects such as delamination, burr formation and fiber pullout with poor surface quality, and the mechanical performance of the laminates is greatly affected by these defects. Laser beam machining may be an alternative to the conventional cutting processes due to its non-contact nature, low specific energy requirement and higher production rate. However, this process also faces some problems, which may be minimized by operating the machine at the optimum parameter levels. This research paper examines the effective utilization of an Nd:YAG laser cutting system on difficult-to-cut Kevlar-29 composite laminates. The objective of the proposed work is to find the optimum process parameter settings giving the minimum kerf deviations on both sides. The experiments were conducted on Kevlar-29 composite laminates of 1.25 mm thickness using a Box-Behnken design with two center points. The experimental data were used for optimization by the proposed methodology. For the optimization, a teaching-learning-algorithm-based approach was employed to obtain the minimum kerf deviation at the bottom and top sides. A self-coded MATLAB program was developed using the proposed methodology and used for the optimization. Finally, confirmation tests were performed to compare the experimental and optimum results obtained by the proposed methodology. The comparison shows that the machining performance in the laser beam cutting process is remarkably improved through the proposed approach. Finally, the influence of the different laser cutting parameters, such as lamp current, pulse frequency, pulse width, compressed air pressure and cutting speed, on the top and bottom kerf deviations during the Nd:YAG laser cutting of Kevlar-29 laminates is discussed.
NASA Astrophysics Data System (ADS)
Torres-Arredondo, M.-A.; Sierra-Pérez, Julián; Cabanes, Guénaël
2016-05-01
The process of measuring and analysing the data from a distributed sensor network all over a structural system in order to quantify its condition is known as structural health monitoring (SHM). For the design of a trustworthy health monitoring system, a vast amount of information regarding the inherent physical characteristics of the sources and their propagation and interaction across the structure is crucial. Moreover, any SHM system that is expected to transition to field operation must take into account the influence of environmental and operational changes, which cause modifications in the stiffness and damping of the structure and consequently modify its dynamic behaviour. On that account, special attention is paid in this paper to the development of an efficient SHM methodology where robust signal processing and pattern recognition techniques are integrated for the correct interpretation of complex ultrasonic waves within the context of damage detection and identification. The methodology is based on an acousto-ultrasonics technique where the discrete wavelet transform is evaluated for feature extraction and selection, linear principal component analysis for data-driven modelling, and self-organising maps for a two-level clustering under the principle of local density. Finally, the methodology is demonstrated experimentally, and the results show that all damage cases were detectable and identifiable.
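A compact stand-in for the processing chain described above (wavelet features, PCA, clustering) is sketched below on synthetic waveforms; KMeans replaces the paper's self-organising maps, and all signal parameters are assumptions.

```python
# Hedged sketch: DWT band energies as features, PCA for reduction,
# then clustering of the reduced features. Waveforms are synthetic.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
# Two synthetic "states": pristine vs damaged ultrasonic records
signals = np.array(
    [np.sin(2*np.pi*80*t) + 0.1*rng.standard_normal(t.size) for _ in range(20)] +
    [np.sin(2*np.pi*80*t)*np.exp(-5*t) + 0.1*rng.standard_normal(t.size)
     for _ in range(20)])

def dwt_energy(sig, wavelet="db4", level=5):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])   # energy per band

features = np.array([dwt_energy(s) for s in signals])
scores = PCA(n_components=2).fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print("cluster labels:", labels)
```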
Morón-Castañeda, L H; Useche-Bernal, A; Morales-Reyes, O L; Mojica-Figueroa, I L; Palacios-Carlos, A; Ardila-Gómez, C E; Parra-Ardila, M V; Martínez-Nieto, O; Sarmiento-Echeverri, N; Rodríguez, C A; Alvarado-Heine, C; Isaza-Ruget, M A
2015-01-01
The application of the Lean methodology in health institutions is an effective tool to improve capacity and workflow, as well as to increase the level of satisfaction of patients and employees. To optimise the time of outpatient care in a clinical laboratory, by implementing a methodology based on the organisation of operational procedures to improve user satisfaction and reduce the number of complaints about delays in care. A quasi-experimental before-and-after study was conducted from October 2011 to September 2012. XBar and S charts were used to observe the mean service times and standard deviation. User satisfaction was assessed using service questionnaires. A reduction of 17 minutes was observed in the time of patient care from arrival to leaving the laboratory, and a decrease of 60% in complaints about delays in care. Despite the high staff turnover and a 38% increase in the number of patients seen, a culture of empowerment and continuous improvement was acquired, along with greater efficiency and productivity in the care process, reflected in standards being maintained 12 months after implementation. Lean is a viable methodology for clinical laboratory procedures, improving their efficiency and effectiveness. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
Methodological Concerns in Experimental Reading Research: All That Glitters...
ERIC Educational Resources Information Center
Henk, William A.
1987-01-01
Describes the nature and consequences of liberally or improperly applying the traditional reading research methodology and provides an argument for tempering judgments about the relative contributions that experimental studies make to the professional literature in reading. (SKC)
Execution Of Systems Integration Principles During Systems Engineering Design
2016-09-01
This thesis discusses integration failures observed by DOD and non-DOD systems as inadequate stakeholder analysis, incomplete problem space and design... design, development, test and deployment of a system. A lifecycle structure consists of phases within a methodology or process model. There are many... investigate design decisions without the need to commit to physical forms; "experimental investigation using a model yields design or operational
Jaciw, Andrew P
2016-06-01
Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. Students in Grades K-3 in 79 schools in Tennessee; students in Grades 4-8 in 82 schools in Alabama. Grades K-3 Stanford Achievement Test (SAT) in reading and math scores; Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.
Zeeman, Heidi; Kendall, Elizabeth; Whitty, Jennifer A; Wright, Courtney J; Townsend, Clare; Smith, Dianne; Lakhani, Ali; Kennerley, Samantha
2016-03-15
Identifying the housing preferences of people with complex disabilities is a much-needed but under-developed area of practice and scholarship. Despite the recognition that housing is a social determinant of health and quality of life, there is an absence of empirical methodologies that can practically and systematically involve consumers in this complex service delivery and housing design market. A rigorous process for making effective and consistent development decisions is needed to ensure resources are used effectively and the needs of consumers with complex disability are properly met. This 3-year project aims to identify how the public and private housing market in Australia can better respond to the needs of people with complex disabilities whilst simultaneously achieving key corporate objectives. First, using the Customer Relationship Management framework, qualitative (Nominal Group Technique) and quantitative (Discrete Choice Experiment) methods will be used to quantify the housing preferences of consumers and their carers. A systematic mixed-method, quasi-experimental design will then be used to quantify the development priorities of other key stakeholders (e.g., architects, developers, government housing services) in relation to inclusive housing for people with complex disabilities. Stakeholders randomly assigned to Group 1 (experimental group) will participate in a series of focus groups employing Analytic Hierarchy Process (AHP) methodology. Stakeholders randomly assigned to Group 2 (control group) will participate in focus groups employing existing decision-making processes for inclusive housing development (e.g., Risk, Opportunity, Cost, Benefit considerations). Using comparative stakeholder analysis, this research design will enable the AHP methodology (a proposed tool to guide inclusive housing development decisions) to be tested. It is anticipated that the findings of this study will enable stakeholders to incorporate consumer housing preferences into commercial decisions. Housing designers and developers will benefit from the creation of a parsimonious set of consumer-led housing preferences by which to make informed investments in future housing and contribute to future housing policy. The research design has not previously been applied in the Australian research context or elsewhere, and will provide a much-needed blueprint for market investment to develop viable, consumer-directed inclusive housing options for people with complex disability.
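Numerically, the AHP step the protocol assigns to the experimental focus groups reduces to extracting a priority vector from a pairwise comparison matrix and checking consistency. The sketch below uses hypothetical criteria and judgments, not data from the study.

```python
# Illustrative AHP step: priority weights from the principal
# eigenvector of a Saaty-scale pairwise comparison matrix.
import numpy as np

# Hypothetical judgments for three criteria:
# accessibility vs cost vs location
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                              # priority weights

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)         # consistency index
cr = ci / 0.58                            # random index RI = 0.58 for n = 3
print("weights:", w.round(3), " consistency ratio:", round(cr, 3))
```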
NASA Astrophysics Data System (ADS)
Shaylinda, M. Z. N.; Hamidi, A. A.; Mohd, N. A.; Ariffin, A.; Irvan, D.; Hazreek, Z. A. M.; Nizam, Z. M.
2018-04-01
In this research, the performance of polyferric chloride and tapioca flour as composite coagulants for partially stabilized leachate was investigated. Response surface methodology (RSM) was used to optimize the coagulation and flocculation process of partially stabilized leachate. Central composite design, a standard design tool in RSM, was applied to evaluate the interactions and effects of dose and pH. A dose of 0.2 g/L Fe and a pH of 4.71 were the optimum values suggested by RSM. An experimental test based on the optimum condition resulted in 95.9%, 94.6% and 50.4% removal of SS, color and COD, respectively. The percentage difference recorded between experimental and model responses was <5%. Therefore, it can be concluded that RSM is an appropriate optimization tool for the coagulation and flocculation process.
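For readers unfamiliar with the design, a two-factor central composite design like the one used here can be written out directly in coded units. The sketch below does so; the rotatable axial distance and the mapping to dose and pH ranges are illustrative choices, not taken from the study.

```python
# Minimal two-factor CCD in coded units: 4 factorial points, 4 axial
# points at +/- alpha, and replicated centre points.
import numpy as np

alpha = np.sqrt(2)                       # usual rotatable choice (assumed)
factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0, 0)] * 5                    # replicated centre points
design = np.array(factorial + axial + center)

# Map coded levels to hypothetical real ranges: dose 0.05-0.35 g/L Fe,
# pH 3-7 (illustrative bounds, not the study's exact ones)
dose = 0.20 + design[:, 0] * 0.15 / alpha
ph = 5.0 + design[:, 1] * 2.0 / alpha
for d, p in zip(dose, ph):
    print(f"run: dose={d:.3f} g/L Fe, pH={p:.2f}")
```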
Metamaterial bricks and quantization of meta-surfaces
Memoli, Gianluca; Caleap, Mihai; Asakawa, Michihiro; Sahoo, Deepak R.; Drinkwater, Bruce W.; Subramanian, Sriram
2017-01-01
Controlling acoustic fields is crucial in diverse applications such as loudspeaker design, ultrasound imaging and therapy or acoustic particle manipulation. The current approaches use fixed lenses or expensive phased arrays. Here, using a process of analogue-to-digital conversion and wavelet decomposition, we develop the notion of quantal meta-surfaces. The quanta here are small, pre-manufactured three-dimensional units—which we call metamaterial bricks—each encoding a specific phase delay. These bricks can be assembled into meta-surfaces to generate any diffraction-limited acoustic field. We apply this methodology to show experimental examples of acoustic focusing, steering and, after stacking single meta-surfaces into layers, the more complex field of an acoustic tractor beam. We demonstrate experimentally single-sided air-borne acoustic levitation using meta-layers at various bit-rates: from a 4-bit uniform to 3-bit non-uniform quantization in phase. This powerful methodology dramatically simplifies the design of acoustic devices and provides a key-step towards realizing spatial sound modulators. PMID:28240283
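The quantization idea above fits in a few lines of code: compute the continuous phase profile a flat meta-surface needs to focus at a point, then snap each element to the nearest of 2^bits available brick delays. The geometry and frequency below are illustrative assumptions, not the paper's hardware.

```python
# Hedged sketch: continuous focusing phase profile, then 4-bit
# quantisation to discrete "brick" delays.
import numpy as np

f, c = 40e3, 343.0                 # 40 kHz in air (assumed)
k = 2 * np.pi * f / c              # wavenumber
pitch = c / f / 2                  # half-wavelength element pitch
focal = np.array([0.0, 0.0, 0.05]) # focus 5 cm above the surface

xs = (np.arange(16) - 7.5) * pitch # 16 x 16 element grid
X, Y = np.meshgrid(xs, xs)
r = np.sqrt((X - focal[0])**2 + (Y - focal[1])**2 + focal[2]**2)
phase = (-k * r) % (2 * np.pi)     # continuous focusing profile

bits = 4
level = 2 * np.pi / 2**bits
phase_q = np.round(phase / level) * level     # nearest brick delay
print("worst-case quantisation error (rad):", np.abs(phase_q - phase).max())
```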
Rébufa, Catherine; Pany, Inès; Bombarda, Isabelle
2018-09-30
A rapid methodology was developed to simultaneously predict the water content and water activity (aw) of Moringa oleifera leaf powders (MOLP) using near infrared (NIR) signatures and experimental sorption isotherms. NIR spectra of MOLP samples (n = 181) were recorded. A partial least squares regression model (PLS2) was obtained with low standard errors of prediction (SEP of 1.8% and 0.07 for water content and aw, respectively). Experimental sorption isotherms obtained at 20, 30 and 40 °C showed similar profiles. This result is particularly important for the use of MOLP in the food industry: a temperature variation in the drying process will not affect the available water content (shelf-life). Nutrient contents based on protein and selected minerals (Ca, Fe, K) were also predicted from PLS1 models. Protein contents were well predicted (SEP of 2.3%). This methodology allows for an improvement in MOLP safety, quality control and traceability. Published by Elsevier Ltd.
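A minimal sketch of the PLS2 step, one model jointly predicting water content and aw from spectra, is shown below using scikit-learn; the spectra and calibration values are synthetic stand-ins for the 181 recorded samples.

```python
# Hedged PLS2 sketch: one partial least squares model with two
# response columns (water content, a_w), fitted on synthetic spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.standard_normal((181, 700))            # hypothetical NIR absorbances
true_w = rng.standard_normal(700) / 26
Y = np.column_stack([X @ true_w + 8,           # water content, %
                     0.05 * (X @ true_w) + 0.4])  # a_w
Y += 0.05 * rng.standard_normal(Y.shape)

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)
pls2 = PLSRegression(n_components=10).fit(Xtr, Ytr)
sep = np.sqrt(((pls2.predict(Xte) - Yte) ** 2).mean(axis=0))
print("SEP (water %, a_w):", sep.round(3))
```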
NASA Astrophysics Data System (ADS)
Muñoz, G. A. López; González, R. F. López; López, J. A. Balderas; Martínez-Pérez, L.
2011-05-01
Photoacoustic methodology in the transmission configuration (PMTC) was used to study the thermophysical properties of Mexican citrus essential oils and their relation to composition, demonstrating the viability of photothermal techniques for quality control and for authentication of oils and detection of their adulteration. Linear relations for the amplitude (on a semi-log scale) and phase, as functions of sample thickness, were obtained by fitting a theoretical model for the PMTC to the experimental data, yielding thermal-diffusivity measurements in Mexican orange, pink grapefruit, mandarin, and lime type A centrifuged essential oils, and in Mexican distilled lime essential oil. Gas chromatography for distilled lime essential oil and centrifuged lime essential oil type A is reported to complement the study. Experimental results showed close thermal-diffusivity values among the Mexican citrus essential oils obtained by centrifugation, but a significant difference for distilled lime oil relative to its centrifuged counterpart, which is due to the different chemical compositions arising from the extraction processes.
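The diffusivity extraction reduces to a semi-log linear fit: in the thermally thick regime the transmitted amplitude decays as exp(-L*sqrt(pi*f/alpha)), so the slope of ln(amplitude) versus thickness yields alpha. The sketch below assumes an illustrative modulation frequency, diffusivity, and noise level, not the paper's measurements.

```python
# Hedged sketch: recover thermal diffusivity alpha from the slope of
# ln(amplitude) vs sample thickness L, slope = -sqrt(pi*f/alpha).
import numpy as np

f = 1.0                                   # Hz, modulation frequency (assumed)
alpha_true = 8.0e-8                       # m^2/s, typical oil value (assumed)
L = np.linspace(50e-6, 400e-6, 10)        # sample thicknesses, m
amp = np.exp(-L * np.sqrt(np.pi * f / alpha_true))
amp *= 1 + 0.01 * np.random.default_rng(3).standard_normal(L.size)

slope, _ = np.polyfit(L, np.log(amp), 1)  # semi-log linear fit
alpha_est = np.pi * f / slope**2          # invert the slope relation
print(f"recovered alpha = {alpha_est:.2e} m^2/s")
```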
Gorlin, Yelena; Jaramillo, Thomas F.
2014-01-01
The selection of an appropriate substrate is an important initial step for many studies of electrochemically active materials. In order to help researchers with the substrate selection process, we employ a consistent experimental methodology to evaluate the electrochemical reactivity and stability of seven potential substrate materials for electrocatalyst and photoelectrode evaluation. Using cyclic voltammetry with a progressively increased scan range, we characterize three transparent conducting oxides (indium tin oxide, fluorine-doped tin oxide, and aluminum-doped zinc oxide) and four opaque conductors (gold, stainless steel 304, glassy carbon, and highly oriented pyrolytic graphite) in three different electrolytes (sulfuric acid, sodium acetate, and sodium hydroxide). We determine the inert potential window for each substrate/electrolyte combination and make recommendations about which materials may be most suitable for application under different experimental conditions. Furthermore, the testing methodology provides a framework for other researchers to evaluate and report the baseline activity of other substrates of interest to the broader community. PMID:25357131
A new approach to synthesis of benzyl cinnamate: Optimization by response surface methodology.
Zhang, Dong-Hao; Zhang, Jiang-Yan; Che, Wen-Cai; Wang, Yun
2016-09-01
In this work, a new approach to the synthesis of benzyl cinnamate by enzymatic esterification of cinnamic acid with benzyl alcohol is optimized by response surface methodology. The effects of various reaction conditions, including temperature, enzyme loading, substrate molar ratio of benzyl alcohol to cinnamic acid, and reaction time, are investigated. A 5-level-4-factor central composite design is employed to search for the optimal yield of benzyl cinnamate. A quadratic polynomial regression model is used to analyze the experimental data at a 95% confidence level (P<0.05). The coefficient of determination of this model is found to be 0.9851. Three sets of optimum reaction conditions are established, and verified experimental trials are performed to validate the optimum points. Under the optimum conditions (40°C, 31 mg/mL enzyme loading, 2.6:1 molar ratio, 27 h), the yield reaches 97.7%, which provides an efficient process for the industrial production of benzyl cinnamate. Copyright © 2016 Elsevier Ltd. All rights reserved.
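The quadratic RSM fit reported above (the authors quote R^2 = 0.9851 on their data) can be reproduced in form, though not in numbers, with a few lines of least squares. The sketch below fits a full second-order model to synthetic four-factor data.

```python
# Hedged sketch: full quadratic (second-order) response surface model
# in four coded factors, fitted by ordinary least squares.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (30, 4))            # coded factor settings
y = 95 - 3*X[:, 0]**2 - 2*X[:, 1]**2 + 1.5*X[:, 0]*X[:, 2] \
    + 2*X[:, 3] + rng.normal(0, 0.5, 30)   # hypothetical yield, %

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2 on the synthetic data:", round(model.score(quad.transform(X), y), 4))
```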
A method to identify and analyze biological programs through automated reasoning
Yordanov, Boyan; Dunn, Sara-Jane; Kugler, Hillel; Smith, Austin; Martello, Graziano; Emmott, Stephen
2016-01-01
Predictive biology is elusive because rigorous, data-constrained, mechanistic models of complex biological systems are difficult to derive and validate. Current approaches tend to construct and examine static interaction network models, which are descriptively rich, but often lack explanatory and predictive power, or dynamic models that can be simulated to reproduce known behavior. However, in such approaches implicit assumptions are introduced as typically only one mechanism is considered, and exhaustively investigating all scenarios is impractical using simulation. To address these limitations, we present a methodology based on automated formal reasoning, which permits the synthesis and analysis of the complete set of logical models consistent with experimental observations. We test hypotheses against all candidate models, and remove the need for simulation by characterizing and simultaneously analyzing all mechanistic explanations of observed behavior. Our methodology transforms knowledge of complex biological processes from sets of possible interactions and experimental observations to precise, predictive biological programs governing cell function. PMID:27668090
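A toy version of the synthesis idea, enumerating every mechanism consistent with an observation instead of simulating one candidate, can be written with an SMT solver. The sketch below uses Z3 with two candidate update rules for a hypothetical gene; it illustrates the principle only, not the authors' encoding.

```python
# Hedged sketch: selector variables encode which Boolean update rule
# for gene G is active; an observation prunes the candidate set, and
# blocking clauses enumerate all surviving mechanisms.
from z3 import Bools, Bool, Solver, Or, And, Not, Implies, sat, is_true

a, b, g_next = Bools("a b g_next")
use_and = Bool("use_and")          # selector: which mechanism is active
use_or = Bool("use_or")

s = Solver()
s.add(Or(use_and, use_or), Not(And(use_and, use_or)))   # pick exactly one
s.add(Implies(use_and, g_next == And(a, b)))
s.add(Implies(use_or, g_next == Or(a, b)))
# One experimental observation: a=1, b=0 yields G on at the next step
s.add(a == True, b == False, g_next == True)

while s.check() == sat:
    m = s.model()
    print("consistent mechanism:", "AND" if is_true(m[use_and]) else "OR")
    s.add(use_and != m[use_and])   # block this mechanism, look for others
```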
Nanoscale Fe/Ag particles activated persulfate: optimization using response surface methodology.
Silveira, Jefferson E; Barreto-Rodrigues, Marcio; Cardoso, Tais O; Pliego, Gema; Munoz, Macarena; Zazo, Juan A; Casas, José A
2017-05-01
This work studied bimetallic Fe-Ag nanoparticle (nZVI-Ag) activation of persulfate (PS) in aqueous solution using response surface methodology. A Box-Behnken design (BBD) was employed to optimize three parameters (nZVI-Ag dose, reaction temperature, and PS concentration) using 4-chlorophenol (4-CP) as the target pollutant. The synthesis of nZVI-Ag particles was carried out through reduction of FeCl2 with NaBH4, followed by reductive deposition of Ag. The catalyst was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and Brunauer-Emmett-Teller (BET) surface area. The BBD was considered a satisfactory model to optimize the process. Confirmatory tests were carried out using predicted and experimental values under the optimal conditions (50 mg/L nZVI-Ag, 21 mM PS at 57 °C); the complete removal of 4-CP achieved experimentally was successfully predicted by the model, whereas the predicted mineralization degree (90%) was slightly overestimated against the measured value (83%).
Benck, Jesse D.; Pinaud, Blaise A.; Gorlin, Yelena; ...
2014-10-30
The selection of an appropriate substrate is an important initial step for many studies of electrochemically active materials. In order to help researchers with the substrate selection process, we employ a consistent experimental methodology to evaluate the electrochemical reactivity and stability of seven potential substrate materials for electrocatalyst and photoelectrode evaluation. Using cyclic voltammetry with a progressively increased scan range, we characterize three transparent conducting oxides (indium tin oxide, fluorine-doped tin oxide, and aluminum-doped zinc oxide) and four opaque conductors (gold, stainless steel 304, glassy carbon, and highly oriented pyrolytic graphite) in three different electrolytes (sulfuric acid, sodium acetate, and sodium hydroxide). Here, we determine the inert potential window for each substrate/electrolyte combination and make recommendations about which materials may be most suitable for application under different experimental conditions. Furthermore, the testing methodology provides a framework for other researchers to evaluate and report the baseline activity of other substrates of interest to the broader community.
Arden, Sarah V; Pentimonti, Jill M; Cooray, Rochana; Jackson, Stephanie
2017-07-01
This investigation employs categorical content analysis processes as a mechanism to examine trends and issues in a sampling of highly cited (100+) literature in special education journals. The authors had two goals: (a) broadly identifying trends across publication type, content area, and methodology and (b) specifically identifying articles with disaggregated outcomes for students with learning disabilities (LD). Content analyses were conducted across highly cited (100+) articles published during a 20-year period (1992-2013) in a sample (n = 3) of journals focused primarily on LD, and in one broad, cross-categorical journal recognized for its impact in the field. Results indicated trends in the article type (i.e., commentary and position papers), content (i.e., reading and behavior), and methodology (i.e., small proportions of experimental and quasi-experimental designs). Results also revealed stability in the proportion of intervention research studies when compared to previous analyses and a decline in the proportion of those that disaggregated data specifically for students with LD.
Linking microbial community structure and microbial processes: An empirical and conceptual overview
Bier, R.L.; Bernhardt, Emily S.; Boot, Claudia M.; Graham, Emily B.; Hall, Edward K.; Lennon, Jay T.; Nemergut, Diana R.; Osborne, Brooke B.; Ruiz-Gonzalez, Clara; Schimel, Joshua P.; Waldrop, Mark P.; Wallenstein, Matthew D.
2015-01-01
A major goal of microbial ecology is to identify links between microbial community structure and microbial processes. Although this objective seems straightforward, there are conceptual and methodological challenges to designing studies that explicitly evaluate this link. Here, we analyzed literature documenting structure and process responses to manipulations to determine the frequency of structure-process links and whether experimental approaches and techniques influence link detection. We examined nine journals (published 2009–13) and retained 148 experimental studies measuring microbial community structure and processes. Many qualifying papers (112 of 148) documented structure and process responses, but few (38 of 112 papers) reported statistically testing for a link. Of these tested links, 75% were significant and typically used Spearman or Pearson's correlation analysis (68%). No particular approach for characterizing structure or processes was more likely to produce significant links. Process responses were detected earlier on average than responses in structure or both structure and process. Together, our findings suggest that few publications report statistically testing structure-process links. However, when links are tested for they often occur but share few commonalities in the processes or structures that were linked and the techniques used for measuring them.
Sakkas, Vasilios A; Islam, Md Azharul; Stalikas, Constantine; Albanis, Triantafyllos A
2010-03-15
The use of chemometric methods such as response surface methodology (RSM) based on statistical design of experiments (DOE) is becoming increasingly widespread in sciences such as analytical chemistry, engineering and environmental chemistry; applied catalysis is certainly no exception. It is clear that photocatalytic processes coupled with chemometric experimental design play a crucial role in the ability to reach the optimum of the catalytic reactions. The present article reviews the major applications of RSM in modern experimental design combined with photocatalytic degradation processes. Moreover, the theoretical principles and designs that enable a polynomial regression equation to be obtained, expressing the influence of process parameters on the response, are thoroughly discussed. An original experimental work, the photocatalytic degradation of the dye Congo red (CR) using TiO2 suspensions and H2O2 in natural surface water (river water), is comprehensively described as a case study, in order to provide sufficient guidelines to deal with this subject in a rational and integrated way. (c) 2009 Elsevier B.V. All rights reserved.
Tesfaye, Tamrat; Sithole, Bruce; Ramjugernath, Deresh; Ndlela, Luyanda
2018-02-01
Commercially processed, untreated chicken feathers are biologically hazardous due to the presence of blood-borne pathogens. Prior to valorisation, it is crucial that they are decontaminated to remove the microbial contamination. The present study focuses on evaluating the best technologies to decontaminate and pre-treat chicken feathers in order to make them suitable for valorisation. Waste chicken feathers were washed with three surfactants (sodium dodecyl sulphate, dimethyl dioctadecyl ammonium chloride, and polyoxyethylene (40) stearate) using statistically designed experiments. Process conditions were optimised using response surface methodology with a Box-Behnken experimental design. The data were compared with decontamination using an autoclave. Under optimised conditions, the microbial counts of the decontaminated and pre-treated chicken feathers were significantly reduced, making them safe for handling and use in valorisation applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Erva, Rajeswara Reddy; Goswami, Ajgebi Nath; Suman, Priyanka; Vedanabhatla, Ravali; Rajulapati, Satish Babu
2017-03-16
The culture conditions and nutritional rations influencing the production of extracellular antileukemic enzyme by the novel Enterobacter aerogenes KCTC2190/MTCC111 were optimized in shake-flask culture. Process variables such as pH, temperature, incubation time, carbon and nitrogen sources, inducer concentration, and inoculum size were taken into account. In the present study, the highest enzyme activity achieved by the traditional one-variable-at-a-time method was 7.6 IU/mL, a 2.6-fold increase over the initial value. Further, L-asparaginase production was optimized using response surface methodology, and the validated experimental result at the optimized process variables gave 18.35 IU/mL of L-asparaginase activity, 2.4 times higher than that of the traditional optimization approach. The study establishes E. aerogenes MTCC111 as a potent and potential bacterial source for high yields of this antileukemic drug.
Acoustic evidence for phonologically mismatched speech errors.
Gormley, Andrea
2015-04-01
Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated, or mismatch, errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.
Rodrigues, Sueli; Pinto, Gustavo A S; Fernandes, Fabiano A N
2008-01-01
Coconut is a tropical fruit largely consumed in many countries. In some areas of the Brazilian coast, coconut shell represents more than 60% of the domestic waste volume. The coconut shell is composed mainly of lignin and cellulose, having a chemical composition very similar to wood and suitable for phenolic extraction. In this work, the use of ultrasound to extract phenolic compounds from coconut shell was evaluated. The effects of temperature, solution-to-solid ratio, pH and extraction time were evaluated through a 2^4 factorial experimental design. The extraction process was also optimized using response surface methodology. At the optimum operating condition (30 °C, solution-to-solid ratio of 50, 15 min of extraction and pH 6.5), the process yielded 22.44 mg of phenolic compounds per gram of coconut shell.
NASA Astrophysics Data System (ADS)
Kouloumentas, Christos
2011-09-01
The concept of the all-fiberized multi-wavelength regenerator is analyzed, and the design methodology for operation at 40 Gb/s is presented. The specific methodology has been applied in the past for the experimental proof-of-principle of the technique, but it has never been reported in detail. The regenerator is based on a strong dispersion map that is implemented using alternating dispersion compensating fibers (DCF) and single-mode fibers (SMF), and minimizes the nonlinear interaction between the wavelength-division multiplexing (WDM) channels. The optimized regenerator design with + 0.86 ps/nm/km average dispersion of the nonlinear fiber section is further investigated. The specific design is capable of simultaneously processing five WDM channels with 800 GHz channel spacing and providing Q-factor improvement higher than 1 dB for each channel. The cascadeability of the regenerator is also indicated using a 6-node metropolitan network simulation model.
Advanced applications of numerical modelling techniques for clay extruder design
NASA Astrophysics Data System (ADS)
Kandasamy, Saravanakumar
Ceramic materials play a vital role in our day to day life. Recent advances in research, manufacture and processing techniques and production methodologies have broadened the scope of ceramic products such as bricks, pipes and tiles, especially in the construction industry. These are mainly manufactured using an extrusion process in auger extruders. During their long history of application in the ceramic industry, most of the design developments of extruder systems have resulted from expensive laboratory-based experimental work and field-based trial and error runs. In spite of these design developments, the auger extruders continue to be energy intensive devices with high operating costs. Limited understanding of the physical processes involved, together with the cost and time requirements of lab-based experiments, were found to be the major obstacles in the further development of auger extruders. An attempt has been made herein to use Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) based numerical modelling techniques to reduce the costs and time associated with research into design improvement by experimental trials. These two techniques, although used widely in other engineering applications, have rarely been applied for auger extruder development. This had been due to a number of reasons including technical limitations of the CFD tools previously available. Modern CFD and FEA software packages have much enhanced capabilities and allow the modelling of the flow of complex fluids such as clay. This research work presents a methodology using a Herschel-Bulkley fluid flow based CFD model to simulate and assess the flow of clay-water mixture through the extruder and the die of a vacuum de-airing type clay extrusion unit used in ceramic extrusion. The extruder design and the operating parameters were varied to study their influence on the power consumption and the extrusion pressure. The model results were then validated using results from experimental trials on a scaled extruder, which seemed to be in reasonable agreement with the former. The modelling methodology was then extended to full-scale industrial extruders. The technical and commercial suitability of using lightweight materials to manufacture extruder components was also investigated. The stress and deformation induced on the components, due to extrusion pressure, were analysed using FEA and suitable alternative materials were identified. A cost comparison was then made for different extruder materials. The results show the potential for significant technical and commercial benefits to the ceramic industry.
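The constitutive law at the heart of the CFD model is compact enough to state directly: Herschel-Bulkley fluids obey tau = tau0 + K*gamma_dot^n, which gives the apparent viscosity used for clay-water pastes. The parameter values below are illustrative, not fitted to the thesis data.

```python
# Hedged sketch of the Herschel-Bulkley law: apparent viscosity
# mu_app = tau / gamma_dot = tau0/gamma_dot + K*gamma_dot**(n-1).
import numpy as np

def hb_apparent_viscosity(gamma_dot, tau0=20.0, K=15.0, n=0.4):
    """Apparent viscosity (Pa.s) of a Herschel-Bulkley fluid."""
    gamma_dot = np.maximum(gamma_dot, 1e-6)   # avoid the singular limit
    return tau0 / gamma_dot + K * gamma_dot**(n - 1)

for rate in [0.1, 1.0, 10.0, 100.0]:          # shear rates, 1/s
    print(f"gamma_dot={rate:6.1f} 1/s -> mu_app={hb_apparent_viscosity(rate):8.2f} Pa.s")
```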
Alternative mRNA polyadenylation in eukaryotes: an effective regulator of gene expression
Lutz, Carol S.; Moreira, Alexandra
2010-01-01
Alternative RNA processing mechanisms, including alternative splicing and alternative polyadenylation, are increasingly recognized as important regulators of gene expression. This article will focus on what has recently been described about alternative polyadenylation in development, differentiation, and disease in higher eukaryotes. We will also describe how the evolving global methodologies for examining the cellular transcriptome, both experimental and bioinformatic, are revealing new details about the complex nature of alternative 3′ end formation, as well as interactions with other RNA-mediated and RNA processing mechanisms. PMID:21278855
Nutrient Stress Detection in Corn Using Neural Networks and AVIRIS Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Estep, Lee
2001-01-01
AVIRIS image cube data have been processed for the detection of nutrient stress in corn by both known ratio-type algorithms and by trained neural networks. The USDA Shelton, NE, ARS Variable Rate Nitrogen Application (VRAT) experimental farm was the site used in the study. Upon application of ANOVA and Dunnett multiple comparison tests to the outcomes of both the neural network processing and the ratio-type algorithms, it was found that the neural network methodology provides a better overall capability to separate nutrient-stressed crops from in-field controls.
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Sairi, Nor Asrina
2016-10-01
While the oil palm industry has been recognized for its contribution towards economic growth and rapid development, it has also contributed to environmental pollution due to the production of huge quantities of by-products from the oil extraction process. A phytoremediation technique (floating Vetiver system) was used to treat Palm Oil Mill Secondary Effluent (POMSE). A batch study using 40 L treatment tanks was carried out under different conditions, and Response Surface Methodology (RSM) was applied to optimize the treatment process. A three-factor central composite design (CCD) was used to model the experimental variables (POMSE concentration, Vetiver plant density and time). An extraordinary decrease in organic matter, as measured by BOD and COD (96% and 94%, respectively), was recorded during the experimental duration of 4 weeks using a density of 30 Vetiver plants. The best and lowest final BOD of 2 mg/L was obtained when using 15 Vetiver plants after 13 days for low-concentration POMSE (initial BOD = 50 mg/L). The next best result, a BOD of 32 mg/L, was obtained when using 30 Vetiver plants after 24 days for medium-concentration POMSE (initial BOD = 175 mg/L). These results confirmed the validity of the model, and the experimental values were quite close to the predicted values, implying that the empirical model derived from the RSM experimental design can adequately describe the relationship between the independent variables and the response. The study showed that the Vetiver system is an effective method of treating POMSE. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bateman, K. J.; Capson, D. D.
2004-03-29
Argonne National Laboratory (ANL) has developed a process to immobilize waste salt containing fission products, uranium, and transuranic elements as chlorides in a glass-bonded ceramic waste form. This salt was generated in the electrorefining operation used in the electrometallurgical treatment of spent Experimental Breeder Reactor-II (EBR-II) fuel. The ceramic waste process culminates with an elevated temperature operation. The processing conditions used by the furnace, for demonstration scale and production scale operations, are to be developed at Argonne National Laboratory-West (ANL-West). To assist in selecting the processing conditions of the furnace and to reduce the number of costly experiments, a finite difference model was developed to predict the consolidation of the ceramic waste. The model accurately predicted the heating as well as the bulk density of the ceramic waste form. The methodology used to develop the computer model and a comparison of the analysis to experimental data is presented.
NASA Technical Reports Server (NTRS)
Powell, W. B.
1973-01-01
Thrust chamber performance is evaluated in terms of an analytical model incorporating all the loss processes that occur in a real rocket motor. The important loss processes in the real thrust chamber were identified, and a methodology and recommended procedure for predicting real thrust chamber vacuum specific impulse were developed. Simplified equations for the calculation of vacuum specific impulse are developed to relate the delivered performance (both vacuum specific impulse and characteristic velocity) to the ideal performance as degraded by the losses corresponding to a specified list of loss processes. These simplified equations enable the various performance loss components, and the corresponding efficiencies, to be quantified separately (except that interaction effects are arbitrarily assigned in the process). The loss and efficiency expressions presented can be used to evaluate experimentally measured thrust chamber performance, to direct development effort into the areas most likely to yield improvements in performance, and as a basis to predict performance of related thrust chamber configurations.
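The loss bookkeeping described above amounts to degrading the ideal vacuum specific impulse by a product of loss efficiencies. The toy calculation below uses made-up efficiency values to show the form of the decomposition, not the report's recommended numbers.

```python
# Worked toy example: delivered vacuum Isp as ideal Isp multiplied by
# independent loss efficiencies (all values illustrative).
eta = {
    "energy_release": 0.985,   # incomplete combustion
    "divergence": 0.990,       # two-dimensional nozzle flow
    "boundary_layer": 0.988,   # viscous drag / heat loss
    "kinetics": 0.995,         # finite-rate recombination
}
isp_ideal = 450.0              # s, one-dimensional equilibrium value

isp_delivered = isp_ideal
for name, e in eta.items():
    isp_delivered *= e
    print(f"after {name:16s}: {isp_delivered:6.1f} s")
```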
Optimization of electrocoagulation process for the treatment of landfill leachate
NASA Astrophysics Data System (ADS)
Huda, N.; Raman, A. A.; Ramesh, S.
2017-06-01
The main problem of landfill leachate is its diverse composition, comprising persistent organic pollutants (POPs) which must be removed before discharge into the environment. In this study, the treatment of leachate using electrocoagulation (EC) was investigated. Iron was used as both the anode and the cathode. Response surface methodology was used for experimental design and to study the effects of operational parameters. A central composite design was used to study the effects of initial pH, inter-electrode distance, and electrolyte concentration on color and COD removals. The process could remove up to 84% color and 49.5% COD. The experimental data were fitted to second-order polynomial equations. All three factors were found to significantly affect color removal; electrolyte concentration was the most significant parameter affecting COD removal. Numerical optimization was conducted to obtain the optimum process performance. Further work will be conducted towards integrating EC with other wastewater treatment processes such as electro-Fenton.
Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.
2009-01-01
The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.
The BCD of response time analysis in experimental economics.
Spiliopoulos, Leonidas; Ortmann, Andreas
2018-01-01
For decisions in the wild, time is of the essence. Available decision time is often cut short through natural or artificial constraints, or is impinged upon by the opportunity cost of time. Experimental economists have only recently begun to conduct experiments with time constraints and to analyze response time (RT) data, in contrast to experimental psychologists. RT analysis has proven valuable for the identification of individual and strategic decision processes including identification of social preferences in the latter case, model comparison/selection, and the investigation of heuristics that combine speed and performance by exploiting environmental regularities. Here we focus on the benefits, challenges, and desiderata of RT analysis in strategic decision making. We argue that unlocking the potential of RT analysis requires the adoption of process-based models instead of outcome-based models, and discuss how RT in the wild can be captured by time-constrained experiments in the lab. We conclude that RT analysis holds considerable potential for experimental economics, deserves greater attention as a methodological tool, and promises important insights on strategic decision making in naturally occurring environments.
The speed-accuracy tradeoff: history, physiology, methodology, and behavior
Heitz, Richard P.
2014-01-01
There are few behavioral effects as ubiquitous as the speed-accuracy tradeoff (SAT). From insects to rodents to primates, the tendency for decision speed to covary with decision accuracy seems an inescapable property of choice behavior. Recently, the SAT has received renewed interest, as neuroscience approaches begin to uncover its neural underpinnings and computational models are compelled to incorporate it as a necessary benchmark. The present work provides a comprehensive overview of SAT. First, I trace its history as a tractable behavioral phenomenon and the role it has played in shaping mathematical descriptions of the decision process. Second, I present a “users guide” of SAT methodology, including a critical review of common experimental manipulations and analysis techniques and a treatment of the typical behavioral patterns that emerge when SAT is manipulated directly. Finally, I review applications of this methodology in several domains. PMID:24966810
GT-CATS: Tracking Operator Activities in Complex Systems
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Mitchell, Christine M.; Palmer, Everett A.
1999-01-01
Human operators of complex dynamic systems can experience difficulties supervising advanced control automation. One remedy is to develop intelligent aiding systems that can provide operators with context-sensitive advice and reminders. The research reported herein proposes, implements, and evaluates a methodology for activity tracking, a form of intent inferencing that can supply the knowledge required for an intelligent aid by constructing and maintaining a representation of operator activities in real time. The methodology was implemented in the Georgia Tech Crew Activity Tracking System (GT-CATS), which predicts and interprets the actions performed by Boeing 757/767 pilots navigating using autopilot flight modes. This report first describes research on intent inferencing and complex modes of automation. It then provides a detailed description of the GT-CATS methodology, knowledge structures, and processing scheme. The results of an experimental evaluation using airline pilots are given. The results show that GT-CATS was effective in predicting and interpreting pilot actions in real time.
Cordeiro, Liliana; Valente, Inês M; Santos, João Rodrigo; Rodrigues, José A
2018-05-01
In this work, an analytical methodology for the characterization of volatile carbonyl compounds in green and roasted coffee beans was developed. The methodology relied on a recent and simple sample preparation technique, gas-diffusion microextraction, for extraction of the samples' volatiles, followed by HPLC-DAD-MS/MS analysis. The experimental conditions, in terms of extraction temperature and extraction time, were studied. A carbonyl-compound profile was obtained for both arabica and robusta coffee species (green and roasted samples). Twenty-seven carbonyl compounds were identified and further discussed, in light of the reported literature, in relation to different coffee characteristics: coffee ageing, organoleptic impact, presence of defective beans, authenticity, implications for human health, post-harvest coffee processing and roasting. The applied methodology proved to be a powerful analytical tool for coffee characterization, as it measures marker compounds of different coffee characteristics. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shpotyuk, Ya; Cebulski, J.; Ingram, A.; Shpotyuk, O.
2017-12-01
The methodological possibilities of positron annihilation lifetime (PAL) spectroscopy applied to nanostructurized substances, treated within a three-term fitting procedure, are reconsidered to parameterize their atomic-deficient structural arrangement. In contrast to conventional three-term fitting analysis of the detected PAL spectra based on admixed positron trapping and positronium (Ps) decay, the nanostructurization due to guest nanoparticles embedded in a host matrix is considered as producing modified trapping, which involves conversion between these channels. The developed approach, referred to as the x3-x2-coupling decomposition algorithm, allows estimation of the free volumes of interfacial voids responsible for positron trapping and of bulk lifetimes in nanoparticle-embedded substances. This methodology is validated using experimental data of Chakraverty et al. [Phys. Rev. B 71 (2005) 024115] on a PAL study of composites formed by guest NiFe2O4 nanocrystals grown in a host SiO2 matrix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya
Aims: The present case study is on maximizing aqua feed properties using response surface methodology and genetic algorithm. Study Design: The effects of extrusion process variables such as screw speed, L/D ratio, barrel temperature, and feed moisture content were analyzed to maximize aqua feed properties such as water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable-length single screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (°C), and feed moisture content (%). The pelletized aqua feed was analyzed for physical properties: water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data were collected based on a central composite design. The experimental data were further analyzed using response surface methodology (RSM) and genetic algorithm (GA) for maximizing feed properties. Results: Regression equations developed for the experimental data adequately described the effect of process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated WS, ER, and TD were maximized at an L/D ratio of 12-13, screw speed of 60-80 rpm, feed moisture content of 30-40%, and barrel temperature of ≤80 °C for ER and TD and > 90 °C for WS. Based on GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, L/D ratio of 13.67, barrel temperature of 96.26 °C, and feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were also predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 °C, and medium feed moisture contents of 33.61 and 38.36%. Conclusion: The present data analysis indicated that WS is mainly governed by barrel temperature and feed moisture content, which might have resulted in the formation of starch-protein complexes due to denaturation of protein and gelatinization of starch. Screw speed coupled with temperature and feed moisture content controlled the ER and TD values. Higher screw speeds might have reduced the viscosity of the feed dough, resulting in higher TD and lower ER values. Based on RSM and GA analysis, screw speed, barrel temperature and feed moisture content were the interacting process variables with the most influence on WS, followed by ER and TD.
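The GA step can be sketched generically: evolve the four coded process variables against a fitted response surface. The implementation below is a simple hand-rolled GA maximizing a hypothetical water-stability surrogate, not the study's actual regression model.

```python
# Hedged GA sketch: tournament selection, uniform crossover, Gaussian
# mutation, and elitism over four coded extrusion variables.
import numpy as np
rng = np.random.default_rng(5)

def water_stability(x):          # synthetic response in coded units
    return 98 - np.sum((x - 0.2)**2, axis=-1)

pop = rng.uniform(-1, 1, (40, 4))
for _ in range(60):
    fit = water_stability(pop)
    # tournament selection
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
    # uniform crossover and Gaussian mutation
    mates = parents[rng.permutation(40)]
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, mates)
    children += rng.normal(0, 0.05, children.shape)
    children = np.clip(children, -1, 1)
    # elitism: carry over the best of the old population
    children[0] = pop[np.argmax(fit)]
    pop = children

best = pop[np.argmax(water_stability(pop))]
print("coded optimum:", best.round(2), "WS:", round(float(water_stability(best)), 2))
```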
Factors in Human-Computer Interface Design (A Pilot Study).
1994-12-01
This study used a pretest-posttest control group experimental design to test the effect of consistency on speed, retention, and user satisfaction. Four... analysis. The overall methodology was a pretest-posttest control group experimental design using different prototypes to test the effects of... methodology used for this study was a pretest-posttest control group experimental design using different prototypes to test for features of the human
Ghasemzadeh, Ali; Jaafar, Hawa Z E; Rahmat, Asmah
2015-07-30
Analysis and extraction of plant matrices are important processes for the development, modernization, and quality control of herbal formulations. Response surface methodology is a collection of statistical and mathematical techniques used to optimize the range of variables in various experimental processes, reducing the number of experimental runs, cost, and time compared to other methods. Response surface methodology was applied to optimize reflux extraction conditions for achieving high 6-gingerol and 6-shogaol contents and high antioxidant activity in Zingiber officinale var. rubrum Theilade. A two-factor central composite design was employed to determine the effects of two independent variables, namely extraction temperature (X1: 50-80 °C) and time (X2: 2-4 h), on the properties of the extracts. The 6-gingerol and 6-shogaol contents were measured using ultra-performance liquid chromatography. The antioxidant activity of the rhizome extracts was determined by means of the 1,1-diphenyl-2-picrylhydrazyl assay. Anticancer activity of optimized extracts against HeLa cancer cell lines was measured using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. Increasing the extraction temperature and time significantly affected the responses. The optimum extraction condition for all responses was 76.9 °C for 3.4 h. Under the optimum condition, the corresponding predicted response values for 6-gingerol, 6-shogaol, and the antioxidant activity were 2.89 mg/g DW, 1.85 mg/g DW, and 84.3%, respectively. 6-Gingerol and 6-shogaol were extracted under the optimized condition to check the validity of the models; the values were 2.92 and 1.88 mg/g DW, and 84.0% for 6-gingerol, 6-shogaol, and the antioxidant activity, respectively. The experimental values agreed with those predicted, indicating the suitability of the models employed and the success of RSM in optimizing the extraction condition. By optimizing the reflux extraction, the anticancer activity of the extracts against HeLa cancer cells was enhanced by about 16.8%. The half-maximal inhibitory concentration (IC50) was found at 20.9 and 38.4 μg/mL for the optimized and unoptimized extracts, respectively. The optimized extract showed more distinct anticancer activity against HeLa cancer cells at a concentration of 40 μg/mL (P < 0.01) without toxicity to normal cells. The results indicated that the pharmaceutical quality of ginger could be improved significantly by optimizing the extraction process using response surface methodology.
2017-04-30
practices in latent variable theory, it is not surprising that effective measurement programs present methodological typing and considering of experimental ... 3.3 Methodology ... Revised Enterprise Modeling Methodology ... Conclusions
El-Naggar, Noura El-Ahmady; Moawad, Hassan; El-Shweihy, Nancy M; El-Ewasy, Sara M
2015-01-01
Among the antitumor drugs, bacterial enzyme L-asparaginase has been employed as the most effective chemotherapeutic agent in pediatric oncotherapy especially for acute lymphoblastic leukemia. Glutaminase free L-asparaginase producing actinomycetes were isolated from soil samples collected from Egypt. Among them, a potential culture, strain NEAE-119, was selected and identified on the basis of morphological, cultural, physiological, and biochemical properties together with 16S rRNA sequence as Streptomyces olivaceus NEAE-119 and sequencing product (1509 bp) was deposited in the GenBank database under accession number KJ200342. The optimization of different process parameters for L-asparaginase production by Streptomyces olivaceus NEAE-119 using Plackett-Burman experimental design and response surface methodology was carried out. Fifteen variables (temperature, pH, incubation time, inoculum size, inoculum age, agitation speed, dextrose, starch, L-asparagine, KNO3, yeast extract, K2HPO4, MgSO4·7H2O, NaCl, and FeSO4·7H2O) were screened using Plackett-Burman experimental design. The most positive significant independent variables affecting enzyme production (temperature, inoculum age, and agitation speed) were further optimized by the face-centered central composite design-response surface methodology.
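The screening step above, 15 two-level factors in a small number of runs, is classically built from a Hadamard matrix. The sketch below constructs a 16-run Plackett-Burman-type design and verifies its orthogonality; the factor list mirrors the abstract, while the Hadamard construction is one standard choice assumed here, not necessarily the authors' exact design.

```python
# Hedged sketch: 16-run two-level screening design for 15 factors from
# a Sylvester Hadamard matrix. Rows are runs; +1/-1 are high/low levels.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(16)              # Sylvester-type Hadamard matrix
design = H[:, 1:]             # drop the all-ones column -> 16 x 15 design
factors = ["temperature", "pH", "incubation time", "inoculum size",
           "inoculum age", "agitation", "dextrose", "starch",
           "L-asparagine", "KNO3", "yeast extract", "K2HPO4",
           "MgSO4", "NaCl", "FeSO4"]
print("run 1:", dict(zip(factors, design[0])))
print("orthogonality check:", np.allclose(design.T @ design, 16 * np.eye(15)))
```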
NASA Astrophysics Data System (ADS)
Ahmad, Mohd Azmier; Afandi, Nur Syahidah; Bello, Olugbenga Solomon
2017-05-01
This study investigates the adsorptive removal of malachite green (MG) dye from aqueous solutions using chemically modified lime-peel-based activated carbon (LPAC). The prepared adsorbent was characterized using FTIR, SEM, proximate analysis, and BET techniques. Central composite design (CCD) in response surface methodology (RSM) was used to optimize the adsorption process. The effects of three variables (activation temperature, activation time, and KOH chemical impregnation ratio (IR)) on the percentage of dye removal and LPAC yield were investigated. Based on the CCD, quadratic and two-factor interaction (2FI) models were developed to correlate the adsorption variables with the two responses. Analysis of variance (ANOVA) was used to judge the adequacy of the models. The optimum conditions for MG dye removal using LPAC are an activation temperature of 796 °C, an activation time of 1.0 h, and an impregnation ratio of 2.6. The percentage of MG dye removal obtained was 94.68 %, with an LPAC yield of 17.88 %. The error between predicted and experimental results for MG dye removal is 0.4 %. Model predictions were in good agreement with experimental results, and LPAC was found to be effective in removing MG dye from aqueous solution.
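Once such a model is fitted, the optimum settings can be located numerically. The sketch below maximizes a hypothetical fitted quadratic response over the coded experimental region with scipy; the coefficient values are illustrative assumptions, not the study's regression.

```python
import numpy as np
from scipy.optimize import minimize

# Coefficients of a fitted quadratic response surface in coded units
# (invented values; a real study would take them from the CCD regression).
b0, b = 90.0, np.array([3.0, -1.5, 2.0])
B = np.array([[-2.0, 0.3, 0.1],
              [ 0.3, -1.0, 0.2],
              [ 0.1,  0.2, -1.5]])   # symmetric matrix of quadratic terms

def removal(x):
    """Predicted % dye removal at coded settings x = (temperature, time, IR)."""
    return b0 + b @ x + x @ B @ x

# Maximize removal inside the coded experimental region [-1, 1]^3.
res = minimize(lambda x: -removal(x), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("optimum (coded):", res.x, "predicted removal: %.2f %%" % removal(res.x))
```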
Zhou, Shaoqi; Feng, Xinbin
2017-01-01
In this paper, a statistically based experimental design with response surface methodology (RSM) was employed to examine the effects of operating conditions on the photoelectrocatalytic oxidation of landfill leachate using a Cu/N co-doped TiO2 (Ti) electrode. The experimental design method was applied to response surface modeling and the optimization of the operational parameters of the photoelectrocatalytic degradation of landfill leachate using TiO2 as a photo-anode. The variables considered were the initial chemical oxygen demand (COD) concentration, pH, and the potential bias. Two dependent parameters were either directly measured or calculated as responses: chemical oxygen demand (COD) removal and total organic carbon (TOC) removal. The results of this investigation reveal that the optimum conditions are an initial pH of 10.0, an initial COD concentration of 4377.98 mg L-1, and a potential bias of 25.0 V. The model predictions and the test data were in satisfactory agreement. COD and TOC removals of 67% and 82.5%, respectively, were demonstrated. Under the optimal conditions, GC/MS showed 73 organic micro-pollutants in the raw landfill leachate, including hydrocarbons, aromatic compounds, and esters. After the landfill leachate treatment processes, 38 organic micro-pollutants disappeared completely in the photoelectrocatalytic process. PMID:28671943
Health monitoring of offshore structures using wireless sensor network: experimental investigations
NASA Astrophysics Data System (ADS)
Chandrasekaran, Srinivasan; Chitambaram, Thailammai
2016-04-01
This paper presents a detailed methodology for deploying a wireless sensor network in offshore structures for structural health monitoring (SHM). Traditional SHM is carried out by visual inspections and wired systems, which are complicated, require larger installation space, and make decommissioning a tedious process. Wireless sensor networks can enhance health monitoring through the deployment of scalable, dense sensor networks that occupy less space and consume less power. The proposed methodology focuses on determining the serviceability status of large floating platforms under environmental loads using wireless sensors. Servers analyze the acquired data for exceedance of threshold values. On failure, the SHM architecture triggers an alarm or an early warning in the form of alert messages to the engineer-in-charge on board; emergency response plans can then be activated, minimizing the risk involved and mitigating economic losses from accidents. In the present study, wired and wireless sensors are installed in the experimental model, and the acquired structural responses are compared. The wireless system comprises a Raspberry Pi board, which is programmed to transmit the acquired data to the server using a Wi-Fi adapter. Data are then hosted on a webpage for further post-processing, as desired.
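A minimal sketch of the server-side exceedance check described above is given below; the RMS threshold, window size, and alert channel are assumptions for illustration, not values from the study.

```python
import time

THRESHOLD_RMS = 0.35   # hypothetical serviceability limit, in g

def rms(window):
    return (sum(s * s for s in window) / len(window)) ** 0.5

def send_alert(message):
    # Placeholder for the alert message sent to the engineer-in-charge on board.
    print(time.strftime("%H:%M:%S"), "ALERT:", message)

def monitor(sample_stream, window_size=128):
    """Check successive windows of acceleration samples against the threshold."""
    window = []
    for sample in sample_stream:
        window.append(sample)
        if len(window) == window_size:
            level = rms(window)
            if level > THRESHOLD_RMS:
                send_alert(f"RMS {level:.3f} g exceeds {THRESHOLD_RMS} g limit")
            window.clear()

# Demo with a synthetic record: quiet response followed by a large excursion.
monitor([0.01] * 128 + [0.8] * 128)
```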
Krause, Mark A
2015-07-01
Inquiry into evolutionary adaptations has flourished since the modern synthesis of evolutionary biology. Comparative methods, genetic techniques, and various experimental and modeling approaches are used to test adaptive hypotheses. In psychology, the concept of adaptation is broadly applied and is central to comparative psychology and cognition. The concept of an adaptive specialization of learning is a proposed account for exceptions to general learning processes, as seen in studies of Pavlovian conditioning of taste aversions, sexual responses, and fear. The evidence generally consists of selective associations forming between biologically relevant conditioned and unconditioned stimuli, with conditioned responses differing in magnitude, persistence, or other measures relative to non-biologically relevant stimuli. Selective associations for biologically relevant stimuli may suggest adaptive specializations of learning, but do not necessarily confirm adaptive hypotheses as conceived of in evolutionary biology. Exceptions to general learning processes do not necessarily default to an adaptive specialization explanation, even if experimental results "make biological sense". This paper examines the degree to which hypotheses of adaptive specializations of learning in sexual and fear response systems have been tested using methodologies developed in evolutionary biology (e.g., comparative methods, quantitative and molecular genetics, survival experiments). A broader aim is to offer perspectives from evolutionary biology for testing adaptive hypotheses in psychological science.
Watson, Malcolm Alexander; Tubić, Aleksandra; Agbaba, Jasmina; Nikić, Jasmina; Maletić, Snežana; Molnar Jazić, Jelena; Dalmacija, Božo
2016-07-15
Interactions between arsenic and natural organic matter (NOM) are key limiting factors during the optimisation of drinking water treatment when significant amounts of both must be removed. This work uses Response Surface Methodology (RSM) to investigate how they interact during their simultaneous removal by iron chloride coagulation, using humic acid (HA) as a model NOM substance. Using a three-factor Box-Behnken experimental design, As and HA removals were modelled, as well as a combined removal response. ANOVA results showed the significance of the coagulant dose for all three responses. At high initial arsenic concentrations (200 μg/l), As removal was significantly hindered by the presence of HA. In contrast, the HA removal response was found to be largely independent of the initial As concentration, with the optimum coagulant dose increasing at increasing HA concentrations. The combined response was similar to the HA removal response, and the interactions evident are most interesting in terms of optimising treatment processes during the preparation of drinking water, highlighting the importance of utilising RSM for such investigations. The combined response model was successfully validated with two different groundwaters used for drinking water supply in the Republic of Serbia, showing excellent agreement under similar experimental conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
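The three-factor Box-Behnken design used here is straightforward to construct programmatically. The sketch below builds the coded 15-run design (12 edge runs plus 3 center replicates); the factor labels in the comment follow the abstract.

```python
import itertools
import numpy as np

def box_behnken(k, center_points=3):
    """Three-level Box-Behnken design for k factors, in coded units.

    Each pair of factors takes the +/-1 full factorial while the remaining
    factors are held at 0; center replicates are appended.
    """
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k] * center_points
    return np.array(runs)

# The abstract's three factors: coagulant dose, initial As, initial HA.
design = box_behnken(3)
print(design.shape)   # (15, 3): 12 edge runs + 3 center replicates
print(design)
```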
NASA Astrophysics Data System (ADS)
Vieceli, Nathália; Nogueira, Carlos A.; Pereira, Manuel F. C.; Durão, Fernando O.; Guimarães, Carlos; Margarido, Fernanda
2018-01-01
The recovery of lithium from hard rock minerals has received increased attention given the high demand for this element. Therefore, this study optimized an innovative process, which does not require a high-temperature calcination step, for lithium extraction from lepidolite. Mechanical activation and acid digestion were suggested as crucial process parameters, and experimental design and response-surface methodology were applied to model and optimize the proposed lithium extraction process. The promoting effect of amorphization and the formation of lithium sulfate hydrate on lithium extraction yield were assessed. Several factor combinations led to extraction yields that exceeded 90%, indicating that the proposed process is an effective approach for lithium recovery.
Managing cognitive impairment in the elderly: conceptual, intervention and methodological issues.
Buckwalter, K C; Stolley, J M; Farran, C J
1999-11-11
With the aging of society, the incidence of dementia in the elderly is increasing, resulting in greater numbers of individuals with cognitive impairment. Nurses and other researchers have investigated issues concerning the management of cognitive impairment. This article highlights conceptual, intervention, and methodological issues associated with this phenomenon. Cognitive change is a multivariate construct that includes alterations in a variety of information processing mechanisms such as problem-solving ability, memory, perception, attention and learning, and judgement. Although there is a large body of research, conceptual, intervention, and methodological issues remain. Much of the clinical research on cognitive impairment is atheoretical, with this issue only recently being addressed. While many clinical interventions have been proposed, few have been adequately tested. There are also various methodological concerns, such as small sample sizes and limited statistical power; study design issues (experimental vs. non-experimental); and internal and external validity problems. Clearly, additional research designed to intervene with these difficult behaviors is needed. A variety of psychosocial, environmental, and physical parameters must be considered in the nursing care of persons with cognitive impairment. Special attention has been given to interventions associated with disruptive behaviors. Interventions are complex, and knowledge must be integrated from both the biomedical and behavioral sciences in order to deal effectively with the numerous problems that can arise over a long and changing clinical course. Some researchers and clinicians have suggested that a new culture regarding dementia care is needed, one that focuses on changing attitudes and beliefs about persons with dementia and on changing how organizations deliver that care. This review identifies key conceptual, intervention, and methodological issues and recommends how these issues might be addressed in the future.
Quasi-experimental study designs series-paper 9: collecting data from quasi-experimental studies.
Aloe, Ariel M; Becker, Betsy Jane; Duvendack, Maren; Valentine, Jeffrey C; Shemilt, Ian; Waddington, Hugh
2017-09-01
To identify variables that must be coded when synthesizing primary studies that use quasi-experimental designs. All quasi-experimental (QE) designs. When designing a systematic review of QE studies, potential sources of heterogeneity, both theory-based and methodological, must be identified. We outline key components of inclusion criteria for syntheses of quasi-experimental studies. We provide recommendations for coding content-relevant and methodological variables and outline the distinction between bivariate effect sizes and partial (i.e., adjusted) effect sizes. The designs used and the controls used are viewed as of greatest importance. Potential sources of bias and confounding are also addressed. Careful consideration must be given to inclusion criteria and the coding of theoretical and methodological variables during the design phase of a synthesis of quasi-experimental studies. The success of the meta-regression analysis relies on the data available to the meta-analyst. Omission of critical moderator variables (i.e., effect modifiers) will undermine the conclusions of a meta-analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
Khan, Mohammad Jakir Hossain; Hussain, Mohd Azlan; Mujtaba, Iqbal Mohammed
2014-01-01
Polypropylene is a plastic that is widely used in our everyday life. This study focuses on the identification and justification of the optimum process parameters for polypropylene production in a novel pilot-plant-based fluidized bed reactor. This first-of-its-kind statistical modeling with experimental validation for the process parameters of polypropylene production was conducted by applying the ANOVA (analysis of variance) method to response surface methodology (RSM). Three important process variables, i.e., reaction temperature, system pressure, and hydrogen percentage, were considered as the input factors for polypropylene production in the analysis performed. In order to examine the effects of the process parameters and their interactions, the ANOVA method was utilized among a range of other statistical diagnostic tools, such as the correlation between actual and predicted values, the residuals and predicted response, outlier t plots, and 3D response surface and contour analysis plots. The statistical analysis showed that the proposed quadratic model fitted the experimental results well. At the optimum conditions, with a temperature of 75°C, a system pressure of 25 bar, and a hydrogen percentage of 2%, the highest polypropylene production obtained is 5.82% per pass. Hence it is concluded that the developed experimental design and proposed model can be successfully employed, with over a 95% confidence level, for optimum polypropylene production in a fluidized bed catalytic reactor (FBCR). PMID:28788576
Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P
2015-11-01
The determination of the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector, through a computational procedure that could be reproduced at a standard research lab. This method consists of determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters that has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
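The optimization loop can be sketched as follows. A cheap analytic function stands in here for the Monte Carlo photon-transport simulation, and a simple (mu + lambda) evolution strategy tunes three hypothetical geometry parameters until the simulated FEPEs match a reference set; all numerical values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the Monte Carlo simulation: maps detector parameters (e.g.
# crystal radius, length, dead-layer thickness) to FEPEs at a set of energies.
def simulated_fepe(params, energies):
    r, l, d = params
    return (r * l / (1 + d)) * np.exp(-energies / (300 * l))

energies = np.array([60., 122., 344., 662., 1173., 1332.])
true_params = np.array([3.2, 5.0, 0.07])           # hypothetical "real" detector
reference = simulated_fepe(true_params, energies)  # measured FEPEs in practice

def fitness(params):
    return -np.sum((simulated_fepe(params, energies) - reference) ** 2)

# Simple (mu + lambda) evolution strategy over the geometric parameters.
lo, hi = np.array([1, 1, 0.01]), np.array([5, 8, 0.2])
pop = rng.uniform(lo, hi, size=(30, 3))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the best 10
    children = np.repeat(parents, 3, axis=0)
    children += rng.normal(0, 0.05, children.shape)      # Gaussian mutation
    pop = np.clip(children, lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters:", best)
```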
Roosta, M; Ghaedi, M; Daneshfar, A; Sahraei, R
2014-03-25
In this research, the adsorption rate of safranine O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NPAC) was accelerated by ultrasound. SnS-NPAC was characterized by different techniques, such as SEM, XRD, and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH, and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions, set as 4 min of sonication time, 0.024 g of adsorbent, pH 7, and 18 mg L(-1) SO, made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g(-1)). A good agreement between experimental and predicted data was observed in this study. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin, and Dubinin-Radushkevich models shows that the Langmuir model suitably describes the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption processes were well described by pseudo-second-order and intraparticle diffusion models. Copyright © 2013. Published by Elsevier B.V.
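As an illustration of the kinetic analysis mentioned above, the snippet below fits the pseudo-second-order model q(t) = k2*qe^2*t / (1 + k2*qe*t) to an invented uptake curve; real data would come from the SO/SnS-NPAC experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order kinetic model: q(t) = (k2 * qe**2 * t) / (1 + k2 * qe * t)
def pso(t, qe, k2):
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

# Illustrative uptake data (min, mg/g); stand-ins for measured kinetics.
t = np.array([0.5, 1, 2, 3, 4, 6, 8, 10])
q = np.array([18.0, 28.5, 38.2, 43.0, 45.8, 48.1, 49.2, 49.8])

(qe, k2), _ = curve_fit(pso, t, q, p0=[50, 0.05])
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```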
Participative Budgeting as a Communication Process: A Model and Experiment.
1978-01-01
Control Group Design ... Methodology ... Setting ... Condition ... Measurements ... Summary ... criticized the experimental design at length, particularly the experimental variation of the control dimension. In his view, the interpretation of ... Thus, this approach is adopted to avoid the possibility of the testing effect. The Post-Test Control Group Design. Among the ...
NASA Astrophysics Data System (ADS)
Yamakawa, Takeshi; Maruyama, Akihiro; Uedan, Hirohisa; Iino, Takanori; Hosokawa, Yoichiroh
2015-03-01
A new methodology to estimate the dynamics of the femtosecond-laser-induced impulsive force generated in water under a microscope was developed. In this method, the position shift of a bead in water before and after the femtosecond laser irradiation was investigated experimentally and compared with a motion equation assuming stress-wave propagation with expansion and collapse of the cavitation bubble. In the process of the comparison, the force and time parameters of the stress wave were determined. From these results, the dynamics of shock- and stress-wave propagation and cavitation bubble generation, and their actions on micro-objects, were inferred.
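The comparison step can be mimicked with a few lines of numerics: integrate a bead's equation of motion under a short impulsive force with Stokes drag and read off the net position shift. All parameter values below are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

# Bead of radius r in water: m dv/dt = F(t) - 6*pi*mu*r*v, integrated by Euler.
radius = 0.5e-6                  # bead radius (m)
rho = 2.0e3                      # bead density (kg/m^3)
mass = rho * 4/3 * np.pi * radius**3
mu = 1.0e-3                      # water viscosity (Pa*s)
gamma = 6 * np.pi * mu * radius  # Stokes drag coefficient

F0, tau = 1.0e-9, 100e-9         # assumed force amplitude (N) and duration (s)

dt, T = 1e-9, 5e-6
v, x = 0.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    F = F0 if t < tau else 0.0
    v += (F - gamma * v) / mass * dt   # explicit Euler step
    x += v * dt
print(f"net position shift: {x*1e6:.4f} um")
```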
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
Successful assessment of the size of the Ciwidey-Patuha geothermal field required integrated analysis of data from all aspects to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for this field. Therefore, this paper applies an experimental design approach to the geothermal numerical simulation of Ciwidey-Patuha to generate probabilistic resource assessment results. This process assesses the impact of the evaluated parameters affecting resources and the interactions between these parameters. The methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
The Materials Genome Initiative (MGI) project element is a cross-Center effort that is focused on the integration of computational tools to simulate manufacturing processes and materials behavior. These computational simulations will be utilized to gain understanding of processes and materials behavior to accelerate process development and certification, to more efficiently integrate new materials in existing NASA projects, and to lead to the design of new materials for improved performance. This NASA effort looks to collaborate with efforts at other government agencies and universities working under the national MGI. MGI plans to develop integrated computational/experimental/processing methodologies for accelerating discovery and insertion of materials to satisfy NASA's unique mission demands. The challenges include validated design tools that incorporate materials properties, processes, and design requirements; and materials process control to rapidly mature emerging manufacturing methods and develop certified manufacturing processes.
NASA Astrophysics Data System (ADS)
Torabi, Amir; Kolahan, Farhad
2018-07-01
Pulsed laser welding is a powerful technique especially suitable for joining thin sheet metals. In this study, based on experimental data, pulsed laser welding of thin AISI 316L austenitic stainless steel sheet has been modeled and optimized. The experimental data required for modeling were gathered according to a central composite design matrix in response surface methodology (RSM), with full replication of 31 runs. Ultimate tensile strength (UTS) is considered the main quality measure in laser welding. The important process parameters, including peak power, pulse duration, pulse frequency, and welding speed, are selected as input process parameters. The relation between the input parameters and the output response is established via full quadratic response surface regression with a confidence level of 95%. The adequacy of the regression model was verified using analysis of variance. The main effects of each factor and its interaction effects with the other factors were analyzed graphically in contour and surface plots. Next, to maximize joint UTS, the best combination of parameter levels was specified using RSM. Moreover, the mathematical model was embedded in a simulated annealing (SA) optimization algorithm to determine the optimal values of the process parameters. The results obtained by the SA and RSM optimization techniques are in good agreement. The optimal parameter settings of a peak power of 1800 W, a pulse duration of 4.5 ms, a frequency of 4.2 Hz, and a welding speed of 0.5 mm/s would result in a welded joint with 96% of the base-metal UTS. Computational results clearly demonstrate that the proposed modeling and optimization procedures perform quite well for the pulsed laser welding process.
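A simulated annealing search over a fitted response model can be sketched in a few lines. The UTS surrogate below is an invented quadratic in coded units, not the paper's regression; the annealing loop itself is the standard scheme that accepts worse moves with probability exp(delta/T).

```python
import math
import random

random.seed(7)

# Invented stand-in for the fitted UTS regression model (coded units in
# [-1, 1] for peak power, pulse duration, frequency, welding speed).
def uts(x):
    p, d, f, s = x
    return 500 - 40*(p-0.4)**2 - 30*(d-0.2)**2 - 25*(f+0.1)**2 - 20*(s-0.5)**2

def anneal(f, dim=4, t0=10.0, cooling=0.995, steps=5000):
    x = [random.uniform(-1, 1) for _ in range(dim)]
    best, t = x[:], t0
    for _ in range(steps):
        cand = [min(1, max(-1, xi + random.gauss(0, 0.1))) for xi in x]
        delta = f(cand) - f(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand
            if f(x) > f(best):
                best = x[:]
        t *= cooling
    return best

x_opt = anneal(uts)
print("optimal coded settings:", [round(v, 2) for v in x_opt],
      "UTS:", round(uts(x_opt), 1))
```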
Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards
ERIC Educational Resources Information Center
Smith, Justin D.
2012-01-01
This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…
Casaseca-de-la-Higuera, Pablo; Simmross-Wattenberg, Federico; Martín-Fernández, Marcos; Alberola-López, Carlos
2009-07-01
Discontinuation of mechanical ventilation is a challenging task that involves a number of subtle clinical issues. The gradual removal of respiratory support (referred to as weaning) should be performed as soon as autonomous respiration can be sustained. However, the prediction rate of successful extubation is still below 25% based on previous studies. Construction of an automatic system that provides information on extubation readiness is thus desirable. Recent works have demonstrated that breathing pattern variability is a useful extubation readiness indicator, with improved performance when multiple respiratory signals are jointly processed. However, the existing methods for predictor extraction present several drawbacks when length-limited time series are to be processed in heterogeneous groups of patients. In this paper, we propose a model-based methodology for automatic readiness prediction. It is intended to deal with multichannel, nonstationary, short records of the breathing pattern. Results on experimental data yield 87.27% successful readiness prediction, which is in line with the best figures reported in the literature. A comparative analysis shows that our methodology overcomes the shortcomings of previously proposed methods when applied to length-limited records from heterogeneous groups of patients.
Tsai, Kuo-Ming; Wang, He-Yi
2014-08-20
This study focuses on injection molding process window determination for obtaining optimal imaging optical properties (astigmatism, coma, and spherical aberration) of plastic lenses. The Taguchi experimental method was first used to identify the optimized combination of parameters and the significant factors affecting the imaging optical properties of the lens. Full factorial experiments were then implemented based on the significant factors to build the response surface models. The injection molding process windows for lenses with optimized optical properties were determined based on the surface models, and confirmation experiments were performed to verify their validity. The results indicated that the significant factors affecting the optical properties of lenses are mold temperature, melt temperature, and cooling time. According to the experimental data for the significant factors, the oblique ovals for different optical properties on the injection molding process windows based on melt temperature and cooling time can be obtained using a curve fitting approach. The confirmation experiments revealed that the average errors for astigmatism, coma, and spherical aberration are 3.44%, 5.62%, and 5.69%, respectively. The results indicated that the proposed process windows are highly reliable.
Experimental investigation of the structural behavior of equine urethra.
Natali, Arturo Nicola; Carniel, Emanuele Luigi; Frigo, Alessandro; Fontanella, Chiara Giulia; Rubini, Alessandro; Avital, Yochai; De Benedictis, Giulia Maria
2017-04-01
An integrated experimental and computational investigation was developed to provide a methodology for characterizing the structural response of the urethral duct. The investigation provides information suitable for the comprehension of lower urinary tract mechanical functionality and the optimal design of prosthetic devices. The experimental activity entailed inflation tests performed on segments of horse penile urethras from both proximal and distal regions. Inflation tests were developed imposing different volumes. Each test was performed according to a two-step procedure: the tubular segment was inflated almost instantaneously during the first step, while the volume was held constant for about 300 s to allow the development of relaxation processes during the second step. Tests performed on the same specimen were separated by 600 s of rest to allow the recovery of the specimen's mechanical condition. Results from the experimental activities were statistically analyzed and processed by means of a specific mechanical model. This computational model was developed with the purpose of interpreting the general pressure-volume-time response of biologic tubular structures. The model includes parameters that interpret the elastic and viscous behavior of hollow structures, directly correlated with the results from the experimental activities. Post-processing of the experimental data provided information about the non-linear elastic and time-dependent behavior of the urethral duct. In detail, statistically representative pressure-volume and pressure relaxation curves were identified and summarized by structural parameters. Considering the elastic properties, initial stiffness ranged between 0.677 ± 0.026 kPa and 0.262 ± 0.006 kPa moving from the proximal to the distal region of the penile urethra. The viscous parameters showed values typical of soft biological tissues: τ1 = 0.153 ± 0.018 s, τ2 = 17.458 ± 1.644 s and τ1 = 0.201 ± 0.085 s, τ2 = 8.514 ± 1.379 s for the proximal and distal regions, respectively. A general procedure for the mechanical characterization of the urethral duct has been provided. The proposed methodology allows the identification of mechanical parameters that properly express the mechanical behavior of the biological tube. The approach is especially suitable for evaluating the influence of degenerative phenomena on lower urinary tract mechanical functionality. This information is essential for the optimal design of potential surgical procedures and devices. Copyright © 2017 Elsevier B.V. All rights reserved.
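The two-time-constant relaxation behavior reported above is the kind of response a two-term exponential (Prony-type) model captures. The sketch below fits such a model to a synthetic 300 s pressure record; the seed values are chosen near the abstract's time constants, but the data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-term relaxation model for the pressure decay at constant volume:
# p(t) = p_inf + a1*exp(-t/tau1) + a2*exp(-t/tau2)
def relaxation(t, p_inf, a1, tau1, a2, tau2):
    return p_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic pressure record (kPa), densely sampled early to resolve tau1.
t = np.concatenate([np.linspace(0, 2, 201), np.linspace(2.1, 300, 400)])
rng = np.random.default_rng(3)
p = relaxation(t, 2.0, 1.2, 0.15, 0.8, 17.5) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(relaxation, t, p, p0=[2.0, 1.0, 0.2, 1.0, 15.0])
print("p_inf=%.2f kPa, tau1=%.3f s, tau2=%.2f s" % (popt[0], popt[2], popt[4]))
```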
Mission demonstration concept for the long-duration storage and transfer of cryogenic propellants
NASA Astrophysics Data System (ADS)
McLean, C.; Deininger, W.; Ingram, K.; Schweickart, R.; Unruh, B.
This paper describes an experimental platform that will demonstrate the major technologies required for the handling and storage of cryogenic propellants in a low-to-zero-g environment. In order to develop a cost-effective, high value-added demonstration mission, a review of the complete mission concept of operations (CONOPS) was performed. The overall cost of such a mission is driven not only by the spacecraft platform and on-orbit experiments themselves, but also by the complexities of handling cryogenic propellants during ground-processing operations. On-orbit storage methodologies were looked at for both passive and active systems. Passive systems rely purely on isolation of the stored propellant from environmental thermal loads, while active cooling employs cryocooler technologies. The benefit trade between active and passive systems is mission-dependent due to the mass, power, and system-level penalties associated with active cooling systems. The experimental platform described in this paper is capable of demonstrating multiple advanced micro-g cryogenic propellant management technologies. In addition to the requirements of demonstrating these technologies, the methodology of propellant transfer must be evaluated. The handling of multiphase liquids in micro-g is discussed using flight-heritage micro-g propellant management device technologies as well as accelerated tank stratification for access to vapor-free or liquid-free propellants. The mission concept presented shows the extensibility of the experimental platform to demonstrate advanced cryogenic components and technologies, propellant transfer methodologies, as well as the validation of thermal and fluidic models, from subscale tankage to an operational architecture.
Periasamy, Rathinasamy; Palvannan, Thayumanavan
2010-12-01
Production of laccase using a submerged culture of Pleurotus ostreatus IMI 395545 was optimized by the Taguchi orthogonal array (OA) design of experiments (DOE) methodology. This approach facilitates the study of the interactions of a large number of variables spanned by factors and their settings, with a small number of experiments, leading to considerable savings in time and cost for process optimization. The methodology optimizes the number of impact factors and enables calculation of their interactions in the production of industrial enzymes. Eight factors, viz. glucose, yeast extract, malt extract, inoculum, mineral solution, inducer (1 mM CuSO₄) and amino acid (l-asparagine) at three levels and pH at two levels, with an OA layout of L18 (2¹ × 3⁷), were selected for the proposed experimental design. The laccase yield obtained from the 18 sets of fermentation experiments performed with the selected factors and levels was further processed with Qualitek-4 software. The optimized conditions showed an enhanced laccase expression of 86.8% (from 485.0 to 906.3 U). The combination of factors was further validated for laccase production and reactive blue 221 decolorization. The results revealed an enhanced laccase yield of 32.6% and dye decolorization of up to 84.6%. This methodology allows the complete evaluation of main and interaction factors. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Entropy Filtered Density Function for Large Eddy Simulation of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Safari, Mehdi
Analysis of local entropy generation is an effective means to optimize the performance of energy and combustion systems by minimizing the irreversibilities in transport processes. Large eddy simulation (LES) is employed to describe entropy transport and generation in turbulent reacting flows. The entropy transport equation in LES contains several unclosed terms. These are the subgrid scale (SGS) entropy flux and entropy generation caused by irreversible processes: heat conduction, mass diffusion, chemical reaction and viscous dissipation. The SGS effects are taken into account using a novel methodology based on the filtered density function (FDF). This methodology, entitled entropy FDF (En-FDF), is developed and utilized in the form of the joint entropy-velocity-scalar-turbulent frequency FDF and the marginal scalar-entropy FDF, both of which contain the chemical reaction effects in a closed form. The former constitutes the most comprehensive form of the En-FDF and provides closure for all the unclosed filtered moments. This methodology is applied for LES of a turbulent shear layer involving transport of passive scalars. Predictions show favorable agreement with the data generated by direct numerical simulation (DNS) of the same layer. The marginal En-FDF accounts for entropy generation effects as well as scalar and entropy statistics. This methodology is applied to a turbulent nonpremixed jet flame (Sandia Flame D) and predictions are validated against experimental data. In both flows, sources of irreversibility are predicted and analyzed.
Development of New Sensing Materials Using Combinatorial and High-Throughput Experimentation
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Mirsky, Vladimir M.
New sensors with improved performance characteristics are needed for applications as diverse as bedside continuous monitoring, tracking of environmental pollutants, monitoring of food and water quality, monitoring of chemical processes, and safety in industrial, consumer, and automotive settings. Typical requirements in sensor improvement are selectivity, long-term stability, sensitivity, response time, reversibility, and reproducibility. Design of new sensing materials is the important cornerstone in the effort to develop new sensors. Often, sensing materials are too complex to predict their performance quantitatively in the design stage. Thus, combinatorial and high-throughput experimentation methodologies provide an opportunity to generate new required data to discover new sensing materials and/or to optimize existing material compositions. The goal of this chapter is to provide an overview of the key concepts of experimental development of sensing materials using combinatorial and high-throughput experimentation tools, and to promote additional fruitful interactions between computational scientists and experimentalists.
The aesthetics of laboratory inscription: Claude Bernard's Cahier Rouge.
Sattar, Atia
2013-03-01
This essay explores the aesthetic sensibilities of the French physiologist Claude Bernard (1813-1878). In particular, it analyzes the Cahier Rouge (1850-1860), Bernard's acclaimed laboratory notebook. In this notebook, Bernard articulates the range of his experience as an experimental physiologist, juxtaposing without differentiation details of laboratory procedure and more personal queries, doubts, and reflections on experimentation, life, and art. Bernard's insights, it is argued, offer an aesthetic and phenomenological template for considering experimentation. His physiological point of view ranges from his own bodily aesthesis or sensory perception, through personal reflections on scientific discovery as an artistic process, to a broader metaphysical conception of life as an artistic creation. Such an aesthetic approach to physiology enables Bernard to reconcile his empirical methodology and his romantic idealism; it offers the history of laboratory science a framework for considering the individual, bodily, and emotional labor inherent in physiological experimentation.
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, computer-aided engineering was used for injection moulding simulation. The design of experiments (DOE) method was utilized according to a Latin square orthogonal array. The relationship between the injection moulding parameters and warpage was identified based on the experimental data used. Response surface methodology (RSM) was used to validate the model accuracy. Then, the RSM and genetic algorithm (GA) methods were combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method combining RSM and GA also contributes to minimising the occurrence of warpage.
Chen, Yen-Ju; Lee, Yen-I; Chang, Wen-Cheng; Hsiao, Po-Jen; You, Jr-Shian; Wang, Chun-Chieh; Wei, Chia-Min
2017-01-01
Hot deformation of Nd-Fe-B magnets has been studied for more than three decades. With a good combination of forming process parameters, the remanence and (BH)max values of Nd-Fe-B magnets can be greatly increased due to the formation of anisotropic microstructures during hot deformation. In this work, a methodology is proposed for visualizing the material flow in hot-deformed Nd-Fe-B magnets via finite element simulation. Material flow in hot-deformed Nd-Fe-B magnets could be predicted by simulation, in agreement with experimental results. By utilizing this methodology, the correlation between strain distribution and magnetic properties enhancement can be better understood. PMID:28970869
Change Mechanisms of Schema-Centered Group Psychotherapy with Personality Disorder Patients
Tschacher, Wolfgang; Zorn, Peter; Ramseyer, Fabian
2012-01-01
Background: This study addressed the temporal properties of personality disorders and their treatment by schema-centered group psychotherapy. It investigated the change mechanisms of psychotherapy using a novel method by which psychotherapy can be modeled explicitly in the temporal domain. Methodology and Findings: 69 patients were assigned to a specific schema-centered behavioral group psychotherapy, 26 to social skills training as a control condition. The largest diagnostic subgroups were narcissistic and borderline personality disorder. Both treatments offered 30 group sessions of 100 min duration each, at a frequency of two sessions per week. The therapy process was described by components resulting from principal component analysis of patients' session reports, which were obtained after each session. These patient-assessed components were Clarification, Bond, Rejection, and Emotional Activation. The statistical approach focused on time-lagged associations of components using time-series panel analysis. This method provided a detailed quantitative representation of the therapy process. It was found that Clarification played a core role in schema-centered psychotherapy, reducing Rejection and regulating the emotion of patients. This was also a change mechanism linked to therapy outcome. Conclusions/Significance: The introduced process-oriented methodology made it possible to highlight the mechanisms by which psychotherapeutic treatment became effective. Additionally, the process models depicted the actual patterns that differentiated specific diagnostic subgroups. Time-series analysis explores Granger causality, a non-experimental approximation of causality based on temporal sequences. This methodology, resting upon naturalistic data, can explicate mechanisms of action in psychotherapy research and illustrate the temporal patterns underlying personality disorders. PMID:22745811
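Time-lagged associations of the kind analyzed here can be probed with standard Granger-causality tooling. The sketch below generates two synthetic session-report series in which lagged "clarification" drives "rejection", then runs the statsmodels Granger test; the data and effect size are invented.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic session-report components: does lagged Clarification help predict
# next-session Rejection? The study used patients' post-session reports.
rng = np.random.default_rng(0)
n = 60                                   # e.g. 30 sessions x 2 per week
clarification = rng.normal(0, 1, n)
rejection = np.zeros(n)
for t in range(1, n):
    # Rejection decreases when the previous session had high Clarification.
    rejection[t] = -0.6 * clarification[t - 1] + rng.normal(0, 0.5)

# grangercausalitytests checks whether the second column Granger-causes
# the first; it prints F-tests for each lag order up to maxlag.
data = np.column_stack([rejection, clarification])
grangercausalitytests(data, maxlag=2)
```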
Expert systems for superalloy studies
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Kaukler, William F.
1990-01-01
There are many areas in science and engineering which require knowledge of an extremely complex foundation of experimental results in order to design methodologies for developing new materials or products. Superalloys fit well into this discussion in the sense that they are complex combinations of elements which exhibit certain characteristics. Obviously, the use of superalloys in high-performance, high-temperature systems such as the Space Shuttle Main Engine is of interest to NASA. The superalloy manufacturing process is complex, and the implementation of an expert system within the design process requires some thought as to how and where it should be implemented. A major motivation is to develop a methodology to assist metallurgists in the design of superalloy materials using current expert systems technology. Hydrogen embrittlement is disastrous to rocket engines, and the heuristics can be very complex. Attacking this problem as one module in the overall design process represents a significant step forward. In order to describe the objectives of the first phase implementation, the expert system was designated the Hydrogen Environment Embrittlement Expert System (HEEES).
Space Shuttle Orbiter oxygen partial pressure sensing and control system improvements
NASA Technical Reports Server (NTRS)
Frampton, Robert F.; Hoy, Dennis M.; Kelly, Kevin J.; Walleshauser, James J.
1992-01-01
A program aimed at developing a new PPO2 oxygen sensor and a replacement amplifier for the Space Shuttle Orbiter is described. Experimental design methodologies used in the test and modeling process made it possible to enhance the effectiveness of the program and to reduce its cost. Significant cost savings are due to the increased lifetime of the basic sensor cell, the maximization of useful sensor life through an increased amplifier gain adjustment capability, the use of streamlined production processes for the manufacture of the assemblies, and the refurbishment capability of the replacement sensor.
Using Self-Experimentation and Single-Subject Methodology to Promote Critical Thinking
ERIC Educational Resources Information Center
Cowley, Brian J.; Lindgren, Ann; Langdon, David
2006-01-01
Critical thinking is often absent from classroom endeavor because it is hard to define (Gelder, 2005) or is difficult to assess (Bissell & Lemons, 2006). Critical thinking is defined as application, analysis, synthesis, and evaluation (Browne & Minnick, 2005). This paper shows how self-experimentation and single-subject methodology can be used to…
Modeling Collective Animal Behavior with a Cognitive Perspective: A Methodological Framework
Weitz, Sebastian; Blanco, Stéphane; Fournier, Richard; Gautrais, Jacques; Jost, Christian; Theraulaz, Guy
2012-01-01
The last decades have seen an increasing interest in modeling collective animal behavior. Some studies try to reproduce as accurately as possible the collective dynamics and patterns observed in several animal groups with biologically plausible, individual behavioral rules. The objective is then essentially to demonstrate that the observed collective features may be the result of self-organizing processes involving quite simple individual behaviors. Other studies concentrate on the objective of establishing or enriching links between collective behavior researches and cognitive or physiological ones, which then requires that each individual rule be carefully validated. Here we discuss the methodological consequences of this additional requirement. Using the example of corpse clustering in ants, we first illustrate that it may be impossible to discriminate among alternative individual rules by considering only observational data collected at the group level. Six individual behavioral models are described: They are clearly distinct in terms of individual behaviors, they all reproduce satisfactorily the collective dynamics and distribution patterns observed in experiments, and we show theoretically that it is strictly impossible to discriminate two of these models even in the limit of an infinite amount of data whatever the accuracy level. A set of methodological steps are then listed and discussed as practical ways to partially overcome this problem. They involve complementary experimental protocols specifically designed to address the behavioral rules successively, conserving group-level data for the overall model validation. In this context, we highlight the importance of maintaining a sharp distinction between model enunciation, with explicit references to validated biological concepts, and formal translation of these concepts in terms of quantitative state variables and fittable functional dependences. Illustrative examples are provided of the benefits expected during the often long and difficult process of refining a behavioral model, designing adapted experimental protocols and inversing model parameters. PMID:22761685
Rao, Ravella Sreenivas; Kumar, C Ganesh; Prakasham, R Shetty; Hobbs, Phil J
2008-04-01
Success in experiments and/or technology mainly depends on a properly designed process or product. The traditional method of process optimization involves the study of one variable at a time, which requires a number of combinations of experiments that are time-, cost- and labor-intensive. The Taguchi method of design of experiments is a simple statistical tool involving a system of tabulated designs (arrays) that allows a maximum number of main effects to be estimated in an unbiased (orthogonal) fashion with a minimum number of experimental runs. It has been applied to predict the significant contribution of the design variable(s) and the optimum combination of each variable by conducting experiments on a real-time basis. The modeling that is performed essentially relates the signal-to-noise ratio to the control variables in a 'main effect only' approach. This approach enables both multiple-response and dynamic problems to be studied by handling noise factors. Taguchi principles and concepts have made extensive contributions to industry by bringing focused awareness to robustness, noise and quality. This methodology has been widely applied in many industrial sectors; however, its application in the biological sciences has been limited. In the present review, the application and comparison of the Taguchi methodology are emphasized with specific case studies in the field of biotechnology, particularly in diverse areas like fermentation, food processing, molecular biology, wastewater treatment and bioremediation.
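The signal-to-noise analysis at the heart of the Taguchi approach is simple to reproduce. The sketch below computes the "larger is better" S/N ratio for each run of a small L4 (2^3) orthogonal array and tabulates main effects; the array is the standard one, while the yield values are invented.

```python
import numpy as np

# "Larger is better" signal-to-noise ratio used in Taguchi analysis:
# S/N = -10 * log10( mean(1 / y_i^2) ) over the replicates of one run.
def sn_larger_is_better(replicates):
    y = np.asarray(replicates, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

# Standard L4 (2^3) orthogonal array with two replicates per run.
oa = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])
yields = np.array([[82, 85], [74, 71], [90, 88], [65, 69]])

sn = np.array([sn_larger_is_better(r) for r in yields])

# Main effect of a factor = mean S/N at level 2 minus mean S/N at level 1.
for j in range(oa.shape[1]):
    effect = sn[oa[:, j] == 2].mean() - sn[oa[:, j] == 1].mean()
    print(f"factor {j+1}: main effect on S/N = {effect:+.2f} dB")
```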
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Resonance Parameter Adjustment Based on Integral Experiments
Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...
2016-06-02
Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
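The GLLS update itself has a compact closed form. The sketch below applies the standard update x' = x + C S^T (S C S^T + V)^(-1) (y_meas - y_calc), with the corresponding covariance contraction, to a two-parameter, two-benchmark toy problem; all numbers are invented stand-ins, not SAMMY/SAMINT output.

```python
import numpy as np

# Generalized linear least-squares (Bayesian) update of parameters x with
# prior covariance C, given integral responses y with covariance V and
# sensitivities S (dy/dx). All matrices are small illustrative stand-ins.
def glls_update(x, C, S, y_meas, y_calc, V):
    # Kalman-style gain: K = C S^T (S C S^T + V)^-1
    K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)
    x_new = x + K @ (y_meas - y_calc)
    C_new = C - K @ S @ C          # posterior parameter covariance
    return x_new, C_new

x = np.array([1.00, 2.50])                  # prior resonance parameters
C = np.diag([0.04, 0.09])                   # prior parameter covariance
S = np.array([[0.8, 0.3],
              [0.1, 0.9]])                  # sensitivities of two benchmarks
y_calc = S @ x                              # calculated benchmark responses
y_meas = np.array([1.70, 2.30])             # measured benchmark responses
V = np.diag([0.01, 0.01])                   # experimental covariance

x_new, C_new = glls_update(x, C, S, y_meas, y_calc, V)
print("updated parameters:", x_new)
print("updated covariance:\n", C_new)
```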
Integrated Experimental and Modelling Research for Non-Ferrous Smelting and Recycling Systems
NASA Astrophysics Data System (ADS)
Jak, Evgueni; Hidayat, Taufiq; Shishin, Denis; Mehrjardi, Ata Fallah; Chen, Jiang; Decterov, Sergei; Hayes, Peter
The chemistries of industrial pyrometallurgical non-ferrous smelting and recycling processes are becoming increasingly complex. Optimisation of process conditions, charge composition, temperature, oxygen partial pressure, and the partitioning of minor elements between phases and different process streams requires accurate description of phase equilibria and thermodynamics, which are the focus of the present research. The experiments involve high-temperature equilibration in controlled gas atmospheres, rapid quenching, and direct measurement of equilibrium phase compositions with quantitative microanalytical techniques, including electron probe X-ray microanalysis and laser ablation ICP-MS. The thermodynamic modelling is undertaken using the computer package FactSage with the quasi-chemical model for the liquid slag phase and other advanced models. Experimental and modelling studies are combined into an integrated research program focused on the major-element Cu-Pb-Fe-O-Si-S system, slagging Al, Ca, Mg and other minor elements. The ongoing development of the research methodologies has resulted in significant advances in research capabilities. Examples of applications are given.
Propellant injection systems and processes
NASA Technical Reports Server (NTRS)
Ito, Jackson I.
1995-01-01
The previous 'Art of Injector Design' is maturing and merging with the more systematic 'Science of Combustion Device Analysis.' This technology can be based upon observation, correlation, experimentation and ultimately analytical modeling based upon basic engineering principles. This methodology is more systematic and far superior to the historical injector design process of 'Trial and Error' or blindly 'Copying Past Successes.' The benefit of such an approach is to be able to rank candidate design concepts for relative probability of success or technical risk in all the important combustion device design requirements and combustion process development risk categories before committing to an engine development program. Even if a single analytical design concept cannot be developed to predict satisfying all requirements simultaneously, a series of risk mitigation key enabling technologies can be identified for early resolution. Lower cost subscale or laboratory experimentation to demonstrate proof of principle, critical instrumentation requirements, and design discriminating test plans can be developed based on the physical insight provided by these analyses.
[Radiotherapy phase I trials' methodology: Features].
Rivoirard, R; Vallard, A; Langrand-Escure, J; Guy, J-B; Ben Mrad, M; Yaoxiong, X; Diao, P; Méry, B; Pigne, G; Rancoule, C; Magné, N
2016-12-01
In clinical research, biostatistical methods allow the rigorous analysis of data collection and should be defined from the trial design stage to obtain the appropriate experimental approach. Thus, if the main purpose of phase I is to determine the dose to use during phase II, the methodology should be finely adjusted to the experimental treatment(s). Today, the methodology for chemotherapy and targeted therapy is well known. For radiotherapy and chemoradiotherapy phase I trials, the primary endpoint must reflect both effectiveness and potential treatment toxicities. The methodology should probably be complex in order to limit failures in the following phases. However, there are very few data about methodology design in the literature. The present study focuses on these particular trials and their characteristics. It should help to highlight the shortcomings of existing methodological patterns in order to propose new and better-suited designs. Copyright © 2016 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
Using artificial neural networks to model aluminium based sheet forming processes and tools details
NASA Astrophysics Data System (ADS)
Mekras, N.
2017-09-01
In this paper, a methodology and a software system are presented for the use of Artificial Neural Networks (ANNs) in modeling aluminium-based sheet forming processes. ANN models are created by training the networks on experimental, trial and historical data records of process inputs and outputs. ANN models are useful in cases where the processes' mathematical models are not accurate enough, are not well defined or are missing, e.g. for complex product shapes, new material alloys, new process requirements, micro-scale products, etc. Usually, after the design and modeling of the forming tools (die, punch, etc.) and before mass production, a set of trials takes place on the shop floor to finalize process and tool details such as tools' minimum radii, die/punch clearance, press speed and process temperature, in relation to the material type, the sheet thickness and the quality achieved in the trials. Using data from the shop-floor trials together with forming theory data, ANN models can be trained and used to estimate the final process and tool details, hence supporting efficient set-up of processes and tools before mass production starts. The proposed ANN methodology and the respective software system are implemented within the EU H2020 project LoCoMaTech for the aluminium-based sheet forming process HFQ (solution Heat treatment, cold die Forming and Quenching).
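A minimal sketch of this kind of data-driven process surrogate, assuming hypothetical trial records with inputs (sheet thickness, die radius, press speed, temperature) and a measured quality score; the column names, data and model size are illustrative placeholders, not those of the LoCoMaTech system:

```python
# Sketch: train an ANN surrogate of a sheet-forming process from trial records.
# All input names and data here are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical trial inputs: [thickness_mm, die_radius_mm, press_speed_mm_s, temp_C]
X = rng.uniform([1.0, 2.0, 10.0, 350.0], [3.0, 10.0, 100.0, 500.0], size=(200, 4))
# Hypothetical measured outcome, e.g. a springback/quality score from the trials.
y = 0.5 * X[:, 0] - 0.1 * X[:, 1] + 0.01 * X[:, 2] + 0.002 * X[:, 3] \
    + rng.normal(0, 0.05, 200)

# Scale inputs, then fit a small multilayer perceptron on the trial data.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

# Query the surrogate for a candidate process set-up before committing to trials.
candidate = np.array([[2.0, 5.0, 50.0, 450.0]])
print("predicted quality score:", model.predict(candidate)[0])
```

In practice the training set would be the shop-floor trial records themselves, and the trained surrogate would be queried over the feasible tool/process ranges rather than at a single candidate point.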
NASA Astrophysics Data System (ADS)
Yin, Shaohua; Lin, Guo; Li, Shiwei; Peng, Jinhui; Zhang, Libo
2016-09-01
Microwave heating has been applied to the drying of rare earth carbonates to improve drying efficiency and reduce energy consumption. The effects of power density, material thickness and drying time on the weight reduction (WR) are studied using response surface methodology (RSM). The results show that RSM is feasible for describing the relationship between the independent variables and weight reduction. Based on the analysis of variance (ANOVA), the model is in accordance with the experimental data. The optimum experimental conditions are a power density of 6 W/g, a material thickness of 15 mm and a drying time of 15 min, resulting in an experimental weight reduction of 73%. Comparative experiments show that microwave drying has the advantages of rapid dehydration and energy conservation. Particle analysis shows that the size distribution of rare earth carbonates after microwave drying is more uniform than that obtained in an oven. Based on these findings, microwave heating technology has important implications for energy saving and improved production efficiency in rare earth smelting enterprises, and it is a green heating process.
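As an illustration of the RSM workflow such studies follow, the sketch below fits a second-order (quadratic) response surface to synthetic drying runs in three coded factors (power density, thickness, time) and locates the optimum of the fitted surface; the data are fabricated placeholders, not the paper's measurements:

```python
# Sketch: fit a second-order response surface and find its optimum inside the
# coded factor region [-1, 1]^3. Runs and responses below are synthetic.
import numpy as np
from scipy.optimize import minimize

def quad_features(X):
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 3))                  # coded factor levels
y = 70 - 3*(X[:, 0]-0.5)**2 - 2*(X[:, 1]-0.2)**2 \
       - 2*(X[:, 2]-0.4)**2 + rng.normal(0, 0.3, 20)  # synthetic WR response

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)  # least-squares fit

def fitted(x):
    return float(quad_features(x.reshape(1, -1)) @ beta)

res = minimize(lambda x: -fitted(x), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("optimum (coded levels):", res.x, "predicted WR:", fitted(res.x))
```

The ANOVA step reported in such papers corresponds to testing the significance of the fitted coefficients in `beta` before trusting the located optimum.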
Quality control methodology for high-throughput protein-protein interaction screening.
Vazquez, Alexei; Rual, Jean-François; Venkatesan, Kavitha
2011-01-01
Protein-protein interactions are key to many aspects of the cell, including its cytoskeletal structure, the signaling processes in which it is involved, and its metabolism. Failure to form protein complexes or signaling cascades may sometimes translate into pathologic conditions such as cancer or neurodegenerative diseases. The set of all protein interactions between the proteins encoded by an organism constitutes its protein interaction network, representing a scaffold for biological function. Knowing the protein interaction network of an organism, combined with other sources of biological information, can unravel fundamental biological circuits and may help better understand the molecular basis of human diseases. The protein interaction network of an organism can be mapped by combining data obtained from both low-throughput screens, i.e., "one gene at a time" experiments, and high-throughput screens, i.e., screens designed to interrogate large sets of proteins at once. In either case, quality controls are required to deal with the inherently imperfect nature of experimental assays. In this chapter, we discuss experimental and statistical methodologies to quantify error rates in high-throughput protein-protein interaction screens.
Puri, Munish; Kaur, Aneet; Singh, Ram Sarup; Singh, Anubhav
2010-09-01
Response surface methodology was used to optimize the fermentation medium for enhancing naringinase production by Staphylococcus xylosus. The first step of this process involved the individual adjustment and optimization of various medium components at shake-flask level. Sources of carbon (sucrose) and nitrogen (sodium nitrate), as well as an inducer (naringin) and pH, were all found to be important factors significantly affecting naringinase production. In the second step, a 2^2 full factorial central composite design was applied to determine the optimal levels of each of the significant variables. A second-order polynomial was derived by multiple regression analysis on the experimental data. Using this methodology, the optimum values for the critical components were obtained as follows: sucrose, 10.0%; sodium nitrate, 10.0%; pH 5.6; biomass concentration, 1.58%; and naringin, 0.50% (w/v). Under optimal conditions, the experimental naringinase production was 8.45 U/mL. The determination coefficients (R^2) were 0.9908 and 0.9950 for naringinase activity and biomass production, respectively, indicating an adequate degree of reliability in the model.
NASA Astrophysics Data System (ADS)
Moreira, I. S.; Fernandes, P. A.; Ramos, M. J.
The definition and comprehension of hot spots in an interface is a subject of primary interest for a variety of fields, including structure-based drug design. Therefore, achieving an alanine mutagenesis computational approach that is at the same time accurate and predictive, capable of reproducing the experimental mutagenesis values, is a major challenge in the computational biochemistry field. Antibody/protein antigen complexes provide one of the best models for studying the protein-protein recognition process because they have three fundamental features: specificity, high complementary association, and a small epitope restricted to the diminutive complementarity-determining regions (CDRs), while the remainder of the antibody is largely invariant. Thus, we apply a computational mutational methodological approach to the study of the antigen-antibody complex formed between hen egg white lysozyme (HEL) and the antibody HyHEL-10. A critical evaluation is presented that focuses essentially on the limitations and advantages of different computational methods for hot spot determination, as well as of experimental versus computational methodological approaches.
Pandiyan, K.; Tiwari, Rameshwar; Singh, Surender; Nain, Pawan K. S.; Rana, Sarika; Arora, Anju; Singh, Shashi B.; Nain, Lata
2014-01-01
Parthenium sp. is a noxious weed which threatens the environment and biodiversity due to its rapid invasion. This lignocellulosic weed was investigated for its potential in biofuel production by subjecting it to mild alkali pretreatment followed by enzymatic saccharification, which resulted in a significant fermentable sugar yield (76.6%). Optimization of enzymatic hydrolysis variables such as temperature, pH, enzyme loading, and substrate loading was carried out using a central composite design (CCD) under response surface methodology (RSM) to achieve the maximum saccharification yield. Data obtained from RSM were validated using ANOVA. After the optimization process, a model was proposed with a predicted saccharification yield of 80.08% under optimum conditions, which was confirmed by the experimental value of 85.80%, illustrating good agreement between the predicted and experimental responses. The saccharification yield was enhanced by enzyme loading and reduced by temperature and substrate loading. This study reveals that under optimized conditions the sugar yield was significantly increased, exceeding earlier reports, and supports the use of Parthenium sp. biomass as a feedstock for bioethanol production. PMID:24900917
Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W
2009-01-01
Background: Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. The Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit the control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes.
Results: We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence.
Conclusion: We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
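A minimal sketch of the kind of semantic model such a workflow produces, using the rdflib library to record a putative relation together with a link to its textual evidence; the namespaces, term names and evidence sentence are hypothetical illustrations, not the AIDA toolkit's actual vocabulary:

```python
# Sketch: store a putative biological relation plus its provenance as RDF triples.
# Namespaces, terms and the evidence sentence are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

BIO = Namespace("http://example.org/bio#")    # proto-ontology: biological entities
PROV = Namespace("http://example.org/prov#")  # proto-ontology: text-mining provenance

g = Graph()
g.bind("bio", BIO)
g.bind("prov", PROV)

# Putative relation extracted from text: NF-kappa-B regulates p21.
relation = BIO.relation_001
g.add((relation, RDF.type, BIO.PutativeRegulation))
g.add((relation, BIO.subject, BIO.NF_kappa_B))
g.add((relation, BIO.object, BIO.p21))

# Link the relation to its evidence sentence and to the extraction run that found it.
g.add((relation, PROV.evidenceText,
       Literal("NF-kappa B was observed to upregulate p21 under nutrient stress.")))
g.add((relation, PROV.extractedBy, PROV.workflow_run_42))
g.add((BIO.NF_kappa_B, RDFS.label, Literal("NF-Kappa B")))

print(g.serialize(format="turtle"))
```

Because each relation node carries both the assertion and its provenance, models from separate experiments can later be merged into the "web of knowledge" the abstract describes without losing track of where each claim came from.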
Roopa, N; Chauhan, O P; Raju, P S; Das Gupta, D K; Singh, R K R; Bawa, A S
2014-10-01
An osmotic dehydration process protocol for carambola (Averrhoa carambola L.), an exotic star-shaped tropical fruit, was developed. The process was optimized using Response Surface Methodology (RSM) following a Central Composite Rotatable Design (CCRD). The experimental variables selected for the optimization were soak solution concentration (°Brix), soaking temperature (°C) and soaking time (min), with six experiments at the central point. The effect of the process variables on solid gain and water loss during the osmotic dehydration process was studied. The data obtained were analyzed employing multiple regression techniques to generate suitable mathematical models. Quadratic models were found to fit well (R^2 = 95.58-98.64%) in describing the effect of the variables on the responses studied. The optimized levels of the process variables were 70°Brix, 48 °C and 144 min for soak solution concentration, soaking temperature and soaking time, respectively. The predicted and experimental results at the optimized levels of the variables showed high correlation. The osmo-dehydrated product prepared at the optimized conditions showed a shelf-life of 10, 8 and 6 months at 5 °C, ambient (30 ± 2 °C) and 37 °C, respectively.
Jones, Jenny; Thomson, Patricia; Lauder, William; Leslie, Stephen J
2013-03-01
Reflexology is a complex massage intervention, based on the concept that specific areas of the feet (reflex points) correspond to individual internal organs within the body. Reflexologists trained in the popular Ingham reflexology method claim that massage to these points, using massage techniques unique to reflexology, stimulates an increase in blood supply to the corresponding organ. Reflexology researchers face two key methodological challenges that need to be addressed if a specific treatment-related hemodynamic effect is to be scientifically demonstrated. The first is the problem of inconsistent reflexology foot maps; the second is the issue of poor experimental controls. This article proposes a potential experimental solution that we believe can address both methodological challenges and in doing so, allow any specific hemodynamic treatment effect unique to reflexology to experimentally reveal itself.
Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan
2015-01-01
Numerous studies have reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ with respect to how closely these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise, or purified, representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data from 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf. Taken together, these two latent variables explained the same portion of variance in Gf as a single latent variable obtained by traditional CFA (β = .65), indicating that traditional CFA overestimates the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
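A deliberately simplified numerical analog of the fixed-links idea, on synthetic data: each person's scores across five demand levels are decomposed into a constant component (intercept) and a component that grows linearly with demands (slope), and only the slope is treated as the WMC-specific measure. This per-person regression stands in for the fixed loading patterns (1,1,1,1,1) and (1,2,3,4,5) of the actual CFA; it is an illustration, not the authors' model:

```python
# Sketch: separate constant vs. demand-varying processes in a repeated-measures design.
# Synthetic data; per-person OLS stands in for the fixed-links CFA loadings.
import numpy as np

rng = np.random.default_rng(7)
n, levels = 200, np.arange(1, 6)            # 200 participants, 5 demand levels

const = rng.normal(50, 10, n)               # demand-independent processes
wmc = rng.normal(2.0, 0.8, n)               # WMC-specific, demand-sensitive processes
scores = const[:, None] + wmc[:, None] * levels + rng.normal(0, 3, (n, len(levels)))
gf = 0.5 * wmc + 0.04 * const + rng.normal(0, 0.8, n)   # criterion ("Gf")

# Per-person regression on demand level: intercept ~ constant part, slope ~ WMC part.
design = np.column_stack([np.ones_like(levels), levels]).astype(float)
coef, *_ = np.linalg.lstsq(design, scores.T, rcond=None)
intercepts, slopes = coef

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print("Gf ~ constant component :", round(corr(gf, intercepts), 2))
print("Gf ~ WMC-specific slope :", round(corr(gf, slopes), 2))
```

The point mirrored here is that the two components can carry separate, non-redundant relations to the criterion, which a single common factor would blend together.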
Measurement of operator workload in an information processing task
NASA Technical Reports Server (NTRS)
Jenney, L. L.; Older, H. J.; Cameron, B. J.
1972-01-01
This was an experimental study to develop an improved methodology for measuring workload in an information processing task and to assess the effects of shift length and communication density (rate of information flow) on the ability to process and classify verbal messages. Each of twelve subjects was exposed to combinations of three shift lengths and two communication densities in a counterbalanced, repeated-measurements experimental design. Results indicated no systematic variation in task performance measures or in other dependent measures as a function of shift length or communication density. This is attributed to the absence of a secondary loading task, an insufficiently taxing work schedule, and the lack of psychological stress. Subjective magnitude estimates of workload showed fatigue (and, to a lesser degree, tension) to be a power function of shift length. Estimates of task difficulty and fatigue were initially lower but increased more sharply over time under low-density than under high-density conditions. An interpretation of the findings and recommendations for future research are included. This research has major implications for human workload problems in the information processing of air traffic control verbal data.
NASA Astrophysics Data System (ADS)
Cuetos, M. J.; Gómez, X.; Escapa, A.; Morán, A.
Various mixtures incorporating a simulated organic fraction of municipal solid wastes and blood from a poultry slaughterhouse were used as substrate in a dark fermentation process for the production of hydrogen. The individual and interactive effects of hydraulic retention time (HRT), solid content in the feed (%TS) and proportion of residues (%Blood) on bio-hydrogen production were studied in this work. A central composite design and response surface methodology were employed to determine the optimum conditions for the hydrogen production process. Experimental results were approximated to a second-order model with the principal effects of the three factors considered being statistically significant (P < 0.05). The production of hydrogen obtained from the experimental point at conditions close to best operability was 0.97 L Lr^-1 day^-1. Moreover, a desirability function was employed in order to optimize the process when a second, methanogenic, phase is coupled with it. In this last case, the optimum conditions lead to a reduction in the production of hydrogen when the optimization process involves the maximization of intermediary products.
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for the overall analysis of SOFC operational diagnostics and performance prediction. In this procedure, essential information for the fuel cell is first extracted by utilizing empirical polarization analysis in conjunction with experiments, and then refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a complete data set for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of planar cells without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.
Wang, Ya-Qi; Wu, Zhen-Feng; Ke, Gang; Yang, Ming
2014-12-31
An effective vacuum-assisted extraction (VAE) technique was proposed for the first time and applied to extract bioactive components from Andrographis paniculata. The process was carefully optimized by response surface methodology (RSM). Under the optimized experimental conditions, the best results were obtained using a boiling temperature of 65 °C, 50% ethanol concentration, 16 min of extraction time, one extraction cycle and a 12:1 liquid-solid ratio. Compared with conventional ultrasonic-assisted extraction and heat reflux extraction, the VAE technique gave shorter extraction times and remarkably higher extraction efficiency, which indicated that a certain degree of vacuum gave better penetration of the solvent into the pores and between the matrix particles, and enhanced the mass transfer process. The present results demonstrated that VAE is an efficient, simple and fast method for extracting bioactive components from A. paniculata, which shows great potential for becoming an alternative technique for industrial scale-up applications.
Matias-Guiu, Pau; Rodríguez-Bencomo, Juan José; Pérez-Correa, José R; López, Francisco
2018-04-15
Developing new distillation strategies can help the spirits industry improve quality, safety and process efficiency. Batch stills equipped with a packed column and an internal partial condenser constitute an innovative experimental system, allowing fast and flexible management of the rectification. In this study, the impact of four factors (heart-cut volume, head-cut volume, pH and cooling flow rate of the internal partial condenser during the head-cut fraction) on 18 major volatile compounds of Muscat spirits was optimized using response surface methodology and desirability function approaches. Results have shown that high rectification at the beginning of the heart-cut enhances the overall positive aroma compounds of the product, reducing off-flavor compounds. In contrast, the optimum levels of the heart-cut volume, head-cut volume and pH factors varied depending on the process goal. Finally, three optimal operational conditions (head off-flavors reduction, flowery terpenic enhancement and fruity ester enhancement) were evaluated by chemical and sensory analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heaney, Mike
Statistically designed experiments can save researchers time and money by reducing the number of necessary experimental trials while yielding more conclusive experimental results. Surprisingly, many researchers are still not aware of this efficient and effective experimental methodology. As reported in a 2013 article from Chemical & Engineering News, there has been a resurgence of this methodology in recent years (http://cen.acs.org/articles/91/i13/Design-Experiments-Makes-Comeback.html?h=2027056365). This presentation will provide a brief introduction to statistically designed experiments. The main advantages will be reviewed along with some basic concepts such as factorial and fractional factorial designs. The recommended sequential approach to experiments will be introduced, and finally a case study will be presented to demonstrate this methodology.
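To make the factorial concepts concrete, here is a small self-contained sketch that enumerates a two-level full factorial design for three factors and derives a half-fraction using the common defining relation C = AB; the factor names are placeholders:

```python
# Sketch: build a 2^3 full factorial design and its 2^(3-1) half-fraction.
# Factor names are illustrative; levels are coded -1 (low) and +1 (high).
from itertools import product

factors = ["temperature", "pH", "stir_rate"]

full = list(product([-1, +1], repeat=len(factors)))   # 8 runs: every combination
print("full 2^3 design:")
for run in full:
    print(dict(zip(factors, run)))

# Half-fraction via defining relation C = A*B: keep runs where the identity holds.
half = [run for run in full if run[2] == run[0] * run[1]]  # 4 runs
print("\n2^(3-1) fraction (C = AB):")
for run in half:
    print(dict(zip(factors, run)))
```

The half-fraction trades the ability to separate the C main effect from the AB interaction for a halved number of trials, which is exactly the kind of saving the presentation advocates.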
Optimization of the Alkaline Pretreatment of Rice Straw for Enhanced Methane Yield
Song, Zilin; Yang, Gaihe; Han, Xinhui; Feng, Yongzhong; Ren, Guangxin
2013-01-01
The lime pretreatment process for rice straw was optimized to enhance biodegradation performance and increase biogas yield. The optimization was implemented using response surface methodology (RSM) and a Box-Behnken experimental design. The effects on biodegradation, as well as the interactive effects of Ca(OH)2 concentration, pretreatment time, and inoculum amount on biogas improvement, were investigated. Rice straw compounds, such as lignin, cellulose, and hemicellulose, were significantly degraded with increasing Ca(OH)2 concentration. The optimal conditions for the use of pretreated rice straw in anaerobic digestion were 9.81% Ca(OH)2 (w/w TS), 5.89 d treatment time, and 45.12% inoculum content, which resulted in a methane yield of 225.3 mL/g VS. A determination coefficient (R^2) of 96% was obtained, indicating that the model used to predict the anaerobic digestion process shows a favorable fit with the experimental parameters. PMID:23509824
Gomariz, María; Blaya, Salvador; Acebal, Pablo; Carretero, Luis
2014-01-01
We theoretically and experimentally analyze the formation of thick Purple Membrane (PM) polyacrylamide (PA) films by means of optical spectroscopy, considering the absorption of bacteriorhodopsin and scattering. We have applied semiclassical quantum mechanical techniques for the calculation of absorption spectra by taking into account the Fano effects on the ground state of bacteriorhodopsin. A model of the formation of PM-polyacrylamide films has been proposed based on the growth of polymeric chains around purple membrane. Experimentally, the temporal evolution of the polymerization process of acrylamide has been studied as a function of the solution pH, obtaining a good correspondence with the proposed model. Thus, owing to the formation of an intermediate bacteriorhodopsin-doped nanogel, controlling the polymerization process provides an alternative methodology for the synthesis of bacteriorhodopsin-doped nanogels. PMID:25329473
Processing of pictorial food stimuli in patients with eating disorders--a systematic review.
Giel, Katrin Elisabeth; Teufel, Martin; Friederich, Hans-Christoph; Hautzinger, Martin; Enck, Paul; Zipfel, Stephan
2011-03-01
The processing of food cues in eating-disordered patients has recently been increasingly investigated. Current evidence from pictorial food stimuli studies is outlined here. PubMed and PsycINFO were searched for quantitative pictorial food stimuli studies investigating healthy controls and expert-diagnosed eating-disordered patients. Patients with eating disorders (ED) demonstrated cue reactivity to food stimuli. Results from functional imaging suggest sensory disengagement and higher emotional involvement, while self-reported data and facial EMG revealed that food pictures were perceived as less pleasurable. Different experimental paradigms have demonstrated an attentional bias for food cues in ED. Currently, psychophysiological data are widely inconclusive. Evidence suggests cue reactivity to food pictures in eating-disordered patients. However, the overall picture is inconclusive because methodological problems and the integration of findings from different experimental approaches pose a challenge to the research field. Copyright © 2009 Wiley Periodicals, Inc.
Application of Plackett-Burman experimental design in the development of muffin using adlay flour
NASA Astrophysics Data System (ADS)
Valmorida, J. S.; Castillo-Israel, K. A. T.
2018-01-01
Plackett-Burman experimental design was applied to identify significant formulation and process variables in the development of a muffin using adlay flour. Of the seven screened variables, the level of sugar, the level of butter and the baking temperature had the most significant influence on the product model in terms of physicochemical properties and sensory acceptability. The results further demonstrate the effectiveness of the Plackett-Burman design in choosing the best adlay variety for muffin production. Hence, the statistical method used in the study permits an efficient selection of the important variables needed in the development of a muffin from adlay, which can then be optimized using response surface methodology.
Saletti, Dominique
2017-01-01
Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505
Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H
2017-07-03
For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a culture medium of registered trademark and a base culture medium obtained as a result of screening analysis of different culture media used to grow the same strain according to the literature. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield, and it found a minimum number of culture medium ingredients without limiting process efficiency. The resultant culture medium showed an efficiency that compares favorably with the registered trademark medium at a 95% lower cost, and it reduced the number of ingredients in the base culture medium by 60% without limiting process efficiency. These results demonstrated that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Further study of the process variables for the optimized culture medium, and scaling up production at the optimal values, are desirable.
Mortar radiocarbon dating: preliminary accuracy evaluation of a novel methodology.
Marzaioli, Fabio; Lubritto, Carmine; Nonni, Sara; Passariello, Isabella; Capano, Manuela; Terrasi, Filippo
2011-03-15
Mortars represent a class of building and art materials that have been widespread at archeological sites from the Neolithic period on. After about 50 years of experimentation, the possibility of evaluating their absolute chronology by means of radiocarbon (¹⁴C) remains uncertain. With the use of a simplified mortar production process in the laboratory environment, this study shows the overall feasibility of a novel physical pretreatment for the isolation of the atmospheric ¹⁴CO₂ (i.e., binder) signal absorbed by mortars during their setting. This methodology is based on the assumption that an ultrasonic attack in the liquid phase isolates a suspension of binder carbonates from bulk mortars. Isotopic (¹³C and ¹⁴C), %C, X-ray diffractometry (XRD), and scanning electron microscopy (SEM) analyses were performed to characterize the proposed methodology. The applied protocol allows suppression of the fossil carbon (C) contamination originating from the incomplete burning of the limestone during quicklime production, providing unbiased dating for "laboratory" mortars produced at historically adopted burning temperatures.
Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon
2012-09-01
A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors: x₁, a formulation factor (the amount of magnesium stearate), and x₂, a processing factor (mixing time). Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g with 2.76 min mixing time for a 100 tablet batch, and 1.54 g with 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g with 7.99 min for a 100 tablet batch, and 1.54 g with 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence for obtaining optimum formulations, allowing for a systematic and reliable experimental design method.
NASA Astrophysics Data System (ADS)
Mallinson, Christopher F.
Beryllium is an important metal in the nuclear industry for which there are no suitable replacements. It undergoes localised corrosion at the sites of heterogeneities in the metal surface, and corrosion pits are associated with a range of second phase particles. To investigate the role of these particles in corrosion, a safe experimental protocol was established using an aluminium alloy as a corrosion material analogue. The 7075-T6 alloy had not previously been investigated using the experimental methodology used in this thesis. This work led to the development of the experimental methodology and safe working practices for handling beryllium. The range and composition of the second phase particles present in S-65 beryllium billet were identified using a combination of SEM, AES, EDX and WDX. Following the identification of a range of particles with various compositions, including the AlFeBe4 precipitate, which has previously been associated with corrosion, the locations of the particles were marked to enable their repeated study. Attention was focused on the microchemistry in the vicinity of second phase particles as a function of immersion time in pH 7, 0.1 M NaCl solution. The corrosion process associated with different particles was followed by repeatedly relocating the particles to perform analysis by means of SEM, AES and EDX. The use of traditional chlorinated vapour degreasing solvents on beryllium was investigated and compared to two modern commercially available cleaning solutions designed as drop-in replacements. This work expanded the range of solvents suitable for cleaning beryllium and validated the conclusions from previous thermodynamic modelling. Additionally, a new experimental methodology has been developed which enables the acquisition of chemical state information from the surface of micron-scale features. This was applied to sub-micron copper and iron particles, as well as a copper intermetallic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hills, Richard G.; Maniaci, David Charles; Naughton, Jonathan W.
2015-09-01
A Verification and Validation (V&V) framework is presented for the development and execution of coordinated modeling and experimental programs to assess the predictive capability of computational models of complex systems through focused, well-structured, and formal processes. The elements of the framework are based on established V&V methodology developed by various organizations, including the Department of Energy, the National Aeronautics and Space Administration, the American Institute of Aeronautics and Astronautics, and the American Society of Mechanical Engineers. Four main topics are addressed: 1) program planning based on expert elicitation of the modeling physics requirements, 2) experimental design for model assessment, 3) uncertainty quantification for experimental observations and computational model simulations, and 4) assessment of the model predictive capability. The audience for this document includes program planners, modelers, experimentalists, V&V specialists, and customers of the modeling results.
Integrated CFD modeling of gas turbine combustors
NASA Technical Reports Server (NTRS)
Fuller, E. J.; Smith, C. E.
1993-01-01
3D, curvilinear, multi-domain CFD analysis is becoming a valuable tool in gas turbine combustor design. Used as a supplement to experimental testing, CFD analysis can provide improved understanding of combustor aerodynamics and can be used to qualitatively assess new combustor designs. This paper discusses recent advancements in CFD combustor methodology, including the timely integration of the design (i.e. CAD) and analysis (i.e. CFD) processes. Allied Signal's F124 combustor was analyzed at maximum power conditions. The assumed turbulence level at the nozzle/swirler inlet was shown to be very important in the prediction of combustor exit temperatures. Predicted exit temperatures were compared to experimental rake data, and good overall agreement was seen. Exit radial temperature profiles were well predicted, while the predicted pattern factor was 25 percent higher than the harmonic-averaged experimental pattern factor.
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
2017-11-01
Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
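A minimal sketch of such pattern-based observables, computed from a synthetic cycling trajectory of social states; the angular-momentum definition below (cross product of position about the time-averaged state with the per-step velocity) is one common choice of measurement variable and is illustrative rather than the paper's exact estimator:

```python
# Sketch: compute angular momentum and speed observables from a game trajectory.
# The trajectory is synthetic; x[t] is a 2D projection of the population state.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(500)
theta = 0.1 * t
# Synthetic cycling around the strategy-simplex centroid, plus noise.
x = np.column_stack([1/3 + 0.1 * np.cos(theta), 1/3 + 0.1 * np.sin(theta)])
x += rng.normal(0, 0.005, x.shape)

c = x.mean(axis=0)                     # reference point: time-averaged state
r = x[:-1] - c                         # position relative to the mean
v = np.diff(x, axis=0)                 # per-step velocity

angular_momentum = r[:, 0] * v[:, 1] - r[:, 1] * v[:, 0]  # z-component of r x v
speed = np.linalg.norm(v, axis=1)

print("mean angular momentum:", angular_momentum.mean())  # nonzero => persistent cycling
print("mean speed:", speed.mean())
```

Comparing the distributions of such observables between experimental trajectories and trajectories generated by a candidate dynamics model is the kind of goodness-of-fit test the abstract describes.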
NASA Astrophysics Data System (ADS)
Molina-Viedma, Ángel J.; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.
2017-10-01
In recent years, many efforts have been made to exploit full-field measurement optical techniques for modal identification. Three-dimensional digital image correlation using high-speed cameras has been extensively employed for this purpose. Modal identification algorithms are applied to process the frequency response functions (FRF), which relate the displacement response of the structure to the excitation force. However, one of the most common tests for modal analysis involves the base motion excitation of a structural element instead of force excitation. In this case, the relationship between response and excitation is typically based on displacements, which are known as transmissibility functions. In this study, a methodology for experimental modal analysis using high-speed 3D digital image correlation and base motion excitation tests is proposed. In particular, a cantilever beam was excited from its base with a random signal, using a clamped edge join. Full-field transmissibility functions were obtained through the beam and converted into FRF for proper identification, considering a single degree-of-freedom theoretical conversion. Subsequently, modal identification was performed using a circle-fit approach. The proposed methodology facilitates the management of the typically large amounts of data points involved in the DIC measurement during modal identification. Moreover, it was possible to determine the natural frequencies, damping ratios and full-field mode shapes without requiring any additional tests. Finally, the results were experimentally validated by comparing them with those obtained by employing traditional accelerometers, analytical models and finite element method analyses. The comparison was performed by using the quantitative indicator modal assurance criterion. The results showed a high level of correspondence, consolidating the proposed experimental methodology.
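A brief sketch of the transmissibility-estimation step, on synthetic base and response displacement signals; it uses a standard H1-type spectral estimator (cross-spectrum over input auto-spectrum), which is one common way to form such functions and is not necessarily the authors' exact processing chain:

```python
# Sketch: estimate a displacement transmissibility function T(f) = X_out / X_base
# from time histories using Welch auto- and cross-spectra (H1-type estimator).
import numpy as np
from scipy.signal import cont2discrete, csd, lfilter, welch

fs = 2000.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
base = rng.normal(0, 1, t.size)                # random base-motion excitation

# Synthetic response: base motion through a lightly damped 50 Hz resonance.
wn, zeta = 2 * np.pi * 50, 0.02
bd, ad, _ = cont2discrete(([wn**2], [1, 2 * zeta * wn, wn**2]), 1 / fs)
response = lfilter(bd.ravel(), ad, base) + rng.normal(0, 0.01, t.size)

f, P_bb = welch(base, fs=fs, nperseg=4096)            # base auto-spectrum
_, P_br = csd(base, response, fs=fs, nperseg=4096)    # base-response cross-spectrum
T = P_br / P_bb                                       # transmissibility estimate

print(f"estimated resonance near {f[np.argmax(np.abs(T))]:.1f} Hz")
```

In the full-field setting this estimate is repeated at every DIC measurement point, which is why managing the volume of data is a central concern of the proposed methodology.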
Didier, Caroline; Forno, Guillermina; Etcheverrigaray, Marina; Kratje, Ricardo; Goicoechea, Héctor
2009-09-21
The optimal blends of six compounds that should be present in culture media used for recombinant protein production were determined by means of artificial neural networks (ANN) coupled with a crossed mixture experimental design. This combination constitutes a novel approach to developing a medium for cultivating genetically engineered mammalian cells. The compounds were collected in two mixtures of three elements each, and the experimental space was determined by a crossed mixture design. Empirical data from 51 experimental units were used in a multiresponse analysis to train artificial neural networks satisfying different requirements, in order to define two new culture media (Medium 1 and Medium 2) to be used in a continuous biopharmaceutical production process. These media were tested in a bioreactor to produce a recombinant protein in CHO cells. Remarkably, for both predicted media all responses satisfied the predefined goals pursued during the analysis, except in the case of the specific growth rate (μ) observed for Medium 1. ANN analysis proved to be a suitable methodology when dealing with complex experimental designs, as frequently occur in the optimization of production processes in the biotechnology area. The present work is a new example of the use of ANN for the resolution of a complex, real-life system, successfully employed in the context of a biopharmaceutical production process.
Experimental Methods in Reduced-gravity Soldering Research
NASA Technical Reports Server (NTRS)
Pettegrew, Richard D.; Struk, Peter M.; Watson, John K.; Haylett, Daniel R.
2002-01-01
The National Center for Microgravity Research, NASA Glenn Research Center, and NASA Johnson Space Center are conducting an experimental program to explore the influence of reduced gravity environments on the soldering process. An improved understanding of the effects of the acceleration environment is important to application of soldering during current and future human space missions. Solder joint characteristics that are being considered include solder fillet geometry, porosity, and microstructural features. Both through-hole and surface mounted devices are being investigated. This paper focuses on the experimental methodology employed in this project and the results of macroscopic sample examination. The specific soldering process, sample configurations, materials, and equipment were selected to be consistent with those currently on-orbit. Other apparatus was incorporated to meet requirements imposed by operation onboard NASA's KC-135 research aircraft and instrumentation was provided to monitor both the atmospheric and acceleration environments. The contingent of test operators was selected to include both highly skilled technicians and less skilled individuals to provide a population cross-section that would be representative of the skill mix that might be encountered in space mission crews.
Overview of the Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Chwalowski, Pawel; Florance, Jennifer P.; Wieseman, Carol D.; Schuster, David M.; Perry, Raleigh B.
2013-01-01
The Aeroelastic Prediction Workshop brought together an international community of computational fluid dynamicists as a step in defining the state of the art in computational aeroelasticity. This workshop's technical focus was the prediction of unsteady pressure distributions resulting from forced motion, benchmarking the results first using unforced system data. The most challenging aspects of the physics were identified as capturing oscillatory shock behavior, dynamic shock-induced separated flow and tunnel wall boundary layer influences. The majority of the participants used unsteady Reynolds-averaged Navier-Stokes codes. These codes were exercised at transonic Mach numbers for three configurations, and comparisons were made with existing experimental data. Substantial variations were observed among the computational solutions, as well as differences relative to the experimental data. Contributing issues to these differences include wall effects and wall modeling, non-standardized convergence criteria, inclusion of static aeroelastic deflection, methodology for oscillatory solutions, and post-processing methods. Contributing issues pertaining principally to the experimental data sets include the position of the model relative to the tunnel wall, splitter plate size, wind tunnel expansion slot configuration, spacing and location of pressure instrumentation, and data processing methods.
A methodology for double patterning compliant split and design
NASA Astrophysics Data System (ADS)
Wiaux, Vincent; Verhaegen, Staf; Iwamoto, Fumio; Maenhoudt, Mireille; Matsuda, Takashi; Postnikov, Sergei; Vandenberghe, Geert
2008-11-01
Double patterning allows the use of water immersion lithography at its maximum numerical aperture, NA = 1.35, to be further extended. Splitting design layers and recombining them through double patterning (DP) enables an effective resolution enhancement. Single polygons may need to be split (cut) depending on the pattern density and its 2D content. The split polygons recombine at so-called 'stitching points'. These stitching points may affect yield due to their sensitivity to process variations. We describe a methodology to ensure robust double patterning by identifying proper split and design guidelines. Using simulations and experimental data, we discuss in particular the metal1 first interconnect layers of random LOGIC and DRAM applications at 45 nm half-pitch (hp) and 32 nm hp, where DP may become the only timely patterning solution.
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1993-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures which consequently necessitates the need for thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented utilizing a general purpose finite element code. In addition, an optimization procedure was developed and implemented to determine critical experimental parameters to optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.
Fractional-order TV-L2 model for image denoising
NASA Astrophysics Data System (ADS)
Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu
2013-10-01
This paper proposes a new fractional-order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of algorithm implementation, the ill-posed inverse problem, regularization parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
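To illustrate the MM decomposition the abstract describes, here is a minimal 1D sketch for the classical (integer-order) TV-L2 model, minimizing 0.5||y - x||² + λ||Dx||₁; each MM step majorizes the TV term by a weighted quadratic, giving a linear system solved by conjugate gradients. A first-difference matrix stands in for the paper's fractional-order operator:

```python
# Sketch: majorization-minimization for 1D TV-L2 denoising,
#   min_x 0.5*||y - x||^2 + lam*||D x||_1,
# where each MM step solves (I + lam * D^T W_k D) x = y by conjugate gradients,
# with W_k = diag(1 / |D x_k|) from the quadratic majorizer of |t|.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n, lam, eps = 256, 2.0, 1e-8

# Noisy piecewise-constant signal.
clean = np.concatenate([np.zeros(n // 2), np.ones(n - n // 2)])
y = clean + rng.normal(0, 0.2, n)

D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n)).tocsr()
I = sp.identity(n, format="csr")

x = y.copy()
for _ in range(50):                       # MM outer iterations
    w = 1.0 / (np.abs(D @ x) + eps)       # majorizer weights 1/|Dx_k|
    A = I + lam * (D.T @ sp.diags(w) @ D)
    x, _ = cg(A, y, x0=x)                 # inner linear solve via conjugate gradients

print("RMSE noisy   :", np.sqrt(np.mean((y - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((x - clean) ** 2)))
```

Swapping `D` for a fractional-order difference operator changes only the matrix construction; the MM outer loop and conjugate-gradient inner solve are unchanged, which is the modularity the paper exploits.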
In situ study of live specimens in an environmental scanning electron microscope.
Tihlaříková, Eva; Neděla, Vilém; Shiojiri, Makoto
2013-08-01
In this paper we introduce a new methodology for the observation of living biological samples in an environmental scanning electron microscope (ESEM). The methodology is based on an unconventional initiation procedure for ESEM chamber pumping, free from purge-flood cycles, and on the ability to control thermodynamic processes close to the sample. The gradual and gentle change of the working environment from air to water vapor enables the study not only of living samples in dynamic in situ experiments and their manifestations of life (sample walking) but also of their experimentally stimulated physiological reactions. Moreover, Monte Carlo simulations of primary electron beam energy losses in a water layer on the sample surface were studied; consequently, the influence of the water thickness on radiation, temperature, or chemical damage of the sample was considered.
Perez, Pablo A; Hintelman, Holger; Quiroz, Waldo; Bravo, Manuel A
2017-11-01
In the present work, the efficiency of a distillation process for extracting monomethylmercury (MMHg) from soil samples was studied and optimized using an experimental design methodology. The influence of soil composition on MMHg extraction was evaluated by testing four soil samples with different geochemical characteristics. Optimization suggested that the acid concentration and the duration of the distillation process were most significant, and the most favorable conditions, established as a compromise for the studied soils, were determined to be a 70 min distillation using a 0.2 M acid. The corresponding limits of detection (LOD) and quantification (LOQ) were 0.21 and 0.7 pg absolute, respectively. The optimized methodology was applied with satisfactory results to soil samples and was compared to a reference methodology based on isotopic dilution analysis followed by gas chromatography-inductively coupled plasma mass spectrometry (IDA-GC-ICP-MS). Using the optimized conditions, recoveries ranged from 82 to 98%, an increase of 9-34% relative to the previously used standard operating procedure. Finally, the validated methodology was applied to quantify MMHg in soils collected from different sites impacted by coal-fired power plants in the north-central zone of Chile, measuring MMHg concentrations ranging from 0.091 to 2.8 ng g^-1. These data are, to the best of our knowledge, the first MMHg measurements reported for Chile. Copyright © 2017 Elsevier Ltd. All rights reserved.
2011-09-01
a quality evaluation with limited data, a model-based assessment must be...that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range
Methodological standards in single-case experimental design: Raising the bar.
Ganz, Jennifer B; Ayres, Kevin M
2018-04-12
Single-case experimental designs (SCEDs), or small-n experimental research, are frequently implemented to assess approaches to improving outcomes for people with disabilities, particularly those with low-incidence disabilities, such as some developmental disabilities. SCED has become increasingly accepted as a research design. As this literature base is needed to determine which interventions are evidence-based practices, the acceptance of SCED has resulted in increased critiques with regard to methodological quality. Recent trends include recommendations from a number of expert scholars and institutions. The purpose of this article is to summarize the recent history of methodological quality considerations, synthesize the recommendations found in the SCED literature, and provide recommendations to researchers designing SCEDs with regard to essential and aspirational standards for methodological quality. Conclusions include imploring SCED researchers to increase the quality of their experiments, with particular consideration given to the applied nature of SCED research to be published in Research in Developmental Disabilities and beyond. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hafizzal, Y.; Nurulhuda, A.; Izman, S.; Khadir, AZA
2017-08-01
POM copolymer bond breaking leads to changes that depend on the processing methodology and the material geometry. This paper presents the effect of these factors on material integrity for different geometries and processing methodologies. Thermo-analytical methods with reference materials were used to examine thermomechanical degradation, while Thermogravimetric Analysis (TGA) was used to judge the thermal stability of each sample from its major decomposition temperature. Differential Scanning Calorimetry (DSC) investigations were performed to identify the thermal behaviour and thermal properties of the materials. The results show that plastic gear geometries injection molded on a higher-tonnage machine are more thermally stable than resin geometries. Plastic gear geometries injection molded on a low-tonnage machine showed major decomposition temperatures at 313.61 °C, 305.76 °C and 307.91 °C, while those from the higher-tonnage processing method fully decomposed at 890 °C, significantly higher than for the low-tonnage condition and for the resin geometry specimens at 398 °C. The chemical compositions of plastic gear geometries injection molded at higher and lower tonnage are compared based on their moisture and volatile organic compound (VOC) content, polymeric material content and the absence of filler. Moisture and VOC contents are reported for the resin geometries (0.120%) and for the higher-tonnage injection-molded plastic gear geometries (1.264%). The higher-tonnage injection-molded plastic gear geometry is less sensitive to thermo-mechanical degradation owing to polymer chain length and molecular weight, which govern material properties such as tensile strength, flexural strength, fatigue strength and creep resistance.
Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation
Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem
2017-01-01
Purpose: Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing.
Methods: First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images.
Results: The EMCCD camera model was validated against experimentally acquired images, and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm of distance. These results agree well with the experimentally measured spatial resolution of 30-40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image.
Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348
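A compact sketch of the kind of stochastic photosensor model described, assuming a standard EMCCD noise chain (Poisson shot noise, gamma-distributed electron-multiplication gain, Gaussian read noise, quantization); all parameter values are illustrative assumptions, not those of the actual camera:

```python
# Sketch: stochastic EMCCD photosensor model applied to an expected-photon image.
# Noise chain: Poisson shot noise -> gamma-distributed electron-multiplying gain
# -> Gaussian read noise -> ADC quantization. All parameter values are assumptions.
import numpy as np

def emccd_measure(photons, qe=0.9, em_gain=300.0, read_noise_e=40.0,
                  adu_per_e=0.02, offset=100.0, rng=None):
    rng = rng or np.random.default_rng()
    electrons = rng.poisson(qe * photons)                  # photoelectrons
    # EM register: n input electrons give ~ Gamma(shape=n, scale=em_gain) output.
    shape = np.maximum(electrons, 1e-12)                   # guard against shape=0
    amplified = np.where(electrons > 0, rng.gamma(shape, em_gain), 0.0)
    signal = amplified + rng.normal(0.0, read_noise_e, photons.shape)  # read noise
    return np.clip(np.round(signal * adu_per_e + offset), 0, 2**16 - 1)  # 16-bit ADC

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[32, 32] = 5.0                                          # dim point source
frame = emccd_measure(img, rng=rng)
print("blank-region std (ADU):", frame[:16, :16].std().round(2))
print("source pixel (ADU):", frame[32, 32])
```

Validating such a model against the standard deviation of an experimentally acquired blank frame, as the paper does, constrains the gain and read-noise parameters before the model is trusted for resolution and sensitivity predictions.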
Modelling of the rotational moulding process for the manufacture of plastic products
NASA Astrophysics Data System (ADS)
Khoon, Lim Kok
The present research is mainly focused on two-dimensional non-linear thermal modelling, numerical procedures and software development for the rotational moulding process. The RotoFEM program is developed for the rotational moulding process using finite element procedures. The program is written in the MATLAB environment. The research includes the development of new slip flow models, a phase change study, a warpage study and process analyses. A new slip flow methodology is derived for the heat transfer problem inside the enclosed rotating mould during the heating stage of the tumbling powder. The methodology enables the discontinuous powder to be modelled by the continuous-based finite element method. The Galerkin Finite Element Method is incorporated with the lumped-parameter system and the coincident node technique in finding the multi-interacting heat transfer solutions inside the mould. Two slip flow models arise from the slip flow methodology: SDM (single-layered deposition method) and MDM (multi-layered deposition method). These two models differ in their thermal description of the internal air energy balance and in the computational procedure for the deposition of the molten polymer. The SDM model assumes the macroscopic deposition of the molten polymer bed exists only between the bed and the inner mould surface. On the other hand, the MDM model allows the layer-by-layer deposition of the molten polymer bed macroscopically. In addition, the latter has a more detailed heat transfer description for the internal air inside the mould during the powder heating cycle. In the slip flow models, a semi-implicit approach has been introduced to solve for the final quasi-equilibrium internal air temperature during the heating cycle. A notable feature of this slip flow methodology is that the slip flow models are capable of producing good results for the internal air during the powder heating stage, without consideration of the powder movement and changing powder mass. This makes the modelling of the rotational moulding process much simpler. In the simulation of the cooling stage in rotational moulding, the thermal aspects of the inherent warpage problem and the external-internal cooling method have been explored. The predicted internal air temperature profiles have shown that the less apparent crystallization plateau observed in the experimental internal air temperature in practice could be related to warpage. Various phase change algorithms have been reviewed and compared, and the most convenient and effective algorithm is proposed. The dimensional analysis method, expressed by means of dimensionless combinations of physical, boundary, and time variables, is utilized to study the dependence of the key thermal parameters on the processing times of rotational moulding. Lastly, the predicted results have been compared with experimental results from two independent external sources. The predicted temperature profiles of the internal air, oven times and other process conditions are consistent with the available data.
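A minimal sketch of the semi-implicit idea applied to a lumped internal-air energy balance, assuming a single-node model with a prescribed wall temperature (the RotoFEM program itself is a MATLAB finite element code; `hA_over_mcp` is a hypothetical lumped coefficient h*A/(m*cp)):

```python
import numpy as np

def internal_air_history(t_wall, dt, hA_over_mcp, T0=20.0):
    """Semi-implicit (backward Euler) update of a lumped internal-air
    energy balance  dT/dt = k * (T_wall(t) - T),  k = h*A / (m*cp).

    Treating T implicitly keeps the update stable for any time step."""
    T = np.empty_like(t_wall)
    T_prev = T0
    for i, Tw in enumerate(t_wall):
        T_prev = (T_prev + dt * hA_over_mcp * Tw) / (1.0 + dt * hA_over_mcp)
        T[i] = T_prev
    return T

# Example: mould wall ramped from 20 C to 250 C over the heating cycle
t = np.arange(0, 600, 1.0)
wall = 20 + (250 - 20) * np.minimum(t / 400, 1.0)
air = internal_air_history(wall, dt=1.0, hA_over_mcp=0.01)
```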
NASA Technical Reports Server (NTRS)
Celaya, Jose; Kulkarni, Chetan; Biswas, Gautam; Saha, Sankalita; Goebel, Kai
2011-01-01
A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability, and given their criticality in electronics subsystems they are a good candidate for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validating applications of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
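A hedged sketch of how such a Kalman filter prognostic could look for a scalar ESR-based health state; the exponential-growth degradation model and all parameters below are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def kalman_rul(esr_meas, dt, alpha, q=1e-6, r=1e-3, esr_fail=2.0):
    """Scalar Kalman filter tracking capacitor ESR with an assumed
    empirical degradation model
        ESR[k+1] = (1 + alpha*dt) * ESR[k] + w,   w ~ N(0, q).
    RUL = time until the filtered state crosses the failure threshold."""
    x, p = esr_meas[0], 1.0
    for z in esr_meas[1:]:
        # predict
        x, p = (1 + alpha * dt) * x, (1 + alpha * dt) ** 2 * p + q
        # update with measurement z (measurement noise variance r)
        k = p / (p + r)
        x, p = x + k * (z - x), (1 - k) * p
    # propagate the degradation model until the failure threshold
    steps = 0
    while x < esr_fail and steps < 100_000:
        x *= (1 + alpha * dt)
        steps += 1
    return steps * dt

print(kalman_rul([1.00, 1.01, 1.03, 1.04, 1.06], dt=10.0, alpha=0.002))
```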
Sadowski, Lukasz
2013-01-01
In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick 750 mm × 750 mm reinforced concrete slab specimens were investigated. Potential E corr and concrete resistivity ρ in each point of the applied grid were measured. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures.
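To illustrate how the two measurement grids might be fused, the sketch below classifies grid points using commonly quoted guide values (the half-cell potential bands loosely follow ASTM C876 and the resistivity cut-offs are rules of thumb; they are not the thresholds calibrated in the paper):

```python
import numpy as np

def corrosion_class(potential_mv, resistivity_kohm_cm):
    """Combine half-cell potential (mV vs Cu/CuSO4) and concrete
    resistivity maps into a per-point corrosion risk class."""
    p = np.asarray(potential_mv)
    r = np.asarray(resistivity_kohm_cm)
    risk = np.full(p.shape, "uncertain", dtype=object)
    risk[(p > -200) & (r > 20)] = "low"    # both methods indicate low risk
    risk[(p < -350) & (r < 10)] = "high"   # both methods indicate high risk
    return risk

# One reading per grid point, as in the slab specimens
grid_E = np.array([[-150, -240], [-380, -360]])
grid_rho = np.array([[25, 15], [8, 9]])
print(corrosion_class(grid_E, grid_rho))
```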
Chaita, Eliza; Gikas, Evagelos; Aligiannis, Nektarios
2017-03-01
In drug discovery, bioassay-guided isolation is a well-established procedure, and still the basic approach for the discovery of natural products with desired biological properties. However, in these procedures, the most laborious and time-consuming step is the isolation of the bioactive constituents. A prior identification of the compounds that contribute to the demonstrated activity of the fractions would enable the selection of proper chromatographic techniques and lead to targeted isolation. Objective - The development of an integrated HPTLC-based methodology for the rapid tracing of bioactive compounds during bioassay-guided processes, using multivariate statistics. Materials and Methods - The methanol extract of Morus alba was fractionated employing CPC. Subsequently, fractions were assayed for tyrosinase inhibition and analyzed with HPTLC. The PLS-R algorithm was applied in order to correlate the analytical data with the biological response of the fractions and identify the compounds with the highest contribution. Two methodologies were developed for the generation of the dataset: one based on manual peak picking and the second based on chromatogram binning. Results and Discussion - Both methodologies afforded comparable results and were able to trace the bioactive constituents (e.g. oxyresveratrol, trans-dihydromorin, 2,4,3'-trihydroxydihydrostilbene). The suggested compounds were compared in terms of Rf values and UV spectra with compounds isolated from M. alba using a typical bioassay-guided process. Chemometric tools supported the development of a novel HPTLC-based methodology for the tracing of tyrosinase inhibitors in M. alba extract. All steps of the experimental procedure implemented techniques that afford essential key elements for application in high-throughput screening procedures for drug discovery purposes. Copyright © 2017 John Wiley & Sons, Ltd.
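A minimal sketch of the chromatogram-binning variant, using synthetic densitometric data; scikit-learn's `PLSRegression` stands in for whatever PLS-R implementation the authors used:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: binned HPTLC chromatograms (fractions x Rf bins); y: % tyrosinase inhibition
rng = np.random.default_rng(1)
X = rng.random((20, 150))                                  # placeholder bins
y = X[:, 40] * 60 + X[:, 90] * 30 + rng.normal(0, 2, 20)   # synthetic response

pls = PLSRegression(n_components=3)
pls.fit(X, y)

# Bins with the largest |coefficient| point to the Rf zones driving activity
top_bins = np.argsort(np.abs(pls.coef_.ravel()))[::-1][:5]
print("candidate bioactive Rf bins:", top_bins)
```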
Expanding the Extent of a UMLS Semantic Type via Group Neighborhood Auditing
Chen, Yan; Gu, Huanying; Perl, Yehoshua; Halper, Michael; Xu, Junchuan
2009-01-01
Objective: Each Unified Medical Language System (UMLS) concept is assigned one or more semantic types (ST). A dynamic methodology for aiding an auditor in finding concepts that are missing the assignment of a given ST, S, is presented. Design: The first part of the methodology exploits the previously introduced Refined Semantic Network and accompanying refined semantic types (RST) to help narrow the search space for offending concepts. The auditing is focused in a neighborhood surrounding the extent of an RST, T (of S), called an envelope, consisting of parents and children of concepts in the extent. The audit moves outward as long as missing assignments are discovered. In the second part, concepts not reached previously are processed and reassigned T as needed during the processing of S's other RSTs. The set of such concepts is expanded in a similar way to that in the first part. Measurements: The number of errors discovered is reported. To measure the methodology's efficiency, “error hit rates” (i.e., errors found in concepts examined) are computed. Results: The methodology was applied to three STs: Experimental Model of Disease (EMD), Environmental Effect of Humans, and Governmental or Regulatory Activity. The EMD experienced the most drastic change. For its RST “EMD ∩ Neoplastic Process” (RST “EMD”) with only 33 (31) original concepts, 915 (134) concepts were found by the first (second) part to be missing the EMD assignment. Changes to the other two STs were smaller. Conclusion: The results show that the proposed auditing methodology can help to effectively and efficiently identify concepts lacking the assignment of a particular semantic type. PMID:19567802
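A schematic of the envelope-expansion part of the methodology, assuming hypothetical `parents`/`children` adjacency maps (dicts of sets over concept IDs) and an auditor-confirmed `has_type` check:

```python
from collections import deque

def audit_envelope(extent, parents, children, has_type):
    """Expand outward from the extent of a refined semantic type: visit
    parents/children of extent concepts, flag those missing the ST
    assignment, and keep moving outward only while errors are found."""
    flagged, seen = set(), set(extent)
    frontier = deque(extent)
    while frontier:
        c = frontier.popleft()
        for nb in parents.get(c, set()) | children.get(c, set()):
            if nb in seen:
                continue
            seen.add(nb)
            if not has_type(nb):       # auditor confirms a missing assignment
                flagged.add(nb)
                frontier.append(nb)    # each error pushes the envelope outward
    return flagged

parents = {"c1": {"c2"}, "c2": {"c3"}}
children = {"c1": set(), "c2": {"c1"}, "c3": {"c2"}}
print(audit_envelope({"c1"}, parents, children, has_type=lambda c: c == "c3"))
```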
Ashengroph, Morahem; Ababaf, Sajad
2014-12-01
Microbial caffeine removal is a green solution for the treatment of caffeinated products and agro-industrial effluents. We directed this investigation at optimizing a bio-decaffeination process with growing cultures of Pseudomonas pseudoalcaligenes through the Taguchi methodology, a structured statistical approach that can lower variation in a process through Design of Experiments (DOE). Five parameters, i.e. initial fructose, tryptone, Zn(+2) ion and caffeine concentrations, and also incubation time, were selected, and an L16 orthogonal array was applied to design experiments with four 4-level factors and one 3-level factor (4(4) × 1(3)). Data analysis was performed using the statistical analysis of variance (ANOVA) method. Furthermore, the optimal conditions were determined by combining the optimal levels of the significant factors and verified by a confirming experiment. Measurement of residual caffeine concentration in the reaction mixture was performed using high-performance liquid chromatography (HPLC). Use of the Taguchi methodology for optimization of design parameters resulted in about 86.14% reduction of caffeine in 48 h of incubation when 5 g/l fructose, 3 mM Zn(+2) ion and 4.5 g/l of caffeine were present in the designed media. Under the optimized conditions, the yield of degradation of caffeine (4.5 g/l) by the native strain of Pseudomonas pseudoalcaligenes TPS8 increased from 15.8% to 86.14%, which is 5.4-fold higher than the normal yield. According to the experimental results, the Taguchi methodology provides a powerful tool for identifying the favorable parameters for caffeine removal using strain TPS8, which suggests that the approach also has potential application with similar strains to improve the yield of caffeine removal from caffeine-containing solutions.
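For illustration, the main-effects step of an orthogonal-array analysis reduces to averaging the response at each level of each factor; the toy 4-run design below is a stand-in for the paper's 16-run L16 array:

```python
import numpy as np

def main_effects(design, response):
    """Mean response at each level of each factor in an orthogonal-array
    experiment; the best mean per factor suggests its optimal level."""
    design = np.asarray(design)    # rows = runs, cols = coded factor levels
    effects = {}
    for j in range(design.shape[1]):
        levels = np.unique(design[:, j])
        effects[j] = {lv: response[design[:, j] == lv].mean() for lv in levels}
    return effects

# Toy 4-run, 2-factor example (a real L16 has 16 runs and more factors)
d = [[1, 1], [1, 2], [2, 1], [2, 2]]
y = np.array([55.0, 70.0, 60.0, 86.0])     # e.g., % caffeine removal
print(main_effects(d, y))
```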
DOT National Transportation Integrated Search
1998-04-01
A methodology is presented for the prediction of delamination growth in laminated structures. The methodology is aimed at overcoming computational difficulties in the determination of energy release rate and mode mix. It also addresses the issue that...
Measurement Techniques for Respiratory Tract Deposition of Airborne Nanoparticles: A Critical Review
Möller, Winfried; Pagels, Joakim H.; Kreyling, Wolfgang G.; Swietlicki, Erik; Schmid, Otmar
2014-01-01
Determination of the respiratory tract deposition of airborne particles is critical for risk assessment of air pollution, inhaled drug delivery, and understanding of respiratory disease. With the advent of nanotechnology, there has been an increasing interest in the measurement of pulmonary deposition of nanoparticles because of their unique properties in inhalation toxicology and medicine. Over the last century, around 50 studies have presented experimental data on lung deposition of nanoparticles (typical diameter ≤ 100 nm, but here ≤ 300 nm). These data show a considerable variability, partly due to differences in the applied methodologies. In this study, we review the experimental techniques for measuring respiratory tract deposition of nano-sized particles, analyze critical experimental design aspects causing measurement uncertainties, and suggest methodologies for future studies. It is shown that, although particle detection techniques have developed with time, the overall methodology in respiratory tract deposition experiments has not seen similar progress. Available experience from previous research has often not been incorporated, and some methodological design aspects that were overlooked in 30–70% of all studies may have biased the experimental data. This has contributed to a significant uncertainty on the absolute value of the lung deposition fraction of nanoparticles. We estimate the impact of the design aspects on obtained data, discuss solutions to minimize errors, and highlight gaps in the available experimental set of data. PMID:24151837
Comparison of Methodologies of Activation Barrier Measurements for Reactions with Deactivation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Zhenhua; Yan, Binhang; Zhang, Li
In this work, methodologies of activation barrier measurements for reactions with deactivation were theoretically analyzed. Reforming of ethane with CO2 was introduced as an example of a reaction with deactivation to experimentally evaluate these methodologies. Both the theoretical and experimental results showed that, due to catalyst deactivation, the conventional method would inevitably lead to a much lower activation barrier than the intrinsic value, even though heat and mass transport limitations were excluded. An optimal method was identified in order to provide a reliable and efficient activation barrier measurement for reactions with deactivation.
Selka, F; Nicolau, S; Agnus, V; Bessaid, A; Marescaux, J; Soler, L
2015-03-01
In minimally invasive surgery, the tracking of deformable tissue is a critical component for image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color, ...). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images is still difficult and not standardized. This is mainly due to the lack of ground truth for real data. Moreover, in order to avoid supplementary techniques to remove outliers, no quantitative work has been undertaken to evaluate the benefit of a pre-process based on image filtering, which can improve feature tracking robustness. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a trick based on forward-backward tracking that provides artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework to propose a strategy to identify the best combinations from a set of detector, tracker and pre-process algorithms, according to the live intra-operative data. Experimental results have been obtained on in vivo datasets and show that pre-processing can have a strong influence on tracking performance and that our strategy for finding the best combinations is relevant for a reasonable computation cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
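A minimal sketch of the forward-backward trick using OpenCV's pyramidal Lucas-Kanade tracker; the 1-pixel tolerance is an illustrative assumption, and `img0`/`img1` are expected to be grayscale frames:

```python
import cv2
import numpy as np

def fb_tracking_error(img0, img1, pts0):
    """Forward-backward validation: track points img0 -> img1, then back
    img1 -> img0; the round-trip distance serves as artificial ground
    truth for filtering unreliable tracks."""
    pts0 = pts0.astype(np.float32).reshape(-1, 1, 2)
    pts1, st1, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
    pts0b, st2, _ = cv2.calcOpticalFlowPyrLK(img1, img0, pts1, None)
    fb_err = np.linalg.norm(pts0 - pts0b, axis=2).ravel()
    ok = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < 1.0)  # px tol.
    return pts1.reshape(-1, 2), ok, fb_err
```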
Gonzalez, Edurne; Tollan, Christopher; Chuvilin, Andrey; Barandiaran, Maria J; Paulis, Maria
2012-08-01
A new methodology for quantitative characterization of the coalescence process of waterborne polymer dispersion (latex) particles by environmental scanning electron microscopy (ESEM) is proposed. The experimental setup has been developed to provide reproducible latex monolayer depositions, optimized contrast of the latex particles, and a reliable readout of the sample temperature. Quantification of the coalescence process under dry conditions has been performed by image processing based on evaluation of the image autocorrelation function. As a proof of concept the coalescence of two latexes with known and differing glass transition temperatures has been measured. It has been shown that a reproducibility of better than 1.5 °C can be obtained for the measurement of the coalescence temperature.
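One way to quantify coalescence from the image autocorrelation function, sketched below under the assumption that the half-height width of the central ACF peak is a usable contrast/sharpness proxy (the paper's exact metric may differ):

```python
import numpy as np

def autocorr_width(img):
    """Summarize the 2-D autocorrelation of an ESEM frame; the width of
    the central peak grows as particle contrast is lost during film
    formation, so tracking it versus temperature locates coalescence."""
    f = img - img.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real
    acf = np.fft.fftshift(acf) / acf.max()
    cy, cx = np.array(acf.shape) // 2
    profile = acf[cy, cx:]              # central row, positive lags only
    return np.argmax(profile < 0.5)     # first lag below half height (0 if none)
```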
Science and Television Commercials: Adding Relevance to the Research Methodology Course.
ERIC Educational Resources Information Center
Solomon, Paul R.
1979-01-01
Contends that research methodology courses can be relevant to issues outside of psychology and describes a method which relates the course to consumer problems. Students use experimental methodology to test claims made in television commercials advertising deodorant, bathroom tissues, and soft drinks. (KC)
How scientific experiments are designed: Problem solving in a knowledge-rich, error-rich environment
NASA Astrophysics Data System (ADS)
Baker, Lisa M.
While theory formation and the relation between theory and data has been investigated in many studies of scientific reasoning, researchers have focused less attention on reasoning about experimental design, even though the experimental design process makes up a large part of real-world scientists' reasoning. The goal of this thesis was to provide a cognitive account of the scientific experimental design process by analyzing experimental design as problem-solving behavior (Newell & Simon, 1972). Three specific issues were addressed: the effect of potential error on experimental design strategies, the role of prior knowledge in experimental design, and the effect of characteristics of the space of alternate hypotheses on alternate hypothesis testing. A two-pronged in vivo/in vitro research methodology was employed, in which transcripts of real-world scientific laboratory meetings were analyzed as well as undergraduate science and non-science majors' design of biology experiments in the psychology laboratory. It was found that scientists use a specific strategy to deal with the possibility of error in experimental findings: they include "known" control conditions in their experimental designs both to determine whether error is occurring and to identify sources of error. The known controls strategy had not been reported in earlier studies with science-like tasks, in which participants' responses to error had consisted of replicating experiments and discounting results. With respect to prior knowledge: scientists and undergraduate students drew on several types of knowledge when designing experiments, including theoretical knowledge, domain-specific knowledge of experimental techniques, and domain-general knowledge of experimental design strategies. Finally, undergraduate science students generated and tested alternates to their favored hypotheses when the space of alternate hypotheses was constrained and searchable. This result may help explain findings of confirmation bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.
Cultural Heritage Reconstruction from Historical Photographs and Videos
NASA Astrophysics Data System (ADS)
Condorelli, F.; Rinaudo, F.
2018-05-01
Historical archives save invaluable treasures and play a critical role in the conservation of Cultural Heritage. Old photographs and videos, which have survived over time and are stored in these archives, preserve traces of architecture and urban transformation and, in many cases, are the only evidence of buildings that no longer exist. They are a precious source of enormous informative potential for Cultural Heritage documentation. Thanks to photogrammetric techniques it is possible to extract metric information from these sources, useful for 3D virtual reconstructions of monuments and historic buildings. This paper explores the ways to search for, classify and group historical data by considering their possible use in metric documentation, and aims to provide an overview of the criticalities and open issues of the methodologies that could be used to process these data. A practical example is described and presented as a case study. The video "Torino 1928", an old movie dating from the 1930s, was processed to reconstruct the temporary pavilions of the "Exposition" held in Turin in 1928. Despite the initial concerns relating to processing this kind of data, the experimental methodology used in this research has allowed results of acceptable quality to be achieved.
NASA Astrophysics Data System (ADS)
Pal, Alok Ranjan; Saha, Diganta; Dash, Niladri Sekhar; Pal, Antara
2018-05-01
An attempt is made in this paper to report how a supervised methodology has been adopted, with necessary modifications, for the task of word sense disambiguation in Bangla. At the initial stage, the Naïve Bayes probabilistic model, adopted as a baseline method for sense classification, yields a moderate result of 81% accuracy when applied to a database of the 19 (nineteen) most frequently used ambiguous Bangla words. On an experimental basis, the baseline method is modified with two extensions: (a) inclusion of a lemmatization process in the system, and (b) bootstrapping of the operational process. As a result, the accuracy of the method improves slightly, to 84%, which is a positive signal for the whole disambiguation process as it opens scope for further modification of the existing method for better results. The data sets used for this experiment include the Bangla POS-tagged corpus obtained from the Indian Languages Corpora Initiative, and the Bangla WordNet, an online sense inventory developed at the Indian Statistical Institute, Kolkata. The paper also reports on the challenges and pitfalls of the work, which have been closely observed and addressed to achieve the expected level of accuracy.
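A toy sketch of the baseline sense classifier, with English placeholder contexts standing in for the Bangla ILCI corpus; the paper's two extensions would act on the tokens (lemmatization) and the training loop (bootstrapping):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Sense-tagged contexts for one ambiguous word (invented illustrative data)
contexts = ["river bank water flow", "bank loan interest money",
            "boat moored river bank", "deposit money bank account"]
senses = ["GEO", "FIN", "GEO", "FIN"]

wsd = make_pipeline(CountVectorizer(), MultinomialNB())
wsd.fit(contexts, senses)                   # Naive Bayes baseline
print(wsd.predict(["bank of the river"]))   # -> ['GEO']
```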
Design and implementation of the tree-based fuzzy logic controller.
Liu, B D; Huang, C Y
1997-01-01
In this paper, a tree-based approach is proposed to design the fuzzy logic controller. Based on the proposed methodology, the fuzzy logic controller has the following merits: the fuzzy control rules can be extracted automatically from the input-output data of the system, and the extraction can be done in one pass; owing to the fuzzy tree inference structure, the search spaces of the fuzzy inference process are largely reduced; the operation of the inference process can be simplified to a one-dimensional matrix operation because of the fuzzy tree approach; and the controller has regular and modular properties, so it is easy to implement in hardware. Furthermore, the proposed fuzzy tree approach has been applied to design a color reproduction system, verifying the proposed methodology. The color reproduction system is mainly used to obtain a color image through the printer that is identical to the original one. In addition to the software simulation, an FPGA is used to implement the prototype hardware system for real-time application. Experimental results show that the effect of color correction is quite good and that the prototype hardware system can operate correctly at a 30 MHz clock rate.
Ligand diffusion in proteins via enhanced sampling in molecular dynamics.
Rydzewski, J; Nowak, W
2017-12-01
Computational simulations in biophysics describe the dynamics and functions of biological macromolecules at the atomic level. Among motions particularly important for life are the transport processes in heterogeneous media. The process of ligand diffusion inside proteins is an example of a complex rare event that can be modeled using molecular dynamics simulations. The study of physical interactions between a ligand and its biological target is of paramount importance for the design of novel drugs and enzymes. Unfortunately, the process of ligand diffusion is difficult to study experimentally. The need for identifying the ligand egress pathways and understanding how ligands migrate through protein tunnels has spurred the development of several methodological approaches to this problem. The complex topology of protein channels and the transient nature of the ligand passage pose difficulties in the modeling of the ligand entry/escape pathways by canonical molecular dynamics simulations. In this review, we report a methodology involving a reconstruction of the ligand diffusion reaction coordinates and the free-energy profiles along these reaction coordinates using enhanced sampling of conformational space. We illustrate the above methods on several ligand-protein systems, including cytochromes and G-protein-coupled receptors. The methods are general and may be adopted to other transport processes in living matter. Copyright © 2017 Elsevier B.V. All rights reserved.
Sanchez-Segado, Sergio; Monti, Tamara; Katrib, Juliano; Kingman, Samuel; Dodds, Chris; Jha, Animesh
2017-12-21
Current methodologies for the extraction of tantalum and niobium pose a serious threat to human beings and the environment due to the use of hydrofluoric acid (HF). Niobium and tantalum metal powders and pentoxides are widely used in energy-efficient devices and components. However, the current processing methods for niobium and tantalum metals and oxides are energy inefficient. This dichotomy between the use of these materials for energy applications and their inefficient processing is the main motivation for exploring a new methodology for the extraction of these two oxides, investigating the microwave absorption properties of the reaction products formed during the alkali roasting of niobium-tantalum bearing minerals with sodium bicarbonate. The experimental findings from dielectric measurements at elevated temperatures demonstrate an exponential increase in the values of the dielectric properties as a result of the formation of NaNbO3-NaTaO3 solid solutions at temperatures above 700 °C. The investigation of the evolution of the dielectric properties during the roasting reaction is a key feature in underpinning the mechanism for designing a new microwave-assisted high-temperature process for the selective separation of niobium and tantalum oxides from the remainder of the mineral crystalline lattice.
Visualization of Underfill Flow in Ball Grid Array (BGA) using Particle Image Velocimetry (PIV)
NASA Astrophysics Data System (ADS)
Ng, Fei Chong; Abas, Aizat; Abustan, Ismail; Remy Rozainy, Z. Mohd; Abdullah, MZ; Jamaludin, Ali b.; Kon, Sharon Melissa
2018-05-01
This paper presents an experimental methodology using particle image velocimetry (PIV) to study the underfill process of a ball grid array (BGA) chip package. PIV is a non-intrusive approach to visualize the flow behavior of underfill across the solder ball array. BGA models of three different configurations – perimeter, middle empty and full array – were studied in the current research. Through the PIV experimental work, the underfill velocity distribution and vector fields for each BGA model were successfully obtained. It was found that the perimeter configuration has the shortest filling time, resulting in a higher underfill velocity. It is therefore concluded that the flow behavior of underfill in BGA can be characterized thoroughly with the aid of PIV.
Biostatistical analysis of quantitative immunofluorescence microscopy images.
Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C
2016-12-01
Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
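A hedged sketch of the kind of simulation-based power analysis the authors advocate, assuming a two-group comparison with images nested within biological samples and analysis performed on per-sample means (all variance components are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power_nested(n_samples, n_images, effect, sd_sample=1.0, sd_image=0.5,
                 n_sim=2000, alpha=0.05):
    """Power for a two-group design with images nested in samples:
    averaging images per sample avoids pseudo-replication."""
    hits = 0
    for _ in range(n_sim):
        def group(mu):
            sample_mu = rng.normal(mu, sd_sample, n_samples)
            imgs = rng.normal(sample_mu[:, None], sd_image,
                              (n_samples, n_images))
            return imgs.mean(axis=1)     # one value per biological sample
        _, p = stats.ttest_ind(group(0.0), group(effect))
        hits += p < alpha
    return hits / n_sim

print(power_nested(n_samples=8, n_images=5, effect=1.2))
```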
NASA Astrophysics Data System (ADS)
Claeys, M.; Sinou, J.-J.; Lambelin, J.-P.; Todeschini, R.
2016-08-01
The nonlinear vibration response of an assembly with friction joints - named "Harmony" - is studied both experimentally and numerically. The experimental results exhibit a softening effect and an increase of dissipation with excitation level. Modal interactions due to friction are also evidenced. The numerical methodology proposed groups together well-known structural dynamic methods, including finite elements, substructuring, Harmonic Balance and continuation methods. On the one hand, the application of this methodology proves its capacity to treat a complex system where several friction movements occur at the same time. On the other hand, the main contribution of this paper is the experimental and numerical study of evidence of modal interactions due to friction. The simulation methodology succeeds in reproducing complex form of dynamic behavior such as these modal interactions.
Critical Thinking: Comparing Instructional Methodologies in a Senior-Year Learning Community
ERIC Educational Resources Information Center
Zelizer, Deborah A.
2013-01-01
This quasi-experimental, nonequivalent control group study compared the impact of Ennis's (1989) mixed instructional methodology to the immersion methodology on the development of critical thinking in a multicultural, undergraduate senior-year learning community. A convenience sample of students (n =171) were selected from four sections of a…
Laboratory investigations of earthquake dynamics
NASA Astrophysics Data System (ADS)
Xia, Kaiwen
In this thesis, earthquake dynamics are investigated through controlled laboratory experiments designed to mimic natural earthquake scenarios. The earthquake dynamic rupturing process itself is a complicated phenomenon, involving dynamic friction, wave propagation, and heat production. Because controlled experiments can produce results without the assumptions needed in theoretical and numerical analysis, the experimental method is advantageous over theoretical and numerical methods. Our laboratory fault is composed of carefully cut photoelastic polymer plates (Homalite-100, Polycarbonate) held together by uniaxial compression. As a unique unit of the experimental design, a controlled exploding wire technique provides the triggering mechanism of laboratory earthquakes. Three important components of real earthquakes (i.e., pre-existing fault, tectonic loading, and triggering mechanism) correspond to and are simulated by frictional contact, uniaxial compression, and the exploding wire technique. Dynamic rupturing processes are visualized using the photoelastic method and are recorded via a high-speed camera. Our experimental methodology, which is full-field, in situ, and non-intrusive, has better control and diagnostic capacity than other existing experimental methods. Using this experimental approach, we have investigated several problems: the dynamics of earthquake faulting along homogeneous faults separating identical materials, earthquake faulting along inhomogeneous faults separating materials with different wave speeds, and earthquake faulting along faults with a finite low-wave-speed fault core. We have observed supershear ruptures, subRayleigh-to-supershear rupture transition, crack-like to pulse-like rupture transition, the self-healing (Heaton) pulse, and rupture directionality.
Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana
2015-09-28
The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from the gel electrophoresis assays. The central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as concentrations of dsDNA and B-PEI as well as the initial pH of solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic-scale, a molecular dynamic simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between macromolecules, stabilizing the polyplex structure.
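A minimal sketch of fitting a quadratic response-surface model to central-composite-style data; the design points and efficiencies below are synthetic, not the paper's measurements:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Coded CCD-style design: columns = [dsDNA], [B-PEI], pH (invented data)
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [-1.68, 0, 0], [1.68, 0, 0], [0, 0, 0], [0, 0, 0]])
y = np.array([55, 68, 72, 88, 60, 75, 79, 95, 50, 85, 90, 91.0])  # binding %

# Quadratic RSM: all linear, interaction and squared terms
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, y)
print(rsm.predict([[0.5, 0.8, 0.2]]))   # predicted efficiency at a new setting
```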
Bayesian analysis of experimental epidemics of foot-and-mouth disease.
Streftaris, George; Gibson, Gavin J.
2004-01-01
We investigate the transmission dynamics of a certain type of foot-and-mouth disease (FMD) virus under experimental conditions. Previous analyses of experimental data from FMD outbreaks in non-homogeneously mixing populations of sheep have suggested a decline in viraemic level through serial passage of the virus, but these do not take into account possible variation in the length of the chain of viral transmission for each animal, which is implicit in the non-observed transmission process. We consider a susceptible-exposed-infectious-removed non-Markovian compartmental model for partially observed epidemic processes, and we employ powerful methodology (Markov chain Monte Carlo) for statistical inference, to address epidemiological issues under a Bayesian framework that accounts for all available information and associated uncertainty in a coherent approach. The analysis allows us to investigate the posterior distribution of the hidden transmission history of the epidemic, and thus to determine the effect of the length of the infection chain on the recorded viraemic levels, based on the posterior distribution of a p-value. Parameter estimates of the epidemiological characteristics of the disease are also obtained. The results reveal a possible decline in viraemia in one of the two experimental outbreaks. Our model also suggests that individual infectivity is related to the level of viraemia. PMID:15306359
Optimizing Force Deployment and Force Structure for the Rapid Deployment Force
1984-03-01
[Table-of-contents residue: Analysis, 97; Experimental Design, 99; IX. Use of a Flexible Response Surface, 102; Selection of a ...] ...sets were designed using a programming methodology, where the required system ... is input and the model optimizes the number, type, cargo ... "to obtain new computer outputs" (Ref 38:23). The methodology can be used with any decision model, linear or nonlinear. Experimental Design: Since the ...
Assessing Consequential Scenarios in a Complex Operational Environment Using Agent Based Simulation
2017-03-16
[Table-of-contents residue: ... (RWISE), 93; 5.1.5 Conflict Modeling, Planning, and Outcomes Experimentation Program (COMPOEX), 94; 5.1.6 Joint Non-Kinetic Effects Model (JNEM)/Athena ...; 4.3.8 Types and Attributes of Agent-Based Model Design Patterns] ...experimental design and testing. Using the aforementioned ABM flowchart design methodology ...speed, or flexibility during tactical US Army wargaming. The report considers methodologies to improve analysis of the human domain, identifies...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nardes, Alexandre M.; Ahn, Sungmo; Rourke, Devin
2016-12-01
We introduce a simple methodology to integrate prefabricated nanostructured electrodes in solution-processed organic photovoltaic (OPV) devices. The tailored 'photonic electrode' nanostructure is used for light management in the device and for hole collection. This approach opens up new possibilities for designing photonically active structures that can enhance the absorption of sub-bandgap photons in the active layer. We discuss the design, fabrication and characterization of photonic electrodes, and the methodology for integrating them into OPV devices using a simple lamination technique. We demonstrate theoretically and experimentally that OPV devices using photonic electrodes show a factor of ca. 5 enhancement in external quantum efficiency (EQE) in the near infrared region. We use simulations to trace this observed efficiency enhancement to surface plasmon polariton modes in the nanostructure.
Hong, Chen; Haiyun, Wu
2010-07-01
Central-composite design (CCD) and response surface methodology (RSM) were used to optimize the parameters of volatile fatty acid (VFA) production from food wastes and dewatered excess sludge in a semi-continuous process. The effects of four variables (food wastes composition in the co-substrate of food wastes and excess sludge, hydraulic retention time (HRT), organic loading rate (OLR), and pH) on acidogenesis were evaluated individually and interactively. The optimum condition derived via RSM was food wastes composition, 88.03%; HRT, 8.92 days; OLR, 8.31 g VSS/ld; and pH 6.99. The experimental VFA concentration was 29,099 mg/l under this optimum condition, which was well in agreement with the predicted value of 28,000 mg/l. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
NASA Astrophysics Data System (ADS)
Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.
2005-04-01
In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs.
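The decomposition into a continuous distribution of time constants can be posed as a non-negative least-squares problem; the sketch below assumes a step response of the form sum_j a_j (1 - exp(-t/tau_j)) over a fixed grid of candidate TCs, with synthetic data:

```python
import numpy as np
from scipy.optimize import nnls

def tc_spectrum(t, fgc, taus):
    """Recover a non-negative distribution of exponential time constants
    from a fractional-gas-content transient after a pressure step."""
    A = 1.0 - np.exp(-t[:, None] / taus[None, :])   # basis responses
    weights, _ = nnls(A, fgc)                       # non-negative LS fit
    return weights

t = np.linspace(0.05, 10, 100)                      # seconds after the step
taus = np.logspace(-1.5, 1, 40)                     # candidate TC grid
fgc = 0.7 * (1 - np.exp(-t / 0.4)) + 0.3 * (1 - np.exp(-t / 3.0))
w = tc_spectrum(t, fgc, taus)
print(taus[w > 0.05])        # dominant time constants recovered (~0.4 s, ~3 s)
```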
PepsNMR for 1H NMR metabolomic data pre-processing.
Martin, Manon; Legat, Benoît; Leenders, Justine; Vanwinsberghe, Julien; Rousseau, Réjane; Boulanger, Bruno; Eilers, Paul H C; De Tullio, Pascal; Govaerts, Bernadette
2018-08-17
In the analysis of biological samples, control over experimental design and data acquisition procedures alone cannot ensure well-conditioned 1 H NMR spectra with maximal information recovery for data analysis. A third major element affects the accuracy and robustness of results: the data pre-processing/pre-treatment, to which not enough attention is usually devoted, in particular in metabolomic studies. The usual approach is to use proprietary software provided by the analytical instruments' manufacturers to conduct the entire pre-processing strategy. This widespread practice has a number of advantages, such as a user-friendly interface with graphical facilities, but it involves non-negligible drawbacks: a lack of methodological information and automation, a dependency on subjective human choices, only standard processing possibilities and an absence of objective quality criteria to evaluate pre-processing quality. To meet these needs, this paper introduces PepsNMR, an R package dedicated to the whole processing chain prior to multivariate data analysis, including, among other tools, solvent signal suppression, internal calibration, phase, baseline and misalignment corrections, bucketing and normalisation. Methodological aspects are discussed and the package is compared to the gold standard procedure with two metabolomic case studies. The use of PepsNMR on these data shows better information recovery and predictive power based on objective and quantitative quality criteria. Other key assets of the package are workflow processing speed, reproducibility, reporting and flexibility, graphical outputs and documented routines. Copyright © 2018 Elsevier B.V. All rights reserved.
How to run an effective journal club: a systematic review.
Deenadayalan, Y; Grimmer-Somers, K; Prior, M; Kumar, S
2008-10-01
Health-based journal clubs have been in place for over 100 years. Participants meet regularly to critique research articles, to improve their understanding of research design, statistics and critical appraisal. However, there is no standard process of conducting an effective journal club. We conducted a systematic literature review to identify core processes of a successful health journal club. We searched a range of library databases using established keywords. All research designs were initially considered to establish the body of evidence. Experimental or comparative papers were then critically appraised for methodological quality and information was extracted on effective journal club processes. We identified 101 articles, of which 21 comprised the body of evidence. Of these, 12 described journal club effectiveness. Methodological quality was moderate. The papers described many processes of effective journal clubs. Over 80% papers reported that journal club intervention was effective in improving knowledge and critical appraisal skills. Few papers reported on the psychometric properties of their outcome instruments. No paper reported on the translation of evidence from journal club into clinical practice. Characteristics of successful journal clubs included regular and anticipated meetings, mandatory attendance, clear long- and short-term purpose, appropriate meeting timing and incentives, a trained journal club leader to choose papers and lead discussion, circulating papers prior to the meeting, using the internet for wider dissemination and data storage, using established critical appraisal processes and summarizing journal club findings.
The components of working memory updating: an experimental decomposition and individual differences.
Ecker, Ullrich K H; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E H
2010-01-01
Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major component processes: retrieval, transformation, and substitution. We report a large-scale experiment that instantiated all possible combinations of those 3 component processes. Results show that the 3 components make independent contributions to updating performance. We additionally present structural equation models that link WMU task performance and working memory capacity (WMC) measures. These feature the methodological advancement of estimating interindividual covariation and experimental effects on mean updating measures simultaneously. The modeling results imply that WMC is a strong predictor of WMU skills in general, although some component processes-in particular, substitution skills-were independent of WMC. Hence, the reported predictive power of WMU measures may rely largely on common WM functions also measured in typical WMC tasks, although substitution skills may make an independent contribution to predicting higher mental abilities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Chatwin, Christopher R.; McDonald, Donald W.; Scott, Brian F.
1989-07-01
The absence of an applications-led design philosophy has compromised both the development of laser source technology and its effective implementation into manufacturing technology in particular. For example, CO2 lasers are still incapable of processing classes of refractory and non-ferrous metals. Whilst the scope of this paper is restricted to high power CO2 lasers, the design methodology reported herein is applicable to source technology in general, which, when exploited, will effect an expansion of applications. The CO2 laser operational envelope should not only be expanded to incorporate high damage threshold materials but also offer a greater degree of controllability. By a combination of modelling and experimentation, the requisite beam characteristics at the workpiece were determined and then utilised to design the Laser Manufacturing System. The design of sub-system elements was achieved by a combination of experimentation and simulation, which benefited from a comprehensive set of software tools. By linking these tools the physical processes in the laser - electron processes in the plasma, the history of photons in the resonator, etc. - can be related, in a detailed model, to the heating mechanisms in the workpiece.
Probabilistic Simulation of Multi-Scale Composite Behavior
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2012-01-01
A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented into the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in composite typical laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatters predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and to simulate the probabilistic design of a composite redome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the redome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The old reference denotes that nothing fundamental has been done since that time.
Design and application of process control charting methodologies to gamma irradiation practices
NASA Astrophysics Data System (ADS)
Saylor, M. C.; Connaghan, J. P.; Yeadon, S. C.; Herring, C. M.; Jordan, T. M.
2002-12-01
The relationship between the contract irradiation facility and the customer has historically been based upon a "PASS/FAIL" approach with little or no quality metrics used to gage the control of the irradiation process. Application of process control charts, designed in coordination with mathematical simulation of routine radiation processing, can provide a basis for understanding irradiation events. By using tools that simulate the physical rules associated with the irradiation process, end-users can explore process-related boundaries and the effects of process changes. Consequently, the relationship between contractor and customer can evolve based on the derived knowledge. The resulting level of mutual understanding of the irradiation process and its resultant control benefits both the customer and contract operation, and provides necessary assurances to regulators. In this article we examine the complementary nature of theoretical (point kernel) and experimental (dosimetric) process evaluation, and the resulting by-product of improved understanding, communication and control generated through the implementation of effective process control charting strategies.
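As an illustration of the charting side (the dose values below are invented), an individuals/moving-range chart is one common way to put control limits on routine dosimetry:

```python
import numpy as np

def individuals_chart(doses_kgy):
    """Individuals/moving-range (I-MR) control chart limits for routine
    dosimeter readings; 2.66 = 3/d2 for a moving range of size 2."""
    x = np.asarray(doses_kgy, dtype=float)
    mr = np.abs(np.diff(x)).mean()        # average moving range
    center = x.mean()
    return center - 2.66 * mr, center, center + 2.66 * mr

doses = [25.1, 25.4, 24.8, 25.6, 25.0, 24.7, 25.3]   # kGy, illustrative
lcl, cl, ucl = individuals_chart(doses)
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")
```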
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in the setting of less-than-optimal values. Hence, there arose a necessity to determine the optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of signal-to-noise ratio (S/N ratio) is applied, and ultimately optimum values of the process control parameters are obtained: a pushing zone temperature of 166 °C, a dimmer speed of 08 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and the results proved to be in synchronization with the main experimental findings; the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
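The larger-the-better S/N ratio used in such Taguchi studies is a one-line computation; the replicate pressures below are illustrative, not the paper's data:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for a larger-the-better response
    (e.g., withstanding pressure): S/N = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Replicate withstanding pressures (MPa) for one factor combination
print(sn_larger_is_better([0.98, 1.00, 1.02]))
```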
Hadash, Yuval; Plonsker, Reut; Vago, David R; Bernstein, Amit
2016-07-01
We propose that Experiential Self-Referential Processing (ESRP) - the cognitive association of present moment subjective experience (e.g., sensations, emotions, thoughts) with the self - underlies various forms of maladaptation. We theorize that mindfulness contributes to mental health by engendering Experiential Selfless Processing (ESLP) - processing present moment subjective experience without self-referentiality. To help advance understanding of these processes we aimed to develop an implicit, behavioral measure of ESRP and ESLP of fear, to experimentally validate this measure, and to test the relations between ESRP and ESLP of fear, mindfulness, and key psychobehavioral processes underlying (mal)adaptation. One hundred and thirty-eight adults were randomized to 1 of 3 conditions: control, meta-awareness with identification, or meta-awareness with disidentification. We then measured ESRP and ESLP of fear by experimentally eliciting a subjective experience of fear, while concurrently measuring each participant's cognitive association between him/herself and fear by means of a Single Category Implicit Association Test; we refer to this measurement as the Single Experience & Self Implicit Association Test (SES-IAT). We found preliminary experimental and correlational evidence suggesting the fear SES-IAT measures ESLP of fear and 2 forms of ESRP: identification with fear and negative self-referential evaluation of fear. Furthermore, we found evidence that ESRP and ESLP are associated with meta-awareness (a core process of mindfulness), as well as key psychobehavioral processes underlying (mal)adaptation. These findings indicate that the cognitive association of self with experience (i.e., ESRP) may be an important substrate of the sense of self, and an important determinant of mental health. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Jiang, Yuhui; Shang, Yixuan; Yu, Shuyao; Liu, Jianguo
2018-01-01
Hexachlorobenzene (HCB) contamination of soils remains a significant environmental challenge all over the world. Reductive stabilization is a developing technology that can decompose HCB through a dechlorination process. A nanometallic Al/CaO (n-Al/CaO) dispersion mixture was developed utilizing ball-milling technology in this study. The dechlorination efficiency of HCB in contaminated soils by the n-Al/CaO grinding treatment was evaluated. Response surface methodology (RSM) was employed to investigate the effects of three variables (soil moisture content, n-Al/CaO dosage and grinding time) and the interactions between these variables under the Box-Behnken design (BBD). A high regression coefficient value (R2 = 0.9807) and low p value (<0.0001) of the quadratic model indicated that the model was accurate in predicting the experimental results. The optimal soil moisture content, n-Al/CaO dosage, and grinding time were found to be 7% (m/m), 17.7% (m/m), and 24 h, respectively, within the experimental ranges and levels. Under optimal conditions, the dechlorination efficiency was 80%. The intermediate product analysis indicated that dechlorination proceeded by stepwise loss of chlorine atoms. The main pathway observed within 24 h was HCB → pentachlorobenzene (PeCB) → 1,2,3,4-tetrachlorobenzene (TeCB) and 1,2,4,5-TeCB. The results indicated that a moderate soil moisture content was crucial for the hydrodechlorination of HCB. A probable mechanism was proposed wherein water acted as a hydrogen donor and promoted the hydrodechlorination process. n-Al/CaO is potentially an environmentally friendly and cost-effective option for decontamination of HCB-contaminated soils. PMID:29702570
Arulmathi, P; Elangovan, G
2016-11-01
Ethanol production from sugarcane molasses yields a large volume of highly colored spent wash as effluent. This color is imparted by the recalcitrant melanoidin pigment produced in the Maillard reaction. In the present work, decolourization of melanoidin was carried out using activated carbon prepared from pepper stem (Piper nigrum). The interaction effects between parameters were studied by response surface methodology using a central composite design, and maximum decolourization of 75% was obtained at pH 7.5 and a melanoidin concentration of 32.5 mg l-1 with 1.63 g 100 ml-1 of adsorbent for 2 hr 75 min. Artificial neural networks were also used to optimize the process parameters, giving 74% decolourization for the same parameters. The Langmuir and Freundlich isotherms were applied to describe the biosorption equilibrium; the process was best represented by the Langmuir isotherm with a correlation coefficient of 0.94. Pseudo-first-order and pseudo-second-order models were applied to describe the biosorption kinetics, and the pseudo-second-order model fitted the experimental data best. The estimated enthalpy change (ΔH) and entropy change (ΔS) of adsorption were 32.195 kJ mol-1 and 115.44 J mol-1 K-1, which indicates that the adsorption of melanoidin was an endothermic process. Continuous adsorption studies were conducted under the optimized conditions. The breakthrough curve analysis was performed using the experimental data obtained from continuous adsorption; continuous column studies gave a breakthrough at 182 min and 176 ml. It was concluded that a column packed with Piper nigrum based activated carbon can be used to remove color from distillery spent wash.
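As a companion to the isotherm analysis described above, here is a minimal Python sketch of fitting the Langmuir model to equilibrium data with scipy; the Ce/qe values, initial guesses, and units are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g); illustrative only.
Ce = np.array([5.0, 10.0, 15.0, 20.0, 30.0])
qe = np.array([12.1, 18.4, 22.0, 24.1, 26.5])

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.1])
resid = qe - langmuir(Ce, qmax, KL)
r2 = 1.0 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)
print(f"qmax = {qmax:.2f} mg/g, KL = {KL:.3f} L/mg, R^2 = {r2:.3f}")
```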
Jorge, Aguirre Joya; Heliodoro, De La Garza Toledo; Alejandro, Zugasti Cruz; Ruth, Belmares Cerda; Noé, Aguilar Cristóbal
2013-06-01
To extract, quantify, and evaluate the phenolic content in Opuntia ficus-indica skin for its antioxidant capacity with three different methods (ABTS, DPPH, and lipid oxidation) and to optimize the extraction conditions (time, temperature and ethanol concentration) in a reflux system. The extraction process was done using a reflux system. A San Cristobal II experimental design with three variables and three levels was used. The variables evaluated were time of extraction (h), concentration of ethanol (%, v/v) and temperature (°C). The extraction process was optimized using response surface methodology. It was observed that at higher temperature more phenolic compounds were extracted, but the antioxidant capacity decreased. The optimum conditions for phenolic compound extraction and antioxidant capacity across the three methods were as follows: 45% ethanol, 80 °C and 2 hours of extraction. The values obtained here are slightly higher than those previously reported. It can be concluded that the by-products of Opuntia ficus-indica represent a good source of natural antioxidants with possible applications in the food, cosmetic or drug industries.
Online intelligent controllers for an enzyme recovery plant: design methodology and performance.
Leite, M S; Fujiki, T L; Silva, F V; Fileti, A M F
2010-12-27
This paper focuses on the development of intelligent controllers for use in a process of enzyme recovery from pineapple rind. The proteolytic enzyme bromelain (EC 3.4.22.4) is precipitated with alcohol at low temperature in a fed-batch jacketed tank. Temperature control is crucial to avoid irreversible protein denaturation. Fuzzy or neural controllers offer a way of implementing solutions that cover dynamic and nonlinear processes. The design methodology and a comparative study on the performance of fuzzy-PI, neurofuzzy, and neural network intelligent controllers are presented. To tune the fuzzy PI Mamdani controller, various universes of discourse, rule bases, and membership function support sets were tested. A neurofuzzy inference system (ANFIS), based on Takagi-Sugeno rules, and a model predictive controller, based on neural modeling, were developed and tested as well. Using a Fieldbus network architecture, a coolant variable speed pump was driven by the controllers. The experimental results show the effectiveness of fuzzy controllers in comparison to the neural predictive control. The fuzzy PI controller exhibited a reduced error parameter (ITAE), lower power consumption, and better recovery of enzyme activity.
Electrochemical degradation and mineralization of glyphosate herbicide.
Tran, Nam; Drogui, Patrick; Doan, Tuan Linh; Le, Thanh Son; Nguyen, Hoai Chau
2017-12-01
The presence of herbicides is a concern for both human and ecological health. Glyphosate is occasionally detected as a water contaminant in agricultural areas where the herbicide is used extensively. The removal of glyphosate in synthetic solution using an advanced oxidation process is a possible approach for remediation of contaminated waters. The ability of electrochemical oxidation to degrade and mineralize the glyphosate herbicide was investigated using a Ti/PbO2 anode. The current intensity, treatment time, initial concentration and pH of the solution are the parameters influencing the degradation efficiency. An experimental design methodology was applied to determine the optimal condition (in terms of cost/effectiveness) based on response surface methodology. The glyphosate concentration (C0 = 16.9 mg L-1) decreased to 0.6 mg L-1 when the optimal conditions were imposed (current intensity of 4.77 A and treatment time of 173 min). The removal efficiencies of glyphosate and total organic carbon were 95 ± 16% and 90.31%, respectively. This work demonstrates that electrochemical oxidation is a promising process for degradation and mineralization of glyphosate.
Poussin, Carine; Mathis, Carole; Alexopoulos, Leonidas G; Messinis, Dimitris E; Dulize, Rémi H J; Belcastro, Vincenzo; Melas, Ioannis N; Sakellaropoulos, Theodore; Rhrissorrakrai, Kahn; Bilal, Erhan; Meyer, Pablo; Talikka, Marja; Boué, Stéphanie; Norel, Raquel; Rice, John J; Stolovitzky, Gustavo; Ivanov, Nikolai V; Peitsch, Manuel C; Hoeng, Julia
2014-01-01
The biological responses to external cues such as drugs, chemicals, viruses and hormones, is an essential question in biomedicine and in the field of toxicology, and cannot be easily studied in humans. Thus, biomedical research has continuously relied on animal models for studying the impact of these compounds and attempted to ‘translate’ the results to humans. In this context, the SBV IMPROVER (Systems Biology Verification for Industrial Methodology for PROcess VErification in Research) collaborative initiative, which uses crowd-sourcing techniques to address fundamental questions in systems biology, invited scientists to deploy their own computational methodologies to make predictions on species translatability. A multi-layer systems biology dataset was generated that was comprised of phosphoproteomics, transcriptomics and cytokine data derived from normal human (NHBE) and rat (NRBE) bronchial epithelial cells exposed in parallel to more than 50 different stimuli under identical conditions. The present manuscript describes in detail the experimental settings, generation, processing and quality control analysis of the multi-layer omics dataset accessible in public repositories for further intra- and inter-species translation studies. PMID:25977767
NASA Technical Reports Server (NTRS)
Starnes, James H., Jr.; Newman, James C., Jr.; Harris, Charles E.; Piascik, Robert S.; Young, Richard D.; Rose, Cheryl A.
2003-01-01
Analysis methodologies for predicting fatigue-crack growth from rivet holes in panels subjected to cyclic loads and for predicting the residual strength of aluminum fuselage structures with cracks and subjected to combined internal pressure and mechanical loads are described. The fatigue-crack growth analysis methodology is based on small-crack theory and a plasticity induced crack-closure model, and the effect of a corrosive environment on crack-growth rate is included. The residual strength analysis methodology is based on the critical crack-tip-opening-angle fracture criterion that characterizes the fracture behavior of a material of interest, and a geometric and material nonlinear finite element shell analysis code that performs the structural analysis of the fuselage structure of interest. The methodologies have been verified experimentally for structures ranging from laboratory coupons to full-scale structural components. Analytical and experimental results based on these methodologies are described and compared for laboratory coupons and flat panels, small-scale pressurized shells, and full-scale curved stiffened panels. The residual strength analysis methodology is sufficiently general to include the effects of multiple-site damage on structural behavior.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.
2009-01-01
Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include (1) diagnostic/detection methodology, (2) prognosis/lifing methodology, (3) diagnostic/prognosis linkage, (4) experimental validation, and (5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multimechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.
Single-Vector Calibration of Wind-Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2003-01-01
An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load and have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in even more complex systems that degrade load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analyses of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.
Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das
2018-02-01
This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output, aiming to attenuate loop coupling. The plant uncertainty limits are then selected and expressed in interval form with the parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board was developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
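The abstract does not state which pairing analysis the authors used; one standard tool for this kind of input-output pairing is the relative gain array (RGA). The sketch below computes an RGA for a hypothetical 2x2 steady-state gain matrix of a SIDO converter; the gain values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical 2x2 steady-state gain matrix of a SIDO buck converter
# (outputs: Vo1, Vo2; inputs: duty cycles d1, d2); values illustrative.
G = np.array([[2.0, 0.4],
              [0.5, 1.8]])

# Relative Gain Array: elementwise product of G and transpose(inv(G)).
RGA = G * np.linalg.inv(G).T
print(RGA)
# Pair each output with the input whose RGA element is closest to 1;
# here the diagonal dominates, so Vo1<-d1 and Vo2<-d2 minimizes coupling.
```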
Field Monitoring of Experimental Hot Mix Asphalt Projects Placed in Massachusetts
DOT National Transportation Integrated Search
2017-06-30
Since 2000, Massachusetts has been involved with numerous field trials of experimental hot mix asphalt mixtures. These experimental mixtures included several pilot projects using the Superpave mixture design methodology, utilization of warm mix aspha...
Stalking Higher Energy Conformers on the Potential Energy Surface of Charged Species.
Brites, Vincent; Cimas, Alvaro; Spezia, Riccardo; Sieffert, Nicolas; Lisy, James M; Gaigeot, Marie-Pierre
2015-03-10
Combined theoretical DFT-MD and RRKM methodologies and experimental spectroscopic infrared predissociation (IRPD) strategies to map potential energy surfaces (PES) of complex ionic clusters are presented, providing lowest and high energy conformers, thresholds to isomerization, and cluster formation pathways. We believe this combination not only represents a significant advance in the field of mapping minima and transition states on the PES but also directly measures dynamical pathways for the formation of structural conformers and isomers. Pathways are unraveled over picosecond (DFT-MD) and microsecond (RRKM) time scales, while the amount of internal energy is varied experimentally by changing the loss channel for the IRPD measurements, thus directly probing different kinetic and isomerization pathways. A demonstration is provided for Li(+)(H2O)3,4 ionic clusters. Nonstatistical formation of these ionic clusters by both direct and cascade processes, involving isomerization processes that can lead to trapping of high energy conformers along the paths due to evaporative cooling, has been unraveled.
Box-Behnken statistical design to optimize thermal performance of energy storage systems
NASA Astrophysics Data System (ADS)
Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid
2018-05-01
Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where the cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the PCM. Optimization of the experimental conditions (inlet air temperature and velocity and number of slabs) was carried out by means of a Box-Behnken design (BBD) within response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen as the responses. Both responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. The maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e., an inlet temperature of 32.5, an air velocity of 1.98 and a slab number of 7.
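The simultaneous-optimization step can be illustrated with Derringer-type desirability functions, which is the usual construction behind "optimization on the basis of the desirability function". In the sketch below both responses are treated as larger-the-better; the desirability bounds and candidate settings are assumptions for illustration, not values from the study.

```python
import numpy as np

def d_larger(y, lo, hi, w=1.0):
    """Derringer larger-the-better desirability, clipped to [0, 1]."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** w

# Hypothetical responses at two candidate settings: (cooling time min, COP).
candidates = {
    "T=32.5, v=1.98, slabs=7": (186.0, 6.04),
    "T=30.0, v=1.50, slabs=5": (150.0, 5.10),
}
for name, (t_cool, cop) in candidates.items():
    # Overall desirability = geometric mean of the individual desirabilities;
    # the bounds (100-200 min, COP 4-7) are assumed for this example.
    D = np.sqrt(d_larger(t_cool, 100, 200) * d_larger(cop, 4.0, 7.0))
    print(f"{name}: overall desirability D = {D:.3f}")
# The setting with the highest geometric-mean desirability is chosen.
```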
The Concept Maps as a Didactic Resource Tool of Meaningful Learning in Astronomy Themes
NASA Astrophysics Data System (ADS)
Silveira, Felipa Pacífico Ribeiro de Assis; Mendonça, Conceição Aparecida Soares
2015-07-01
This article presents the results of an investigation that sought to understand the performance of the concept map (CM) as a teaching resource facilitating meaningful learning of scientific concepts on astronomical themes, developed with elementary school students. The methodology employed to obtain and process the data was based on a quantitative and qualitative approach. On the quantitative level we designed a quasi-experimental study with a control group that did not use the CM and an experimental group that used the CM, both being evaluated at the beginning and end of the process. The performance of both groups is displayed in a descriptive and analytical study. In the qualitative approach, the CMs were interpreted using the structuring and assigned meanings shared by the students during their presentations. The improvement in scores demonstrated that the CM made a difference in conceptual learning and in certain skills revealed by learning indicators.
Effect of carbon dioxide on the thermal degradation of lignocellulosic biomass.
Kwon, Eilhann E; Jeon, Eui-Chan; Castaldi, Marco J; Jeon, Young Jae
2013-09-17
Using biomass as a renewable energy source via currently available thermochemical processes (i.e., pyrolysis and gasification) is environmentally advantageous owing to its intrinsic carbon neutrality. Developing methodologies to enhance the thermal efficiency of these proven technologies is therefore imperative. This study aimed to investigate the use of CO2 as a reaction medium to increase not only thermal efficiency but also environmental benefit. The influence of CO2 on thermochemical processes at a fundamental level was experimentally validated with the main constituents of biomass (i.e., cellulose and xylan) to avoid complexities arising from the heterogeneous matrix of biomass. For instance, gaseous products including H2, CH4, and CO were substantially enhanced in the presence of CO2 because CO2 expedited thermal cracking behavior (i.e., 200-1000%). This behavior was then universally observed in our case study with real biomass (i.e., corn stover) during pyrolysis and steam gasification. However, further study is urgently needed to optimize these experimental findings.
NASA Astrophysics Data System (ADS)
Nassiri, Ali; Vivek, Anupam; Abke, Tim; Liu, Bert; Lee, Taeseon; Daehn, Glenn
2017-06-01
Numerical simulations of high-velocity impact welding are extremely challenging due to the coupled physics and highly dynamic nature of the process. Thus, conventional mesh-based numerical methodologies are not able to accurately model the process owing to the excessive mesh distortion close to the interface of two welded materials. A simulation platform was developed using smoothed particle hydrodynamics, implemented in a parallel architecture on a supercomputer. Then, the numerical simulations were compared to experimental tests conducted by vaporizing foil actuator welding. The close correspondence of the experiment and modeling in terms of interface characteristics allows the prediction of local temperature and strain distributions, which are not easily measured.
Photonic crystal geometry for organic solar cells.
Ko, Doo-Hyun; Tumbleston, John R; Zhang, Lei; Williams, Stuart; DeSimone, Joseph M; Lopez, Rene; Samulski, Edward T
2009-07-01
We report organic solar cells with a photonic crystal nanostructure embossed in the photoactive bulk heterojunction layer, a topography that exhibits a 3-fold enhancement of the absorption in specific regions of the solar spectrum, in part through multiple excitation resonances. The photonic crystal geometry is fabricated using a materials-agnostic process called PRINT wherein highly ordered arrays of nanoscale features are readily made in a single, scalable processing step over wide areas (approximately 4 cm²). We show efficiency improvements of approximately 70% that result not only from greater absorption, but also from electrical enhancements. The methodology is generally applicable to organic solar cells, and the experimental findings reported in our manuscript corroborate theoretical expectations.
NASA Astrophysics Data System (ADS)
Tsibidis, George D.; Mimidis, Alexandros; Skoulas, Evangelos; Kirner, Sabrina V.; Krüger, Jörg; Bonse, Jörn; Stratakis, Emmanuel
2018-01-01
We investigate the periodic structure formation upon intense femtosecond pulsed irradiation of chrome steel (100Cr6) for linearly polarised laser beams. The underlying physical mechanism of the laser-induced periodic structures is explored, their spatial frequency is calculated and theoretical results are compared with experimental observations. The proposed theoretical model comprises estimations of electron excitation, heat transfer, relaxation processes, and hydrodynamics-related mass transport. Simulations describe the sequential formation of sub-wavelength ripples and supra-wavelength grooves. In addition, the influence of the laser wavelength on the periodicity of the structures is discussed. The proposed theoretical investigation offers a systematic methodology towards laser processing of steel surfaces with important applications.
A quality by design approach to scale-up of high-shear wet granulation process.
Pandey, Preetanshu; Badawy, Sherif
2016-01-01
High-shear wet granulation is a complex process, which in turn makes scale-up a challenging task. Scale-up of the high-shear wet granulation process has been studied extensively in the past, with various methodologies proposed in the literature. This review article discusses existing scale-up principles and categorizes the various approaches into two main scale-up strategies: parameter-based and attribute-based. With the advent of the quality by design (QbD) principle in the drug product development process, an increased emphasis toward the latter approach may be needed to ensure product robustness. In practice, a combination of both scale-up strategies is often utilized. In a QbD paradigm, there is also a need for an increased fundamental and mechanistic understanding of the process. This can be achieved either by increased experimentation, which comes at higher cost, or by using modeling techniques, which are also discussed as part of this review.
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
NASA Astrophysics Data System (ADS)
Yasin, Sohail; Curti, Massimo; Behary, Nemeshwaree; Perwuelz, Anne; Giraud, Stephane; Rovero, Giorgio; Guan, Jinping; Chen, Guoqiang
The n-methylol dimethyl phosphono propionamide (MDPA) flame retardant compounds are predominantly used for cotton fabric treatments with trimethylol melamine (TMM) to obtain better crosslinking and enhanced flame retardant properties. Nevertheless, such treatments are associated with the toxic issue of cancer-causing formaldehyde release. An eco-friendly finishing was used to obtain formaldehyde-free fixation of the flame retardant to the cotton fabric, utilizing citric acid as a crosslinking agent along with sodium hypophosphite as a catalyst. The treatment process parameters were optimized for flame retardant properties with low mechanical loss to the fabric by response surface methodology using a Box-Behnken statistical design of experiments. The effects of the reagent concentrations on the fabric's properties (flame retardancy and mechanical properties) were evaluated, and regression equations for predicting the concentrations and mechanical properties of the fabric under the eco-friendly treatment were obtained. The R-squared values of all the responses were above 0.95 for the reagents used, indicating the degree of agreement between the values predicted by the Box-Behnken design and the actual experimental results. It was also found that the concentration parameters (crosslinking reagents and catalysts) in the treatment formulation play a prime role in the overall performance of flame retardant cotton fabrics.
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new methodological approach in a stochastic regime to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to determining the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic regime is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by means of frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it varies with a known confidence level. Consequently, the cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value sufficiently narrow as to make the data obtained statistically robust and therefore usable also as a reference for any benchmark with other countries. From a methodological point of view the approach was rigorous: the Activity Based Costing technique was used to determine the cost of individual activities of the process, and Monte Carlo simulation, with control of experimental error, was used to construct the tolerance intervals on the final result.
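A minimal sketch of the stochastic costing idea, assuming normally distributed activity durations and hourly resource cost rates; the activity names, distributions, and rates below are hypothetical, not the study's Italian ART data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo replications

# Hypothetical ABC model of one treatment cycle: each activity has a
# stochastic duration (minutes) and an hourly resource cost rate (EUR/h).
activities = [
    ("consultation",  (40, 8),  120.0),
    ("lab_procedure", (90, 20), 200.0),
    ("follow_up",     (25, 5),   90.0),
]

total = np.zeros(N)
for _, (mu, sd), rate in activities:
    dur = np.clip(rng.normal(mu, sd, N), 0.0, None)  # truncate at zero
    total += dur / 60.0 * rate                       # cost contribution

mean = total.mean()
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"mean cost = {mean:.0f} EUR, 95% interval = [{lo:.0f}, {hi:.0f}]")
```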
Mosley, Garrett L; Nguyen, Phuong; Wu, Benjamin M; Kamei, Daniel T
2016-08-07
The lateral-flow immunoassay (LFA) is a well-established diagnostic technology that has recently seen significant advancements due in part to the rapidly expanding fields of paper diagnostics and paper-fluidics. As LFA-based diagnostics become more complex, it becomes increasingly important to quantitatively determine important parameters during the design and evaluation process. However, current experimental methods for determining these parameters have certain limitations when applied to LFA systems. In this work, we describe our novel methods of combining paper and radioactive measurements to determine nanoprobe molarity, the number of antibodies per nanoprobe, and the forward and reverse rate constants for nanoprobe binding to immobilized target on the LFA test line. Using a model LFA system that detects the presence of the protein transferrin (Tf), we demonstrate the application of our methods, which involve quantitative experimentation and mathematical modeling. We also compare the results of our rate constant experiments with traditional experiments to demonstrate how our methods more appropriately capture the influence of the LFA environment on the binding interaction. Our novel experimental approaches can therefore more efficiently guide the research process for LFA design, leading to more rapid advancement of the field of paper-based diagnostics.
Measuring coral reef community metabolism using new benthic chamber technology
Yates, K.K.; Halley, R.B.
2003-01-01
Accurate measurement of coral reef community metabolism is a necessity for process monitoring and in situ experimentation on coral reef health. Traditional methodologies used for these measurements are effective but limited by location and scale constraints. We present field trial results for a new benthic chamber system called the Submersible Habitat for Analyzing Reef Quality (SHARQ). This large, portable incubation system enables in situ measurement and experimentation on community-scale metabolism. Rates of photosynthesis, respiration, and calcification were measured using the SHARQ for a variety of coral reef substrate types on the reef flat of South Molokai, Hawaii, and in Biscayne National Park, Florida. Values for daily gross production, 24-h respiration, and net calcification ranged from 0.26 to 6.45 g O2 m-2 day-1, 1.96 to 8.10 g O2 m-2 24 h-1, and 0.02 to 2.0 g CaCO3 m-2 day-1, respectively, for all substrate types. Field trials indicate that the SHARQ incubation chamber is an effective tool for in situ isolation of a water mass over a variety of benthic substrate types for process monitoring, experimentation, and other applications.
Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal
2016-11-04
Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using developed regression models and multiple regression analysis. The surface characteristics are studied using scanning electron microscope (SEM). Furthermore, performance of optimum conditions was determined and validated by conducting confirmation experiment. The comparison between the experimental values and the predicted values by IV-Optimal RSM and MFNN was conducted for each experimental run and results indicate that the MFNN provides better predictions than IV-Optimal RSM.
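To make the MFNN side of this comparison concrete, here is a minimal scikit-learn sketch of a multilayer feed-forward regressor mapping six FDM process parameters to a compliance response. The synthetic data, network size, and training settings are assumptions for illustration, not the authors' model or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical FDM data: columns = [slice thickness, air gap, deposition
# angle, print direction, bead width, perimeters]; target = storage
# compliance. Values are placeholders, not the paper's measurements.
X = np.random.default_rng(1).uniform(0, 1, size=(40, 6))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 4] + 0.05

# Feed-forward network with two hidden layers, inputs standardized first.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("predicted response:", model.predict(X[:3]))
```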
An analysis of post-event processing in social anxiety disorder.
Brozovich, Faith; Heimberg, Richard G
2008-07-01
Research has demonstrated that self-focused thoughts and negative affect have a reciprocal relationship [Mor, N., Winquist, J. (2002). Self-focused attention and negative affect: A meta-analysis. Psychological Bulletin, 128, 638-662]. In the anxiety disorder literature, post-event processing has emerged as a specific construct of repetitive self-focused thought pertaining to social anxiety disorder. Post-event processing can be defined as an individual's repeated consideration and potential reconstruction of his or her performance following a social situation. Post-event processing can also occur when an individual anticipates a social or performance event and begins to brood about other, past social experiences. The present review examined the post-event processing literature in an attempt to organize and highlight the significant results. The methodologies employed to study post-event processing have included self-report measures, daily diaries, social or performance situations created in the laboratory, and experimental manipulations of post-event processing or anticipation of an upcoming event. Directions for future research on post-event processing are discussed.
Narrative inquiry: Locating Aboriginal epistemology in a relational methodology.
Barton, Sylvia S
2004-03-01
This methodology utilizes narrative analysis and the elicitation of life stories as understood through dimensions of interaction, continuity, and situation. It is congruent with Aboriginal epistemology formulated by oral narratives through representation, connection, storytelling and art. Needed for culturally competent scholarship is an experience of research whereby inquiry into epiphanies, ritual, routines, metaphors and everyday experience creates a process of reflexive thinking for multiple ways of knowing. Based on the sharing of perspectives, narrative inquiry allows for experimentation into creating new forms of knowledge by contextualizing diabetes from the experience of a researcher overlapped with experiences of participants--a reflective practice in itself. The aim of this paper is to present narrative inquiry as a relational methodology and to analyse critically its appropriateness as an innovative research approach for exploring Aboriginal people's experience living with diabetes. Narrative inquiry represents an alternative culture of research for nursing science to generate understanding and explanation of Aboriginal people's 'diabetic self' stories, and to coax open a window for co-constructing a narrative about diabetes as a chronic illness. The ability to adapt a methodology for use in a cultural context, preserve the perspectives of Aboriginal peoples, maintain the holistic nature of social problems, and value co-participation in respectful ways are strengths of an inquiry partial to a responsive and embodied scholarship.
Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto
2007-01-01
The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems ideally should spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, variation of gain and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANNs). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against piecewise and polynomial linearization methods. Method comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). In order to illustrate the method's capability to build autocalibration and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies, because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, like time and cost.
2013-01-01
In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques; most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. The potential Ecorr and concrete resistivity ρ were measured at each point of the applied grid. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures. PMID:23766706
Data Quality Assurance for Supersonic Jet Noise Measurements
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Henderson, Brenda S.; Bridges, James E.
2010-01-01
The noise created by a supersonic aircraft is a primary concern in the design of future high-speed planes. The jet noise reduction technologies required on these aircraft will be developed using scale models mounted to experimental jet rigs designed to simulate the exhaust gases from a full-scale jet engine. The jet noise data collected in these experiments must accurately predict the noise levels produced by the full-scale hardware in order to be a useful development tool. A methodology has been adopted at the NASA Glenn Research Center's Aero-Acoustic Propulsion Laboratory to ensure the quality of the supersonic jet noise data acquired from the facility's High Flow Jet Exit Rig so that it can be used to develop future nozzle technologies that reduce supersonic jet noise. The methodology relies on mitigating extraneous noise sources, examining the impact of measurement location on the acoustic results, and investigating the facility independence of the measurements. The methodology is documented here as a basis for validating future improvements, and its limitations are noted so that they do not affect the data analysis. Maintaining a high-quality jet noise laboratory is an ongoing process. By carefully examining the data produced and continually following this methodology, data quality can be maintained and improved over time.
Experimental Design of a UCAV-Based High-Energy Laser Weapon
2016-12-01
propagation. The Design of Experiments (DOE) methodology is then applied to determine the significance of the UCAV-HEL design parameters and their effect on the...
Axial Magneto-Inductive Effect in Soft Magnetic Microfibers, Test Methodology, and Experiments
2016-03-24
NUWC-NPT Technical Report 12,186, 24 March 2016: Axial Magneto-Inductive Effect in Soft Magnetic Microfibers, Test Methodology, and Experiments. Author: Anthony B...
A theoretical and experimental investigation of propeller performance methodologies
NASA Technical Reports Server (NTRS)
Korkan, K. D.; Gregorek, G. M.; Mikkelson, D. C.
1980-01-01
This paper briefly covers aspects related to propeller performance by means of a review of propeller methodologies; a presentation of wind tunnel propeller performance data taken in the NASA Lewis Research Center 10 x 10 wind tunnel; a discussion of the predominant limitations of existing propeller performance methodologies; and a brief review of airfoil developments appropriate for propeller applications.
The Question of Education Science: "Experiment"ism Versus "Experimental"ism
ERIC Educational Resources Information Center
Howe, Kenneth R.
2005-01-01
The ascendant view in the current debate about education science -- experimentism -- is a reassertion of the randomized experiment as the methodological gold standard. Advocates of this view have ignored, not answered, long-standing criticisms of the randomized experiment: its frequent impracticality, its lack of external validity, its confinement…
Graphene Quantum Capacitors for High Frequency Tunable Analog Applications.
Moldovan, Clara F; Vitale, Wolfgang A; Sharma, Pankaj; Tamagnone, Michele; Mosig, Juan R; Ionescu, Adrian M
2016-08-10
Graphene quantum capacitors (GQCs) are demonstrated to be enablers of radio-frequency (RF) functions through voltage-tuning of their capacitance. We show that GQCs complement MEMS and MOSFETs in terms of performance for high frequency analog applications and tunability. We propose a CMOS compatible fabrication process and report the first experimental assessment of their performance at microwave frequencies (up to 10 GHz), demonstrating experimental GQCs in the pF range with a tuning ratio of 1.34:1 within 1.25 V, and Q-factors up to 12 at 1 GHz. The figures of merit of graphene variable capacitors are studied in detail from 150 to 350 K. Furthermore, we describe a systematic, graphene-specific approach to optimize their performance and predict the figures of merit achievable if such a methodology is applied.
NASA Astrophysics Data System (ADS)
Deepu, M. J.; Farivar, H.; Prahl, U.; Phanikumar, G.
2017-04-01
Dual phase steels are versatile advanced high strength steels that are being used for sheet metal applications in the automotive industry. They also have potential for application in bulk components like gears. Inter-critical annealing of dual phase steels is one of the crucial steps that determine the mechanical properties of the material. Selection of the process parameters for inter-critical annealing, in particular the inter-critical annealing temperature and time, is important as it plays a major role in determining the volume fractions of ferrite and martensite, which in turn determine the mechanical properties. Selecting these process parameters to obtain a particular required mechanical property requires a large number of experimental trials. Simulation of microstructure evolution and virtual compression/tensile testing can help in reducing the number of such experimental trials. In the present work, phase field modeling implemented in the commercial software Micress® is used to predict the microstructure evolution during inter-critical annealing. Virtual compression tests are performed on the simulated microstructure using the finite element method implemented in commercial software to obtain the effective flow curve of the macroscopic material. The flow curves obtained by simulation are validated experimentally with physical simulation in Gleeble® and compared with those obtained using a linear rule of mixtures. The methodology could be used to determine the inter-critical annealing process parameters required for achieving a particular flow curve.
Lee, Seung-Mok; Kim, Young-Gyu; Cho, Il-Hyoung
2005-01-01
Optimal operating conditions for treating dyeing wastewater were investigated by using factorial design and response surface methodology (RSM). The experiment was statistically designed and carried out according to a 2² full factorial design with four factorial points, three center points, and four axial points. Linear and nonlinear regression were then applied to the data using the SAS software package. The independent variables were TiO2 dosage and H2O2 concentration, and the total organic carbon (TOC) removal efficiency of the dyeing wastewater was the dependent variable. From the factorial design and response surface methodology (RSM), a maximum removal efficiency of 85% was obtained at a TiO2 dosage of 1.82 g L-1 and an H2O2 concentration of 980 mg L-1 for a 20 min oxidation reaction.
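The design described (factorial, center, and axial points) supports a full quadratic response surface model, which can be fit by ordinary least squares. Below is a minimal sketch of that fit; the coded runs and TOC-removal values are hypothetical stand-ins for the study's SAS analysis.

```python
import numpy as np

# Hypothetical coded CCD runs in (x1 = TiO2 dosage, x2 = H2O2 conc.)
# with TOC removal (%) as the response; values are illustrative only.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],                 # factorial
              [0, 0], [0, 0], [0, 0],                             # center
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414]])  # axial
y = np.array([55, 70, 62, 84, 80, 81, 79, 58, 76, 60, 72], float)

x1, x2 = X[:, 0], X[:, 1]
# Quadratic RSM model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)
print("coefficients b0, b1, b2, b12, b11, b22:", np.round(b, 2))
```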
Accuracy in planar cutting of bones: an ISO-based evaluation.
Cartiaux, Olivier; Paul, Laurent; Docquier, Pierre-Louis; Francq, Bernard G; Raucent, Benoît; Dombre, Etienne; Banse, Xavier
2009-03-01
Computer- and robot-assisted technologies are capable of improving the accuracy of planar cutting in orthopaedic surgery. This study is a first step toward formulating and validating a new evaluation methodology for planar bone cutting, based on standards from the International Organization for Standardization. Our experimental test bed consisted of a purely geometrical model of the cutting process around a simulated bone. Cuts were performed at three levels of surgical assistance: unassisted, computer-assisted and robot-assisted. We measured three parameters of the standard ISO 1101:2004: flatness, parallelism and location of the cut plane. The location was the most relevant parameter for assessing cutting errors, and the three levels of assistance were easily distinguished using it. Our ISO methodology employs the location to obtain all information about translational and rotational cutting errors. Location may be used on any osseous structure to compare the performance of existing assistance technologies.
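Of the three ISO 1101:2004 parameters, flatness is the simplest to illustrate in code: fit a reference plane to probed points and report the peak-to-valley width of the deviations. The sketch below uses a least-squares plane, a common approximation; the strict ISO minimum-zone evaluation can give a slightly smaller value, and the point data are synthetic.

```python
import numpy as np

# Hypothetical probed points (mm) on a cut surface; illustrative only.
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 30, 50), rng.uniform(0, 20, 50),
                       rng.normal(0.0, 0.05, 50)])

# Least-squares reference plane via SVD of the centered point cloud.
centroid = pts.mean(axis=0)
_, _, Vt = np.linalg.svd(pts - centroid)
normal = Vt[-1]                  # direction of least variance = plane normal

d = (pts - centroid) @ normal    # signed distances to the fitted plane
flatness = d.max() - d.min()     # width of the slab containing all points
print(f"flatness = {flatness:.3f} mm")
```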
Lewis, George K; Lewis, George K; Olbricht, William
2008-01-01
This paper explains the circuitry and signal processing needed to perform electrical impedance spectroscopy on piezoelectric materials and ultrasound transducers. Here, we measure and compare the impedance spectra of 2-5 MHz piezoelectrics, but the methodology applies to 700 kHz-20 MHz ultrasonic devices as well. Using a 12 ns wide, 5 volt pulsing circuit as an impulse, we determine the electrical impedance curves experimentally using Ohm's law and the fast Fourier transform (FFT), and compare results with mathematical models. The method allows for rapid impedance measurement over a range of frequencies using a narrow input pulse, a digital oscilloscope and FFT techniques. The technique compares well to current methodologies such as network and impedance analyzers while providing additional versatility in the electrical impedance measurement. The technique is theoretically simple, easy to implement and can be completed with ordinary laboratory instrumentation at minimal cost. PMID:19081773
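A minimal numpy sketch of the frequency-domain Ohm's-law idea behind such measurements: transform the digitized waveforms with an FFT and divide device voltage by the current inferred through a known reference resistor. The sample rate, resistor value, circuit topology, and synthesized pulses below are assumptions for illustration, not the paper's hardware.

```python
import numpy as np

fs = 250e6                 # sample rate (Hz); assumed, not from the paper
t = np.arange(4096) / fs
R_ref = 50.0               # known series reference resistor (ohms); assumed

# Hypothetical digitized waveforms: v_in at the pulser output and v_dev
# across the device under test; synthesized Gaussian pulses for illustration.
v_in = np.exp(-((t - 1e-7) / 6e-9) ** 2)           # ~12 ns excitation pulse
v_dev = 0.4 * np.exp(-((t - 1.02e-7) / 7e-9) ** 2)

V_in, V_dev = np.fft.rfft(v_in), np.fft.rfft(v_dev)
f = np.fft.rfftfreq(len(t), 1.0 / fs)

# Ohm's law in the frequency domain: I = (V_in - V_dev)/R_ref, Z = V_dev/I.
Z = V_dev * R_ref / (V_in - V_dev)
band = (f > 1e6) & (f < 10e6)
print(np.abs(Z[band])[:5])   # first few |Z| samples in the 1-10 MHz band
```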
NASA Astrophysics Data System (ADS)
Giordan, Daniele; Hayakawa, Yuichi; Nex, Francesco; Remondino, Fabio; Tarolli, Paolo
2018-04-01
The number of scientific studies that consider possible applications of remotely piloted aircraft systems (RPASs) for managing the effects of natural hazards and identifying the resulting damage has increased strongly in the last decade. Nowadays, the use of these systems is not a novelty in the scientific community, but a deeper analysis of the literature shows a lack of codified, complete methodologies that can be used not only for scientific experiments but also for routine emergency operations. RPASs can acquire on-demand ultra-high-resolution images that can be used to identify active processes such as landslides or volcanic activity, and also to map the effects of earthquakes, wildfires and floods. In this paper, we present a review of published literature describing experimental methodologies developed for the study and monitoring of natural hazards.
SociAL Sensor Analytics: Measuring Phenomenology at Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corley, Courtney D.; Dowling, Chase P.; Rose, Stuart J.
The objective of this paper is to present a system for interrogating immense social media streams through analytical methodologies that characterize topics and events critical to tactical and strategic planning. First, we propose a conceptual framework for interpreting social media as a sensor network. Time-series models and topic clustering algorithms are used to implement this concept into a functioning analytical system. Next, we address two scientific challenges: 1) to understand, quantify, and baseline phenomenology of social media at scale, and 2) to develop analytical methodologies to detect and investigate events of interest. This paper then documents computational methods and reports experimental findings that address these challenges. Ultimately, the ability to process billions of social media posts per week over a period of years enables the identification of patterns and predictors of tactical and strategic concerns at an unprecedented rate through SociAL Sensor Analytics (SALSA).
Construct Validity: Advances in Theory and Methodology
Strauss, Milton E.; Smith, Gregory T.
2008-01-01
Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review five recent advances in validation theory and methodology of importance for clinical researchers. These are: the emergence of nonjustificationist philosophy of science; an increasing appreciation for theory and the need for informative tests of construct validity; valid construct representation in experimental psychopathology; the need to avoid representing multidimensional constructs with a single score; and the emergence of effective new statistical tools for the evaluation of convergent and discriminant validity. PMID:19086835
Mapping Base Modifications in DNA by Transverse-Current Sequencing
NASA Astrophysics Data System (ADS)
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2018-02-01
Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.
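As a toy illustration of current-signature base calling (ignoring the neighbor current-current correlations that the paper's algorithm exploits), the sketch below assigns each observed tunneling current to the base with the nearest mean signature under an independent-Gaussian noise model; all signature values are invented placeholders, not computed transport results.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical mean tunneling-current signatures (arbitrary units) for
    # canonical and modified bases; real values would come from transport theory.
    signatures = {"A": 1.0, "C": 0.6, "G": 1.4, "5mC": 0.72, "8oxoG": 1.25}
    names = list(signatures)
    mu = np.array([signatures[k] for k in names])

    def call_bases(currents, sigma=0.05):
        """Maximum-likelihood base call assuming independent Gaussian noise."""
        ll = -((currents[:, None] - mu[None, :]) ** 2) / (2 * sigma**2)
        return [names[i] for i in np.argmax(ll, axis=1)]

    true = rng.choice(names, size=20)
    reads = np.array([signatures[b] for b in true]) + rng.normal(0, 0.05, 20)
    called = call_bases(reads)
    print(sum(t == c for t, c in zip(true, called)) / 20)  # per-base accuracy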
Lee, Kian Mun; Hamid, Sharifah Bee Abd
2015-01-19
The performance of advanced photocatalytic degradation of 4-chlorophenoxyacetic acid (4-CPA) strongly depends on photocatalyst dosage, initial concentration and initial pH. In the present study, a simple response surface methodology (RSM) was applied to investigate the interaction between these three independent factors. Thus, the photocatalytic degradation of 4-CPA in aqueous medium assisted by an ultraviolet-active ZnO photocatalyst was systematically investigated. This study aims to determine the optimum processing parameters that maximize 4-CPA degradation. Based on the results obtained, a maximum of 91% of the 4-CPA was degraded under optimal conditions (0.02 g ZnO dosage, 20.00 mg/L of 4-CPA and pH 7.71). All experimental data showed good agreement with the predicted results obtained from the statistical analysis.
Shahbaz Mohammadi, Hamid; Mostafavi, Seyede Samaneh; Soleimani, Saeideh; Bozorgian, Sajad; Pooraskari, Maryam; Kianmehr, Anvarsadat
2015-04-01
Oxidoreductases are an important family of enzymes that are used in many biotechnological processes. An experimental design was applied to optimize the partition and purification of two recombinant oxidoreductases, glucose dehydrogenase (GDH) from Bacillus subtilis and d-galactose dehydrogenase (GalDH) from Pseudomonas fluorescens AK92, in aqueous two-phase systems (ATPS). Response surface methodology (RSM) with a central composite rotatable design (CCRD) was performed to optimize critical factors, namely polyethylene glycol (PEG) concentration, salt concentration and pH value. The best partitioning conditions were achieved in an ATPS composed of 12% PEG-6000 and 15% K2HPO4 at pH 7.5 and 25°C, which gave partition coefficients (KE) of 66.6 and 45.7 for GDH and GalDH, respectively. Under these experimental conditions, the activities of GDH and GalDH were 569.5 U/ml and 673.7 U/ml, respectively. It was found that these enzymes preferentially partitioned into the top PEG-rich phase and appeared as single bands on SDS-PAGE gel. Meanwhile, the validity of the response model was confirmed by good agreement between predicted and experimental results. Collectively, the obtained data indicate that ATPS optimization using the RSM approach can be applied for the recovery and purification of other enzymes of the oxidoreductase family. Copyright © 2015 Elsevier Inc. All rights reserved.
Simulation of a complete X-ray digital radiographic system for industrial applications.
Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H
2018-05-19
Simulating X-ray images is of great importance in industry and medicine. Such simulations permit the optimization of parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A three-layer uniform plate comprising a polymer overcoat layer, a phosphor layer and a polycarbonate backing layer was defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed, and images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.
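The dose-to-gray-value conversion step can be pictured as follows: a calibration relationship is fitted once, then applied pixel-by-pixel to the mesh-tally dose map. The sketch below (Python/numpy) assumes a roughly log-linear CR plate response and invented calibration pairs; the paper's actual relationship was determined in a separate procedure.

    import numpy as np

    # Hypothetical calibration pairs (absorbed dose -> measured gray value).
    dose_cal = np.array([1e-6, 3e-6, 1e-5, 3e-5, 1e-4])   # Gy per pixel, assumed
    gv_cal = np.array([12000, 21000, 30000, 39000, 48000])

    # Assume the CR plate responds roughly log-linearly: GV = a*log10(dose) + b.
    a, b = np.polyfit(np.log10(dose_cal), gv_cal, 1)

    def dose_to_gray(dose):
        """Convert a mesh-tally dose map to a simulated 16-bit radiograph."""
        return np.clip(a * np.log10(dose) + b, 0, 65535).astype(np.uint16)

    dose_map = np.full((4, 4), 2e-5)       # stand-in for the MCNPX mesh tally
    print(dose_to_gray(dose_map))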
Mengoni, Marlène; Kayode, Oluwasegun; Sikora, Sebastien N F; Zapata-Cornelio, Fernando Y; Gregory, Diane E; Wilcox, Ruth K
2017-08-01
The development of current surgical treatments for intervertebral disc damage could benefit from a virtual environment accounting for population variations. For such models to be reliable, a relevant description of the mechanical properties of the different tissues and their role in the functional mechanics of the disc is of major importance. The aims of this work were first to assess the physiological hoop strain in the annulus fibrosus in fresh conditions (n = 5) in order to extract a functional behaviour of the extrafibrillar matrix, and then to reverse-engineer the annulus fibrosus fibrillar behaviour (n = 6). This was achieved by performing both direct and global controlled calibration of material parameters, accounting for the whole process of experimental design and in silico model methodology. Direct-controlled models are specimen-specific models representing controlled experimental conditions that can be replicated, allowing direct comparison with measurements. Validation was performed on another six specimens, and a sensitivity study was performed. Hoop strains were measured as 17 ± 3% after 10 min relaxation and 21 ± 4% after 20-25 min relaxation, with no significant difference between the two measurements. The extrafibrillar matrix functional moduli were measured as 1.5 ± 0.7 MPa. Fibre-related material parameters showed large variability, with a variance above 0.28. Direct-controlled calibration and validation provide confidence that the model development methodology can capture the measurable variation within the population of tested specimens.
Effective Swimmer’s Action during the Grab Start Technique
Mourão, Luis; de Jesus, Karla; Roesler, Hélio; Machado, Leandro J.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo; Vaz, Mário A. P.
2015-01-01
The external forces applied in swimming starts have often been studied, but mostly through direct analysis and simple data-interpretation processes. This study aimed to develop a tool for vertical and horizontal force assessment based on the swimmers' propulsive and structural forces (passive forces due to dead weight) applied during the block phase. Four methodological pathways were followed: the experimental fall of a rigid body, the swimmers' inertia effect, the development of a mathematical model to describe the outcome of the rigid-body fall and its generalization to include the effects of inertia, and the analysis of an experimental swimmers' starting protocol with the inclusion of the developed mathematical tool. The first three methodological steps resulted in the description and computation of the passive force components. In the fourth step, six well-trained swimmers performed three 15 m maximal grab start trials, and three-dimensional (3D) kinetic data were obtained using a six-degrees-of-freedom force plate. The passive force contribution to the start performance obtained from the model was subtracted from the experimental force due to the swimmers, yielding the swimmers' active forces. As expected, the swimmers' vertical and horizontal active forces accounted for the maximum variability contribution of the experimental forces. It was found that the active force profiles for the vertical and horizontal components resembled one another. These findings should be considered in clarifying the active swimmers' force variability and the respective geometrical profile as indicators to redefine steering strategies. PMID:25978370
The impact of temporal sampling resolution on parameter inference for biological transport models.
Harrison, Jonathan U; Baker, Ruth E
2018-06-25
Imaging data have become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and they have the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models, performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data are collected is prominent in a host of studies, because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden-states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
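The sampling-resolution trade-off is easy to reproduce in miniature. The sketch below simulates a one-dimensional velocity jump process exactly (exponential waiting times between velocity reversals), then observes it at different sampling intervals with Gaussian measurement noise and estimates the reorientation rate by naively counting apparent reversals. Unlike the hidden-states inference of the paper, this naive estimator is visibly biased at both fine (noise-dominated) and coarse (missed-switch) resolutions; all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    lam, v, T = 1.0, 1.0, 500.0          # reorientation rate, speed, total time

    # Exact simulation: exponential waiting times between velocity reversals.
    times, x, t, s = [0.0], [0.0], 0.0, 1.0
    while t < T:
        t = min(t + rng.exponential(1.0 / lam), T)
        x.append(x[-1] + s * v * (t - times[-1]))
        times.append(t)
        s = -s

    def naive_rate(dt_obs, sigma):
        """Sample the path at interval dt_obs with Gaussian noise and
        estimate the reorientation rate by counting apparent reversals."""
        grid = np.arange(0.0, T, dt_obs)
        xs = np.interp(grid, times, x) + rng.normal(0.0, sigma, grid.size)
        signs = np.sign(np.diff(xs))
        return np.sum(signs[1:] != signs[:-1]) / T

    for dt_obs in (0.02, 0.2, 2.0):      # true rate is lam = 1.0
        print(dt_obs, naive_rate(dt_obs, sigma=0.05))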
The work of Galileo and conformation of the experiment in physics
NASA Astrophysics Data System (ADS)
Alvarez, J. L.; Posadas, Y.
2003-02-01
It is very common to find comments and references suggesting that Galileo based his affirmations on logical thought rather than on observations. In this paper we present an analysis of some experiments that he actually carried out, which remained unknown in the 16th and 17th centuries; in them we find a clear description of the methodology that Galileo followed in order to reach the results presented in his formal work, particularly in the Discorsi. In contrast with Aristotelian philosophy, in these manuscripts Galileo adopts a methodology that contributed greatly to the modern conformation of the experimental method, thereby founding a methodology for the study of motion. We use this analysis as an example of the difficulties present in the conformation of modern experimentation, and we point out the need to stress the importance of scientific methodology in the teaching of physics.
Identification of elastic, dielectric, and piezoelectric constants in piezoceramic disks.
Perez, Nicolas; Andrade, Marco A B; Buiochi, Flavio; Adamowski, Julio C
2010-12-01
Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which have ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
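The fitting loop at the heart of such an identification can be sketched with a simplified stand-in: here a Butterworth-Van Dyke equivalent circuit replaces the paper's finite-element impedance model, and least squares adjusts its four constants until the computed impedance magnitude matches a synthetic measured curve. Circuit values and the noise level are invented for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    def bvd_impedance(f, C0, C1, L1, R1):
        """Butterworth-Van Dyke equivalent circuit of a resonator (a stand-in
        for a finite-element impedance model)."""
        w = 2 * np.pi * f
        z_mot = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)   # motional branch
        return 1.0 / (1j * w * C0 + 1.0 / z_mot)

    f = np.linspace(1.8e6, 2.2e6, 400)
    truth = (4e-9, 0.4e-9, 15.8e-6, 25.0)                # "unknown" constants
    noise = 1 + 0.01 * np.random.default_rng(0).standard_normal(f.size)
    z_meas = np.abs(bvd_impedance(f, *truth)) * noise    # synthetic measurement

    def residual(p):
        return np.log(np.abs(bvd_impedance(f, *p))) - np.log(z_meas)

    fit = least_squares(residual, x0=(3e-9, 0.5e-9, 12e-6, 10.0),
                        bounds=(0.0, np.inf))
    print(fit.x)   # should approach the "truth" values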
Escalante-Aburto, Anayansi; Ramírez-Wong, Benjamín; Torres-Chávez, Patricia Isabel; López-Cervantes, Jaime; Figueroa-Cárdenas, Juan de Dios; Barrón-Hoyos, Jesús Manuel; Morales-Rosas, Ignacio; Ponce-García, Néstor; Gutiérrez-Dorado, Roberto
2014-12-15
Extrusion is an alternative technology for the production of nixtamalized products. The aim of this study was to obtain an expanded nixtamalized snack from whole blue corn using the extrusion process, preserving the highest possible total anthocyanin content, an intense blue/purple coloration (color b) and the highest expansion index. A central composite experimental design was used. The extrusion process factors were: feed moisture (FM, 15%-23%), calcium hydroxide concentration (CHC, 0%-0.25%) and final extruder temperature (T, 110-150 °C). The chemical and physical properties evaluated in the extrudates were moisture content (MC, %), total anthocyanins (TA, mg/kg), pH, color (L, a, b) and expansion index (EI). ANOVA and response surface methodology were applied to evaluate the effects of the extrusion factors. FM and T significantly affected the response variables. An optimization step was performed by overlaying three contour plots to predict the best combination region. The extrudates were obtained under the following optimum factors: FM (%) = 16.94, CHC (%) = 0.095 and T (°C) = 141.89. The predicted extrusion processing factors were highly accurate, yielding an expanded nixtamalized snack with 158.87 mg/kg TA (estimated: 160 mg/kg), an EI of 3.19 (estimated: 2.66), and a color parameter b of -0.44 (estimated: 0.10).
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Humphreys, William M.
2006-01-01
Current processing of acoustic array data is burdened with considerable uncertainty. This study reports an original methodology that serves to demystify array results, reduce misinterpretation, and accurately quantify position and strength of acoustic sources. Traditional array results represent noise sources that are convolved with array beamform response functions, which depend on array geometry, size (with respect to source position and distributions), and frequency. The Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) method removes beamforming characteristics from output presentations. A unique linear system of equations accounts for reciprocal influence at different locations over the array survey region. It makes no assumption beyond the traditional processing assumption of statistically independent noise sources. The full rank equations are solved with a new robust iterative method. DAMAS is quantitatively validated using archival data from a variety of prior high-lift airframe component noise studies, including flap edge/cove, trailing edge, leading edge, slat, and calibration sources. Presentations are explicit and straightforward, as the noise radiated from a region of interest is determined by simply summing the mean-squared values over that region. DAMAS can fully replace existing array processing and presentations methodology in most applications. It appears to dramatically increase the value of arrays to the field of experimental acoustics.
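The linear system at the core of DAMAS relates the beamform map b to the unknown source-strength distribution q through the array response matrix A, and is solved iteratively under a non-negativity constraint. The toy sketch below (Python/numpy) uses a synthetic one-dimensional Gaussian point-spread function rather than a real array response, purely to show a Gauss-Seidel-style iteration of this kind.

    import numpy as np

    def damas(A, b, n_iter=1000):
        """Gauss-Seidel-style iteration with a non-negativity constraint,
        in the spirit of DAMAS: solve A q = b for source strengths q >= 0."""
        n = len(b)
        q = np.zeros(n)
        for _ in range(n_iter):
            for i in range(n):
                r = b[i] - A[i, :i] @ q[:i] - A[i, i + 1:] @ q[i + 1:]
                q[i] = max(r / A[i, i], 0.0)
        return q

    # Toy example: 1-D grid, synthetic point-spread function, two sources.
    n = 50
    x = np.arange(n)
    A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)  # toy PSF
    q_true = np.zeros(n)
    q_true[[15, 32]] = [1.0, 0.5]
    b = A @ q_true                                             # "beamform map"
    q = damas(A, b)
    print(np.flatnonzero(q > 0.05))   # recovers sources near 15 and 32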
Simultaneous excitation system for efficient guided wave structural health monitoring
NASA Astrophysics Data System (ADS)
Hua, Jiadong; Michaels, Jennifer E.; Chen, Xin; Lin, Jing
2017-10-01
Many structural health monitoring systems utilize guided wave transducer arrays for defect detection and localization. Signals are usually acquired using the "pitch-catch" method whereby each transducer is excited in turn and the response is received by the remaining transducers. When extensive signal averaging is performed, the data acquisition process can be quite time-consuming, especially for metallic components that require a low repetition rate to allow signals to die out. Such a long data acquisition time is particularly problematic if environmental and operational conditions are changing while data are being acquired. To reduce the total data acquisition time, proposed here is a methodology whereby multiple transmitters are simultaneously triggered, and each transmitter is driven with a unique excitation. The simultaneously transmitted waves are captured by one or more receivers, and their responses are processed by dispersion-compensated filtering to extract the response from each individual transmitter. The excitation sequences are constructed by concatenating a series of chirps whose start and stop frequencies are randomly selected from a specified range. The process is optimized using a Monte-Carlo approach to select sequences with impulse-like autocorrelations and relatively flat cross-correlations. The efficacy of the proposed methodology is evaluated by several metrics and is experimentally demonstrated with sparse array imaging of simulated damage.
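A minimal version of the sequence-construction step might look as follows (Python with numpy and scipy): random concatenated chirps are scored by their worst autocorrelation sidelobe plus their peak cross-correlation, and a Monte-Carlo search keeps the best pair. Sampling rate, segment length, frequency range and the scoring weights are illustrative assumptions, not the authors' settings.

    import numpy as np
    from scipy.signal import chirp

    rng = np.random.default_rng(0)
    fs = 10e6                                 # sample rate (Hz), assumed

    def random_sequence(n_chirps=4, seg_dur=20e-6, f_lo=100e3, f_hi=500e3):
        """Concatenate chirps with randomly drawn start/stop frequencies."""
        t = np.arange(int(seg_dur * fs)) / fs
        segs = [chirp(t, f0=rng.uniform(f_lo, f_hi), t1=seg_dur,
                      f1=rng.uniform(f_lo, f_hi)) for _ in range(n_chirps)]
        return np.concatenate(segs)

    def score(s1, s2):
        """Penalty: autocorrelation sidelobe level plus peak cross-correlation."""
        ac = np.correlate(s1, s1, "full")
        ac /= np.abs(ac).max()
        mid = ac.size // 2
        mask = np.ones(ac.size, bool)
        mask[mid - 10:mid + 11] = False       # exclude the main lobe
        sidelobe = np.max(np.abs(ac[mask]))
        xc = np.correlate(s1, s2, "full")
        cross = np.max(np.abs(xc)) / (np.linalg.norm(s1) * np.linalg.norm(s2))
        return sidelobe + cross

    best_score, best_pair = np.inf, None
    for _ in range(100):                      # Monte-Carlo search
        a, b = random_sequence(), random_sequence()
        s = score(a, b)
        if s < best_score:
            best_score, best_pair = s, (a, b)
    print("best combined penalty:", round(best_score, 3))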
Zhang, Xiao-yan; Peng, Yong; Su, Zhong-rui; Chen, Qi-he; Ruan, Hui; He, Guo-qing
2013-02-01
Biotransformation of phytosterol (PS) by a newly isolated mutant, Mycobacterium neoaurum ZJUVN-08, to produce androstenedione has been investigated in this paper. The parameters of the biotransformation process were optimized using fractional factorial design and response surface methodology. Androstenedione was the sole product in the fermentation broth catalyzed by the mutant M. neoaurum ZJUVN-08 strain. Results showed that the molar ratio of hydroxypropyl-β-cyclodextrin (HP-β-CD) to PS and the substrate concentration were the two most significant factors affecting androstenedione production. By analyzing the statistical model of the three-dimensional surface plot, the optimal process conditions were found to be 0.1 g/L inducer, pH 7.0, an HP-β-CD to PS molar ratio of 1.92:1, 8.98 g/L PS, and 120 h of incubation time. Under these conditions, the maximum androstenedione yield was 5.96 g/L, nearly the same as the non-optimized value (5.99 g/L), while the maximum PS conversion rate was 94.69%, an increase of 10.66% compared with the non-optimized value (84.03%). The predicted optimum conditions from the mathematical model were in agreement with the verification experimental results. Response surface methodology thus proved a powerful and efficient method to optimize the parameters of the PS biotransformation process.
Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed
2014-03-19
In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X₁), the flow rate (X₂), the catalyst weight (X₃), the catalyst loading (X₄) and the glycerol-water molar ratio (X₅), on the H₂ yield (Y₁) and the conversion of glycerol to gaseous products (Y₂) were explored. Using multiple regression analysis, the experimental results for the H₂ yield and the glycerol conversion to gases were fit to quadratic polynomial models. The proposed mathematical models correlated the dependent factors well within the limits being examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20% and a glycerol-water molar ratio of approximately 12, where the H₂ yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables that were studied.
Methodological quality of meta-analyses of single-case experimental studies.
Jamshidi, Laleh; Heyvaert, Mieke; Declercq, Lies; Fernández-Castilla, Belén; Ferron, John M; Moeyaert, Mariola; Beretvas, S Natasha; Onghena, Patrick; Van den Noortgate, Wim
2017-12-28
Methodological rigor is a fundamental factor in the validity and credibility of the results of a meta-analysis. Following an increasing interest in single-case experimental design (SCED) meta-analyses, the current study investigates the methodological quality of SCED meta-analyses. We assessed the methodological quality of 178 SCED meta-analyses published between 1985 and 2015 through the modified Revised-Assessment of Multiple Systematic Reviews (R-AMSTAR) checklist. The main finding of the current review is that the methodological quality of the SCED meta-analyses has increased over time, but is still low according to the R-AMSTAR checklist. A remarkable percentage of the studies (93.80% of the included SCED meta-analyses) did not even reach the midpoint score (22, on a scale of 0-44). The mean and median methodological quality scores were 15.57 and 16, respectively. Relatively high scores were observed for "providing the characteristics of the included studies" and "doing comprehensive literature search". The key areas of deficiency were "reporting an assessment of the likelihood of publication bias" and "using the methods appropriately to combine the findings of studies". Although the results of the current review reveal that the methodological quality of the SCED meta-analyses has increased over time, still more efforts are needed to improve their methodological quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gholikandi, Gagik Badalians; Kazemirad, Khashayar
2018-03-01
In this study, the performance of the electrochemical peroxidation (ECP) process for removing the volatile suspended solids (VSS) content of waste-activated sludge was evaluated. The Fe2+ ions required by the process were obtained directly from iron electrodes in the system. The performance of the ECP process was investigated under various operational conditions employing a laboratory-scale pilot setup and optimized by response surface methodology (RSM). According to the results, the ECP process showed its best performance when the pH value, current density, H2O2 concentration and retention time were 3, 3.2 mA/cm2, 1,535 mg/L and 240 min, respectively. Under these conditions, the introduced Fe2+ concentration was approximately 500 mg/L and the VSS removal efficiency about 74%. Moreover, the results of the microbial characterization of the raw and stabilized sludge demonstrated that the ECP process is able to remove close to 99.9% of the coliforms in the raw sludge during the stabilization process. The energy consumption evaluation showed that the energy required by the ECP reactor (about 1.8-2.5 kWh per kg VSS removed) is considerably lower than that of aerobic digestion, the conventional waste-activated sludge stabilization method (about 2-3 kWh per kg VSS removed). The RSM optimization showed that the best operational conditions of the ECP process comply with the experimental results, and the actual and predicted results are in good conformity with each other. This feature makes it possible to precisely predict the introduced Fe2+ concentration in the system and the VSS removal efficiency of the process.
Does mood influence text processing and comprehension? Evidence from an eye-movement study.
Scrimin, Sara; Mason, Lucia
2015-09-01
Previous research has indicated that mood influences cognitive processes. However, there are scarce data regarding the link between everyday emotional states and readers' text processing and comprehension. We aim to extend current research on the effects of mood induction on science text processing and comprehension using eye-tracking methodology. We investigated whether positive-, negative-, and neutral-induced moods influence online processing, as revealed by indices of visual behaviour during reading, and offline text comprehension, as revealed by post-test questions. We were also interested in the link between text processing and comprehension. Seventy-eight undergraduate students were randomly assigned to three mood-induction conditions. Students were mood-induced by watching a video clip. They were then asked to read a scientific text while eye movements were registered. Pre- and post-reading knowledge was assessed through open-ended questions. Experimentally induced moods led readers to process an expository text differently. Overall, students in a positive mood spent significantly longer on text processing than students in the negative and neutral moods. Eye-movement patterns indicated more effective processing, reflected in a longer proportion of look-back fixation times in positive-induced compared with negative-induced readers. Students in a positive mood also comprehended the text better, learning more factual knowledge, compared with students in the negative group. Only for the positive-induced readers did the more purposeful second-pass reading positively predict text comprehension. New insights are given on the effects of normal mood variations on students' text processing and comprehension through the use of eye-tracking methodology. Important implications for the role of emotional states in educational settings are highlighted. © 2015 The British Psychological Society.
Efficient calculation of nuclear spin-rotation constants from auxiliary density functional theory.
Zuniga-Gutierrez, Bernardo; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso; Simon-Bastida, Patricia; Calaminici, Patrizia; Köster, Andreas M
2015-09-14
The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT), in combination with the gauge-including atomic orbital (GIAO) scheme to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster levels of theory. It is found that the agreement between the ADFT-GIAO results and experiment is good and very similar to that obtained with the coupled-cluster singles-doubles-perturbative-triples-GIAO methodology. With the improved computational performance achieved, the computation of spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H¹²C-¹²CH-DF complex is presented. Here, the temperature dependency of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge, this is the first time that temperature effects on the spin-rotation tensor have been investigated.
Singh, Kunwar P; Singh, Arun K; Gupta, Shikha; Rai, Premanjali
2012-07-01
The present study investigates the individual and combined effects of temperature, pH, zero-valent bimetallic nanoparticle (ZVBMNP) dose, and chloramphenicol (CP) concentration on the reductive degradation of CP by ZVBMNPs in aqueous medium. Iron-silver ZVBMNPs were synthesized. Batch experimental data were generated using a four-factor statistical experimental design. CP reduction by ZVBMNPs was optimized using response surface modeling (RSM) and artificial neural network-genetic algorithm (ANN-GA) approaches. The RSM and ANN methodologies were also compared for their predictive and generalization abilities using the same training and validation data sets. Reductive by-products of CP were identified using liquid chromatography-mass spectrometry. The optimized process variables (RSM and ANN-GA approaches) yielded CP reduction capacities of 57.37 and 57.10 mg/g, respectively, compared with the experimental value of 54.0 mg/g obtained with un-optimized variables. The ANN-GA and RSM methodologies yielded comparable results and helped to achieve a higher reduction (>6%) of CP by the ZVBMNPs relative to the un-optimized experimental value. The root mean squared error, relative standard error of prediction, and correlation coefficient between the measured and model-predicted values of the response variable were 1.34, 3.79, and 0.964 for the RSM model and 0.03, 0.07, and 0.999 for the ANN model on the training set, and 1.39, 3.47, and 0.996 for RSM and 1.25, 3.11, and 0.990 for ANN on the validation set. The predictive and generalization abilities of both the RSM and ANN models were comparable. The synthesized ZVBMNPs may be used for efficient reductive removal of CP from water.
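The three comparison statistics quoted above are standard and straightforward to compute; a small sketch (Python/numpy, with hypothetical measured and predicted values rather than the study's data) follows.

    import numpy as np

    def rmse(y, yhat):
        """Root mean squared error."""
        return np.sqrt(np.mean((y - yhat) ** 2))

    def rsep(y, yhat):
        """Relative standard error of prediction, in percent."""
        return 100.0 * np.sqrt(np.sum((y - yhat) ** 2) / np.sum(y ** 2))

    def corr(y, yhat):
        """Pearson correlation between measured and predicted values."""
        return np.corrcoef(y, yhat)[0, 1]

    # Hypothetical measured vs. model-predicted CP-reduction values (mg/g).
    y = np.array([30.2, 41.5, 47.8, 52.1, 54.0])
    y_model = np.array([31.0, 40.2, 48.9, 51.0, 55.1])
    print(rmse(y, y_model), rsep(y, y_model), corr(y, y_model))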
Development of Pangasius steaks by improved sous-vide technology and its process optimization.
Kumari, Namita; Singh, Chongtham Baru; Kumar, Raushan; Martin Xavier, K A; Lekshmi, Manjusha; Venkateshwarlu, Gudipati; Balange, Amjad K
2016-11-01
The present study embarked on the objective of optimizing improved sous-vide processing conditions for the development of ready-to-cook Pangasius steaks with extended shelf-life using response surface methodology. For the development of the improved sous-vide cooked product, Pangasius steaks were treated with additional hurdles in various combinations for optimization. Based on this study, a suitable combination of chitosan and spices was selected, which enhanced the antimicrobial and oxidative stability of the product. A Box-Behnken experimental design with 15 trials per model was adopted to determine the effect of the independent variables, namely chitosan concentration (X1), cooking time (X2) and cooking temperature (X3), on the dependent variable, the TBARS value (Y1). From the RSM-generated model, the optimum conditions for sous-vide processing of Pangasius steaks were a 1.08% chitosan concentration, a cooking temperature of 70.93 °C and a cooking time of 16.48 min, and the predicted minimum value of the multiple-response optimal condition was Y = 0.855 mg MDA/kg of fish. The high correlation coefficient (R2 = 0.975) between the model and the experimental data showed that the model was able to efficiently predict processing conditions for the development of sous-vide processed Pangasius steaks. This research may help processing industries and Pangasius fish farmers, as it provides an alternative low-cost technology for the proper utilization of Pangasius.
Patwardhan, Ketaki; Asgarzadeh, Firouz; Dassinger, Thomas; Albers, Jessica; Repka, Michael A
2015-05-01
In this study, the principles of quality by design (QbD) have been uniquely applied to a pharmaceutical melt extrusion process for an immediate-release formulation with a low-melting model drug, ibuprofen. Two qualitative risk assessment tools - a Fishbone diagram and failure mode effect analysis - were utilized to strategically narrow down the most influential parameters. Selected variables were further assessed using a Plackett-Burman screening study, which was upgraded to a response surface design consisting of the critical factors, to study the interactions between the study variables. In-process torque, glass transition temperature (Tg) of the extrudates, assay, dissolution and phase change were measured as responses to evaluate the critical quality attributes (CQAs) of the extrudates. The effect of each study variable on the measured responses was analysed using multiple regression for the screening design and partial least squares for the optimization design. Experimental limits for formulation and process parameters to attain optimum processing have been outlined. A design space plot describing the domain of experimental variables within which the CQAs remained unchanged was developed. A comprehensive approach to melt extrusion product development based on the QbD methodology has been demonstrated. Drug loading concentrations between 40-48% w/w and extrusion temperatures in the range of 90-130°C were found to be optimal. © 2015 Royal Pharmaceutical Society.
Biobehavioral Outcomes Following Psychological Interventions for Cancer Patients
Andersen, Barbara L.
2007-01-01
Psychological interventions for adult cancer patients have primarily focused on reducing stress and enhancing quality of life. However, there has been expanded focus on biobehavioral outcomes—health behaviors, compliance, biologic responses, and disease outcomes—consistent with the Biobehavioral Model of cancer stress and disease course. The author reviewed this expanded focus in quasi-experimental and experimental studies of psychological interventions, provided methodologic detail, summarized findings, and highlighted novel contributions. A final section discussed methodologic issues, research directions, and challenges for the coming decade. PMID:12090371
Methodology of Computer-Aided Design of Variable Guide Vanes of Aircraft Engines
ERIC Educational Resources Information Center
Falaleev, Sergei V.; Melentjev, Vladimir S.; Gvozdev, Alexander S.
2016-01-01
The paper presents a methodology which helps to avoid a great amount of costly experimental research. This methodology includes thermo-gas dynamic design of an engine and its mounts, the profiling of compressor flow path and cascade design of guide vanes. Employing a method elaborated by Howell, we provide a theoretical solution to the task of…
Spatial Statistics of atmospheric particulate matter in China
NASA Astrophysics Data System (ADS)
Huang, Yongxiang; Wang, Yangjun; Liu, Yulu
2017-04-01
In this work, the spatial dynamics of atmospheric particulate matter (PM10 and PM2.5) are studied using turbulence methodologies. The hourly particulate matter concentrations were released by the Chinese government (http://www.cnemc.cn). We first processed these data into daily average concentrations. In total, there are 305 monitoring stations with an observation period of 425 days. It is found experimentally that the spatial correlation function ρ(r) shows a log-law in the mesoscale range, i.e., 50 ≤ r ≤ 500 km, with an experimental scaling exponent β = 0.45. The spatial structure function shows a power-law behavior in the mesoscale range 90 ≤ r ≤ 500 km. The experimental scaling exponent ζ(q) is convex, showing that intermittency corrections are relevant in characterizing the spatial dynamics of particulate matter. The measured singularity spectrum f(α) also shows its multifractal nature. Experimentally, the particulate matter is more intermittent than a passive scalar, which could be partially due to mesoscale movements of the atmosphere, and also to local sources, such as local industrial activities.
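The q-th order spatial structure function underlying ζ(q) is S_q(r) = <|c(x + r) - c(x)|^q>, estimated over station pairs binned by separation distance. The sketch below (Python/numpy) shows such an estimator on synthetic station data (random coordinates, log-normal concentrations), so the fitted exponents here carry no physical meaning; with real station records one value of ζ(q) is obtained per moment order q.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: station coordinates (km) and one daily-mean
    # concentration value per station (a single time snapshot for brevity).
    n_st = 305
    coords = rng.uniform(0, 3000, size=(n_st, 2))
    conc = rng.lognormal(mean=4.0, sigma=0.5, size=n_st)

    # Pairwise separations and absolute concentration increments.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(n_st, k=1)
    r, dc = d[iu], np.abs(conc[:, None] - conc[None, :])[iu]

    # q-th order structure function in logarithmic separation bins (50-500 km).
    bins = np.logspace(np.log10(50), np.log10(500), 12)
    idx = np.digitize(r, bins)
    rc = np.sqrt(bins[:-1] * bins[1:])          # bin centers
    for q in (1, 2):
        Sq = [np.mean(dc[idx == k] ** q) for k in range(1, len(bins))]
        zeta = np.polyfit(np.log(rc), np.log(Sq), 1)[0]
        print(f"q={q}: fitted zeta(q) = {zeta:.2f}")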
ERIC Educational Resources Information Center
Lee, Jang Ho
2012-01-01
Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…
ERIC Educational Resources Information Center
Gray, Ron
2014-01-01
Inquiry experiences in secondary science classrooms are heavily weighted toward experimentation. We know, however, that many fields of science (e.g., evolutionary biology, cosmology, and paleontology), while they may utilize experiments, are not justified by experimental methodologies. With the focus on experimentation in schools, these fields of…
Optimizing chemical conditioning for odour removal of undigested sewage sludge in drying processes.
Vega, Esther; Monclús, Hèctor; Gonzalez-Olmos, Rafael; Martin, Maria J
2015-03-01
Emission of odours during thermal drying in sludge handling processes is one of the main sources of odour problems in wastewater treatment plants. The objective of this work was to assess the use of response surface methodology as a technique to optimize the chemical conditioning process of undigested sewage sludges, in order to improve dewaterability and to reduce odour emissions during thermal drying of the sludge. Synergistic effects between inorganic conditioners (iron chloride and calcium oxide) were observed in terms of sulphur emissions and odour reduction. The developed quadratic models indicated that, by optimizing the conditioner dosages, it is possible to increase dewaterability by 70% while reducing the emissions of odour and volatile sulphur compounds by 50% and 54%, respectively. The optimization of the conditioning process was validated experimentally. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rajamani, D.; Esakki, Balasubramanian
2017-09-01
Selective inhibition sintering (SIS) is a powder-based additive manufacturing (AM) technique that produces functional parts with an inexpensive system compared with other AM processes. The mechanical properties of SIS-fabricated parts depend strongly on several process parameters, most importantly layer thickness, heat energy, heater feedrate, and printer feedrate. In this paper, the influence of these process parameters on mechanical properties such as tensile and flexural strength is examined using response surface methodology (RSM). The test specimens are fabricated from high-density polyethylene (HDPE), and mathematical models are developed to correlate the control factors with the respective experimental design responses. Further, optimal SIS process parameters are determined using the desirability approach to enhance the mechanical properties of the HDPE specimens. The optimization studies reveal that a combination of high heat energy, low layer thickness, and medium heater and printer feedrates yields superior mechanical strength characteristics.
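The desirability approach mentioned above converts each predicted response into a score on [0, 1] and maximizes the geometric mean of the scores over the design space. A minimal sketch (Python/numpy) with placeholder linear response models, not the fitted RSM models of this study:

    import numpy as np

    def desirability(y, y_min, y_max):
        """Larger-the-better Derringer-Suich desirability, clipped to [0, 1]."""
        return np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0)

    # Grid of coded factor settings for two factors on [-1, 1].
    grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 11)] * 2)).reshape(2, -1).T

    # Placeholder response models for tensile and flexural strength (MPa).
    tensile = 20 + 4 * grid[:, 0] - 3 * grid[:, 1]     # assumed model
    flexural = 30 + 2 * grid[:, 0] + 5 * grid[:, 1]    # assumed model

    # Overall desirability: geometric mean of individual desirabilities.
    D = np.sqrt(desirability(tensile, 15, 28) * desirability(flexural, 25, 40))
    best = grid[np.argmax(D)]
    print("best coded settings:", best, "overall desirability:", D.max())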
Research and development of an electrochemical biocide reactor
NASA Technical Reports Server (NTRS)
See, G. G.; Bodo, C. A.; Glennon, J. P.
1975-01-01
An alternative disinfecting process to chemical agents, heat, or radiation in an aqueous medium has been studied. The process, called an electrochemical biocide, employs cyclic, low-level voltages at chemically inert electrodes to pass alternating current through water and, in the process, destroy microorganisms. The paper describes experimental hardware, methodology, and results with a tracer microorganism (Escherichia coli). The results presented show the effects on microorganism kill of the operating parameters, including current density (15 to 55 mA/sq cm (14 to 51 ASF)), waveform of the applied electrical signal (square, triangular, sine), frequency of the applied electrical signal (0.5 to 1.5 Hz), process water flow rate (100 to 600 cc/min (1.6 to 9.5 gph)), and reactor residence time (0 to 4 min).
Compositional Verification of a Communication Protocol for a Remotely Operated Vehicle
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar A.
2009-01-01
This paper presents the specification and verification in the Prototype Verification System (PVS) of a protocol intended to facilitate communication in an experimental remotely operated vehicle used by NASA researchers. The protocol is defined as a stack-layered composition of simpler protocols. It can be seen as the vertical composition of protocol layers, where each layer performs input and output message processing, and the horizontal composition of different processes concurrently inhabiting the same layer, where each process satisfies a distinct requirement. It is formally proven that the protocol components satisfy certain delivery guarantees. Compositional techniques are used to prove these guarantees also hold in the composed system. Although the protocol itself is not novel, the methodology employed in its verification extends existing techniques by automating the tedious and usually cumbersome part of the proof, thereby making the iterative design process of protocols feasible.
Ogedey, Aysenur; Tanyol, Mehtap
2017-12-01
Leachate is among the most difficult wastewaters to treat due to its complex composition and high pollutant load. Because it cannot be treated adequately with a single process, pre-treatment is needed. In the present study, a batch electrocoagulation reactor containing aluminum and iron electrodes was used to reduce the chemical oxygen demand (COD) of landfill leachate (Tunceli, Turkey). Optimization of COD elimination was carried out with response surface methodology to describe the interaction effects of four main independent process parameters (current density, inter-electrode distance, pH and time of electrolysis). The optimum current density, inter-electrode distance, pH and time of electrolysis for maximum COD removal (43%) were found to be 19.42 mA/m2, 0.96 cm, 7.23 and 67.64 min, respectively. The results showed that the electrocoagulation process can be used as a pre-treatment step for leachate.
Jammalamadaka, Rajanikanth
2009-01-01
This report consists of a dissertation submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Graduate College, The University of Arizona, 2008. Spatio-temporal systems with heterogeneity in their structure and behavior have two major problems associated with them. The first one is that such complex real world systems extend over very large spatial and temporal domains and consume so many computational resources to simulate that they are infeasible to study with current computational platforms. The second one is that the data available for understanding such systems is limited because they are spread over space and time making it hard to obtain micro and macro measurements. This also makes it difficult to get the data for validation of their constituent processes while simultaneously considering their global behavior. For example, the valley fever fungus considered in this dissertation is spread over a large spatial grid in the arid Southwest and typically needs to be simulated over several decades of time to obtain useful information. It is also hard to get the temperature and moisture data (which are two critical factors on which the survival of the valley fever fungus depends) at every grid point of the spatial domain over the region of study. In order to address the first problem, we develop a method based on the discrete event system specification which exploits the heterogeneity in the activity of the spatio-temporal system and which has been shown to be effective in solving relatively simple partial differential equation systems. The benefit of addressing the first problem is that it now makes it feasible to address the second problem. We address the second problem by making use of a multilevel methodology based on modeling and simulation and systems theory. This methodology helps us in the construction of models with different resolutions (base and lumped models). This allows us to refine an initially constructed lumped model with detailed physics-based process models and assess whether they improve on the original lumped models. For that assessment, we use the concept of experimental frame to delimit where the improvement is needed. This allows us to work with the available data, improve the component models in their own experimental frame and then move them to the overall frame. In this dissertation, we develop a multilevel methodology and apply it to a valley fever model. Moreover, we study the model's behavior in a particular experimental frame of interest, namely the formation of new sporing sites.
Yousefzadeh, Samira; Matin, Atiyeh Rajabi; Ahmadi, Ehsan; Sabeti, Zahra; Alimohammadi, Mahmood; Aslani, Hassan; Nabizadeh, Ramin
2018-04-01
One of the most important aspects of environmental issues is the demand for clean and safe water, and the disinfection process is one of the most important steps in safe water production. The present study estimates the performance of UV, nano zero-valent iron particles (nZVI, nano-Fe0), and UV treatment with the addition of nZVI (combined process) for Bacillus subtilis spore inactivation. The effects of different factors on inactivation, including contact time, initial nZVI concentration, UV irradiance and various aeration conditions, were investigated. Response surface methodology, based on a five-level, two-variable central composite design, was used to optimize target microorganism reduction and the experimental parameters. The results indicated that disinfection time had the greatest positive impact on disinfection ability among the selected independent variables. According to the results, microbial reduction by UV alone was more effective than by nZVI, while the combined UV/nZVI process demonstrated the maximum log reduction. An optimum reduction of about 4 logs was observed at 491 mg/L of nZVI and 60 min of contact time when spores were exposed to UV radiation under deaerated conditions. Therefore, the UV/nZVI process can be suggested as a reliable method for Bacillus subtilis spore inactivation. Copyright © 2018. Published by Elsevier Ltd.
Experimental asbestos studies in the UK: 1912-1950.
Greenberg, Morris
2017-11-01
The asbestos industry originated in the UK in the 1870s. By 1898, asbestos had many applications and was reported to be one of the four leading causes of severe occupational disease. In 1912, the UK government sponsored an experimental study that reported that exposure to asbestos produced no more than a modicum of pulmonary fibrosis in guinea pigs. In the 1930s, the newly established Medical Research Council, with assistance from industry, sponsored a study of the effects of exposing animals to asbestos by injection (intratracheal and subcutaneous) and by inhalation in the factory environment. Government reports, publications, and contemporary records obtained by legal discovery have been reviewed in the context of the stage of scientific development and the history of the times. Experimenters were engaged in a learning process during the 1912-1950 period, and their reports of the effects of asbestos were inconsistent. Pathologists who studied the effects of asbestos experimentally, at whole animal, tissue and cellular levels, advanced experimental methodology and mechanistic knowledge. In the hands of public relations experts, however, research was exploited to preserve an industry and perpetuate preventable diseases, a practice that continues to this day. © 2017 Wiley Periodicals, Inc.
True detection limits in an experimental linearly heteroscedastic system. Part 2
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Despite much different processing of the experimental fluorescence detection data presented in Part 1, essentially the same estimates were obtained for the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD). The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.0 mV, YD = 125. mV, XC = 0.132 μg/mL and XD = 0.293 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158. mV and XD = 0.371 μg/mL. Furthermore, by using bootstrapping methodology on the experimental data for the standards and the analytical blank, it was possible to validate previously published experimental-domain expressions for the decision levels (yC and xC) and detection limits (yD and xD). This was demonstrated by testing the generated decision levels and detection limits for their performance in regard to false positives and false negatives. In every case, the obtained numbers of false negatives and false positives were as specified a priori.
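A simplified sketch of the Currie decision-level and detection-limit computation, with a bootstrap check of the false-positive rate. It assumes a homoscedastic blank and an invented calibration slope, whereas the paper treats the linearly heteroscedastic case:

```python
import numpy as np

rng = np.random.default_rng(1)
blank = rng.normal(10.0, 2.0, size=200)   # synthetic blank responses (mV)
slope = 425.0                              # assumed calibration slope (mV per ug/mL)

z95 = 1.645                                # one-sided z for 5% error rates
s_b = blank.std(ddof=1)
yC = blank.mean() + z95 * s_b              # decision level, response domain
yD = blank.mean() + 2 * z95 * s_b          # detection limit (equal sigmas assumed)
xC = (yC - blank.mean()) / slope           # content-domain equivalents
xD = (yD - blank.mean()) / slope

# Bootstrap check: the fraction of resampled blank readings above yC
# approximates the false-positive rate and should be close to 5%.
boot = rng.choice(blank, size=(2000, blank.size), replace=True)
fp = (boot > yC).mean()
print(f"yC={yC:.1f} mV, yD={yD:.1f} mV, xC={xC:.4f}, xD={xD:.4f}, FP rate={fp:.3f}")
```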
NASA Astrophysics Data System (ADS)
Roosta, M.; Ghaedi, M.; Shokri, N.; Daneshfar, A.; Sahraei, R.; Asghari, A.
2014-01-01
The present study aimed at experimental design optimization applied to the removal of malachite green (MG) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using different techniques such as FESEM, TEM, BET, and UV-vis measurements. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time on MG removal were studied using a central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models showed the suitability and applicability of the Langmuir model. The applicability of kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models was tested on the experimental data, and the pseudo-second-order equation and intraparticle diffusion models were found to control the kinetics of the adsorption process. A small amount of the proposed adsorbent (0.015 g) is applicable for successful removal of MG (RE > 99%) in a short time (4.4 min) with high adsorption capacity (140-172 mg g⁻¹).
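As an illustration of isotherm fitting of this kind, a short sketch of a Langmuir fit, qe = qmax*KL*Ce/(1 + KL*Ce); the equilibrium data points are invented for demonstration, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Illustrative equilibrium data (Ce in mg/L, qe in mg/g), not the paper's values.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([55.0, 95.0, 125.0, 148.0, 160.0, 168.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[170.0, 0.1])
resid = qe - langmuir(Ce, qmax, KL)
r2 = 1 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg, R^2 = {r2:.4f}")
```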
Vasiliu, Tudor; Cojocaru, Corneliu; Rotaru, Alexandru; Pricope, Gabriela; Pinteala, Mariana; Clima, Lilia
2017-06-17
The polyplexes formed by nucleic acids and polycations have received great attention owing to their potential application in gene therapy. In our study, we report experimental results and modeling outcomes regarding the optimization of polyplex formation between double-stranded DNA (dsDNA) and poly(ʟ-Lysine) (PLL). The quantification of the binding efficiency during polyplex formation was performed by processing the images captured from gel electrophoresis assays. Design of experiments (DoE) and response surface methodology (RSM) were employed to investigate the coupling effect of the key factors (pH and N/P ratio) affecting the binding efficiency. According to the experimental observations and response surface analysis, the N/P ratio showed a major influence on binding efficiency compared to pH. Model-based optimization calculations, along with experimental confirmation runs, unveiled the maximal binding efficiency (99.4%) achieved at pH 5.4 and N/P ratio 125. To support the experimental data and reveal insights into the molecular mechanism responsible for polyplex formation between dsDNA and PLL, molecular dynamics simulations were performed at pH 5.4 and 7.4.
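A hedged sketch of the RSM step: fit a full quadratic model in pH and N/P ratio by least squares and maximize the fitted surface. The response table is fabricated for illustration and will not reproduce the paper's optimum:

```python
import numpy as np
from scipy.optimize import minimize

# Invented toy data: binding efficiency (%) observed at (pH, N/P) settings.
X = np.array([[5.0, 50], [5.0, 100], [5.0, 150], [6.5, 50], [6.5, 100],
              [6.5, 150], [8.0, 50], [8.0, 100], [8.0, 150]], float)
y = np.array([90, 97, 98, 85, 92, 94, 70, 78, 82], float)

def features(p):
    """Full quadratic model terms in the two factors."""
    ph, n_over_p = p
    return np.array([1, ph, n_over_p, ph * n_over_p, ph**2, n_over_p**2])

A = np.array([features(p) for p in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares coefficients

# Maximise the fitted surface inside the experimental region.
res = minimize(lambda p: -features(p) @ beta, x0=[6.0, 100],
               bounds=[(5.0, 8.0), (50, 150)])
print("optimum (pH, N/P):", res.x, "predicted efficiency:", -res.fun)
```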
Rai, Suchita; Wasewar, Kailas L; Lataye, Dilip H; Mishra, Rajshekhar S; Puttewar, Suresh P; Chaddha, Mukesh J; Mahindiran, P; Mukhopadhyay, Jyoti
2012-09-01
'Red mud' or 'bauxite residue', a waste generated from alumina refineries, is highly alkaline in nature with a pH of 10.5-12.5. Red mud poses serious environmental problems such as alkali seepage into ground water and alkaline dust generation. One of the options to make red mud less hazardous and environmentally benign is its neutralization with an acid or an acidic waste. Hence, in the present study, neutralization of alkaline red mud was carried out using a highly acidic waste (pickling waste liquor). Pickling waste liquor is a mixture of strong acids used for descaling or cleaning surfaces in the steel-making industry. The aim of the study was to look into the feasibility of a neutralization process for the two wastes using Taguchi's design of experiments methodology. This would make both wastes less hazardous and safe for disposal. The effects of slurry solids, volume of pickling liquor, stirring time and temperature on the neutralization process were investigated. The analysis of variance (ANOVA) shows that the volume of the pickling liquor is the most significant parameter, followed by the quantity of red mud, contributing 69.18% and 18.48%, respectively. Under the optimized parameters, a pH value of 7 can be achieved by mixing the two wastes. About 25-30% of the total soda in the red mud is neutralized and the alkalinity is reduced by 80-85%. The mineralogy and morphology of the neutralized red mud have also been studied. The data presented will be useful in view of the environmental concern of red mud disposal.
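The ANOVA percent-contribution arithmetic behind figures like 69.18% and 18.48% can be sketched for a balanced two-level array as follows; the level matrix and pH responses are invented stand-ins:

```python
import numpy as np

# Toy L8-style result table: rows = runs, columns = coded factor levels (0/1),
# y = final slurry pH. Values are illustrative, not the paper's data.
levels = np.array([[0,0,0,0], [0,0,1,1], [0,1,0,1], [0,1,1,0],
                   [1,0,0,1], [1,0,1,0], [1,1,0,0], [1,1,1,1]])
y = np.array([9.8, 8.1, 7.4, 9.0, 8.8, 7.2, 6.9, 8.3])

grand = y.mean()
SS_total = np.sum((y - grand)**2)
names = ["slurry solids", "pickling liquor volume", "stirring time", "temperature"]
for j, name in enumerate(names):
    # Factor sum of squares for a balanced 2-level design:
    # n_level * sum of squared offsets of the level means from the grand mean.
    ss = sum(np.sum(levels[:, j] == lv) * (y[levels[:, j] == lv].mean() - grand)**2
             for lv in (0, 1))
    print(f"{name:>24s}: {100 * ss / SS_total:5.1f}% contribution")
```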
NASA Technical Reports Server (NTRS)
Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology in which participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment-based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays on the basis of cost, the amount of experimental time required, experimental flexibility, and the type of data provided.
Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza
2016-09-01
The application of the Quality by Design principles is one of the key issues of recent pharmaceutical development. In the past decade a lot of knowledge has been collected about the practical realization of the concept, but there are still many unanswered questions. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiments (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in over- or underestimation of the real trends and changes, making the calculations uncertain, especially in the edge regions of the PDS. Completing RSM with artificial neural network (ANN) based models is therefore a commonly used method to reduce the uncertainties. Nevertheless, since different studies focus on the use of a given DoE, there is a lack of comparative studies on different experimental layouts. Therefore, the aim of the present study was to investigate the effect of different DoE layouts (2-level full factorial, central composite, Box-Behnken, 3-level fractional and 3-level full factorial design) on model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ by more than 40% when calculated with different polynomial models, which was associated with a considerable shift in its position when higher-level layouts were applied. The shift was more considerable when the calculation was based on RSM. Model predictability was also better with ANN-based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and design layouts in which the extreme factor levels are better represented are recommended. Copyright © 2016 Elsevier B.V. All rights reserved.
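The layouts compared in the study can be generated programmatically. A sketch assuming the third-party pyDOE2 package (ff2n, fullfact, bbdesign and ccdesign are that package's API, not the authors' code):

```python
import numpy as np
from pyDOE2 import ff2n, fullfact, bbdesign, ccdesign  # assumed third-party package

k = 3                                   # number of critical factors
designs = {
    "2-level full factorial": ff2n(k),
    "3-level full factorial": fullfact([3] * k) - 1,   # recode 0/1/2 -> -1/0/1
    "Box-Behnken":            bbdesign(k, center=3),
    "central composite":      ccdesign(k, center=(0, 3), face="ccf"),
}
for name, d in designs.items():
    print(f"{name:>24s}: {len(d):3d} runs, levels of factor 1: "
          f"{sorted(set(np.round(d[:, 0], 3)))}")
```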
NASA Astrophysics Data System (ADS)
Tschopp, M. A.; Murdoch, H. A.; Kecskes, L. J.; Darling, K. A.
2014-06-01
It is a new beginning for innovative fundamental and applied science in nanocrystalline materials. Many of the processing and consolidation challenges that have haunted nanocrystalline materials are now more fully understood, opening the doors for bulk nanocrystalline materials and parts to be produced. While challenges remain, recent advances in experimental, computational, and theoretical capability have allowed for bulk specimens that have heretofore been pursued only on a limited basis. This article discusses the methodology for synthesis and consolidation of bulk nanocrystalline materials using mechanical alloying, the alloy development and synthesis process for stabilizing these materials at elevated temperatures, and the physical and mechanical properties of nanocrystalline materials with a focus throughout on nanocrystalline copper and a nanocrystalline Cu-Ta system, consolidated via equal channel angular extrusion, with properties rivaling that of nanocrystalline pure Ta. Moreover, modeling and simulation approaches as well as experimental results for grain growth, grain boundary processes, and deformation mechanisms in nanocrystalline copper are briefly reviewed and discussed. Integrating experiments and computational materials science for synthesizing bulk nanocrystalline materials can bring about the next generation of ultrahigh strength materials for defense and energy applications.
Integrating uniform design and response surface methodology to optimize thiacloprid suspension
Li, Bei-xing; Wang, Wei-chang; Zhang, Xian-peng; Zhang, Da-xia; Mu, Wei; Liu, Feng
2017-01-01
A model 25% suspension concentrate (SC) of thiacloprid was adopted to evaluate an integrative approach of uniform design and response surface methodology. Tersperse2700, PE1601, xanthan gum and veegum were the four experimental factors, and the aqueous separation ratio and viscosity were the two dependent variables. Linear and quadratic polynomial models of stepwise regression and partial least squares were adopted to test the fit of the experimental data. Verification tests revealed satisfactory agreement between the experimental and predicted data. The measured values for the aqueous separation ratio and viscosity were 3.45% and 278.8 mPa·s, respectively, and the relative errors of the predicted values were 9.57% and 2.65%, respectively (prepared under the proposed conditions). Comprehensive benefits could also be obtained by appropriately adjusting the amount of certain adjuvants based on practical requirements. Integrating uniform design and response surface methodology is an effective strategy for optimizing SC formulas. PMID:28383036
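A minimal sketch of a partial least squares fit of two responses on four adjuvant levels, in the spirit of the abstract's modeling step; the data are randomly generated placeholders and scikit-learn is an assumed tool choice:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Toy formulation table (levels of Tersperse2700, PE1601, xanthan gum, veegum)
# against two responses: aqueous separation ratio (%) and viscosity (mPa*s).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(21, 4))                    # 21 runs, 4 adjuvants
Y = np.column_stack([5 - 3*X[:, 2] + rng.normal(0, .2, 21),
                     150 + 400*X[:, 2] + 80*X[:, 3] + rng.normal(0, 10, 21)])

pls = PLSRegression(n_components=2).fit(X, Y)
print("R^2 on training data:", pls.score(X, Y))
print("predicted responses at a new formulation:",
      pls.predict([[0.3, 0.2, 0.4, 0.1]]))
```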
SUMOFLUX: A Generalized Method for Targeted 13C Metabolic Flux Ratio Analysis
Kogadeeva, Maria
2016-01-01
Metabolic fluxes are a cornerstone of cellular physiology that emerge from a complex interplay of enzymes, carriers, and nutrients. The experimental assessment of in vivo intracellular fluxes using stable isotopic tracers is essential if we are to understand metabolic function and regulation. Flux estimation based on 13C or 2H labeling relies on complex simulation and iterative fitting, processes that necessitate a level of expertise that ordinarily precludes the non-expert user. To overcome this, we have developed SUMOFLUX, a methodology that is broadly applicable to the targeted analysis of 13C metabolic fluxes. By combining surrogate modeling and machine learning, we trained a predictor to specialize in estimating flux ratios from measurable 13C data. SUMOFLUX targets specific flux features individually, which makes it fast, user-friendly, applicable to experimental design and robust in terms of experimental noise and exchange flux magnitude. Collectively, we predict that SUMOFLUX's properties realistically pave the way to high-throughput flux analyses. PMID:27626798
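The surrogate-modeling idea (train a regressor on simulated labeling patterns, then apply it to measured data) can be sketched as follows; the label-propagation "simulator" is a toy stand-in, not SUMOFLUX's actual 13C model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

def simulate_labeling(flux_ratio, noise=0.01):
    """Stand-in for a 13C label-propagation simulator: maps a flux ratio to
    a vector of measurable mass-isotopomer abundances (purely illustrative)."""
    base = np.array([0.5, 0.3, 0.15, 0.05])
    shift = flux_ratio * np.array([-0.2, 0.1, 0.05, 0.05])
    x = np.clip(base + shift + rng.normal(0, noise, 4), 0, None)
    return x / x.sum()

# Train a surrogate predictor on simulated (labeling pattern -> flux ratio) pairs.
ratios = rng.uniform(0, 1, 5000)
X = np.array([simulate_labeling(r) for r in ratios])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ratios)

measured = simulate_labeling(0.42)        # pretend this came from an experiment
print("estimated flux ratio:", model.predict([measured])[0])
```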
An Experience of Teaching of Astronomy in the 6th Year of Fundamental Education
NASA Astrophysics Data System (ADS)
Pereira, L. F.; Damasceno, L. E. F.; Nero, J. D.; Silva, S. J. S. da; Costa, M. B. C.; Aleixo, V. F. P.; Júnior, C. A. B. da S.
2017-12-01
This paper addresses astronomy teaching within the science discipline through: 1- an analysis of the "Earth and Universe" axis of the National Curricular Parameters (NCPs); 2- a profile of the professionals who teach the discipline; 3- an analysis of the history and importance of experimentation in the teaching of astronomy in Brazil. The main objective is to analyze the conceptions of students and teachers regarding the application of experimentation in astronomy teaching, in a hybrid 6th-year class of 14 students during the recovery period (07/2016) at a municipal public school in São Miguel do Guamá, Pará. We highlight the mishaps faced by teachers in the public school system and their difficulty in using teaching methodologies that go beyond the traditional, emphasize the problems with training courses concerning the teaching of astronomy, and highlight experimentation as an indispensable tool in the construction of this teaching and learning process.
Real-Time Leaky Lamb Wave Spectrum Measurement and Its Application to NDE of Composites
NASA Technical Reports Server (NTRS)
Lih, Shyh-Shiuh; Bar-Cohen, Yoseph
1999-01-01
Numerous analytical and theoretical studies of the behavior of leaky Lamb waves (LLW) in composite materials are documented in the literature. One of the key issues constraining the application of this method as a practical tool is the amount of data that needs to be acquired and the slow process involved in such experiments. Recently, a methodology that allows quasi real-time acquisition of LLW dispersion data was developed. At each angle of incidence the reflection spectrum is available in real time from the experimental setup and can be used for rapid detection of defects. This technique can be used to rapidly acquire the various plate wave modes along various angles of incidence for the characterization of the material's elastic properties. The experimental method and data acquisition technique are described in this paper. Experimental data were used to examine a series of flaws, including porosity and delaminations, and demonstrated the efficiency of the developed technique.
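The per-angle mode extraction can be approximated by locating reflection-spectrum minima. A simplified sketch on a synthetic spectrum, not the authors' acquisition code:

```python
import numpy as np
from scipy.signal import find_peaks

def llw_mode_frequencies(freq, reflection, prominence=0.05):
    """Locate reflection-spectrum dips, which mark leaky Lamb wave modes at a
    fixed angle of incidence (simplified single-spectrum treatment)."""
    dips, _ = find_peaks(-reflection, prominence=prominence)
    return freq[dips]

# Synthetic spectrum with two dips standing in for plate-wave modes.
f = np.linspace(0.5, 10.0, 2000)                       # MHz
R = 1 - 0.6*np.exp(-((f - 3.1)/0.05)**2) - 0.5*np.exp(-((f - 6.8)/0.07)**2)
print("mode frequencies (MHz):", llw_mode_frequencies(f, R))
```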
New Experimental Capabilities and Theoretical Insights of High Pressure Compression Waves
NASA Astrophysics Data System (ADS)
Orlikowski, Daniel; Nguyen, Jeffrey H.; Patterson, J. Reed; Minich, Roger; Martin, L. Peter; Holmes, Neil C.
2007-12-01
Currently there are three platforms that offer quasi-isentropic compression or ramp-wave compression (RWC): light-gas gun, magnetic flux (Z-pinch), and laser. We focus here on the light-gas gun technique and on some current theoretical insights from experimental data. An impedance gradient through the length of the impactor provides the pressure pulse upon impact to the subject material. Applications and results are given concerning high-pressure strength and the liquid-to-solid phase transition of water, giving its first associated phase-fraction history. We also introduce the Korteweg-deVries-Burgers equation as a means to understand the evolution of these RWC waves as they propagate through the thickness of the subject material. This model equation has the necessary competition among nonlinear, dispersive, and dissipative processes, which is shown through observed structures that are manifested in the experimental particle-velocity histories. This methodology points toward a possibility of quantifying dissipation, through which RWC experiments may be analyzed.
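A compact pseudo-spectral integrator for the Korteweg-deVries-Burgers equation, u_t + u*u_x = nu*u_xx - mu*u_xxx, illustrates the competition the abstract refers to; the coefficients and initial ramp profile are illustrative choices:

```python
import numpy as np

# Periodic pseudo-spectral integrator for the KdV-Burgers equation, showing
# the interplay of nonlinearity, dissipation (nu) and dispersion (mu).
N, L = 256, 50.0
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
nu, mu = 0.1, 0.02

def rhs(u):
    uh = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * uh))
    uxx = np.real(np.fft.ifft(-(k**2) * uh))
    uxxx = np.real(np.fft.ifft(-1j * k**3 * uh))
    return -u * ux + nu * uxx - mu * uxxx

u = 0.5 * (1 - np.tanh((x - 10.0) / 2.0))     # smoothed step: a ramp-like front
dt, steps = 1e-3, 20000
for _ in range(steps):                         # classic RK4 time stepping
    k1 = rhs(u); k2 = rhs(u + dt/2*k1); k3 = rhs(u + dt/2*k2); k4 = rhs(u + dt*k3)
    u += dt / 6 * (k1 + 2*k2 + 2*k3 + k4)
print("front now spans u in [%.3f, %.3f]" % (u.min(), u.max()))
```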
Theory and methods in cultural neuroscience
Hariri, Ahmad R.; Harada, Tokiko; Mano, Yoko; Sadato, Norihiro; Parrish, Todd B.; Iidaka, Tetsuya
2010-01-01
Cultural neuroscience is an emerging research discipline that investigates cultural variation in psychological, neural and genomic processes as a means of articulating the bidirectional relationship of these processes and their emergent properties. Research in cultural neuroscience integrates theory and methods from anthropology, cultural psychology, neuroscience and neurogenetics. Here, we review a set of core theoretical and methodological challenges facing researchers when planning and conducting cultural neuroscience studies, and provide suggestions for overcoming these challenges. In particular, we focus on the problems of defining culture and culturally appropriate experimental tasks, comparing neuroimaging data acquired from different populations and scanner sites and identifying functional genetic polymorphisms relevant to culture. Implications of cultural neuroscience research for addressing current issues in population health disparities are discussed. PMID:20592044
Process optimization and analysis of microwave assisted extraction of pectin from dragon fruit peel.
Thirugnanasambandham, K; Sivakumar, V; Prakash Maran, J
2014-11-04
A microwave-assisted extraction (MAE) technique was employed for the extraction of pectin from dragon fruit peel. The extraction parameters were optimized using a four-variable, three-level Box-Behnken design (BBD) coupled with response surface methodology (RSM). RSM analysis indicated good correspondence between experimental and predicted values. 3D response surface plots were used to study the interactive effects of process variables on the extraction of pectin. The optimum extraction conditions for the maximum yield of pectin were a power of 400 W, a temperature of 45 °C, an extraction time of 20 min and a solid-liquid ratio of 24 g/mL. Under these conditions, 7.5% of pectin was extracted. Copyright © 2014 Elsevier Ltd. All rights reserved.
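The Box-Behnken layout used here has a simple combinatorial construction (each pair of factors at +/-1 with the rest at the center, plus center points), sketched below for four factors:

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Build a coded Box-Behnken design: +/-1 factorial pairs with the
    remaining factors held at 0, plus centre points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * n_factors] * n_center
    return runs

# Four factors as in the abstract: power, temperature, time, solid-liquid ratio.
design = box_behnken(4)
print(len(design), "runs")        # 6 factor pairs * 4 sign patterns + 3 = 27
for row in design[:6]:
    print(row)
```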
Supervised restoration of degraded medical images using multiple-point geostatistics.
Pham, Tuan D
2012-06-01
Reducing noise in medical images has been an important issue of research and development for medical diagnosis, patient treatment, and validation of biomedical hypotheses. Noise inherently exists in medical and biological images due to acquisition and transmission in any imaging device. Unlike image enhancement, image restoration is the process of removing noise from a degraded image in order to recover, as closely as possible, its original version. This paper presents a statistically supervised approach for medical image restoration using the concept of multiple-point geostatistics. Experimental results have shown the effectiveness of the proposed technique, which has potential as a new methodology for medical and biological image processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
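A toy, heavily simplified rendition of the multiple-point-geostatistics idea: replace each patch center with the center of the best-matching pattern from a clean training image. Real MPS restoration is far more sophisticated than this nearest-pattern search, and using the clean image itself for training here is purely for demonstration:

```python
import numpy as np

def mps_restore(noisy, training, patch=5, stride=2):
    """Crude MPS-flavoured restoration: replace each patch centre with the
    centre of the best-matching patch from a clean training image."""
    r = patch // 2
    H, W = training.shape
    coords = [(i, j) for i in range(r, H - r, stride) for j in range(r, W - r, stride)]
    bank = np.array([training[i-r:i+r+1, j-r:j+r+1].ravel() for i, j in coords])

    out = noisy.copy()
    h, w = noisy.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            q = noisy[i-r:i+r+1, j-r:j+r+1].ravel()
            best = np.argmin(((bank - q)**2).sum(axis=1))   # nearest pattern
            out[i, j] = bank[best][patch * patch // 2]      # its centre pixel
    return out

rng = np.random.default_rng(3)
clean = np.kron(rng.integers(0, 2, (8, 8)).astype(float), np.ones((8, 8)))
noisy = clean + rng.normal(0, 0.3, clean.shape)
print("MSE before: %.3f  after: %.3f" %
      (((noisy - clean)**2).mean(), ((mps_restore(noisy, clean) - clean)**2).mean()))
```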
Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion
NASA Technical Reports Server (NTRS)
Kojima, Jun J.; Fischer, David G.
2012-01-01
We present a strategic experiment and thermochemical analyses of combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.
Artificial Intelligence Tools for Scaling Up of High Shear Wet Granulation Process.
Landin, Mariana
2017-01-01
The results presented in this article demonstrate the potential of artificial intelligence tools for predicting the endpoint of the granulation process in high-speed mixer granulators of different scales, from 25 L to 600 L. The combination of neurofuzzy logic and gene expression programming technologies allowed the modeling of the impeller power as a function of operating conditions and wet granule properties, establishing the critical variables that affect the response and obtaining a unique experimental polynomial equation (transparent model) of high predictability (R² > 86.78%) for all equipment sizes. Gene expression programming allowed the modeling of the granulation process for granulators of similar and dissimilar geometries and can be improved by implementing additional characteristics of the process, such as composition variables or operation parameters (e.g., batch size, chopper speed). The principles and the methodology proposed here can be applied to understand and control manufacturing processes using any other granulation equipment, including continuous granulation processes. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Yue, Yonghai; Yuchi, Datong; Guan, Pengfei; Xu, Jia; Guo, Lin; Liu, Jingyue
2016-01-01
To probe the nature of metal-catalysed processes and to design better metal-based catalysts, atomic scale understanding of catalytic processes is highly desirable. Here we use aberration-corrected environmental transmission electron microscopy to investigate the atomic scale processes of silver-based nanoparticles, which catalyse the oxidation of multi-wall carbon nanotubes. A direct semi-quantitative estimate of the oxidized carbon atoms by silver-based nanoparticles is achieved. A mechanism similar to the Mars–van Krevelen process is invoked to explain the catalytic oxidation process. Theoretical calculations, together with the experimental data, suggest that the oxygen molecules dissociate on the surface of silver nanoparticles and diffuse through the silver nanoparticles to reach the silver/carbon interfaces and subsequently oxidize the carbon. The lattice distortion caused by oxygen concentration gradient within the silver nanoparticles provides the direct evidence for oxygen diffusion. Such direct observation of atomic scale dynamics provides an important general methodology for investigations of catalytic processes. PMID:27406595
Thirugnanasambandham, K; Sivakumar, V; Prakash Maran, J
2014-12-19
The main objective of the present study was to investigate and optimize submerged fermentation (SMF) process parameters, such as the addition of coconut water, NaCl dose, incubation time and temperature, for the production of extracellular polysaccharide (EPS) and biomass using Lactobacillus confusus. Response surface methodology (RSM) coupled with a four-factor, three-level Box-Behnken design (BBD) was employed to model the SMF process. RSM analysis indicated good correspondence between experimental and predicted values. Three-dimensional (3D) response surface plots were used to study the interactive effects of process variables on the SMF process. The optimum process conditions for the maximum production of EPS and biomass were found to be as follows: addition of coconut water of 40%, NaCl dose of 15%, incubation time of 24 h and temperature of 35°C. Under these conditions, 10.57 g/L of EPS and 3.9 g/L of biomass were produced. Copyright © 2014 Elsevier Ltd. All rights reserved.
Intelligent systems/software engineering methodology - A process to manage cost and risk
NASA Technical Reports Server (NTRS)
Friedlander, Carl; Lehrer, Nancy
1991-01-01
A systems development methodology is discussed that has been successfully applied to the construction of a number of intelligent systems. This methodology is a refinement of both evolutionary and spiral development methodologies. It is appropriate for development of intelligent systems. The application of advanced engineering methodology to the development of software products and intelligent systems is an important step toward supporting the transition of AI technology into aerospace applications. A description of the methodology and the process model from which it derives is given. Associated documents and tools are described which are used to manage the development process and record and report the emerging design.
Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm
Kyristsis, Sarantis; Antonopoulos, Angelos; Chanialakis, Theofilos; Stefanakis, Emmanouel; Linardos, Christos; Tripolitsiotis, Achilles; Partsinevelos, Panagiotis
2016-01-01
Nowadays, various unmanned aerial vehicle (UAV) applications are becoming increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high-speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized location and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom-made quad-copter prototype were used, portraying a high- and a low-computational embedded platform alternative. Among the successful targeting and follow procedures, it is shown that the landing approach can be successfully performed even under high platform speeds. PMID:27827883
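The marker-identification step can be approximated with OpenCV's aruco module, which ships an AprilTag 36h11 dictionary. A sketch assuming opencv-contrib-python >= 4.7 and a hypothetical frame.png; this is not the authors' GPU pipeline:

```python
import cv2  # assumes opencv-contrib-python >= 4.7 with the aruco module

# Detect AprilTag-family markers in a single camera frame.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")            # hypothetical UAV camera frame
if frame is not None:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for tag_id, c in zip(ids.ravel(), corners):
            u, v = c[0].mean(axis=0)       # pixel centroid of the tag corners
            print(f"tag {tag_id} at image point ({u:.1f}, {v:.1f})")
```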
NASA Astrophysics Data System (ADS)
Rosa, Benoit; Brient, Antoine; Samper, Serge; Hascoët, Jean-Yves
2016-12-01
Mastering the additive laser manufacturing surface is a real challenge and would allow functional surfaces to be obtained without finishing. Direct Metal Deposition (DMD) surfaces are composed of directional and chaotic textures that are directly linked to the process principles. The aim of this work is to obtain surface topographies by mastering the operating process parameters. Based on experimental investigation, the influence of operating parameters on the surface finish has been modeled. Topography parameters and multi-scale analysis have been used to characterize the DMD surfaces. This study also proposes a methodology to characterize DMD chaotic texture through topography filtering and 3D image treatment. In parallel, a new parameter is proposed: the density of particles (D_p). Finally, this study proposes a regression model linking the process parameters and the density-of-particles parameter.
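One plausible reading of a density-of-particles style metric is to count connected peaks above a height threshold per unit scanned area; in the sketch below the threshold, pixel size and topography map are all invented:

```python
import numpy as np
from scipy import ndimage

def particle_density(topo, height_thresh, pixel_area_um2):
    """Count distinct peaks rising above a height threshold per unit area
    of the scanned surface (a D_p-style metric, our reading)."""
    mask = topo > height_thresh                  # binarise the topography map
    labels, n = ndimage.label(mask)              # connected particles
    area_um2 = topo.size * pixel_area_um2
    return n / area_um2, n

rng = np.random.default_rng(4)
topo = rng.normal(0, 1.0, (512, 512))            # synthetic roughness map (um)
topo[100:104, 200:204] += 8                      # two artificial spatter
topo[300:305, 50:55] += 9                        # particles on the surface
dp, n = particle_density(topo, height_thresh=5.0, pixel_area_um2=0.25)
print(f"{n} particles, D_p = {dp:.2e} particles/um^2")
```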
Adsorption of sunset yellow FCF from aqueous solution by chitosan-modified diatomite.
Zhang, Y Z; Li, J; Li, W J; Li, Y
2015-01-01
Sunset yellow (SY) FCF is a hazardous azo dye pollutant found in food-processing effluent. This study investigates the use of diatomaceous earth modified with chitosan (DE@C) as an adsorbent for the removal of SY from wastewater. Fourier transform infrared spectroscopy results indicate the importance of functional groups during the adsorption of SY. The obtained N₂ adsorption-desorption isotherm accords well with IUPAC type II. Our calculations determined a surface area of 69.68 m² g⁻¹ for DE@C and an average pore diameter of 4.85 nm. Using response surface methodology, optimized conditions of the process variables for dye adsorption were achieved. For the adsorption of SY onto DE@C, this study establishes mathematical models for the optimization of pH, contact time and initial dye concentration. Contact time plays a greater role in the adsorption process than either pH or initial dye concentration. According to the adjusted correlation coefficient (adj-R² > 0.97), the models used here are suitable for describing the adsorption process. The model-predicted optimal conditions were a pH of 2.40, an initial dye concentration of 113 mg L⁻¹ and 30.37 minutes of contact time. The experimental value for the adsorption rate (92.54%) was close to the value predicted by the models (95.29%).
78 FR 58307 - Statement of Organization, Functions, and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-23
... reproduction, natality, and mortality; (10) performs theoretical and experimental investigations into the... dissemination; (15) conducts methodological research on the tools for evaluation, utilization, and presentation... classification to states, local areas, other countries, and private organizations; (12) conducts methodological...
Kelley, Michael E; Shillingsburg, M Alice; Castro, M Jicel; Addison, Laura R; LaRue, Robert H; Martins, Megan P
2007-01-01
Although experimental analysis methodologies have been useful for identifying the function of a wide variety of target behaviors (e.g., Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994), only recently have such procedures been applied to verbal operants (Lerman et al., 2005). In the current study, we conducted a systematic replication of the methodology developed by Lerman et al. Participants were 4 children who had been diagnosed with developmental disabilities and who engaged in limited vocal behavior. The function of vocal behavior was assessed by exposing target vocal responses to experimental analyses. Results showed that experimental analyses were generally useful for identifying the functions of vocal behavior across all participants.
Model-Based Experimental Development of Passive Compliant Robot Legs from Fiberglass Composites
Lin, Shang-Chang; Hu, Chia-Jui; Lin, Pei-Chun
2015-01-01
We report on a methodology for developing compliant, half-circular, composite robot legs with designable stiffness. First, force-displacement experiments on flat cantilever composites made of one or multiple fiberglass cloths are executed. By mapping the cantilever mechanics to a virtual spring model, the equivalent elastic moduli of the composites can be derived. Next, by using the model that links the curved-beam mechanics back to the virtual spring, the resultant stiffness of the composite in a half-circular shape can be estimated without going through intensive experimental tryouts. The overall methodology has been experimentally validated, and the fabricated composites were used on a hexapod robot to perform walking and leaping behaviors. PMID:27065748
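The cantilever-to-virtual-spring mapping rests on the Euler-Bernoulli tip-deflection formula, k = F/delta = 3EI/L^3. A worked sketch with illustrative numbers:

```python
# Map a flat-cantilever force-displacement test to an equivalent elastic
# modulus via Euler-Bernoulli beam theory (the virtual-spring step, simplified):
#   k = F / delta   and   k = 3*E*I / L^3   =>   E = F*L^3 / (3*delta*I)
L = 0.120            # cantilever free length, m (illustrative values)
b, t = 0.030, 0.003  # width and thickness of the laminate, m
F, delta = 5.0, 0.018  # tip load (N) and measured tip deflection (m)

I = b * t**3 / 12.0                 # second moment of area of the cross-section
E = F * L**3 / (3.0 * delta * I)    # equivalent modulus of the composite
k = F / delta                       # equivalent virtual-spring stiffness
print(f"stiffness k = {k:.1f} N/m, equivalent modulus E = {E/1e9:.2f} GPa")
```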
Thermal infrared imaging in psychophysiology: Potentialities and limits
Ioannou, Stephanos; Gallese, Vittorio; Merla, Arcangelo
2014-01-01
Functional infrared thermal imaging (fITI) is considered an upcoming, promising methodology in the emotional arena. Driven by sympathetic nerves, observations of an affective nature derive from muscular activity, subcutaneous blood flow, and perspiration patterns in specific body parts. A review of 23 experimental procedures that employed fITI for investigations of an affective nature is provided, along with the adopted experimental protocols and the thermal changes that took place in selected regions of interest in human and nonhuman subjects. Discussion is provided regarding the selection of an appropriate baseline, the autonomic nature of the thermal print, the experimental setup, methodological issues, limitations, and considerations, as well as future directions. PMID:24961292
Investigation of metallurgical coatings for automotive applications
NASA Astrophysics Data System (ADS)
Su, Jun Feng
Metallurgical coatings have been widely used in the automotive industry, from component machining and daily engine operation to body decoration, due to their high hardness, wear resistance, corrosion resistance and low friction coefficient. With high demands for energy saving, weight reduction and limited environmental impact, the use of new materials, such as light aluminum/magnesium alloys with a high strength-to-weight ratio for engine blocks and advanced high-strength steel (AHSS) with better performance in crash energy management for die stamping, is increasing. However, challenges emerge when these new materials are applied, such as the wear of the relatively soft light alloys and of the machining tools for hard AHSS. Protective metallurgical coatings are the best option for profiting from these new materials' advantages without large alterations to mass-production equipment, machinery, tools and labor. In this dissertation, a plasma electrolytic oxidation (PEO) coating process on aluminum alloys was introduced for engine cylinder bores to resist wear and corrosion. The tribological behavior of the PEO coatings under boundary and starved lubrication conditions was studied experimentally and numerically for the first time. Experimental results for the PEO coating demonstrated prominent wear resistance and low friction, taking into account the extreme working conditions. The numerical elastohydrodynamic lubrication (EHL) and asperity-contact-based tribological study also showed a promising approach to designing low-friction and highly wear-resistant PEO coatings. Beyond the fabrication of the new coatings, a novel coating evaluation methodology, namely an inclined impact-sliding tester, is presented in the second part of this dissertation. This methodology has been developed and applied in testing and analyzing physical vapor deposition (PVD), chemical vapor deposition (CVD) and PEO coatings. Failure mechanisms of these common metallurgical hard coatings were systematically studied and summarized via the new testing methodology. Field tests based on the new coating characterization technique proved that this methodology is reliable, effective and economical.
NASA Astrophysics Data System (ADS)
Perrault, Matthieu; Gueguen, Philippe; Aldea, Alexandru; Demetriu, Sorin
2013-12-01
The lack of knowledge concerning the modelling of existing buildings leads to significant variability in fragility curves for single or grouped existing buildings. This study aims to investigate the uncertainties of fragility curves, with special consideration of the single-building sigma. Experimental data and simplified models are applied to the BRD tower in Bucharest, Romania, an RC building with permanent instrumentation. A three-step methodology is applied: (1) adjustment of a linear MDOF model to experimental modal analysis using a Timoshenko beam model and based on Anderson's criteria, (2) computation of the structure's response to a large set of accelerograms simulated by the SIMQKE software, considering twelve ground motion parameters as intensity measures (IMs), and (3) construction of the fragility curves by comparing numerical interstory drift with the threshold criteria provided by the Hazus methodology for the slight damage state. By introducing experimental data into the model, uncertainty is reduced to 0.02 considering S_d(f_1) as the seismic intensity measure, and uncertainty related to the model is assessed at 0.03. These values must be compared with the total uncertainty value of around 0.7 provided by the Hazus methodology.
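Fragility curves of this kind are commonly fitted as lognormal CDFs by maximum likelihood. A generic sketch (not the paper's Hazus-based procedure) on synthetic demands and damage outcomes:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def fit_fragility(im, damaged):
    """Maximum-likelihood fit of a lognormal fragility curve
    P(damage | IM) = Phi( ln(im/theta) / beta ) from binary observations."""
    def nll(p):
        theta, beta = np.exp(p)                      # enforce positivity
        pf = stats.norm.cdf(np.log(im / theta) / beta)
        pf = np.clip(pf, 1e-9, 1 - 1e-9)
        return -np.sum(damaged * np.log(pf) + (1 - damaged) * np.log(1 - pf))
    res = minimize(nll, x0=np.log([np.median(im), 0.5]))
    return np.exp(res.x)                             # (median theta, dispersion)

rng = np.random.default_rng(5)
im = rng.lognormal(-1.5, 0.8, 400)                   # synthetic Sd(f1) demands
true_pf = stats.norm.cdf(np.log(im / 0.25) / 0.4)    # 'true' curve to sample from
damaged = (rng.random(400) < true_pf).astype(float)
theta, beta = fit_fragility(im, damaged)
print(f"median capacity = {theta:.3f}, dispersion = {beta:.3f}")
```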
DOT National Transportation Integrated Search
2008-03-01
This document, the AMS Experimental Plan, lays out the scope of analysis that will be conducted through the application of the AMS methodology to the Test Corridor. The specific objectives of the Experimental Plan are: create an AMS framework that id...
A Response to Paul Stapleton's "Critiquing Research Methodology"
ERIC Educational Resources Information Center
Ross, Steven J.
2006-01-01
Paul Stapleton's (2006) critique of quantitative research brings to the surface some common interpretive problems arising when experimental and quasi-experimental research designs are compared. While Stapleton may be correct in pointing out the superiority of experimental research designs because they best eliminate the influence of extraneous…
Density functional theory and an experimentally-designed energy functional of electron density.
Miranda, David A; Bueno, Paulo R
2016-09-21
We herein demonstrate that capacitance spectroscopy (CS) experimentally allows access to the energy associated with the quantum mechanical ground state of many-electron systems. Electrochemical capacitance, C_μ̄[ρ], was previously understood from conceptual and computational density functional theory (DFT) calculations. We herein propose a quantum-mechanical, experiment-based variational method for electron charging processes based on an experimentally designed functional of the ground-state electron density. In this methodology, the electron state density, ρ, and an energy functional of the electron density, E_μ̄[ρ], can be obtained from CS data. CS allows the derivative of the electrochemical potential with respect to the electron density, δμ̄[ρ]/δρ, to be obtained as a unique functional of the energetically minimised system, i.e., β/C_μ̄[ρ], where β is a constant (associated with the size of the system) and C_μ̄[ρ] is an experimentally observable quantity. Thus the ground-state energy (at a given fixed external potential) can be obtained simply as E_μ̄[ρ] from the experimental measurement of C_μ̄[ρ]. An experimental data set was interpreted to demonstrate the potential of this quantum-mechanical, experiment-based variational principle.
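Read in standard notation, the abstract's central relations can be reconstructed as follows; this is a reading of the text, not the authors' own derivation:

```latex
% Reconstruction (a reading of the abstract, not the authors' derivation):
\frac{\delta \bar{\mu}[\rho]}{\delta \rho} = \frac{\beta}{C_{\bar{\mu}}[\rho]},
\qquad
\bar{\mu}[\rho] = \frac{\delta E_{\bar{\mu}}[\rho]}{\delta \rho},
\qquad \Longrightarrow \qquad
E_{\bar{\mu}}[\rho] = \int^{\rho}\!\!\int^{\rho'}
    \frac{\beta}{C_{\bar{\mu}}[\rho'']} \, d\rho'' \, d\rho'
```

so, in principle, measuring the electrochemical capacitance along the charging coordinate suffices to recover the ground-state energy functional.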
NASA Astrophysics Data System (ADS)
Böttcher, J.; Jahn, M.; Tatzko, S.
2017-12-01
Pseudoelastic shape memory alloys exhibit a stress-induced phase transformation which leads to high strains during deformation of the material. The stress-strain characteristic during this thermomechanical process is hysteretic and results in the conversion of mechanical energy into thermal energy. This energy conversion allows for the use of shape memory alloys in vibration reduction. For the application of shape memory alloys as vibration-damping devices, dynamic modeling of the material behavior is necessary. In this context, experimentally determined material parameters which accurately represent the material behavior are essential for a reliable material model. The subject of this publication is the specification of suitable material parameters for pseudoelastic shape memory alloys and the methodology for identifying them from experimental investigations. The test rig used was specifically designed for the characterization of pseudoelastic shape memory alloys.
A Vibration-Based Strategy for Health Monitoring of Offshore Pipelines' Girth-Welds
Razi, Pejman; Taheri, Farid
2014-01-01
This study presents numerical simulations and experimental verification of a vibration-based damage detection technique. Health monitoring of a submerged pipe's girth-weld against an advancing notch is attempted. Piezoelectric transducers are bonded on the pipe for sensing or actuation purposes. Vibration of the pipe is excited by two means: (i) an impulsive force; (ii) using one of the piezoelectric transducers as an actuator to propagate chirp waves into the pipe. The methodology adopts the empirical mode decomposition (EMD), which processes vibration data to establish energy-based damage indices. The results obtained from both the numerical and experimental studies confirm the integrity of the approach in identifying the existence and progression of the advancing notch. The study also discusses and compares the performance of the two vibration excitation means in damage detection. PMID:25225877
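An energy-based EMD damage index in the spirit of the paper can be sketched with the third-party PyEMD package (distributed as EMD-signal); the signals and the index definition are illustrative assumptions:

```python
import numpy as np
from PyEMD import EMD   # assumes the third-party 'EMD-signal' package

def emd_damage_index(signal, baseline):
    """Decompose each vibration record into IMFs and compare the energy of
    the first (highest-frequency) IMF against a healthy baseline record."""
    def first_imf_energy(x):
        imfs = EMD()(x)                 # empirical mode decomposition
        return np.sum(imfs[0] ** 2)
    e_b, e_s = first_imf_energy(baseline), first_imf_energy(signal)
    return abs(e_s - e_b) / e_b         # relative change in IMF-1 energy

t = np.linspace(0, 1, 2000)
healthy = np.sin(2*np.pi*35*t) + 0.3*np.sin(2*np.pi*120*t)
damaged = np.sin(2*np.pi*33*t) + 0.6*np.sin(2*np.pi*140*t)   # toy stiffness loss
print("damage index:", emd_damage_index(damaged, healthy))
```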
Protein-protein interaction predictions using text mining methods.
Papanikolaou, Nikolas; Pavlopoulos, Georgios A; Theodosiou, Theodosios; Iliopoulos, Ioannis
2015-03-01
It is beyond any doubt that proteins and their interactions play an essential role in most complex biological processes. The understanding of their function individually, but also in the form of protein complexes is of a great importance. Nowadays, despite the plethora of various high-throughput experimental approaches for detecting protein-protein interactions, many computational methods aiming to predict new interactions have appeared and gained interest. In this review, we focus on text-mining based computational methodologies, aiming to extract information for proteins and their interactions from public repositories such as literature and various biological databases. We discuss their strengths, their weaknesses and how they complement existing experimental techniques by simultaneously commenting on the biological databases which hold such information and the benchmark datasets that can be used for evaluating new tools. Copyright © 2014 Elsevier Inc. All rights reserved.
An investigation of spectral characteristics of water-glucose solutions
NASA Astrophysics Data System (ADS)
Lastovskaia, Elena A.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.
2016-04-01
One of the problems of modern medical device engineering is the development of an instrument for non-invasive monitoring of glucose levels in the blood. The urgency of this task stems from the increase in the incidence of diabetes, the need for regular monitoring of blood sugar, and the painfulness of current methods of glycemia measurement. The problem can be solved with the help of a spectrophotometric method. This report is devoted to the investigation of the spectral characteristics of glucose solutions with various molar concentrations. The authors propose a methodology for the experimental research and a data-processing algorithm. The results of the experimental studies confirmed the potential for blood sugar control by the spectrophotometric method. Further research is expected to continue by increasing the complexity of the studied object from an aqueous glucose solution to a biological object.
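The calibration step that such work builds on is a Beer-Lambert style linear fit of absorbance against concentration. A minimal sketch with invented standards:

```python
import numpy as np

# Beer-Lambert calibration sketch: fit absorbance vs molar concentration at a
# glucose-sensitive wavelength, then invert the line for an unknown sample.
conc = np.array([0.0, 2.5, 5.0, 7.5, 10.0])        # mmol/L standards (invented)
absorb = np.array([0.02, 0.11, 0.21, 0.30, 0.41])  # measured absorbance

slope, intercept = np.polyfit(conc, absorb, 1)     # A = eps*l*c + A0
unknown_A = 0.26                                   # absorbance of the sample
print("estimated concentration: %.2f mmol/L" % ((unknown_A - intercept) / slope))
```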
Joseph, Adrian; Goldrick, Stephen; Mollet, Michael; Turner, Richard; Bender, Jean; Gruber, David; Farid, Suzanne S; Titchener-Hooker, Nigel
2017-05-01
Continuous disk-stack centrifugation is typically used for the removal of cells and cellular debris from mammalian cell culture broths at manufacturing scale. The use of scale-down methods to characterise disk-stack centrifugation performance enables substantial reductions in material requirements and allows a much wider design space to be tested than is currently possible at pilot scale. The process of scaling down centrifugation has historically been challenging due to the difficulties in mimicking the energy dissipation rates (EDRs) found in typical machines. This paper describes an alternative and easy-to-assemble automated capillary-based methodology to generate levels of EDR consistent with those found in a continuous disk-stack centrifuge. Variations in EDR were achieved through changes in capillary internal diameter and the flow rate of operation through the capillary. The EDR found to match the level of shear in the feed zone of a pilot-scale centrifuge using the experimental method developed in this paper (2.4×10⁵ W/kg) is consistent with that obtained through previously published computational fluid dynamics (CFD) studies (2.0×10⁵ W/kg). Furthermore, this methodology can be incorporated into existing scale-down methods to model the process performance of continuous disk-stack centrifuges. This was demonstrated through the characterisation of the effects of culture hold time, culture temperature and EDR on centrate quality. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
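The average EDR in a capillary follows from the measured pressure drop: dissipated power ΔP·Q divided by the dissipating mass ρ·V. A sketch with illustrative dimensions, not the paper's operating point:

```python
import numpy as np

def capillary_edr(d, length, Q, dP, rho=1000.0):
    """Average energy dissipation rate (W/kg) in a capillary from the measured
    pressure drop: power = dP*Q, dissipating mass = rho * capillary volume."""
    volume = np.pi * (d / 2) ** 2 * length
    return dP * Q / (rho * volume)

# Illustrative numbers only: 0.25 mm bore, 20 mm long capillary,
# 40 mL/min flow rate, 12 bar pressure drop.
edr = capillary_edr(d=0.25e-3, length=20e-3, Q=40e-6 / 60, dP=12e5)
print(f"mean EDR = {edr:.2e} W/kg")
```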
NASA Astrophysics Data System (ADS)
Nieves, Ian
Dynamic finite element analysis (FEA) was used to verify the ability of a novel percussion instrument to characterize the composition and structure of laminated materials and glass columns and to elucidate key facets of this process. Initial simulations modeling the percussion process with varying probe geometries were performed to assess which configuration most accurately represented in situ diagnostic activity. Percussion testing of monoliths and laminated duplex scaffolds consisting of PTFE and 6061 Al was simulated to assess the ability of the numeric methodology to model intrinsic damping in laminated scaffolds and determine the potential contributions of size effects, gripping configurations, and probe friction to the loading response of the material being tested. Percussion testing of laminated scaffolds and monoliths composed of either PMMA or PLGA was modeled to investigate the effects of defects on the impact response and to evaluate promising strategies for enhancing the damping that promotes tissue regeneration in biomedical materials. Percussion testing of virgin and cracked glass columns was modeled and the resulting probe acceleration predictions were compared to corresponding experimental findings to evaluate the overall accuracy of the methodology and to discern its capacity for elucidating facets of defect detection in rigid materials. Overall, the modeling results validated the effectiveness of the numeric methodology for modeling and elucidating the mechanics of percussion testing and suggested strategies whereby this procedure can facilitate the development of innovative biomedical materials designed to promote tissue regeneration.
NASA Technical Reports Server (NTRS)
Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen
1992-01-01
Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.
ERIC Educational Resources Information Center
Lauckner, Heidi; Paterson, Margo; Krupa, Terry
2012-01-01
Often, research projects are presented as final products with the methodologies cleanly outlined and little attention paid to the decision-making processes that led to the chosen approach. Limited attention paid to these decision-making processes perpetuates a sense of mystery about qualitative approaches, particularly for new researchers who will…
Methodology for extracting local constants from petroleum cracking flows
Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.
2000-01-01
A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetics code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
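Step (4) above, adjusting local constants until calculated yields match measurements, is a least-squares problem. The sketch below swaps the coupled CFD/kinetics code for a toy two-step Arrhenius scheme purely to show the fitting loop:

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_yields(k, conditions):
    """Stand-in for the coupled CFD/kinetics code: a lumped two-step cracking
    scheme A -> B -> C integrated analytically at each test condition."""
    out = []
    for T, tau in conditions:                    # temperature (K), residence (s)
        k1 = k[0] * np.exp(-k[1] / T)            # Arrhenius-style local constants
        k2 = k[2] * np.exp(-k[3] / T)
        A = np.exp(-k1 * tau)
        B = k1 / (k2 - k1) * (np.exp(-k1 * tau) - np.exp(-k2 * tau))
        out.append([A, B, 1 - A - B])
    return np.array(out)

conditions = [(790, 2.0), (790, 4.0), (810, 2.0), (810, 4.0)]   # test matrix
measured = np.array([[0.55, 0.30, 0.15], [0.31, 0.40, 0.29],
                     [0.44, 0.34, 0.22], [0.20, 0.41, 0.39]])   # toy yield data

fit = least_squares(lambda k: (predicted_yields(k, conditions) - measured).ravel(),
                    x0=[1e4, 8000.0, 5e3, 9000.0])
print("extracted local kinetic constants:", fit.x)
```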
Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon
2011-04-04
A new experimental design methodology was developed by integrating response surface methodology and time-series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method for obtaining maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in sustained-release matrix tablets. Copyright © 2011 Elsevier B.V. All rights reserved.
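The simplex-lattice mixture layout mentioned above enumerates proportion grids that sum to one. A generic construction sketch:

```python
from itertools import product

def simplex_lattice(n_components, m):
    """{n, m} simplex-lattice mixture design: all component proportions drawn
    from {0, 1/m, 2/m, ..., 1} that sum to one."""
    grid = [i / m for i in range(m + 1)]
    return [pt for pt in product(grid, repeat=n_components)
            if abs(sum(pt) - 1.0) < 1e-9]

# e.g. a 3-component lattice of degree 4 -> 15 candidate blend runs
for run in simplex_lattice(3, 4):
    print(run)
```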
Jorge, Aguirre Joya; Heliodoro, De La Garza Toledo; Alejandro, Zugasti Cruz; Ruth, Belmares Cerda; Noé, Aguilar Cristóbal
2013-01-01
Objective: To extract, quantify, and evaluate the phenolic content of Opuntia ficus-indica skin for its antioxidant capacity with three different methods (ABTS, DPPH, and lipid oxidation) and to optimize the extraction conditions (time, temperature and ethanol concentration) in a reflux system. Methods: The extraction process was performed using a reflux system. A San Cristobal II experimental design with three variables and three levels was used. The variables evaluated were time of extraction (h), concentration of ethanol (%, v/v) and temperature (°C). The extraction process was optimized using response surface methodology. Results: It was observed that at higher temperatures more phenolic compounds were extracted, but the antioxidant capacity decreased. The optimum conditions for phenolic compound extraction and antioxidant capacity combining the three methods were as follows: 45% ethanol, 80 °C and 2 hours of extraction. The values obtained are slightly higher than those previously reported. Conclusions: It can be concluded that the by-products of Opuntia ficus-indica represent a good source of natural antioxidants with possible applications in the food, cosmetics or drug industries. PMID:23730555
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
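Multicollinearity of this kind is usually diagnosed with variance inflation factors. A self-contained sketch on two deliberately collinear regressors, toy stand-ins for a prediction-error and a predicted-value signal:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of a design matrix:
    VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j on the rest."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(X)), others])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(6)
pe = rng.normal(size=300)                         # prediction-error regressor
value = 0.9 * pe + 0.1 * rng.normal(size=300)     # near-collinear value regressor
print("VIFs:", vif(np.column_stack([pe, value])))
```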
Rizzi, Aurora; Raddadi, Noura; Sorlini, Claudia; Nordgård, Lise; Nielsen, Kaare Magne; Daffonchio, Daniele
2012-01-01
The fate of dietary DNA in the gastrointestinal tract (GIT) of animals has gained renewed interest after the commercial introduction of genetically modified organisms (GMO). Among the concerns regarding GM food are the possible consequences of horizontal gene transfer (HGT) of recombinant dietary DNA to bacteria or animal cells. The exposure of the GIT to dietary DNA is related to the extent of food processing, food composition, and the level of intake. Animal feeding studies have demonstrated that a minor amount of fragmented dietary DNA may resist the digestive process. Mammals have been shown to take up dietary DNA from the GIT, but stable integration and expression of internalized DNA has not been demonstrated. Despite the ability of several bacterial species to acquire external DNA by natural transformation, in vivo transfer of dietary DNA to bacteria in the intestine has not been detected in the few experimental studies conducted so far. However, major methodological limitations and knowledge gaps in the mechanistic aspects of HGT call for methodological improvements and further studies to understand the fate of various types of dietary DNA in the GIT.
Das, Dipa; Meikap, Bhim C
2017-10-15
The present research describes the optimal adsorption conditions for methylene blue (MB). The adsorbent used here was monoethanol amine-impregnated activated carbon (MEA-AC) prepared from green coconut shell. Response surface methodology (RSM) is the multivariate statistical technique used for the optimization of the process variables. A central composite design is used to determine the effect of activation temperature, activation time and impregnation ratio on MB removal. The percentage (%) MB adsorption by MEA-AC is evaluated as the response of the system. A quadratic model was developed for the response. From the analysis of variance, the factor most influential on the response was identified. The optimum conditions for the preparation of MEA-AC from green coconut shells are an activation temperature of 545.6°C, an activation time of 41.64 min and an impregnation ratio of 0.33, achieving a maximum removal efficiency of 98.21%. At the same optimum parameters, the % MB removal from textile-industry effluent was examined and found to be 96.44%.
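The workflow here, a central composite design over three preparation factors with a quadratic response model fitted by least squares, can be illustrated compactly. The sketch below is a minimal numpy version assuming coded factor levels and a synthetic response (the real % MB removal data are not reproduced); it builds a face-centered CCD and fits the full quadratic model.

```python
import numpy as np
from itertools import product

# Face-centered central composite design in coded units (-1, 0, +1) for three
# factors: activation temperature, activation time, impregnation ratio.
factorial = np.array(list(product([-1.0, 1.0], repeat=3)))
axial = np.vstack([s * np.eye(3)[i] for i in range(3) for s in (-1.0, 1.0)])
center = np.zeros((6, 3))
X = np.vstack([factorial, axial, center])           # 8 + 6 + 6 = 20 runs

def model_matrix(X):
    """Full quadratic model: intercept, linear, two-way interaction, squared terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    return np.column_stack(cols)

# Synthetic % MB removal with curvature in the first factor (illustrative only).
rng = np.random.default_rng(0)
y = 90 + 5 * X[:, 0] - 3 * X[:, 0] ** 2 + 2 * X[:, 1] + rng.normal(0, 0.5, len(X))

beta, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
print("fitted quadratic coefficients:", np.round(beta, 2))
```

The fitted surface can then be maximized over the coded cube to locate optimum conditions of the kind reported in the abstract.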
Method for laser spot welding monitoring
NASA Astrophysics Data System (ADS)
Manassero, Giorgio
1994-09-01
As more powerful solid state laser sources appear on the market, new applications become technically possible and important from the economic point of view. For every process a preliminary optimization phase is necessary. The main parameters used for a welding application with a high-power Nd-YAG laser are: pulse energy, pulse width, repetition rate and process duration or speed. In this paper an experimental methodology for the development of an electrooptical laser spot welding monitoring system is presented. The electromagnetic emission from the molten pool was observed and measured with appropriate sensors. The statistical method `Parameter Design' was used to obtain an accurate analysis of the process parameters that influence process results. A laser station with a solid state laser coupled to an optical fiber (1 mm in diameter) was utilized for the welding tests. The main material used for the experimental plan was zinc-coated steel sheet 0.8 mm thick. This material and the related spot welding technique are extensively used in the automotive industry; therefore, the introduction of laser technology into production lines will improve the quality of the final product. A correlation between sensor signals and `through or not through' welds was assessed. The investigation has furthermore shown the necessity, for modern laser production systems, of using multisensor heads for process monitoring or control with more advanced signal elaboration procedures.
Chidambaram, Ramalingam
2015-01-01
Biosorption is a promising alternative method to replace the existing conventional techniques for Cr(VI) removal from industrial effluent. In the present experimental design, the removal of Cr(VI) from aqueous solution was studied by Aspergillus niger MSR4 under different environmental conditions in batch systems. The optimum conditions of biosorption were determined by investigating pH (2.0) and temperature (27°C). The effects of parameters such as biomass dosage (g/L), initial Cr(VI) concentration (mg/L) and contact time (min) on Cr(VI) biosorption were analyzed using a three-parameter Box–Behnken design (BBD). The experimental data fitted the Langmuir isotherm well in comparison with the other isotherm models tested. The results of the D-R isotherm model suggested that a chemical ion-exchange mechanism was involved in the biosorption process. The biosorption process followed the pseudo-second-order kinetic model, which indicates that the rate-limiting step is chemisorption. Fourier transform infrared (FT-IR) spectroscopic studies revealed the possible involvement of functional groups, such as hydroxyl, carboxyl, amino and carbonyl groups, in the biosorption process. The thermodynamic parameters for Cr(VI) biosorption were also calculated, and the negative ∆Gº values indicated the spontaneous nature of the biosorption process. PMID:25786227
A multi-scale modelling procedure to quantify hydrological impacts of upland land management
NASA Astrophysics Data System (ADS)
Wheater, H. S.; Jackson, B.; Bulygina, N.; Ballard, C.; McIntyre, N.; Marshall, M.; Frogbrook, Z.; Solloway, I.; Reynolds, B.
2008-12-01
Recent UK floods have focused attention on the effects of agricultural intensification on flood risk. However, quantification of these effects raises important methodological issues. Catchment-scale data have proved inadequate to support analysis of impacts of land management change, due to climate variability, uncertainty in input and output data, spatial heterogeneity in land use and lack of data to quantify historical changes in management practices. Manipulation experiments to quantify the impacts of land management change have necessarily been limited and small scale, and in the UK mainly focused on the lowlands and arable agriculture. There is a need to develop methods to extrapolate from small scale observations to predict catchment-scale response, and to quantify impacts for upland areas. With assistance from a cooperative of Welsh farmers, a multi-scale experimental programme has been established at Pontbren, in mid-Wales, an area of intensive sheep production. The data have been used to support development of a multi-scale modelling methodology to assess impacts of agricultural intensification and the potential for mitigation of flood risk through land use management. Data are available from replicated experimental plots under different land management treatments, from instrumented field and hillslope sites, including tree shelter belts, and from first and second order catchments. Measurements include climate variables, soil water states and hydraulic properties at multiple depths and locations, tree interception, overland flow and drainflow, groundwater levels, and streamflow from multiple locations. Fine resolution physics-based models have been developed to represent soil and runoff processes, conditioned using experimental data. The detailed models are used to calibrate simpler 'meta-models' to represent individual hydrological elements, which are then combined in a semi-distributed catchment-scale model. The methodology is illustrated using field and catchment-scale simulations to demonstrate the response of improved and unimproved grassland, and the potential effects of land management interventions, including farm ponds, tree shelter belts and buffer strips. It is concluded that the methodology developed has the potential to represent and quantify catchment-scale effects of upland management; continuing research is extending the work to a wider range of upland environments and land use types, with the aim of providing generic simulation tools that can be used to provide strategic policy guidance.
Summary: Experimental validation of real-time fault-tolerant systems
NASA Technical Reports Server (NTRS)
Iyer, R. K.; Choi, G. S.
1992-01-01
Testing and validation of real-time systems is always difficult to perform since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no better substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, and this is quite a difficult, if not impossible, task for a complex system. Also, to set up such experiments for measurements, physical hardware must exist. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.
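To give a flavor of what simulated fault injection looks like in miniature, the sketch below flips a random bit in the state of a toy control loop and estimates how often the loop still converges. The controller, the bit range, and the convergence test are all illustrative assumptions, not the Bendix or Hamilton Standard models studied in the paper.

```python
import random
import struct

def control_step(state, setpoint):
    """Toy proportional control law; a stand-in for the avionics computation."""
    return state + 0.5 * (setpoint - state)

def flip_bit(x, bit):
    """Transient fault model: flip one bit of the float's 64-bit representation."""
    raw = struct.unpack("<Q", struct.pack("<d", x))[0]
    return struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))[0]

def run(flip_step, bit, steps=50, setpoint=1.0):
    state = 0.0
    for t in range(steps):
        state = control_step(state, setpoint)
        if t == flip_step:
            state = flip_bit(state, bit)   # inject the transient fault
    return abs(state - setpoint) < 1e-3    # did the loop recover?

random.seed(1)
# Monte Carlo over injection time and (mantissa) bit position.
trials = [run(random.randrange(50), random.randrange(52)) for _ in range(1000)]
print(f"recovered from {sum(trials) / len(trials):.1%} of injected transient faults")
```

A real campaign of this kind would replace the toy loop with the simulated target computer and classify outcomes more finely (masked, detected, propagated), but the controllability advantage over physical instrumentation is already visible: every fault's time and location is known exactly.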
Optical distributed sensors for feedback control: Characterization of photorefractive resonator
NASA Technical Reports Server (NTRS)
Indebetouw, Guy; Lindner, D. K.
1992-01-01
The aim of the project was to explore, define, and assess the possibilities of optical distributed sensing for feedback control. The development of this type of sensor, which may have an impact on the dynamic control of deformable structures and the monitoring of small displacements, can be divided into data acquisition, data processing, and control design. Analogue optical techniques, because they are noninvasive and afford massive parallelism, may play a significant role in the acquisition and the preprocessing of the data for such a sensor. Assessing these possibilities was the aim of the first stage of this project. The scope of the proposed research was limited to: (1) the characterization of photorefractive resonators and the assessment of their possible use as a distributed optical processing element; and (2) the design of a control system utilizing signals from distributed sensors. The results include a numerical and experimental study of the resonator below threshold, an experimental study of the effect of the resonator's transverse confinement on its dynamics above threshold, a numerical study of the resonator above threshold using a modal expansion approach, and the experimental test of this model. A detailed account of each investigation, including methodology and analysis of the results, is also included along with reprints of published and submitted papers.
DOT National Transportation Integrated Search
1980-06-01
This report presents the findings of a workshop on experimental research in the area of drugs and highway safety. Complementing studies of drug use in different driving populations, experimentation here refers to studies performed under controlled co...
Structural health monitoring methodology for aircraft condition-based maintenance
NASA Astrophysics Data System (ADS)
Saniger, Jordi; Reithler, Livier; Guedra-Degeorges, Didier; Takeda, Nobuo; Dupuis, Jean Pierre
2001-06-01
Reducing maintenance costs while keeping a constant level of safety is a major issue for Air Forces and airlines. The long-term perspective is to implement condition-based maintenance to guarantee a constant safety level while decreasing maintenance costs. For this purpose, the development of a generalized Structural Health Monitoring System (SHMS) is needed. The objective of such a system is to localize damage and to assess its severity, with enough accuracy to allow low-cost corrective actions. The present paper describes a SHMS based on acoustic emission technology. This choice was driven by its reliability and wide use in the aerospace industry. The described SHMS uses a new learning methodology which relies on the generation of artificial acoustic emission events on the structure and an acoustic emission sensor network. The calibrated acoustic emission events picked up by the sensors constitute the knowledge set that the system relies on. With this methodology, the anisotropy of composite structures is taken into account, thus avoiding the major cause of errors of classical localization methods. Moreover, it is adaptive to different structures as it does not rely on any particular model but on measured data. The acquired data are processed and the event's location and corrected amplitude are computed. The methodology has been demonstrated, and experimental tests on elementary samples presented a degree of accuracy of 1 cm.
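The learning step described here, fingerprinting the structure with artificial events at known positions and then locating real events by matching against that knowledge set, can be sketched as a nearest-neighbour lookup. The geometry, sensor layout, and isotropic wave speed below are synthetic conveniences used only to generate example data; the paper's point is precisely that measured calibration events absorb the anisotropy that such a simple analytical model would miss.

```python
import numpy as np

rng = np.random.default_rng(8)
# Calibration set: artificial AE events at known positions, each described by
# the relative arrival-time pattern recorded at four corner sensors.
positions = rng.uniform(0, 1, (500, 2))                  # (x, y) on the panel, m
sensors = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

def arrival_pattern(p, speed=5000.0):
    """Relative times of arrival at each sensor (first arrival subtracted)."""
    t = np.linalg.norm(sensors - p, axis=1) / speed
    return t - t.min()

fingerprints = np.array([arrival_pattern(p) for p in positions])

# Localization: match a new event's pattern to the closest calibration event.
event = np.array([0.37, 0.62])
d = np.linalg.norm(fingerprints - arrival_pattern(event), axis=1)
print("estimated position:", positions[d.argmin()], "true:", event)
```

In a deployed system the fingerprints would come from pencil-lead breaks or pulsed transducers on the actual composite part, so the lookup inherits the real (anisotropic) wave behaviour for free.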
Human perception testing methodology for evaluating EO/IR imaging systems
NASA Astrophysics Data System (ADS)
Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.
2018-04-01
The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities and recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.
Developing and executing quality improvement projects (concept, methods, and evaluation).
Likosky, Donald S
2014-03-01
Continuous quality improvement, quality assurance, cycles of change--these words are often used to express the process of using data to inform and improve clinical care. Although many of us have been exposed to the theory and practice of experimental work (e.g., randomized trials), few of us have been similarly exposed to the science underlying quality improvement. Through the lens of a single-center quality improvement study, this article exposes the reader to the methodology for conducting such studies. The reader will gain an understanding of the methods required to embark on such a study.
Beyond Self-Report: Emerging Methods for Capturing Individual Differences in Decision-Making Process
Connors, Brenda L.; Rende, Richard; Colton, Timothy J.
2016-01-01
People vary in the way in which they approach decision-making, which impacts real-world behavior. There has been a surge of interest in moving beyond reliance on self-report measures to capture such individual differences. Particular emphasis has been placed on devising and applying a range of methodologies that include experimental, neuroscience, and observational paradigms. This paper provides a selective review of recent studies that illustrate the methods and yield of these approaches in terms of generating a deeper understanding of decision-making style and the notable differences that can be found across individuals. PMID:26973589
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livescu, Veronica; Bronkhorst, Curt Allan; Vander Wiel, Scott Alan
Many challenges exist with regard to understanding and representing the complex physical processes involved in ductile damage and failure in polycrystalline metallic materials. Currently, the ability to accurately predict the macroscale ductile damage and failure response of metallic materials is lacking. Research at Los Alamos National Laboratory (LANL) is aimed at building a coupled experimental and computational methodology that supports the development of predictive damage capabilities by: capturing real distributions of microstructural features from real material and implementing them as digitally generated microstructures in damage model development; and distilling structure-property information to link microstructural details to damage evolution under a multitude of loading states.
Agent-based modeling and systems dynamics model reproduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Macal, C. M.
2009-01-01
Reproducibility is a pillar of the scientific endeavour. We view computer simulations as laboratories for electronic experimentation and therefore as tools for science. Recent studies have addressed model reproduction and found it to be surprisingly difficult to replicate published findings. There have been enough failed simulation replications to raise the question, 'can computer models be fully replicated?' This paper answers in the affirmative by reporting on a successful reproduction study using Mathematica, Repast and Swarm for the Beer Game supply chain model. The reproduction process was valuable because it demonstrated the original result's robustness across modelling methodologies and implementation environments.
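The Beer Game model referenced here is simple enough to sketch in a single file, which is part of why it makes a good reproduction target. The loop below is a minimal rendition assuming a base-stock ordering rule and that every order is filled in full after a fixed delay (a simplification of the game's coupled shipments); it is meant to show the shape of such a model, not to replicate the Mathematica, Repast, or Swarm implementations compared in the paper.

```python
STAGES = 4                     # retailer, wholesaler, distributor, factory
inventory = [12.0] * STAGES
backlog = [0.0] * STAGES
pipeline = [[4.0, 4.0] for _ in range(STAGES)]    # two-week shipping delay

def step(customer_demand, base_stock=16.0):
    """Simulate one week; returns total holding + backlog cost for the chain."""
    demand = customer_demand
    cost = 0.0
    for s in range(STAGES):
        inventory[s] += pipeline[s].pop(0)        # delayed shipment arrives
        owed = demand + backlog[s]
        shipped = min(inventory[s], owed)
        inventory[s] -= shipped
        backlog[s] = owed - shipped
        # Base-stock ordering rule: the overreaction embedded in rules like
        # this is what produces the bullwhip effect upstream.
        order = max(0.0, base_stock - inventory[s] + backlog[s])
        pipeline[s].append(order)                 # assume orders ship in full
        demand = order                            # becomes upstream demand
        cost += 0.5 * inventory[s] + 1.0 * backlog[s]
    return cost

costs = [step(4.0 if week < 4 else 8.0) for week in range(36)]  # demand step
print(f"total 36-week cost: {sum(costs):.1f}")
```

A reproduction study would re-implement the same rules in a second toolkit and check that time series like the weekly cost trajectory match, which is exactly the kind of cross-platform robustness test the abstract describes.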
Dissociative Ionization of Pyridine by Electron Impact
NASA Technical Reports Server (NTRS)
Dateo, Christopher; Huo, Winifred; Kwak, Dochan (Technical Monitor)
2002-01-01
In order to understand the damage of biomolecules by electrons, a process important in radiation damage, we undertake a study of the dissociative ionization (DI) of pyridine (C5H5N) from the low-lying ionization channels. The methodology used is the same as in the benzene study. While no experimental DI data are available, we compare the dissociation products from our calculations with the dissociative photoionization measurements of Tixier et al. using dipole (e, e(+) ion) coincidence spectroscopy. Comparisons with the DI of benzene are also made so as to understand the difference in DI between a heterocyclic and an aromatic molecule.
Statechart-based design controllers for FPGA partial reconfiguration
NASA Astrophysics Data System (ADS)
Łabiak, Grzegorz; Wegrzyn, Marek; Rosado Muñoz, Alfredo
2015-09-01
Statechart diagrams and the UML technique can be a vital part of early conceptual modeling. At the present time there is not much support in hardware design methodologies for the reconfiguration features of reprogrammable devices. The authors try to bridge the gap between an imprecise UML model and a formal HDL description. The key concept in the authors' proposal is to describe the behavior of the digital controller by statechart diagrams and to map some parts of the behavior into reprogrammable logic by means of groups of states which form a sequential automaton. The whole process is illustrated by an example with experimental results.
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator); Lock, B. F.
1976-01-01
The author has identified the following significant results. The scope of the preprocessing techniques was restricted to standard material from the EROS Data Center, accompanied by some enlarging procedures and the use of the diazo process. Investigation has shown that the most appropriate sampling strategy for this study is the stratified random technique. A viable sampling procedure, together with a method for determining the minimum number of sample points needed to test the results of any interpretation, is presented.
NASA Astrophysics Data System (ADS)
Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz
2017-08-01
Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in real manufacturing systems, taking into consideration inherent variability of the process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements such as maximizing the dimple height while minimizing the dimple lower surface area.
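Steps (ii) and (iii) of this methodology lend themselves to a compact illustration: once a surrogate maps process parameters to a key performance indicator, the fallout rate under stochastic input variation is a straightforward Monte Carlo estimate. In the sketch below the polynomial surrogate, input distributions, and specification limit are all invented stand-ins (the paper itself fits multivariate adaptive regression splines to physical experiments).

```python
import numpy as np

rng = np.random.default_rng(7)

def surrogate(power, speed):
    """Hypothetical polynomial surrogate for dimple height (mm); a simple
    stand-in for the paper's multivariate adaptive regression splines model."""
    return 0.05 + 0.3 * power - 0.1 * speed - 0.08 * power**2 + 0.04 * power * speed

# Process capability: propagate stochastic variation of the inputs through
# the surrogate and estimate the fallout rate against a spec limit.
power = rng.normal(0.8, 0.05, 100_000)     # coded laser power, nominal +/- noise
speed = rng.normal(0.5, 0.08, 100_000)     # coded scanning speed
height = surrogate(power, speed)
fallout = np.mean(height < 0.15)           # fraction violating the (assumed) spec
print(f"estimated fallout rate: {fallout:.2%}")
```

Sweeping the nominal set-points over a grid and shading where the estimated fallout exceeds a target rate gives a picture of the capability space from which optimal parameters can be selected.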
Wichmann, Theresia; Buchheim, Anna; Menning, Hans; Schenk, Ingmar; George, Carol; Pokorny, Dan
2016-01-01
In the last few decades, there has been an increase in experimental research on automatic unconscious processes concerning the evaluation of the self and others. Previous research investigated implicit aspects of romantic attachment using self-report measures as explicit instruments for assessing attachment style. There is a lack of experimental procedures feasible for neurobiological settings. We developed a reaction time (RT) experiment using a narrative attachment measure with an implicit nature and were interested in capturing automatic processes when the individual's attachment system is activated. We aimed to combine attachment methodology with knowledge from implicit measures by using a decision RT paradigm. This should serve as a means to capture implicit aspects of attachment. This experiment evaluated participants' responses to prototypic attachment sentences in association with their own attachment classification, measured with the Adult Attachment Projective Picture System (AAP). First, the AAP was administered as the standardized interview procedure to 30 healthy participants, who were classified into a secure or an insecure group. In the following experimental session, both experimenter and participants were blind with respect to classifications. One hundred twenty-eight prototypically secure or insecure sentences related to the eight pictures of the AAP were presented to the participants. Their responses and RTs were recorded. Based on the response (accept, reject), a continuous security scale was defined. Both the AAP classification and the security scale were related to the RTs. Differentiated study hypotheses were confirmed for insecure sentences, which were accepted faster by participants from the insecure attachment group (or with a lower security scale), and rejected faster by participants from the secure attachment group (or with a higher security scale). Unconscious elaborative processes were more strongly activated by insecure sentences carrying potential attachment conflicts. The introduced paradigm is able to contribute to an experimental approach in attachment research. The RT analysis with the narrative procedure might be of interest for a broader variety of questions in experimental and neurophysiological settings to capture unconscious processes in association with internal working models of attachment. An electrophysiological model based on preliminary research is proposed for assessing the preconscious neuronal network related to secure or insecure attachment representations. PMID:27853426
Methodological individualism in experimental games: not so easily dismissed.
Krueger, Joachim I
2008-06-01
Orthodox game theory and social preference models cannot explain why people cooperate in many experimental games or how they manage to coordinate their choices. The theory of evidential decision making provides a solution, based on the idea that people tend to project their own choices onto others, whatever these choices might be. Evidential decision making preserves methodological individualism, and it works without recourse to social preferences. Rejecting methodological individualism, team reasoning is a thinly disguised resurgence of the group mind fallacy, and the experiments reported by Colman et al. [Colman, A. M., Pulford, B. D., & Rose, J. (this issue). Collective rationality in interactive decisions: Evidence for team reasoning. Acta Psychologica, doi:10.1016/j.actpsy.2007.08.003.] do not offer evidence that uniquely supports team reasoning.
Zhou, Xian-Jiao; Guo, Wan-Qian; Yang, Shan-Shan; Ren, Nan-Qi
2012-02-01
This research set up an ultrasonic-assisted ozone oxidation process (UAOOP) to decolorize triphenylmethane dye wastewater. Five factors - temperature, initial pH, reaction time, ultrasonic power (low frequency, 20 kHz), and ozone concentration - were investigated. Response surface methodology was used to identify the major factors influencing the color removal rate and the interactions between these factors, and to optimize the operating parameters. Under the optimal experimental conditions (reaction temperature 39.81 °C, initial pH 5.29, ultrasonic power 60 W and ozone concentration 0.17 g/L), the highest color removal was achieved within a 10 min reaction time at an initial MG concentration of 1000 mg/L. The results indicated that the UAOOP is a rapid, efficient and low-energy-consumption technique for decolorizing high-concentration MG wastewater. The predicted model was approximately in accordance with the experimental cases, with correlation coefficients R(2) and R(adj)(2) of 0.9103 and 0.8386. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Triplett, Michael D.; Rathman, James F.
2009-04-01
Using statistical experimental design methodologies, the solid lipid nanoparticle design space was found to be more robust than previously shown in the literature. Formulation and high-shear homogenization process effects on solid lipid nanoparticle size distribution, stability, drug loading, and drug release have been investigated. Experimentation indicated stearic acid as the optimal lipid, sodium taurocholate as the optimal cosurfactant, an optimum lecithin to sodium taurocholate ratio of 3:1, and an inverse relationship between mixing time and speed and nanoparticle size and polydispersity. Having defined the base solid lipid nanoparticle system, β-carotene was incorporated into stearic acid nanoparticles to investigate the effects of introducing a drug into the base system. The presence of β-carotene produced a significant effect on the optimal formulation and process conditions, but the design space was found to be robust enough to accommodate the drug. β-Carotene entrapment efficiency averaged 40%. β-Carotene was retained in the nanoparticles for 1 month. As demonstrated herein, solid lipid nanoparticle technology can be sufficiently robust from a design standpoint to become commercially viable.
Experimental study on spray characteristics of alternate jet fuels using Phase Doppler Anemometry
NASA Astrophysics Data System (ADS)
Kannaiyan, Kumaran; Sadr, Reza
2013-11-01
Gas-to-Liquid (GTL) fuels have gained global attention due to their cleaner combustion characteristics. The chemical and physical properties of GTL jet fuels differ from those of conventional jet fuels owing to the difference in their production methodology. It is important to study the spray characteristics of GTL jet fuels, as changes in physical properties can affect the atomization, mixing, evaporation and combustion processes, ultimately affecting emissions. In this work, the spray characteristics of two GTL synthetic jet fuels are studied using a pressure-swirl nozzle at different injection pressures and atmospheric ambient conditions. Phase Doppler Anemometry (PDA) measurements of droplet size and velocity are compared with those of regular Jet A-1 fuel at several axial and radial locations downstream of the nozzle exit. Experimental results show that although the GTL fuels differ from each other in physical properties such as viscosity, density, and surface tension, the resultant differences in spray characteristics are insignificant. Furthermore, the presented results show that the GTL fuel spray characteristics exhibit close similarity to those of Jet A-1 fuel. Funded by Qatar Science and Technology Park.
Experimental investigation of bioethanol liquid phase dehydration using natural clinoptilolite.
Karimi, Samira; Ghobadian, Barat; Omidkhah, Mohammad-Reza; Towfighi, Jafar; Tavakkoli Yaraki, Mohammad
2016-05-01
An experimental study of bioethanol adsorption on natural Iranian clinoptilolite was carried out. Dynamic breakthrough curves were used to investigate the best adsorption conditions in bioethanol liquid phase. A laboratory setup was designed and fabricated for this purpose. In order to find the best operating conditions, the effect of liquid pressure, temperature and flow rate on breakthrough curves and consequently, maximum ethanol uptake by adsorbent were studied. The effects of different variables on final bioethanol concentration were investigated using Response Surface Methodology (RSM). The results showed that by working at optimum condition, feed with 96% (v/v) initial ethanol concentration could be purified up to 99.9% (v/v). In addition, the process was modeled using Box-Behnken model and optimum operational conditions to reach 99.9% for final ethanol concentration were found equal to 10.7 °C, 4.9 bar and 8 mL/min for liquid temperature, pressure and flow rate, respectively. Therefore, the selected natural Iranian clinoptilolite was found to be a promising adsorbent material for bioethanol dehydration process.
Margaritelis, Nikos V; Cobley, James N; Paschalis, Vassilis; Veskoukis, Aristidis S; Theodorou, Anastasios A; Kyparos, Antonios; Nikolaidis, Michalis G
2016-04-01
The equivocal role of reactive species and redox signaling in exercise responses and adaptations is an example clearly showing the inadequacy of current redox biology research to shed light on fundamental biological processes in vivo. Part of the answer probably lies in the extreme complexity of in vivo redox biology and the limitations of the currently applied methodological and experimental tools. We propose six fundamental principles that should be considered in future studies to mechanistically link reactive species production to exercise responses or adaptations: 1) identify and quantify the reactive species, 2) determine the potential signaling properties of the reactive species, 3) detect the sources of reactive species, 4) locate the domain modified and verify the (ir)reversibility of post-translational modifications, 5) establish causality between redox and physiological measurements, 6) use selective and targeted antioxidants. Fulfilling these principles requires an idealized human experimental setting, which is certainly a utopia. Thus, researchers should choose to satisfy those principles which, based on scientific evidence, are most critical for their specific research question. Copyright © 2015 Elsevier Inc. All rights reserved.
Bayar, Nadia; Bouallegue, Tahani; Achour, Mabrouka; Kriaa, Mouna; Bougatef, Ali; Kammoun, Radhouane
2017-11-15
Ultrasonic-assisted extraction (UAE) of pectin from Opuntia ficus indica (OFI) cladodes after mucilage removal was attempted using response surface methodology. The process variables were optimized by an isovariant central composite design in order to improve the pectin extraction yield. The optimum conditions obtained were: sonication time 70 min, temperature 70°C, pH 1.5 and a water-material ratio of 30 ml/g. These conditions were validated, and the experimental extraction yield was 18.14% ± 1.41%, which was close to the predicted value (19.06%). Thus, UAE presents a promising alternative to conventional extraction processes thanks to its high efficiency, achieved in less time and at lower temperatures. The pectin extracted by UAE from OFI cladodes (UAEPC) has a low degree of esterification, high uronic acid content, important functional properties and good anti-radical activity. These results are in favor of the use of UAEPC as a potential additive in the food industry. Copyright © 2017. Published by Elsevier Ltd.
Factors that influence the tribocharging of pulverulent materials in compressed-air devices
NASA Astrophysics Data System (ADS)
Das, S.; Medles, K.; Mihalcioiu, A.; Beleca, R.; Dragan, C.; Dascalescu, L.
2008-12-01
Tribocharging of pulverulent materials in compressed-air devices is a typical multi-factorial process. This paper aims at demonstrating the interest of using the design of experiments methodology in association with virtual instrumentation for quantifying the effects of various process variables and of their interactions, as a prerequisite for the development of new tribocharging devices for industrial applications. The study is focused on the tribocharging of PVC powders in compressed-air devices similar to those employed in electrostatic painting. A classical 2^3 full-factorial design (three factors at two levels) was employed for conducting the experiments. The response function was the charge/mass ratio of the material collected in a modified Faraday cage at the exit of the tribocharging device. The charge/mass ratio was found to increase with the injection pressure and the vortex pressure in the tribocharging device, and to decrease with increasing feed rate. In the present study an in-house design-of-experiments software package was employed for statistical analysis of the experimental data and validation of the experimental model.
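For a 2^3 full-factorial design like this one, the main effects fall out of simple contrasts: the effect of a factor is the mean response at its high level minus the mean at its low level. A minimal sketch with made-up charge/mass values, chosen only to mirror the reported trends (increase with both pressures, decrease with feed rate):

```python
import numpy as np
from itertools import product

# Coded levels for the three factors, one row per experimental run.
design = np.array(list(product([-1, 1], repeat=3)), dtype=float)

# Hypothetical charge/mass responses for the 8 runs, ordered as `design`.
y = np.array([0.75, 0.25, 1.15, 0.65, 1.35, 0.85, 1.75, 1.25])

# Main effect = mean response at +1 minus mean response at -1.
factors = ["injection pressure", "vortex pressure", "feed rate"]
for name, col in zip(factors, design.T):
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"{name:18s} effect: {effect:+.2f}")
```

With replicated runs the same contrasts feed directly into an ANOVA to test which effects are statistically significant.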
Optimization of CO2 laser cutting parameters on Austenitic type Stainless steel sheet
NASA Astrophysics Data System (ADS)
Parthiban, A.; Sathish, S.; Chandrasekaran, M.; Ravikumar, R.
2017-03-01
Thin AISI 316L stainless steel sheet is widely used in sheet metal processing industries for specific applications. CO2 laser cutting is one of the most popular sheet metal cutting processes for cutting sheets in different profiles. In the present work, various cutting parameters such as laser power (2000-4000 W), cutting speed (3500-5500 mm/min) and assist gas pressure (0.7-0.9 MPa) were investigated for cutting AISI 316L stainless steel sheet of 2 mm thickness. The experimentation was conducted based on a Box-Behnken design. The aim of this work is to develop a mathematical model of kerf width for straight and curved profiles through response surface methodology. The developed mathematical models for the straight and curved profiles have been compared. The quadratic models show the best agreement with the experimental data, and the shape of the profile plays a substantial role in minimizing the kerf width. Finally, numerical optimization was used to find the optimum laser cutting parameters for both straight and curved profile cuts.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
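The linear core of this approach is compact: for centered data blocks, the canonical correlations are the singular values of QxᵀQy formed from the QR factorizations of each block, and a Monte Carlo loop over candidate nonlinear transforms then searches for stronger correlations. The sketch below uses random power transforms on synthetic data; the transform family and the data are illustrative assumptions, not the paper's materials datasets.

```python
import numpy as np

def first_canonical_corr(X, Y):
    """Largest canonical correlation between two column-centered data blocks."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

rng = np.random.default_rng(2)
n = 200
# Synthetic "processing" (X) and "property" (Y) variables with hidden nonlinearity.
X = rng.uniform(0.1, 1.0, (n, 3))
Y = np.column_stack([X[:, 0] ** 2 + 0.05 * rng.normal(size=n),
                     np.log(X[:, 1]) + 0.05 * rng.normal(size=n)])

best, best_p = first_canonical_corr(X, Y), np.ones(3)
for _ in range(2000):                      # Monte Carlo over power transforms
    p = rng.uniform(0.2, 3.0, 3)
    r = first_canonical_corr(X ** p, Y)
    if r > best:
        best, best_p = r, p
print(f"best correlation {best:.3f} with exponents {np.round(best_p, 2)}")
```

Richer transform families (logs, products, splines) can be dropped into the same loop; the accepted transforms then indicate which nonlinear combinations of processing variables carry the predictive signal.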
Design and validation of the eyesafe ladar testbed (ELT) using the LadarSIM system simulator
NASA Astrophysics Data System (ADS)
Neilsen, Kevin D.; Budge, Scott E.; Pack, Robert T.; Fullmer, R. Rees; Cook, T. Dean
2009-05-01
The development of an experimental full-waveform LADAR system has been enhanced with the assistance of the LadarSIM system simulation software. The Eyesafe LADAR Test-bed (ELT) was designed as a raster scanning, single-beam, energy-detection LADAR with the capability of digitizing and recording the return pulse waveform at up to 2 GHz for 3D off-line image formation research in the laboratory. To assist in the design phase, the full-waveform LADAR simulation in LadarSIM was used to simulate the expected return waveforms for various system design parameters, target characteristics, and target ranges. Once the design was finalized and the ELT constructed, the measured specifications of the system and experimental data captured from the operational sensor were used to validate the behavior of the system as predicted during the design phase. This paper presents the methodology used, and lessons learned from this "design, build, validate" process. Simulated results from the design phase are presented, and these are compared to simulated results using measured system parameters and operational sensor data. The advantages of this simulation-based process are also presented.
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
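A stripped-down version of such a cycle-building GA fits in a page. In the sketch below the microtrip library, target parameters, and fitness weights are invented, truncation selection with elitism stands in for the paper's Roulette-Wheel scheme, and the karyotype-alteration (variable-length) operators are omitted for brevity.

```python
import random

random.seed(3)
# Hypothetical microtrip library: (duration s, distance mi). In practice these
# would be segmented from standard chassis dynamometer test cycles.
library = [(random.uniform(20, 300), random.uniform(0.02, 2.0)) for _ in range(200)]

TARGET_SPEED, TARGET_STOPS = 15.0, 2.5       # mph and stops/mile, illustrative

def fitness(cycle):
    """Negative relative error against the target overall cycle parameters."""
    dur = sum(library[i][0] for i in cycle)
    dist = sum(library[i][1] for i in cycle)
    speed = dist / (dur / 3600.0)
    stops = len(cycle) / dist                # one stop per microtrip boundary
    return -(abs(speed - TARGET_SPEED) / TARGET_SPEED
             + abs(stops - TARGET_STOPS) / TARGET_STOPS)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(c):
    c = c[:]
    c[random.randrange(len(c))] = random.randrange(len(library))
    return c

pop = [[random.randrange(len(library)) for _ in range(20)] for _ in range(60)]
for _ in range(150):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                         # elitist strategy
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(50)]
pop.sort(key=fitness, reverse=True)
print(f"best cycle parameter error: {-fitness(pop[0]):.4f}")
```

Extending the fitness with kinetic intensity and other overall parameters, and letting chromosome length vary, brings the sketch closer to the utility described in the abstract.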
NASA Astrophysics Data System (ADS)
Firestone, Gabriel; Bochinski, Jason; Meth, Jeffrey; Clarke, Laura
Understanding of the heat transfer characteristics of a polymer during processing is critical to predicting and controlling the resulting properties and has been studied extensively in injection molding. As new methodologies for polymer processing are developed, such as photothermal heating, it is important to build an understanding of how heat transfer properties change under these novel conditions. By combining theoretical and experimental approaches, the thermal properties of photothermally heated polymer films were measured. The key idea is that by measuring the steady state temperature profile of a spot heated polymer film via a fluorescence probe (the temperature versus distance from the heated region) and fitting to a theoretical model, heat transfer coefficients can be extracted. We apply this approach to three different polymer systems, crosslinked epoxy, poly(methyl methacrylate) and poly(ethylene oxide) thin films with a range of thicknesses, under different heating laser intensities and with different resultant temperatures. We will discuss the resultant trends and extension of the model beyond a simple spot heating configuration. Support from National Science Foundation CMMI-1069108 and CMMI-1462966.
Choosing appropriate independent variable in educational experimental research: some errors debunked
NASA Astrophysics Data System (ADS)
Panjaitan, R. L.
2018-03-01
It is found that a number of quantitative research reports by beginning researchers, especially undergraduate students, tend to be 'merely' quantitative, without a proper understanding of the variables involved in the research. This paper focuses on some mistakes related to independent variable determination in experimental research in education. With a literature research methodology, data were gathered from an undergraduate student's thesis as a single non-human subject. The analysis yielded several findings, such as misinterpreted variables that should have represented the research question, and unsuitable calculation of the determination coefficient due to incorrect determination of the independent variable. When a researcher treats data as if they could serve as the independent variable when in fact they cannot, all subsequent data processing becomes pointless. These problems might lead to inaccurate research conclusions. In this paper, the problems were analysed and discussed. To avoid such errors in data processing, it is suggested that undergraduate students, as beginning researchers, acquire adequate mastery of statistics. This study might function as a resource for researchers in education, helping them recognize and avoid similar errors.
Scaling of elongation transition thickness during thin-film growth on weakly interacting substrates
NASA Astrophysics Data System (ADS)
Lü, B.; Souqui, L.; Elofsson, V.; Sarakinos, K.
2017-08-01
The elongation transition thickness (θElong) is a central concept in the theoretical description of thin-film growth dynamics on weakly interacting substrates via scaling relations of θElong with respect to the rates of key atomistic film-forming processes. To date, these scaling laws have only been confirmed quantitatively by simulations, while experimental proof has remained ambiguous as it has not been possible to measure θElong. Here, we present a method for determining θElong experimentally for Ag films growing on amorphous SiO2: an archetypical weakly interacting film/substrate system. Our results confirm the theoretically predicted θElong scaling behavior, which then allows us to calculate the rates of adatom diffusion and island coalescence completion, in good agreement with the literature. The methodology presented herein casts the foundation for studying growth dynamics and cataloging atomistic-process rates for a wide range of weakly interacting film/substrate systems. This may provide insights into directed growth of metal films with a well-controlled morphology and interfacial structure on 2D crystals—including graphene and MoS2—for catalytic and nanoelectronic applications.
Karri, Rama Rao; Sahu, J N
2018-01-15
Zn(II) is one of the common heavy metal pollutants found in industrial effluents. Removal of pollutants from industrial effluents can be accomplished by various techniques, of which adsorption has been found to be an efficient method. The application of adsorption is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell, an agricultural waste, is examined for its efficiency in removing Zn(II) from wastewater and aqueous solution. The influence of independent process variables such as initial concentration, pH, residence time, activated carbon (AC) dosage and process temperature on the removal of Zn(II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the experimental design matrix, 50 experimental runs are performed with each process variable within the experimental range. The optimal values of the process variables to achieve maximum removal efficiency are determined using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model consisting of first-order and second-order terms is developed using analysis of variance within the RSM central composite design (CCD) framework. Particle swarm optimization (PSO), a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized trained neural network describes the testing and validation data well, with R^2 equal to 0.9106 and 0.9279 respectively. The outcomes indicate the superiority of the ANN-PSO model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
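A minimal numpy illustration of the ANN-PSO idea, a particle swarm searching the weight space of a small network, might look like the following. The network size, the swarm constants, and the synthetic data are assumptions for the sketch, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic stand-in data: 5 process variables -> % Zn(II) removal.
X = rng.uniform(-1, 1, (120, 5))
y = 80 + 10 * np.tanh(X @ rng.normal(size=5)) + rng.normal(0, 1, 120)

def predict(w, X):
    """One-hidden-layer network (5-4-1); w is a flat vector of 29 parameters."""
    W1, b1 = w[:20].reshape(5, 4), w[20:24]
    W2, b2 = w[24:28], w[28]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Particle swarm over the 29 network weights (standard constriction constants).
P = 30
pos = rng.normal(0, 1, (P, 29))
vel = np.zeros((P, 29))
pbest, pbest_f = pos.copy(), np.array([mse(w) for w in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random((P, 29)), rng.random((P, 29))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(w) for w in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print(f"swarm-trained network MSE: {pbest_f.min():.2f}")
```

Because PSO is gradient-free, the same loop works unchanged if the fitness is replaced by cross-validated error, which is one reason it is a popular wrapper around small ANNs in adsorption modeling studies.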
Eye-Tracking as a Tool in Process-Oriented Reading Test Validation
ERIC Educational Resources Information Center
Solheim, Oddny Judith; Uppstad, Per Henning
2011-01-01
The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…
A Progressive Damage Methodology for Residual Strength Predictions of Notched Composite Panels
NASA Technical Reports Server (NTRS)
Coats, Timothy W.; Harris, Charles E.
1998-01-01
The translaminate fracture behavior of carbon/epoxy structural laminates with through-penetration notches was investigated to develop a residual strength prediction methodology for composite structures. An experimental characterization of several composite materials systems revealed a fracture resistance behavior that was very similar to the R-curve behavior exhibited by ductile metals. Fractographic examinations led to the postulate that the damage growth resistance was primarily due to fractured fibers in the principal load-carrying plies being bridged by intact fibers of the adjacent plies. The load transfer associated with this bridging mechanism suggests that a progressive damage analysis methodology will be appropriate for predicting the residual strength of laminates with through-penetration notches. A progressive damage methodology developed by the authors was used to predict the initiation and growth of matrix cracks and fiber fracture. Most of the residual strength predictions for different panel widths, notch lengths, and material systems were within about 10% of the experimental failure loads.
New Windows based Color Morphological Operators for Biomedical Image Processing
NASA Astrophysics Data System (ADS)
Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia
2016-04-01
Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, to be able to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be efficiently used for color image processing.
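To make the window-based setup concrete, the sketch below implements an erosion-like operator over k×k windows of an RGB image. Note that it uses a classical lexicographic total order (compare R, then G, then B) as a stand-in; it is not the locally defined ordering the paper proposes, but it shows where such an ordering plugs in.

```python
import numpy as np

def color_erode(img, k=3):
    """Erosion-like operator on an RGB image: the output pixel is the minimum
    of each k x k window under a lexicographic (R, G, B) order; borders are
    handled by edge clamping."""
    h, w, _ = img.shape
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k].reshape(-1, 3)
            # np.lexsort uses the last key as primary, so pass (B, G, R).
            order = np.lexsort((window[:, 2], window[:, 1], window[:, 0]))
            out[i, j] = window[order[0]]   # lexicographic minimum of the window
    return out

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
print(color_erode(img).shape)   # (32, 32, 3)
```

A dilation-like operator is the same loop taking the lexicographic maximum, and replacing the global order with a per-window ordering rule is exactly the degree of freedom the paper exploits.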
NASA Astrophysics Data System (ADS)
Fallah-Mehrjardi, Ata; Hidayat, Taufiq; Hayes, Peter C.; Jak, Evgueni
2017-12-01
The majority of primary pyrometallurgical copper making processes involve the formation of two immiscible liquid phases, i.e., matte product and the slag phase. There are significant gaps and discrepancies in the phase equilibria data of the slag and the matte systems due to issues and difficulties in performing the experiments and phase analysis. The present study aims to develop an improved experimental methodology for accurate characterisation of gas/slag/matte/tridymite equilibria in the Cu-Fe-O-S-Si system under controlled atmospheres. The experiments involve high-temperature equilibration of synthetic mixtures on silica substrates in CO/CO2/SO2/Ar atmospheres, rapid quenching of samples into water, and direct composition measurement of the equilibrium phases using Electron Probe X-ray Microanalysis (EPMA). A four-point-test procedure was applied to ensure the achievement of equilibrium, which included the following: (i) investigation of equilibration as a function of time, (ii) assessment of phase homogeneity, (iii) confirmation of equilibrium by approaching from different starting conditions, and (iv) systematic analysis of the reactions specific to the system. An iterative improved experimental methodology was developed using this four-point-test approach to characterize the complex multi-component, multi-phase equilibria with high accuracy and precision. The present study is a part of a broader overall research program on the characterisation of the multi-component (Cu-Fe-O-S-Si-Al-Ca-Mg), multi-phase (gas/slag/matte/metal/solids) systems with minor elements (Pb, Zn, As, Bi, Sn, Sb, Ag, and Au).
Development of economic consequence methodology for process risk analysis.
Zadakbar, Omid; Khan, Faisal; Imtiaz, Syed
2015-04-01
A comprehensive methodology for economic consequence analysis with appropriate models for risk analysis of process systems is proposed. This methodology uses loss functions to relate process deviations in a given scenario to economic losses. It consists of four steps: definition of a scenario, identification of losses, quantification of losses, and integration of losses. In this methodology, the process deviations that contribute to a given accident scenario are identified and mapped to assess potential consequences. Losses are assessed with an appropriate loss function (revised Taguchi, modified inverted normal) for each type of loss. The total loss is quantified by integrating different loss functions. The proposed methodology has been examined on two industrial case studies. Implementation of this new economic consequence methodology in quantitative risk assessment will provide better understanding and quantification of risk. This will improve design, decision making, and risk management strategies. © 2014 Society for Risk Analysis.
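The loss-quantification step is easy to make concrete. Below is a small sketch of the two families the abstract names: a quadratic Taguchi-style loss that grows without bound away from the target, and an inverted normal loss that saturates at a maximum cost far from target. All parameter values are illustrative.

```python
import math

def taguchi_loss(x, target, cost_at_tolerance, tolerance):
    """Quadratic (Taguchi-style) loss: zero at target, growing with deviation."""
    k = cost_at_tolerance / tolerance ** 2
    return k * (x - target) ** 2

def inverted_normal_loss(x, target, max_cost, shape):
    """Inverted normal loss: rises like a quadratic near the target but
    saturates at max_cost for large deviations."""
    return max_cost * (1.0 - math.exp(-((x - target) ** 2) / (2.0 * shape ** 2)))

# Example: a process temperature deviating from a 350 K setpoint.
for T in (350, 355, 365, 400):
    print(T,
          round(taguchi_loss(T, 350, 1e4, 10)),
          round(inverted_normal_loss(T, 350, 5e4, 15)))
```

Summing such functions over the deviations identified for a scenario gives the integrated economic consequence that feeds the risk calculation.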
Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.
Michaud, J-P; Schoenly, Kenneth G; Moreau, G
2012-01-01
Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.
The role of the PIRT process in identifying code improvements and executing code development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, G.E.; Boyack, B.E.
1997-07-01
In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost-effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.
Reliability Centered Maintenance - Methodologies
NASA Technical Reports Server (NTRS)
Kammerer, Catherine C.
2009-01-01
Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many Lean Six Sigma (L6S) tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of an L6S project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The article explores these methodologies.
Information technology security system engineering methodology
NASA Technical Reports Server (NTRS)
Childs, D.
2003-01-01
A methodology is described for system engineering security into large information technology systems under development. The methodology is an integration of a risk management process and a generic system development life cycle process. The methodology is to be used by Security System Engineers to effectively engineer and integrate information technology security into a target system as it progresses through the development life cycle. The methodology can also be used to re-engineer security into a legacy system.
Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.
2017-12-01
The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed, in simulations of half-hourly surface energy fluxes, by instantaneous, out-of-sample, and globally stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can result in qualitatively different results. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and therefore are of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct, in that the models use only a fraction of the information available to them from meteorological forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first-order requirement if we want to use intercomparison studies to provide feedback to model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.
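The basic benchmarking logic, comparing a land model's simulated fluxes against an out-of-sample regression trained only on meteorological forcings, can be sketched roughly as follows. This is a simplified illustration: the array names are placeholders, and a linear benchmark scored by RMSE stands in for the paper's formal, information-theoretic methodology.

import numpy as np

def benchmark_vs_model(forcings_train, flux_train, forcings_test, flux_test, model_flux_test):
    """Compare a land model against a stateless, out-of-sample linear benchmark."""
    # Fit the benchmark regression on training-site forcings only (no state memory).
    X_train = np.column_stack([forcings_train, np.ones(len(forcings_train))])
    coeffs, *_ = np.linalg.lstsq(X_train, flux_train, rcond=None)
    # Predict fluxes at the held-out site from its instantaneous forcings alone.
    X_test = np.column_stack([forcings_test, np.ones(len(forcings_test))])
    benchmark_flux = X_test @ coeffs
    rmse = lambda pred: float(np.sqrt(np.mean((pred - flux_test) ** 2)))
    return rmse(benchmark_flux), rmse(model_flux_test)

# A model underperforms the benchmark when its RMSE exceeds the regression's,
# i.e., when it extracts less usable information from the forcings than a
# memoryless statistical fit.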
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R.; Brooks, Dusty Marie
In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
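A minimal sketch of the bootstrap step described above, constructing pointwise confidence bounds for the average measured-minus-predicted difference across residual stress profiles. The percentile bootstrap shown here is a simplification of the semi-parametric procedure in the report, and the inputs are assumed to be aligned profiles sampled at common through-wall depths.

import numpy as np

def mean_difference_bounds(measured, predicted, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence bounds for the average measured-minus-predicted
    difference; rows are profiles, columns are common depth locations."""
    rng = np.random.default_rng(seed)
    diffs = measured - predicted                      # profile-wise differences
    n = diffs.shape[0]
    boot_means = np.empty((n_boot, diffs.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)              # resample profiles with replacement
        boot_means[b] = diffs[idx].mean(axis=0)
    lower = np.percentile(boot_means, 100 * alpha / 2, axis=0)
    upper = np.percentile(boot_means, 100 * (1 - alpha / 2), axis=0)
    return lower, upper

# Where the band [lower, upper] contains zero at a given depth, measurements and
# predictions agree there at the chosen confidence level.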
Placement-aware decomposition of a digital standard cells library for double patterning lithography
NASA Astrophysics Data System (ADS)
Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif
2012-11-01
To continue scaling circuit features down, Double Patterning (DP) technology is needed at the 22 nm node and below. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed in two separate exposure steps. In many cases, post-layout decomposition fails to decompose the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard cells block can result in native conflicts inside the cells (internal conflicts) or native conflicts on the boundary between two cells (boundary conflicts). Resolving native conflicts requires a redesign and/or multiple iterations of the placement and routing phases to get a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before the final placed cell block is produced. The main focus of this paper is generating a library of decomposed standard cells to be used in a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., decompositions covering all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell would significantly increase the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of combinations processed to generate the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for library processing. To generate and verify the proposed flow and methodologies, a prototype of a placement-aware DP-ready cell library was developed with an optimized number of cell views.
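Mask decomposition as described above reduces to two-coloring a conflict graph whose edges join features closer than the minimum same-mask spacing; a native conflict appears as an odd cycle, which makes the graph non-bipartite. A rough sketch, with hypothetical point features and a simple center-distance test standing in for real design-rule spacing checks:

from collections import deque

def decompose_two_masks(features, min_spacing, dist):
    """Try to 2-color features into two masks; return None on a native conflict."""
    n = len(features)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if dist(features[i], features[j]) < min_spacing:
                adj[i].append(j)          # conflict edge: same mask not allowed
                adj[j].append(i)
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None           # odd cycle: native conflict, no legal split
    return color                          # 0 -> first mask, 1 -> second mask

# Hypothetical usage with point features and Euclidean center distance:
pts = [(0, 0), (30, 0), (60, 0)]
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
print(decompose_two_masks(pts, min_spacing=40, dist=euclid))  # [0, 1, 0]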
ERIC Educational Resources Information Center
Brooks, Penelope H.; Baumeister, Alfred A.
1977-01-01
The authors contend that the experimental psychology of mental retardation suffers from metatheoretical and methodological weaknesses, preeminently the failure to consider the ecology of mental retardation. (CL)
Barekati-Goudarzi, Mohamad; Boldor, Dorin; Nde, Divine B
2016-02-01
In-situ transesterification (simultaneous extraction and transesterification) of Chinese tallow tree seeds into methyl esters using a batch microwave system was investigated in this study. A high degree of oil extraction and efficient conversion of oil to biodiesel were achieved within the proposed range of conditions. The process was further optimized in terms of product yields and conversion rates using Doehlert optimization methodology. Based on the experimental results and statistical analysis, the optimal production conditions for this process were determined as: catalyst concentration of 1.74 wt.%, solvent ratio of about 3 (v/w), reaction time of 20 min, and temperature of 58.1°C. ¹H NMR was used to calculate reaction conversion. All methyl esters produced using this method met ASTM biodiesel quality specifications. Copyright © 2015 Elsevier Ltd. All rights reserved.
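Doehlert-style response surface optimization of the kind described above amounts to fitting a second-order polynomial to measured yields and locating its stationary point. The sketch below does this for two factors; the coded design points follow the classic two-factor Doehlert hexagon-plus-center layout, but the yield values and factor labels are placeholders, not the study's data.

import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def stationary_point(beta):
    """Solve grad = 0 for the fitted surface (assumes a unique stationary point)."""
    _, b1, b2, b11, b22, b12 = beta
    A = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(A, -np.array([b1, b2]))

# Two-factor Doehlert design in coded units (center + hexagon vertices);
# the factors might be, e.g., catalyst level and temperature, and the
# responses are placeholder yields:
x1 = np.array([0.0, 1.0, 0.5, -0.5, -1.0, -0.5, 0.5])
x2 = np.array([0.0, 0.0, 0.866, 0.866, 0.0, -0.866, -0.866])
y = np.array([95.0, 90.0, 91.0, 88.0, 85.0, 87.0, 92.0])
beta = fit_quadratic_surface(x1, x2, y)
print(stationary_point(beta))  # coded coordinates of the predicted optimum

The stationary point in coded units would then be decoded back to physical factor levels and checked against the fitted surface's curvature (maximum, minimum, or saddle) before being accepted as the optimum.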
An assessment of patient sign-outs conducted by University at Buffalo internal medicine residents.
Wheat, Deirdre; Co, Christopher; Manochakian, Rami; Rich, Ellen
2012-01-01
Internal medicine residents were surveyed regarding patient sign-outs at shift change. Data were used to design and implement interventions aimed at improving sign-out quality. This quasi-experimental project incorporated the Plan, Do, Study, Act methodology. Residents completed an anonymous electronic survey regarding experiences during sign-outs. Survey questions assessed structure, process, and outcome of sign-outs. Analysis of qualitative and quantitative data was performed; interventions were implemented based on survey findings. A total of 120 surveys (89% response) and 115 surveys (83% response) were completed by residents of 4 postgraduate years in response to the first (2008) and second (2009) survey requests, respectively. Approximately 79% of the respondents to the second survey indicated that postintervention sign-out systems were superior to preintervention systems. Results indicated improvement in specific areas of structure, process, and outcome. Survey-based modifications to existing sign-out systems effected measurable quality improvement in structure, process, and outcome.
Solar photoassisted advanced oxidation process of azo dyes.
Prato-Garcia, D; Buitrón, G
2009-01-01
Advanced oxidation processes assisted with natural solar radiation in CPC type reactors (parabolic collector compound), was applied for the degradation of three azo dyes: acid orange (AO7), acid red 151 (AR151) and acid blue 113 (AB113). Fenton, Fenton like and ferrioxalate-type complexes showed to be effective for degrade the azo linkage and moieties in different extensions. Initially, the best dose of reagents (Fe(3 + )-H(2)O(2)) was determined through a factorial experimental design, next, using response surface methodologies, the reagent consumption was reduced up to 40%, maintaining in all cases high decolourisation percentages (>98%) after 60 min. of phototreatment. In this work, it was also studied the effect of concentration changes of the influent between 100-300 mg/L and the operation of the photocatalytic process near neutral conditions (pH 6.0-6.5) by using ferrioxalate type complex (FeOx).