Ding, Yongxia; Zhang, Peili
2018-06-12
Problem-based learning (PBL) is an effective and highly efficient teaching approach that is extensively applied in education systems across a variety of countries. This study aimed to investigate the effectiveness of web-based PBL teaching pedagogies in large classes. The cluster sampling method was used to separate two college-level nursing student classes (graduating class of 2013) into two groups. The experimental group (n = 162) was taught using a web-based PBL teaching approach, while the control group (n = 166) was taught using conventional teaching methods. We subsequently assessed the satisfaction of the experimental group with the web-based PBL teaching mode. This assessment was performed following comparison of teaching activity outcomes pertaining to exams and self-learning capacity between the two groups. Examination scores and self-learning capabilities were significantly higher in the experimental group than in the control group (P < 0.01). In addition, 92.6% of students in the experimental group expressed satisfaction with the new web-based PBL teaching approach. In a large class-size teaching environment, the web-based PBL teaching approach appears to be preferable to traditional teaching methods. These results demonstrate the effectiveness of web-based teaching technologies in problem-based learning. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
El-Etriby, Ahmed E.; Abdel-Meguid, Mohamed E.; Hatem, Tarek M.; Bahei-El-Din, Yehia A.
2014-03-01
Ambient vibrations are a major source of wasted energy; if exploited properly, such vibrations can be converted into valuable energy and harvested to power devices such as electronics. Accordingly, energy harvesting using smart structures with active piezoelectric ceramics has gained wide interest over the past few years as a method for converting this wasted energy. This paper provides numerical and experimental analyses of piezoelectric fiber-based composites for energy harvesting applications, proposing a multi-scale modeling approach coupled with experimental verification. The multi-scale approach suggested to predict the behavior of piezoelectric fiber-based composites uses a micromechanical model based on Transformation Field Analysis (TFA) to calculate the overall material properties of the electrically active composite structure. Capitalizing on the calculated properties, a single-phase analysis of the homogeneous structure is conducted using the finite element method. The experimental work involves running dynamic tests on piezoelectric fiber-based composites to simulate the mechanical vibrations experienced by subway train floor tiles. The experimental results agree well with the numerical results for both static and dynamic tests.
Development of a Lumped Element Circuit Model for Approximation of Dielectric Barrier Discharges
2011-08-01
dielectric barrier discharge (DBD) plasmas. Based on experimental observations, it is assumed that nanosecond pulsed DBDs, which have been proposed...species for pulsed direct current (DC) dielectric barrier discharge (DBD) plasmas...momentum-based approaches. Given the fundamental differences between the novel pulsed discharge approach and the more conventional momentum-based
Web Based Learning Support for Experimental Design in Molecular Biology.
ERIC Educational Resources Information Center
Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob
An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…
Implementing Reading Strategies Based on Collaborative Learning Approach in an English Class
ERIC Educational Resources Information Center
Suwantharathip, Ornprapat
2015-01-01
The present study investigated the effects of reading strategies based on collaborative learning approach on students' reading comprehension and reading strategy use. The quasi-experimental research study was performed with two groups of students. While the control group was taught in the traditional way, the experimental group received reading…
Web-Based Learning Support for Experimental Design in Molecular Biology: A Top-Down Approach
ERIC Educational Resources Information Center
Aegerter-Wilmsen, Tinri; Hartog, Rob; Bisseling, Ton
2003-01-01
An important learning goal of a molecular biology curriculum is the attainment of a certain competence level in experimental design. Currently, undergraduate students are confronted with experimental approaches in textbooks, lectures and laboratory courses. However, most students do not reach a satisfactory level of competence in the designing of…
An experimental validation of a statistical-based damage detection approach.
DOT National Transportation Integrated Search
2011-01-01
In this work, a previously developed, statistical-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The damage-detection approach uses statistical differences in the actual and predicted beha...
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.
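As an illustration of the optimization-based estimate described in the abstract above, the sketch below minimizes a weighted error between measured and simulated boundary voltages with a derivative-free (Nelder-Mead) search. The forward model, weights and noise level are hypothetical placeholders, not the authors' EIT solver or data.

```python
# Hypothetical sketch: derivative-free, weighted-error position estimation.
# forward_model() stands in for a real EIT forward solver (e.g. an FEM model).
import numpy as np
from scipy.optimize import minimize

def forward_model(xy, n_meas=208):
    """Placeholder forward model: simulated boundary voltages for an anomaly at xy."""
    x, y = xy
    angles = np.linspace(0, 2 * np.pi, n_meas, endpoint=False)
    return 1.0 / (1.0 + (np.cos(angles) - x) ** 2 + (np.sin(angles) - y) ** 2)

rng = np.random.default_rng(0)
v_measured = forward_model([0.3, -0.2]) + rng.normal(0, 1e-3, 208)  # synthetic "measurement"
weights = np.ones_like(v_measured)  # could emphasize electrodes near the expected anomaly

def cost(xy):
    residual = v_measured - forward_model(xy)
    return np.sum(weights * residual ** 2)

result = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated anomaly position:", result.x)
```

A data-driven alternative would instead fit a (linear or nonlinear) regression model from boundary voltages to position using a training set of known anomaly locations.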
Experimental modeling of swirl flows in power plants
NASA Astrophysics Data System (ADS)
Shtork, S. I.; Litvinov, I. V.; Gesheva, E. S.; Tsoy, M. A.; Skripkin, S. G.
2018-03-01
The article presents an overview of the methods and approaches to experimental modeling of various thermal and hydropower units - furnaces of pulverized coal boilers and flow-through elements of hydro turbines. The presented modeling approaches based on a combination of experimentation and rapid prototyping of working parts may be useful in optimizing energy equipment to improve safety and efficiency of industrial energy systems.
NASA Astrophysics Data System (ADS)
Scharfenberg, Franz-Josef; Bogner, Franz X.
2011-08-01
Emphasis on improving higher level biology education continues. A new two-step approach to the experimental phases within an outreach gene technology lab, derived from cognitive load theory, is presented. We compared our approach using a quasi-experimental design with the conventional one-step mode. The difference consisted of additional focused discussions combined with students writing down their ideas (step one) prior to starting any experimental procedure (step two). We monitored students' activities during the experimental phases by continuously videotaping 20 work groups within each approach (N = 131). Subsequent classification of students' activities yielded 10 categories (with good intra- and inter-observer reliability). Based on the students' individual time budgets, we evaluated students' roles during experimentation from their prevalent activities (by independently using two cluster analysis methods). Independently of the approach, two common clusters emerged, which we labeled as `all-rounders' and as `passive students', and two clusters specific to each approach: `observers' as well as `high-experimenters' were identified only within the one-step approach, whereas under the two-step conditions `managers' and `scribes' were identified. Potential changes in group-leadership style during experimentation are discussed, and conclusions for optimizing science teaching are drawn.
2011-09-01
a quality evaluation with limited data, a model-based assessment must be...that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range
Chang, Yaw-Jen; Chang, Cheng-Hao
2016-06-01
Based on the principle of immobilized metal affinity chromatography (IMAC), it has been found that a Ni-Co alloy-coated protein chip is able to immobilize functional proteins with a His-tag attached. In this study, an intelligent computational approach was developed to promote the performance and repeatability of a Ni-Co alloy-coated protein chip. The approach started from a set of L18 orthogonal-array experiments. Based on the experimental data, the fabrication process model of a Ni-Co protein chip was established by using an artificial neural network, and then an optimal fabrication condition was obtained using the Taguchi genetic algorithm. The result was validated experimentally and compared with a nitrocellulose chip. Consequently, experimental outcomes revealed that the Ni-Co alloy-coated chip, fabricated using the proposed approach, had the best performance and repeatability compared with the Ni-Co chips of an L18 orthogonal array design and the nitrocellulose chip. Moreover, the low fluorescent background of the chip surface enables more precise fluorescence detection. Based on a small number of experiments, the proposed intelligent computational approach can significantly reduce experimental cost and improve product quality. © 2015 Society for Laboratory Automation and Screening.
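The surrogate-model idea in the abstract above (fit a neural network to a small designed experiment, then search the fitted model for a good operating point) can be sketched as follows. The factor names, ranges and data are invented for illustration, and a plain random search stands in for the Taguchi genetic algorithm used by the authors.

```python
# Hedged sketch of surrogate-based process optimization from a small designed experiment.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# columns: three hypothetical fabrication factors, scaled to [0, 1]
X_experiments = rng.random((18, 3))                      # stand-in for an L18 array
y_performance = 1.0 - np.sum((X_experiments - 0.6) ** 2, axis=1)  # mock chip-performance score

surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
surrogate.fit(X_experiments, y_performance)

candidates = rng.random((10_000, 3))                     # random search over the factor space
best = candidates[np.argmax(surrogate.predict(candidates))]
print("suggested fabrication condition (scaled factors):", best)
```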
Saive, Anne-Lise; Royet, Jean-Pierre; Plailly, Jane
2014-01-01
Odors are powerful cues that trigger episodic memories. However, in light of the amount of behavioral data describing the characteristics of episodic odor memory, the paucity of information available on the neural substrates of this function is startling. Furthermore, the diversity of experimental paradigms complicates the identification of a generic episodic odor memory network. We conduct a systematic review of the literature depicting the current state of the neural correlates of episodic odor memory in healthy humans by placing a focus on the experimental approaches. Functional neuroimaging data are introduced by a brief characterization of the memory processes investigated. We present and discuss laboratory-based approaches, such as odor recognition and odor associative memory, and autobiographical approaches, such as the evaluation of odor familiarity and odor-evoked autobiographical memory. We then suggest the development of new laboratory-ecological approaches allowing for the controlled encoding and retrieval of specific multidimensional events that could open up new prospects for the comprehension of episodic odor memory and its neural underpinnings. While large conceptual differences distinguish experimental approaches, the overview of the functional neuroimaging findings suggests relatively stable neural correlates of episodic odor memory. PMID:25071494
NASA Astrophysics Data System (ADS)
Eriksen, Trygve E.; Shoesmith, David W.; Jonsson, Mats
2012-01-01
Radiation-induced dissolution of uranium dioxide (UO2) nuclear fuel and the consequent release of radionuclides to intruding groundwater are key processes in the safety analysis of future deep geological repositories for spent nuclear fuel. For several decades, these processes have been studied experimentally using both spent fuel and various types of simulated spent fuels. The latter have been employed since it is difficult to draw mechanistic conclusions from real spent nuclear fuel experiments. Several predictive modelling approaches have been developed over the last two decades. These models are largely based on experimental observations. In this work we have performed a critical review of the modelling approaches developed based on the large body of chemical and electrochemical experimental data. The main conclusions are: (1) the use of measured interfacial rate constants gives results in generally better agreement with experimental results than simulations where homogeneous rate constants are used; (2) the use of spatial dose rate distributions is particularly important when simulating the behaviour over short time periods; and (3) the steady-state approach (the rate of oxidant consumption is equal to the rate of oxidant production) provides a simple but fairly accurate alternative, but errors in the reaction mechanism and in the kinetic parameters used may not be revealed by simple benchmarking. It is essential to use experimentally determined rate constants and verified reaction mechanisms, irrespective of whether the approach is chemical or electrochemical.
ERIC Educational Resources Information Center
Esche, Sven K.
2006-01-01
This article presents how Stevens Institute of Technology (SIT) has adopted an Internet-based approach to implement its undergraduate student laboratories. The approach allowed student interaction with the experimental devices from remote locations at any time. Furthermore, it enabled instructors to include demonstrations of sophisticated…
Prediction of Radial Vibration in Switched Reluctance Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, CJ; Fahimi, B
2013-12-01
Origins of vibration in switched reluctance machines (SRMs) are investigated. Accordingly, an input-output model based on the mechanical impulse response of the SRM is developed. The proposed model is derived using an experimental approach. Using the proposed approach, vibration of the stator frame is captured and experimentally verified.
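A minimal sketch of the input-output idea in the abstract above: once an impulse response of the stator frame has been identified experimentally, the vibration response to any radial-force waveform can be predicted by discrete convolution. All signals below are synthetic placeholders, not data from the cited study.

```python
# Minimal impulse-response (input-output) vibration prediction sketch.
import numpy as np

fs = 10_000                                     # sampling rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)

h = np.exp(-200 * t) * np.sin(2 * np.pi * 1200 * t)        # identified impulse response (mock)
u = (np.sin(2 * np.pi * 300 * t) > 0.95).astype(float)     # radial-force pulses from commutation (mock)

y_predicted = np.convolve(u, h)[: len(t)] / fs              # predicted stator-frame vibration
print("peak predicted vibration (arbitrary units):", y_predicted.max())
```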
NASA Astrophysics Data System (ADS)
Prabawanto, S.
2018-05-01
This research aims to investigate the enhancement of students’ mathematical self-efficacy through teaching with a metacognitive scaffolding approach. The research used a quasi-experimental design with pretest-posttest control. The subjects were pre-service elementary school teachers at a state university in Bandung. There were two groups: experimental and control. The experimental group consisted of 60 students taught mathematics under the metacognitive scaffolding approach, while the control group consisted of 58 students taught mathematics under a direct approach. Students were classified into three categories based on mathematical prior ability, namely high, middle, and low. Data collection instruments consisted of mathematical self-efficacy instruments. Using a mean difference test, two conclusions were drawn: (1) there is a significant difference in the enhancement of mathematical self-efficacy between students who attended the course under the metacognitive scaffolding approach and students who attended the course under the direct approach, and (2) there is no significant interaction effect of teaching approach and prior-ability level on the enhancement of students’ mathematical self-efficacy.
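For readers unfamiliar with the "mean difference test" mentioned above, the following is an illustrative independent-samples t-test on gain scores for two groups of the sizes reported in the abstract. The scores are synthetic, not the study's data.

```python
# Illustrative only: independent-samples mean difference (t) test on synthetic gain scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gain_metacognitive = rng.normal(loc=0.55, scale=0.20, size=60)   # n = 60 (experimental)
gain_direct = rng.normal(loc=0.40, scale=0.20, size=58)          # n = 58 (control)

t_stat, p_value = stats.ttest_ind(gain_metacognitive, gain_direct, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```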
NASA Astrophysics Data System (ADS)
Prabawanto, Sufyani
2017-05-01
This research aims to investigate the enhancement of students' mathematical problem solving through teaching with a metacognitive scaffolding approach. The research used a quasi-experimental design with pretest-posttest control. The subjects were pre-service elementary school teachers at a state university in Bandung. There were two groups: experimental and control. The experimental group consisted of 60 students taught mathematics under the metacognitive scaffolding approach, while the control group consisted of 58 students taught mathematics under a direct approach. Students were classified into three categories based on mathematical prior ability, namely high, middle, and low. Data collection instruments consisted of mathematical problem solving test instruments. Using a mean difference test, two conclusions were drawn: (1) there is a significant difference in the enhancement of mathematical problem solving between students who attended the course under the metacognitive scaffolding approach and students who attended the course under the direct approach, and (2) there is no significant interaction effect of teaching approach and prior-ability level on the enhancement of students' mathematical problem solving.
ERIC Educational Resources Information Center
Kaya, Osman Nafiz; Dogan, Alev; Gokcek, Nur; Kilic, Ziya; Kilic, Esma
2007-01-01
The purpose of this study was to investigate the effects of multiple intelligences (MI) teaching approach on 8th Grade students' achievement in and attitudes toward science. This study used a pretest-posttest control group experimental design. While the experimental group (n=30) was taught a unit on acids and bases using MI teaching approach, the…
2011-09-01
AND EXPERIMENTAL DESIGN ... PRIMARY RESEARCH QUESTION ... OBJECTIVE ACHIEVEMENT ... Based Outpatient Clinic; CPT: Cognitive Processing Therapy; DISE: Distributed Information Systems Experimentation; EBT: Evidence-Based Treatment; GMC
Flexible manipulator control experiments and analysis
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Ozguner, U.; Tzes, A.; Kotnik, P. T.
1987-01-01
Modeling and control design for flexible manipulators, from both experimental and analytical viewpoints, are described. From the application perspective, an ongoing effort within the laboratory environment at the Ohio State University is described, where experimentation on a single-link flexible arm is underway. Several unique features of this study are described here. First, the manipulator arm is slewed by a direct drive dc motor and has a rigid counterbalance appendage. Current experimentation is from two viewpoints: (1) rigid body slewing and vibration control via actuation with the hub motor, and (2) vibration suppression through the use of structure-mounted proof-mass actuation at the tip. Such an application to manipulator control is of interest particularly in the design of space-based telerobotic control systems, but has received little attention to date. From an analytical viewpoint, parameter estimation techniques within the closed loop for self-tuning adaptive control approaches are discussed. Also introduced is a control approach based on output feedback and frequency weighting to counteract effects of spillover in reduced-order model design. A model of the flexible manipulator based on experimental measurements is evaluated for such estimation and control approaches.
Experimental Economics for Teaching the Functioning of Electricity Markets
ERIC Educational Resources Information Center
Guevara-Cedeno, J. Y.; Palma-Behnke, R.; Uribe, R.
2012-01-01
In the field of electricity markets, the development of training tools for engineers has been extremely useful. A novel experimental economics approach based on a computational Web platform of an electricity market is proposed here for the practical teaching of electrical engineering students. The approach is designed to diminish the gap that…
Learning English with an Invisible Teacher: An Experimental Video Approach.
ERIC Educational Resources Information Center
Eisenstein, Miriam; And Others
1987-01-01
Reports on an experimental teaching approach, based on an innovative video series, used in an English-as-a-second-language (ESL) class for beginning learners. The tapes, which focused on students as they learned (with the viewers learning along with them), showed generally favorable results for ESL students. (Author/CB)
Gene Function Hypotheses for the Campylobacter jejuni Glycome Generated by a Logic-Based Approach
Sternberg, Michael J.E.; Tamaddoni-Nezhad, Alireza; Lesk, Victor I.; Kay, Emily; Hitchen, Paul G.; Cootes, Adrian; van Alphen, Lieke B.; Lamoureux, Marc P.; Jarrell, Harold C.; Rawlings, Christopher J.; Soo, Evelyn C.; Szymanski, Christine M.; Dell, Anne; Wren, Brendan W.; Muggleton, Stephen H.
2013-01-01
Increasingly, experimental data on biological systems are obtained from several sources and computational approaches are required to integrate this information and derive models for the function of the system. Here, we demonstrate the power of a logic-based machine learning approach to propose hypotheses for gene function integrating information from two diverse experimental approaches. Specifically, we use inductive logic programming that automatically proposes hypotheses explaining the empirical data with respect to logically encoded background knowledge. We study the capsular polysaccharide biosynthetic pathway of the major human gastrointestinal pathogen Campylobacter jejuni. We consider several key steps in the formation of capsular polysaccharide consisting of 15 genes of which 8 have assigned function, and we explore the extent to which functions can be hypothesised for the remaining 7. Two sources of experimental data provide the information for learning—the results of knockout experiments on the genes involved in capsule formation and the absence/presence of capsule genes in a multitude of strains of different serotypes. The machine learning uses the pathway structure as background knowledge. We propose assignments of specific genes to five previously unassigned reaction steps. For four of these steps, there was an unambiguous optimal assignment of gene to reaction, and to the fifth, there were three candidate genes. Several of these assignments were consistent with additional experimental results. We therefore show that the logic-based methodology provides a robust strategy to integrate results from different experimental approaches and propose hypotheses for the behaviour of a biological system. PMID:23103756
Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels
NASA Astrophysics Data System (ADS)
Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.
2017-05-01
This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core of sandwich panels. Their out-of-plane shear characteristics depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically. Numerical simulations were performed using a unit-cell model, and three analytical approaches were employed. Analytical calculations showed that two of the approaches provided reasonable predictions for the transverse shear modulus as compared with experimental results. However, the approach based upon the classical lamination theory showed large deviations from experimental data. Numerical simulations also showed a trend similar to that resulting from the analytical models.
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
NASA Astrophysics Data System (ADS)
Zarrabian, Sina; Belkacemi, Rabie; Babalola, Adeniyi A.
2016-12-01
In this paper, a novel intelligent control is proposed based on Artificial Neural Networks (ANN) to mitigate cascading failure (CF) and prevent blackout in smart grid systems after an N-1-1 contingency condition in real time. The fundamental contribution of this research is to deploy the machine learning concept for preventing blackout at early stages of its occurrence and to make smart grids more resilient, reliable, and robust. The proposed method provides the best action selection strategy for adaptive adjustment of generators' output power through frequency control. This method is able to relieve congestion of transmission lines and prevent consecutive transmission line outages after an N-1-1 contingency condition. The proposed ANN-based control approach is tested on an experimental 100 kW test system developed by the authors to test intelligent systems. Additionally, the proposed approach is validated on the large-scale IEEE 118-bus power system by simulation studies. Experimental results show that the ANN approach is very promising and provides accurate and robust control by preventing blackout. The technique is compared to a heuristic multi-agent system (MAS) approach based on communication interchanges. The ANN approach showed a more accurate and robust response than the MAS algorithm.
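A hedged sketch of the general idea of learning a control policy as described above: a small neural network maps post-contingency measurements (line loadings, frequency deviation) to a generator output adjustment. The features, the training rule and the data are synthetic stand-ins, not the authors' 100 kW test bed or IEEE 118-bus cases.

```python
# Hedged sketch: a neural network as a learned redispatch policy (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# inputs: [line 1 loading %, line 2 loading %, frequency deviation Hz]
X = rng.uniform([50, 50, -0.5], [120, 120, 0.5], size=(500, 3))
# toy training target: reduce output in proportion to the worst overload above 100 %
y = -0.01 * np.clip(np.max(X[:, :2], axis=1) - 100.0, 0.0, None)

policy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
policy.fit(X, y)

measurement = np.array([[115.0, 80.0, -0.2]])    # overloaded line, under-frequency
print("suggested generator adjustment (p.u.):", policy.predict(measurement)[0])
```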
Desynchronization of stochastically synchronized chemical oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snari, Razan; Tinsley, Mark R., E-mail: mark.tinsley@mail.wvu.edu, E-mail: kshowalt@wvu.edu; Faramarzi, Sadegh
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.
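A minimal sketch of the phase-reduction idea in the abstract above: given a measured phase response curve (PRC), small phase differences within a synchronized cluster are amplified most strongly when the pulse arrives where the PRC slope is largest, so one simple timing criterion is to deliver the perturbation near that phase. The PRC below is a generic stand-in, not the experimentally measured curve from the study, and the slope criterion is a simplification of the optimization described by the authors.

```python
# Minimal PRC-based pulse-timing sketch (generic placeholder PRC).
import numpy as np

phi = np.linspace(0.0, 1.0, 1000, endpoint=False)                      # phase in cycles
prc = 0.8 * np.sin(2 * np.pi * phi) + 0.3 * np.sin(4 * np.pi * phi)    # mock PRC Z(phi)

slope = np.gradient(prc, phi)          # phase differences grow roughly with 1 + dZ/dphi
phi_pulse = phi[np.argmax(slope)]
print(f"deliver the desynchronizing pulse when the cluster phase is near {phi_pulse:.3f} cycles")
```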
The Use of Video Feedback in Teaching Process-Approach EFL Writing
ERIC Educational Resources Information Center
Özkul, Sertaç; Ortaçtepe, Deniz
2017-01-01
This experimental study investigated the use of video feedback as an alternative to feedback with correction codes at an institution where the latter was commonly used for teaching process-approach English as a foreign language (EFL) writing. Over a 5-week period, the control and the experimental groups were provided with feedback based on…
Ricks, Samantha L; Alt, Mary
2016-07-01
The purpose of this tutorial is to provide clinicians with a theoretically motivated and evidence-based approach to teaching adjectives to children who struggle with word learning. Given that there are almost no treatment studies to guide this topic, we have synthesized findings from experimental and theoretical literature to come up with a principles-based approach to treatment. We provide a sample lesson plan, incorporating our 3 theoretical principles, and describe the materials chosen and methods used during treatment and assessment. This approach is theoretically motivated, but it needs to be empirically tested.
NASA Astrophysics Data System (ADS)
Ortleb, Sigrun; Seidel, Christian
2017-07-01
In this second symposium at the limits of experimental and numerical methods, recent research is presented on practically relevant problems. Presentations discuss experimental investigation as well as numerical methods with a strong focus on application. In addition, problems are identified which require a hybrid experimental-numerical approach. Topics include fast explicit diffusion applied to a geothermal energy storage tank, noise in experimental measurements of electrical quantities, thermal fluid structure interaction, tensegrity structures, experimental and numerical methods for Chladni figures, optimized construction of hydroelectric power stations, experimental and numerical limits in the investigation of rain-wind induced vibrations as well as the application of exponential integrators in a domain-based IMEX setting.
Research of detection depth for graphene-based optical sensor
NASA Astrophysics Data System (ADS)
Yang, Yong; Sun, Jialve; Liu, Lu; Zhu, Siwei; Yuan, Xiaocong
2018-03-01
Graphene-based optical sensors have been developed for research into the biological intercellular refractive index (RI) because they offer greater detection depths than those provided by the surface plasmon resonance technique. In this Letter, we propose an experimental approach for measurement of the detection depth in a graphene-based optical sensor system that uses transparent polydimethylsiloxane layers with different thicknesses. The experimental results show that detection depths of 2.5 μm and 3 μm can be achieved at wavelengths of 532 nm and 633 nm, respectively. These results prove that graphene-based optical sensors can realize long-range RI detection and are thus promising for use as tools in the biological cell detection field. Additionally, we analyze the factors that influence the detection depth and provide a feasible approach for detection depth control based on adjustment of the wavelength and the angle of incidence. We believe that this approach will be useful in RI tomography applications.
Modeling Complex Marine Ecosystems: An Investigation of Two Teaching Approaches with Fifth Graders
ERIC Educational Resources Information Center
Papaevripidou, M.; Constantinou, C. P.; Zacharia, Z. C.
2007-01-01
This study investigated acquisition and transfer of the modeling ability of fifth graders in various domains. Teaching interventions concentrated on the topic of marine ecosystems either through a modeling-based approach or a worksheet-based approach. A quasi-experimental (pre-post comparison study) design was used. The control group (n = 17)…
A Transdisciplinary Approach to Training: Preliminary Research Findings Based on a Case Analysis
ERIC Educational Resources Information Center
Bimpitsos, Christos; Petridou, Eugenia
2012-01-01
Purpose: The purpose of this paper is to discuss the benefits, barriers and challenges of the transdisciplinary approach to training, and to present findings of a case analysis. Design/methodology/approach: The paper is based on the research findings of an experimental training program for Greek local government managers co-funded by the European…
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1998-05-01
Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives, as well as cost and manufacturability. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental solution methodologies in the form of computational tools, noninvasive optical techniques, and fringe prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.
SAW based micro- and acousto-fluidics in biomedicine
NASA Astrophysics Data System (ADS)
Ramasamy, Mouli; Varadan, Vijay K.
2017-04-01
Protein association starts with random collisions of individual proteins. Multiple collisions and rotational diffusion bring the molecules to a state of orientation. The majority of protein associations are influenced by electrostatic interactions. To describe electrostatic rate enhancement, Brownian dynamics and transient complex theory have traditionally been used. Due to recent advances in interdisciplinary sciences, an array of molecular assembly methods is being studied. Protein nanostructural assembly and macromolecular crowding are derived from subsets of biochemistry to study protein-protein interactions and protein self-assembly. This paper investigates the issue of enhancing the protein self-association rate and bridging the gap between simulations and experimental results. The methods proposed here include electrostatic rate enhancement, macromolecular crowding, nanostructural protein assembly, microfluidics-based approaches, and magnetic-force-based approaches. Of the several methods suggested, the microfluidic and magnetic-force-based approaches seem to serve the needs of protein assembly on a wider scale, and combining these approaches may yield better results. Although these methods are conceptually strong, a wide range of experiments is required to prevent disagreement between theory and practice. This proposal intends to study theoretical and experimental methods to successfully implement the aforementioned assembly strategies and concludes with an extensive analysis of experimental data to address practical feasibility.
Cooke, Steven J; Birnie-Gauvin, Kim; Lennox, Robert J; Taylor, Jessica J; Rytwinski, Trina; Rummer, Jodie L; Franklin, Craig E; Bennett, Joseph R; Haddaway, Neal R
2017-01-01
Policy development and management decisions should be based upon the best available evidence. In recent years, approaches to evidence synthesis, originating in the medical realm (such as systematic reviews), have been applied to conservation to promote evidence-based conservation and environmental management. Systematic reviews involve a critical appraisal of evidence, but studies that lack the necessary rigour (e.g. experimental, technical and analytical aspects) to justify their conclusions are typically excluded from systematic reviews or down-weighted in terms of their influence. One of the strengths of conservation physiology is the reliance on experimental approaches that help to more clearly establish cause-and-effect relationships. Indeed, experimental biology and ecology have much to offer in terms of building the evidence base that is needed to inform policy and management options related to pressing issues such as enacting endangered species recovery plans or evaluating the effectiveness of conservation interventions. Here, we identify a number of pitfalls that can prevent experimental findings from being relevant to conservation or would lead to their exclusion or down-weighting during critical appraisal in a systematic review. We conclude that conservation physiology is well positioned to support evidence-based conservation, provided that experimental designs are robust and that conservation physiologists understand the nuances associated with informing decision-making processes so that they can be more relevant.
Rational-operator-based depth-from-defocus approach to scene reconstruction.
Li, Ang; Staunton, Richard; Tjahjadi, Tardi
2013-09-01
This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.
Lin, Tzu-Hsuan; Lu, Yung-Chi; Hung, Shih-Lin
2014-01-01
This study developed an integrated global-local approach for locating damage on building structures. A damage detection approach with a novel embedded frequency response function damage index (NEFDI) was proposed and embedded in the Imote2.NET-based wireless structural health monitoring (SHM) system to locate global damage. Local damage is then identified using an electromechanical impedance- (EMI-) based damage detection method. The electromechanical impedance was measured using a single-chip impedance measurement device which has the advantages of small size, low cost, and portability. The feasibility of the proposed damage detection scheme was studied with reference to a numerical example of a six-storey shear plane frame structure and a small-scale experimental steel frame. Numerical and experimental analysis using the integrated global-local SHM approach reveals that, after NEFDI indicates the approximate location of a damaged area, the EMI-based damage detection approach can then identify the detailed damage location in the structure of the building.
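To illustrate the frequency-response-function (FRF) side of the global damage localization described above, the sketch below estimates an FRF per sensor and reports a normalized FRF change relative to a healthy baseline. This is a generic FRF-difference norm for illustration only, not necessarily the NEFDI formulation used in the cited study, and the signals are synthetic.

```python
# Hedged sketch: generic FRF-based damage index from baseline vs. current response.
import numpy as np
from scipy.signal import csd, welch

def frf(excitation, response, fs):
    """H1 estimate of the frequency response function between excitation and response."""
    _, s_xy = csd(excitation, response, fs=fs, nperseg=1024)
    _, s_xx = welch(excitation, fs=fs, nperseg=1024)
    return s_xy / s_xx

def damage_index(h_baseline, h_current):
    """Normalized FRF change; larger values suggest damage near that sensor."""
    return np.linalg.norm(h_current - h_baseline) / np.linalg.norm(h_baseline)

rng = np.random.default_rng(3)
excitation = rng.standard_normal(8192)
baseline_resp = np.convolve(excitation, np.exp(-0.01 * np.arange(200)), mode="same")
damaged_resp = np.convolve(excitation, np.exp(-0.02 * np.arange(200)), mode="same")

h0 = frf(excitation, baseline_resp, fs=200.0)
h1 = frf(excitation, damaged_resp, fs=200.0)
print("damage index:", damage_index(h0, h1))
```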
Combinatorial and high-throughput screening of materials libraries: review of state of the art.
Potyrailo, Radislav; Rajan, Krishna; Stoewe, Klaus; Takeuchi, Ichiro; Chisholm, Bret; Lam, Hubert
2011-11-14
Rational materials design based on prior knowledge is attractive because it promises to avoid time-consuming synthesis and testing of numerous materials candidates. However, as the complexity of materials increases, the scientific ability for rational materials design becomes progressively limited. As a result of this complexity, combinatorial and high-throughput (CHT) experimentation in materials science has been recognized as a new scientific approach to generate new knowledge. This review demonstrates the broad applicability of CHT experimentation technologies in discovery and optimization of new materials. We discuss general principles of CHT materials screening, followed by a detailed discussion of high-throughput materials characterization approaches, advances in data analysis/mining, and new materials developments facilitated by CHT experimentation. We critically analyze results of materials development in the areas most impacted by the CHT approaches, such as catalysis, electronic and functional materials, polymer-based industrial coatings, sensing materials, and biomaterials.
Ab initio structure prediction of silicon and germanium sulfides for lithium-ion battery materials
NASA Astrophysics Data System (ADS)
Hsueh, Connie; Mayo, Martin; Morris, Andrew J.
Conventional experiment-based approaches to materials discovery, which can rely heavily on trial and error, are time-intensive and costly. We discuss approaches to coupling experimental and computational techniques in order to systematize, automate, and accelerate the process of materials discovery, which is of particular relevance to developing new battery materials. We use the ab initio random structure searching (AIRSS) method to conduct a systematic investigation of Si-S and Ge-S binary compounds in order to search for novel materials for lithium-ion battery (LIB) anodes. AIRSS is a high-throughput, density functional theory-based approach to structure prediction which has been successful at predicting the structures of LIB materials containing sulfur, silicon, and germanium. We propose a lithiation mechanism for Li-GeS2 anodes as well as report new, theoretically stable, layered and porous structures in the Si-S and Ge-S systems that pique experimental interest.
Experimental Learning Enhancing Improvisation Skills
ERIC Educational Resources Information Center
Pereira Christopoulos, Tania; Wilner, Adriana; Trindade Bestetti, Maria Luisa
2016-01-01
Purpose: This study aims to present improvisation training and experimentation as an alternative method to deal with unexpected events in which structured processes do not seem to work. Design/Methodology/Approach: Based on the literature of sensemaking and improvisation, the study designs a framework and process model of experimental learning…
Problem-Based Learning in the English Language Classroom
ERIC Educational Resources Information Center
Othman, Normala; Shah, Mohamed Ismail Ahamad
2013-01-01
The purpose of this study was to investigate the effects of the problem-based learning approach (PBL) on students in language classes in two areas: course content and language development. The study was conducted on 128 students, grouped into the experimental and control groups, and employed an experimental research design. The syllabus, textbook,…
Lu, Xin; Soto, Marcelo A; Thévenaz, Luc
2017-07-10
A method based on coherent Rayleigh scattering that distinctly evaluates temperature and strain is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the ϕOTDR-based system proposed here requires access to only one fiber end and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
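The discrimination step common to this class of dual-measurand sensors can be illustrated numerically: two measured spectral shifts depend (to first order) linearly on temperature and strain, so recovering both amounts to inverting a 2×2 sensitivity matrix, whose condition number controls how measurement noise is amplified. The coefficient values below are invented for illustration and are not the calibrated sensitivities reported in the paper.

```python
# Illustrative temperature-strain discrimination via a 2x2 sensitivity matrix.
import numpy as np

# rows: [Rayleigh (phi-OTDR) spectral shift; birefringence-based shift]
# columns: [per K, per microstrain] -- assumed values in MHz for illustration only
K = np.array([[-1.5e3, -150.0],
              [  5.0e1,   1.0]])

measured_shifts = np.array([-800.0, 20.0])           # MHz (hypothetical measurement)
dT, d_strain = np.linalg.solve(K, measured_shifts)
print(f"temperature change: {dT:.3f} K, strain change: {d_strain:.2f} microstrain")
print("condition number (uncertainty amplification):", np.linalg.cond(K))
```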
ERIC Educational Resources Information Center
Jang, Jeong-yoon; Hand, Brian
2017-01-01
This study investigated the value of using a scaffolded critique framework to promote two different types of writing--argumentative writing and explanatory writing--with different purposes within an argument-based inquiry approach known as the Science Writing Heuristic (SWH) approach. A quasi-experimental design with sixth and seventh grade…
A Case-Based Approach Improves Science Students' Experimental Variable Identification Skills
ERIC Educational Resources Information Center
Grunwald, Sandra; Hartman, Andrew
2010-01-01
Incorporation of experimental case studies into the laboratory curriculum increases students' abilities to identify experimental variables that affect the outcome of an experiment. Here the authors describe how such case studies were incorporated using an online course management system into a biochemistry laboratory curriculum and the assessment…
2017-10-19
consequently, important to obtain relevant experimental data for such short, pin-fin channels before finalizing the design of the LN2 microcooler. In the next...must be taken in designing the LD micro pin-fin cooler to reflect these experimental trends. Figure 8: Base Heat Transfer Coefficient vs... "Experimental Hybrid Approach Based on Spectral Power Distribution for Quantitative Degradation Analysis of Phosphor Converted LED," IEEE Transactions on
Development of novel therapies for MG: Studies in animal models.
Souroujon, M C; Brenner, T; Fuchs, S
2010-08-01
Experimental myasthenia gravis (MG) in animals, and in particular experimental autoimmune MG in rodents, serve as excellent models to study possible novel therapeutic modalities for MG. The current treatments for MG are based on cholinesterase inhibitors, general immunosuppressants, and corticosteroids, broad immunomodulatory therapies such as plasma exchange or intravenous immunoglobulins (IVIGs), and thymectomy for selected patients. This stresses the need for immunotherapies that would specifically or preferentially suppress the undesirable autoimmune response without widely affecting the entire immune system as most available treatments do. The available animal models for MG make it possible to perform preclinical studies in which novel therapeutic approaches can be tested. In this review, we describe the different therapeutic approaches that have so far been tested in experimental models of MG and discuss their underlying mechanisms of action. These include antigen (acetylcholine receptor, AChR)-dependent treatments aimed at specifically abrogating the humoral and cellular anti-AChR responses as well as immunomodulatory approaches that could be used either alone or in conjunction with antigen-specific treatments or alternatively serve as steroid-sparing agents. The antigen-specific treatments are based on fragments or peptides derived from the AChR that would theoretically deviate the anti-AChR autoimmune response away from the muscle target or on ways to target AChR-specific T- and B-cell responses or antibodies. The immunomodulatory modalities include cell-based and non-cell-based ways to affect or manipulate key players in the autoimmune process such as regulatory T cells, dendritic cells, cytokine networks, and chemokine and costimulatory signaling as well as complement pathways. We also describe approaches that attempt to affect the cholinergic balance, which is impaired at the neuromuscular junction. In addition to enabling the feasibility of novel approaches to be tested, experimental MG makes it possible to analyse existing treatment modalities in ways that cannot be pursued in human MG patients. These include studies on the mode of action of various immunosuppressants and on IVIGs. Hopefully, the vast repertoire of therapeutic approaches that are studied in experimental models of MG will pave the way to clinical studies that will eventually improve the management of MG.
ERIC Educational Resources Information Center
Tsai, Chia-Hui; Cheng, Ching-Hsue; Yeh, Duen-Yian; Lin, Shih-Yun
2017-01-01
This study applied a quasi-experimental design to investigate the influence and predictive power of learner motivation for achievement, employing a mobile game-based English learning approach. A system called the Happy English Learning System, integrating learning material into a game-based context, was constructed and installed on mobile devices…
Iqbal, Mohammad Asif; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo
2014-01-01
The gas-liquid partitioning behavior of major odorants (acetic acid, propionic acid, isobutyric acid, n-butyric acid, i-valeric acid, n-valeric acid, hexanoic acid, phenol, p-cresol, indole, skatole, and toluene (as a reference)) commonly found in microbially digested wastewaters was investigated by two experimental approaches. Firstly, a simple vaporization method was applied to measure the target odorants dissolved in liquid samples with the aid of sorbent tube/thermal desorption/gas chromatography/mass spectrometry. As an alternative method, an impinger-based dynamic headspace sampling method was also explored to measure the partitioning of target odorants between the gas and liquid phases with the same detection system. The relative extraction efficiency (in percent) of the odorants by dynamic headspace sampling was estimated against the calibration results derived by the vaporization method. Finally, the concentrations of the major odorants in real digested wastewater samples were also analyzed using both analytical approaches. Through a parallel application of the two experimental methods, we intended to develop an experimental approach capable of assessing the liquid-to-gas phase partitioning behavior of major odorants in a complex wastewater system. The relative sensitivity of the two methods, expressed in terms of the response factor ratio (RFvap/RFimp) between the vaporization-based and impinger-based liquid standard calibrations, varied widely from 981 (skatole) to 6,022 (acetic acid). Comparison of this relative sensitivity thus highlights the rather low extraction efficiency of the highly soluble and more acidic odorants from wastewater samples in dynamic headspace sampling.
Intuitive web-based experimental design for high-throughput biomedical data.
Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven
2015-01-01
Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, humanly readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.
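A minimal sketch of the factor-based design idea described above: every combination of factor levels becomes one sample row with an identifier, exported as a spreadsheet-style CSV sample sheet. The factor names, levels and identifier scheme are made up here and do not reflect the specific web system in the cited work.

```python
# Minimal factor-based experimental design and sample-sheet export (hypothetical factors).
import csv
import itertools

factors = {
    "genotype": ["wild_type", "knockout"],
    "treatment": ["control", "drug_A"],
    "timepoint_h": ["6", "24"],
}

with open("sample_sheet.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["sample_id", *factors.keys()])
    for i, combo in enumerate(itertools.product(*factors.values()), start=1):
        writer.writerow([f"SAMPLE-{i:03d}", *combo])
# -> 2 x 2 x 2 = 8 samples, each annotated with its experimental conditions
```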
Predicting protein interactions by Brownian dynamics simulations.
Meng, Xuan-Yu; Xu, Yu; Zhang, Hong-Xing; Mezei, Mihaly; Cui, Meng
2012-01-01
We present a newly adapted Brownian-Dynamics (BD)-based protein docking method for predicting native protein complexes. The approach includes global BD conformational sampling, compact complex selection, and local energy minimization. In order to reduce the computational costs for energy evaluations, a shell-based grid force field was developed to represent the receptor protein and solvation effects. The performance of this BD protein docking approach has been evaluated on a test set of 24 crystal protein complexes. Reproduction of experimental structures in the test set indicates the adequate conformational sampling and accurate scoring of this BD protein docking approach. Furthermore, we have developed an approach to account for the flexibility of proteins, which has been successfully applied to reproduce the experimental complex structure from the structure of two unbounded proteins. These results indicate that this adapted BD protein docking approach can be useful for the prediction of protein-protein interactions.
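For context, a single Brownian dynamics translation step of the kind underlying BD docking can be written as an Ermak-McCammon update: the position advances by a force-driven drift term plus a random diffusive displacement. The diffusion constant, force field and step size below are illustrative placeholders, not parameters from the cited method.

```python
# Hedged sketch: one Brownian-dynamics (Ermak-McCammon) translation step, repeated.
import numpy as np

kT = 2.479          # kJ/mol at ~298 K
D = 0.1             # translational diffusion coefficient, nm^2/ns (assumed)
dt = 0.01           # time step, ns (assumed)
rng = np.random.default_rng(4)

def bd_step(position, force):
    """One BD step: drift (D/kT)*F*dt plus Gaussian noise with variance 2*D*dt per axis."""
    drift = (D / kT) * force * dt
    noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=3)
    return position + drift + noise

pos = np.array([5.0, 0.0, 0.0])                  # nm, ligand position relative to the receptor
force = np.array([-1.0, 0.0, 0.0])               # kJ/(mol*nm), mock electrostatic steering force
for _ in range(1000):
    pos = bd_step(pos, force)
print("ligand position after 10 ns:", pos)
```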
Variogram-based feature extraction for neural network recognition of logos
NASA Astrophysics Data System (ADS)
Pham, Tuan D.
2003-03-01
This paper presents a new approach for extracting spatial features of images based on the theory of regionalized variables. These features can be effectively used for automatic recognition of logo images using neural networks. Experimental results on a public-domain logo database show the effectiveness of the proposed approach.
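A minimal sketch of the underlying idea, assuming the standard empirical (semi)variogram gamma(h) = 0.5 * E[(Z(x+h) - Z(x))^2] as the regionalized-variables statistic; the image, lags and feature layout below are illustrative and not taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's code): empirical variogram
# values of a grey-level image at several lags, usable as a spatial feature
# vector for a downstream classifier such as a neural network.
import numpy as np

def directional_variogram(img, lags, axis=0):
    """gamma(h) = 0.5 * mean((Z(x+h) - Z(x))^2) along one image axis."""
    img = np.asarray(img, dtype=float)
    gammas = []
    for h in lags:
        a = np.take(img, np.arange(img.shape[axis] - h), axis=axis)
        b = np.take(img, np.arange(h, img.shape[axis]), axis=axis)
        gammas.append(0.5 * np.mean((a - b) ** 2))
    return np.array(gammas)

def variogram_features(img, lags=(1, 2, 4, 8)):
    # Concatenate row-wise and column-wise variograms into one feature vector.
    return np.concatenate([directional_variogram(img, lags, axis=0),
                           directional_variogram(img, lags, axis=1)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logo = rng.random((64, 64))          # stand-in for a binarised logo image
    print(variogram_features(logo))
```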
ERIC Educational Resources Information Center
Chen, Ching-Huei; Chen, Chia-Ying
2012-01-01
This study examined the effects of an inquiry-based learning (IBL) approach compared to that of a problem-based learning (PBL) approach on learner performance, attitude toward science and inquiry ability. Ninety-six students from three 7th-grade classes at a public school were randomly assigned to two experimental groups and one control group. All…
Live stop-controlled intersection data collection.
DOT National Transportation Integrated Search
2007-01-01
This report describes an experimental investigation performed at live intersections to gather infrastructure-based naturalistic driver approach behavior data. This data was collected and analyzed with the goal of understanding how drivers approach in...
Ahn, Si-Nae; Yoo, Eun-Young; Jung, Min-Ye; Park, Hae-Yean; Lee, Ji-Yeon; Choi, Yoo-Im
2017-01-01
The Cognitive Orientation to daily Occupational Performance (CO-OP) approach is based on cognitive strategy use in occupational therapy. The aim was to investigate the effects of the CO-OP approach on occupational performance in individuals with hemiparetic stroke. This study was designed as a 5-week, randomized, single-blind trial. Forty-three participants with a diagnosis of first stroke were enrolled in this study. The participants were randomly assigned to the experimental group (n = 20) or the control group (n = 23). The experimental group received the CO-OP approach, while the control group received conventional occupational therapy based on occupational performance components. Outcomes were measured with the Canadian Occupational Performance Measure (COPM) and the Performance Quality Rating Scale (PQRS) at baseline and post-intervention. After training, the COPM and PQRS scores for the trained task were significantly higher in the experimental group than in the control group. In addition, the COPM and PQRS scores for the non-trained task were significantly higher in the experimental group than in the control group. This study suggests that the CO-OP approach has beneficial effects on occupational performance in individuals with hemiparetic stroke, as well as positive effects on the generalization and transfer of acquired skills.
Progressive collapse of a two-story reinforced concrete frame with embedded smart aggregates
NASA Astrophysics Data System (ADS)
Laskar, Arghadeep; Gu, Haichang; Mo, Y. L.; Song, Gangbing
2009-07-01
This paper reports the experimental and analytical results of a two-story reinforced concrete frame instrumented with innovative piezoceramic-based smart aggregates (SAs) and subjected to a monotonic lateral load up to failure. A finite element model of the frame is developed and analyzed using a computer program called Open system for earthquake engineering simulation (OpenSees). The finite element analysis (FEA) is used to predict the load-deformation curve as well as the development of plastic hinges in the frame. The load-deformation curve predicted from FEA matched well with the experimental results. The sequence of development of plastic hinges in the frame is also studied from the FEA results. The locations of the plastic hinges, as obtained from the analysis, were similar to those observed during the experiment. An SA-based approach is also proposed to evaluate the health status of the concrete frame and identify the development of plastic hinges during the loading procedure. The results of the FEA are used to validate the SA-based approach for detecting the locations and occurrence of the plastic hinges leading to the progressive collapse of the frame. The locations and sequential development of the plastic hinges obtained from the SA-based approach correspond well with the FEA results. The proposed SA-based approach, thus validated using FEA and experimental results, has great potential to be applied in the health monitoring of large-scale civil infrastructures.
Formulation of an experimental substructure model using a Craig-Bampton based transmission simulator
NASA Astrophysics Data System (ADS)
Kammer, Daniel C.; Allen, Mathew S.; Mayes, Randy L.
2015-12-01
Experimental-analytical substructuring is attractive when there is motivation to replace one or more system subcomponents with an experimental model. This experimentally derived substructure can then be coupled to finite element models of the rest of the structure to predict the system response. The transmission simulator method couples a fixture to the component of interest during a vibration test in order to improve the experimental model for the component. The transmission simulator is then subtracted from the tested system to produce the experimental component. The method reduces ill-conditioning by imposing a least squares fit of constraints between substructure modal coordinates to connect substructures, instead of directly connecting physical interface degrees of freedom. This paper presents an alternative means of deriving the experimental substructure model, in which a Craig-Bampton representation of the transmission simulator is created and subtracted from the experimental measurements. The corresponding modal basis of the transmission simulator is described by the fixed-interface modes, rather than free modes that were used in the original approach. These modes do a better job of representing the shape of the transmission simulator as it responds within the experimental system, leading to more accurate results using fewer modes. The new approach is demonstrated using a simple example based on a finite element model with a redundant interface.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets show that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets show that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
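The following toy sketch illustrates the general MAP/MBIR idea on a generic linear forward model with a quadratic smoothness prior; it is not the paper's algorithm, which uses a physics-based TEM image-formation model and a more elaborate prior.

```python
# Minimal MAP-style iterative reconstruction sketch (illustrative only):
# minimize ||A x - y||^2 + lam * ||D x||^2 by gradient descent, where A is a
# linear forward model and D a finite-difference smoothness prior.
import numpy as np

def map_reconstruct(A, y, lam=0.1, step=1e-3, n_iter=2000):
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)        # first-difference operator as a simple prior
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
        x -= step * grad                   # gradient step on the MAP cost function
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 80))          # under-determined projection operator
    x_true = np.sin(np.linspace(0, 3 * np.pi, 80))
    y = A @ x_true + 0.01 * rng.normal(size=40)
    x_hat = map_reconstruct(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```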
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...
2017-07-03
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets show that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
On the Integration of Remote Experimentation into Undergraduate Laboratories--Pedagogical Approach
ERIC Educational Resources Information Center
Esche, Sven K.
2005-01-01
This paper presents an Internet-based open approach to laboratory instruction. In this article, the author talks about an open laboratory approach using a multi-user multi-device remote facility. This approach involves both the direct contact with the computer-controlled laboratory setup of interest with the students present in the laboratory…
Savaşan, Ayşegül; Çam, Olcay
2017-06-01
People with alcohol dependency have lower self-esteem than controls, and when their alcohol use increases, their self-esteem decreases. Coping skills in alcohol-related issues are predicted to reduce vulnerability to relapse. It is important to adapt care to individual needs so as to prevent a return to the cycle of alcohol use. The Tidal Model focuses on providing support and services to people who need to live a constructive life. The aim of the randomized study was to determine the effect of the psychiatric nursing approach based on the Tidal Model on coping and self-esteem in people with alcohol dependency. The study was semi-experimental in design with a control group, and was conducted on 36 individuals (18 experimental, 18 control). An experimental and a control group were formed by assigning persons to each group using the stratified randomization technique in the order in which they were admitted to hospital. The Coping Inventory (COPE) and the Coopersmith Self-Esteem Inventory (CSEI) were used as measurement instruments. The measurement instruments were applied before the intervention and three months after it. In addition to routine treatment and follow-up, the psychiatric nursing approach based on the Tidal Model was applied to the experimental group in the One-to-One Sessions. The psychiatric nursing approach based on the Tidal Model was effective in increasing scores for positive reinterpretation and growth, active coping, restraint, use of emotional social support, and planning, and in reducing scores for behavioral disengagement in people with alcohol dependency. It was seen that self-esteem rose, but the difference from the control group did not reach significance. The psychiatric nursing approach based on the Tidal Model helps people with alcohol dependency maintain their abstinence. The results of the study may provide theoretically based practices for improving coping behaviors and self-esteem and facilitating the recovery process of people with alcohol dependency, with implications for mental health nursing. Copyright © 2017 Elsevier Inc. All rights reserved.
The Task-Based Approach in Language Teaching
ERIC Educational Resources Information Center
Sánchez, Aquilino
2004-01-01
The Task-Based Approach (TBA) has gained popularity in the field of language teaching since the last decade of the 20th Century, and significant scholars have joined the discussion and increased the amount of analytical studies on the issue. Nevertheless, experimental research is scarce, and the tendency of some of the scholars is nowadays shifting…
ERIC Educational Resources Information Center
Zhang, Jianwei; Chen, Qi; Sun, Yanquing; Reid, David J.
2004-01-01
Learning support studies involving simulation-based scientific discovery learning have tended to adopt an ad hoc strategies-oriented approach in which the support strategies are typically pre-specified according to learners' difficulties in particular activities. This article proposes a more integrated approach, a triple scheme for learning…
Education Quality in Kazakhstan in the Context of Competence-Based Approach
ERIC Educational Resources Information Center
Nabi, Yskak; Zhaxylykova, Nuriya Ermuhametovna; Kenbaeva, Gulmira Kaparbaevna; Tolbayev, Abdikerim; Bekbaeva, Zeinep Nusipovna
2016-01-01
The background of this paper is to present how education system of Kazakhstan evolved during the last 24 years of independence, highlighting the contemporary transformational processes. We defined the aim to identify the education quality in the context of competence-based approach. Methods: Analysis of references, interviewing, experimental work.…
Åsberg, Dennis; Chutkowski, Marcin; Leśko, Marek; Samuelsson, Jörgen; Kaczmarski, Krzysztof; Fornstedt, Torgny
2017-01-06
Large pressure gradients are generated in ultra-high-pressure liquid chromatography (UHPLC) using sub-2 μm particles, causing significant temperature gradients over the column due to viscous heating. These pressure and temperature gradients affect retention and ultimately result in important selectivity shifts. In this study, we developed an approach for predicting the retention time shifts due to these gradients. The approach is presented as a step-by-step procedure and it is based on empirical linear relationships describing how retention varies as a function of temperature and pressure and how the average column temperature increases with the flow rate. It requires only four experiments on standard equipment, is based on straightforward calculations, and is therefore easy to use in method development. The approach was rigorously validated against experimental data obtained with a quality control method for the active pharmaceutical ingredient omeprazole. The accuracy of retention time predictions was very good, with relative errors always less than 1% and in many cases around 0.5% (n=32). Selectivity shifts observed between omeprazole and the related impurities when changing the flow rate could also be accurately predicted, resulting in good estimates of the resolution between critical peak pairs. The approximations on which the presented approach is based were all justified. The retention factor as a function of pressure and temperature was studied in an experimental design while the temperature distribution in the column was obtained by solving the fundamental heat and mass balance equations for the different experimental conditions. We strongly believe that this approach is sufficiently accurate and experimentally feasible for this separation to be a valuable tool when developing a UHPLC method. After further validation with other separation systems, it could become a useful approach in UHPLC method development, especially in the pharmaceutical industry where demands are high for robustness and regulatory oversight. Copyright © 2016 Elsevier B.V. All rights reserved.
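A hedged sketch of the kind of empirical linear retention relationship such an approach can rest on (the calibration numbers and the exact functional form are illustrative, not the authors' equations): fit ln k as a linear function of average column pressure and temperature from a few runs, then predict the retention factor under new conditions.

```python
# Hedged sketch: fit ln k = a + b*P + c*T from a handful of calibration runs,
# then predict the retention factor at the average pressure and temperature
# expected at a higher flow rate. All numbers are illustrative.
import numpy as np

# (average column pressure [bar], average column temperature [deg C], retention factor k)
runs = np.array([
    [400.0, 30.0, 5.10],
    [800.0, 30.0, 5.60],
    [400.0, 40.0, 4.35],
    [800.0, 40.0, 4.80],
])

P, T, k = runs[:, 0], runs[:, 1], runs[:, 2]
X = np.column_stack([np.ones_like(P), P, T])
coef, *_ = np.linalg.lstsq(X, np.log(k), rcond=None)

def predict_k(pressure, temperature):
    return float(np.exp(coef @ np.array([1.0, pressure, temperature])))

# Predicted retention factor for a hypothetical high-flow-rate condition where
# viscous heating raises the average column temperature.
print(predict_k(1000.0, 45.0))
```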
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balanin, A. L.; Boyarinov, V. F.; Glushkov, E. S.
The application of experimental information on measured axial distributions of fission reaction rates for the development of 3D numerical models of the ASTRA critical facility, taking into account the azimuthal asymmetry of the assembly simulating an HTGR with an annular core, is substantiated. Owing to the presence of the bottom reflector and the absence of the top reflector, the application of 2D models based on experimentally determined buckling is impossible for calculation of critical assemblies of the ASTRA facility; therefore, an alternative approach based on the application of the extrapolated assembly height is proposed. This approach is exemplified by the numerical analysis of experiments on measurement of the efficiency of control and protection system (CPS) rod mockups.
ERIC Educational Resources Information Center
Hsia, Lu-Ho; Huang, Iwen; Hwang, Gwo-Jen
2016-01-01
In this paper, a web-based peer-assessment approach is proposed for conducting performing arts activities. A peer-assessment system was implemented and applied to a junior high school performing arts course to evaluate the effectiveness of the proposed approach. A total of 163 junior high students were assigned to an experimental group and a…
ERIC Educational Resources Information Center
Halupa, Colleen M.; Caldwell, Benjamin W.
2015-01-01
This quasi-experimental research study evaluated two intact undergraduate engineering statics classes at a private university in Texas. Students in the control group received traditional lecture, readings and homework assignments. Those in the experimental group also were given access to a complete set of online video lectures and videos…
Experimental and AI-based numerical modeling of contaminant transport in porous media
NASA Astrophysics Data System (ADS)
Nourani, Vahid; Mousavi, Shahram; Sadikoglu, Fahreddin; Singh, Vijay P.
2017-10-01
This study developed a new hybrid artificial intelligence (AI)-meshless approach for modeling contaminant transport in porous media. The key innovation of the proposed approach is that both black box and physically-based models are combined for modeling contaminant transport. The effectiveness of the approach was evaluated using experimental and real world data. Artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) were calibrated to predict temporal contaminant concentrations (CCs), and the effect of noisy and de-noised data on the model performance was evaluated. Then, considering the predicted CCs at test points (TPs, in experimental study) and piezometers (in Myandoab plain) as interior conditions, the multiquadric radial basis function (MQ-RBF), as a meshless approach which solves the partial differential equation (PDE) of contaminant transport in porous media, was employed to estimate the CC values at any point within the study area where there was no TP or piezometer. Optimal values of the dispersion coefficient in the advection-dispersion PDE and the shape coefficient of MQ-RBF were determined using the imperialist competitive algorithm. In temporal contaminant transport modeling, de-noised data enhanced the performance of ANN and ANFIS methods in terms of the determination coefficient, up to 6 and 5%, respectively, in the experimental study and up to 39 and 18%, respectively, in the field study. Results showed that the efficiency of the ANFIS-meshless model exceeded that of the ANN-meshless model by up to 2 and 13% in the experimental and field studies, respectively.
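A minimal sketch of multiquadric RBF interpolation, the meshless building block mentioned above; the point locations, concentrations and shape parameter are synthetic stand-ins, and the full method additionally enforces the advection-dispersion PDE.

```python
# Minimal multiquadric RBF (MQ-RBF) interpolation sketch (illustrative): given
# contaminant concentrations known at a few test points, estimate the
# concentration anywhere in the domain. The shape parameter c plays the role of
# the shape coefficient optimized in the study.
import numpy as np

def mq_kernel(x, centers, c=0.5):
    # phi(r) = sqrt(r^2 + c^2), evaluated between every x and every center
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.sqrt(d ** 2 + c ** 2)

def mq_fit(points, values, c=0.5):
    return np.linalg.solve(mq_kernel(points, points, c), values)

def mq_eval(x_new, points, weights, c=0.5):
    return mq_kernel(x_new, points, c) @ weights

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.uniform(0, 10, size=(15, 2))                    # test-point / piezometer locations
    conc = np.exp(-0.1 * np.linalg.norm(pts - 5.0, axis=1))   # synthetic concentrations
    w = mq_fit(pts, conc)
    query = np.array([[5.0, 5.0], [1.0, 9.0]])
    print(mq_eval(query, pts, w))
```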
Using fuzzy rule-based knowledge model for optimum plating conditions search
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to plating process modeling aimed at reducing the thickness non-uniformity of the deposited coating. However, these approaches do not take into account the experience, knowledge, and intuition of the decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal electroplating conditions is proposed; it uses a rule-based knowledge model and allows one to reduce the unevenness of the coating thickness across the product. Block diagrams of a conventional control system for a galvanic process, as well as of the system based on the production knowledge model, are considered. It is shown that the fuzzy production knowledge model in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of agreement with the experimental data. The described experimental results confirm the theoretical conclusions.
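The toy sketch below shows a single-input Mamdani-style fuzzy rule evaluation with triangular membership functions and centroid defuzzification; the variables, rules and numbers are invented for illustration and are much simpler than the knowledge model described in the paper.

```python
# Minimal single-input fuzzy sketch (illustrative only): map the measured
# coating-thickness unevenness to a correction of the current density using
# triangular membership functions and centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def correction(unevenness_pct):
    u = np.linspace(-30.0, 0.0, 301)              # universe of current-density changes, %
    # Rule 1: IF unevenness is low  THEN reduce the current a little
    # Rule 2: IF unevenness is high THEN reduce the current a lot
    fire_low = tri(unevenness_pct, 0.0, 5.0, 15.0)
    fire_high = tri(unevenness_pct, 10.0, 25.0, 40.0)
    out_small = np.minimum(fire_low, tri(u, -10.0, -2.0, 0.0))
    out_large = np.minimum(fire_high, tri(u, -30.0, -20.0, -8.0))
    agg = np.maximum(out_small, out_large)        # aggregate the clipped rule outputs
    if agg.sum() == 0.0:
        return 0.0
    return float((u * agg).sum() / agg.sum())     # centroid defuzzification

print(correction(8.0), correction(22.0))
```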
Flexoelectricity in Nanostructures: Theory, Nanofabrication and Characterization
2017-09-13
public release; distribution is unlimited. Major Goals: The objective of this project is to investigate, theoretically and experimentally, the... experimental approach. Accomplishments: In this report, we investigated the thermal polarization effect where the temperature-dependent dielectric... through an analytical model, which was experimentally verified. Secondly, based on the existence of the converse flexoelectric effect in materials, BST
ERIC Educational Resources Information Center
Demircioglu, Hülya; Ayas, Alipasa; Demircioglu, Gökhan; Özmen, Haluk
2015-01-01
In this study, the effect of the context-based approach on pre-service primary school teachers' understanding of matter and its states and their attitude towards chemistry was investigated. Using a simple experimental design, the study was conducted with 35 pre-service primary school teachers who were exposed to context-based material with…
A microprocessor-based table lookup approach for magnetic bearing linearization
NASA Technical Reports Server (NTRS)
Groom, N. J.; Miller, J. B.
1981-01-01
An approach for producing a linear transfer characteristic between force command and force output of a magnetic bearing actuator without flux biasing is presented. The approach is microprocessor based and uses a table lookup to generate drive signals for the magnetic bearing power driver. An experimental test setup used to demonstrate the feasibility of the approach is described, and test results are presented. The test setup contains bearing elements similar to those used in a laboratory model annular momentum control device.
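A minimal sketch of the table-lookup idea, assuming a hypothetical quadratic force-current calibration for one electromagnet; inverting the monotonic table by interpolation makes the commanded force map approximately linearly to the delivered force.

```python
# Minimal table-lookup linearization sketch (illustrative; the flight hardware
# used a microprocessor and a measured calibration table).
import numpy as np

# Hypothetical calibration table: coil current [A] -> measured force [N].
currents = np.linspace(0.0, 2.0, 21)
forces = 12.0 * currents ** 2            # stand-in for bench calibration data

def current_for_force(force_cmd):
    """Look up (by interpolation) the current that produces the commanded force."""
    f = np.clip(force_cmd, forces[0], forces[-1])
    return float(np.interp(f, forces, currents))   # forces is monotonic, so this inverts the table

for f_cmd in (0.0, 6.0, 24.0, 48.0):
    i = current_for_force(f_cmd)
    print(f"F_cmd = {f_cmd:5.1f} N -> i = {i:.3f} A -> F_out = {12.0 * i**2:5.1f} N")
```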
A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication.
Yang, Ching-Han; Chang, Chin-Chun; Liang, Deron
2018-03-28
All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we proposed a novel Gaussian mixture model (GMM)-based method that can improve the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication, an equal error rate (EER) of 4.62% in the simulated environment and 7.86% in the real-traffic environment, confirm the feasibility of this approach.
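The sketch below illustrates the general GMM enrollment/scoring scheme with synthetic feature vectors and scikit-learn's GaussianMixture; the feature dimensions, component count and threshold are assumptions, not the authors' settings.

```python
# Illustrative sketch (not the authors' system): enroll a driver by fitting a
# GMM to smartwatch accelerometer/orientation feature vectors, then authenticate
# a new trip by thresholding the average log-likelihood under that model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Hypothetical 6-D features (3-axis accelerometer + 3-axis orientation) per window.
enroll = rng.normal(loc=0.0, scale=1.0, size=(500, 6))        # genuine driver, enrollment
genuine_trip = rng.normal(loc=0.0, scale=1.0, size=(100, 6))  # genuine driver, new trip
impostor_trip = rng.normal(loc=1.5, scale=1.2, size=(100, 6)) # different driver

model = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
model.fit(enroll)

threshold = -10.0   # would normally be tuned on validation data to a target EER
for name, trip in [("genuine", genuine_trip), ("impostor", impostor_trip)]:
    score = model.score(trip)          # mean per-sample log-likelihood
    print(name, round(score, 2), "accepted" if score > threshold else "rejected")
```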
Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián
2009-01-01
This paper is an experimental study of the utilization of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach it is the triggering of a specific event, and not the time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator in the time, the hybrid, and the event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each one of the hybrid and the pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are independently analyzed. The experimental study provides new proof concerning the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when an error-free control of the process is not a hard requirement. PMID:22399975
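A minimal send-on-delta sketch of one such event-based PI strategy; the tank model and all parameter values are illustrative, not those of the experimental study.

```python
# Minimal send-on-delta event-based PI sketch for tank level control: the level
# sensor transmits only when the level has changed by more than `delta` since
# the last transmission, and the PI controller recomputes the actuation only on
# those events. Plant and controller parameters are illustrative.
import numpy as np

dt, t_end = 0.1, 300.0                  # simulation step and horizon [s]
area, outflow_k = 1.0, 0.05             # tank cross-section [m^2], outflow coefficient
kp, ki = 0.6, 0.03                      # PI gains
setpoint, delta = 1.0, 0.02             # level set-point [m], send-on-delta threshold [m]

level, integral, u = 0.0, 0.0, 0.0
last_sent, t_last_event, events = float("inf"), 0.0, 0

for step in range(int(t_end / dt)):
    t = step * dt
    # Event condition evaluated at the sensor side.
    if abs(level - last_sent) > delta:
        error = setpoint - level
        integral += error * (t - t_last_event)                      # integrate over inter-event time
        u = float(np.clip(kp * error + ki * integral, 0.0, 0.2))    # inflow command [m^3/s]
        last_sent, t_last_event = level, t
        events += 1
    # Tank dynamics: d(level)/dt = (inflow - k * sqrt(level)) / area
    level += dt * (u - outflow_k * np.sqrt(max(level, 0.0))) / area

print(f"final level = {level:.3f} m after {events} sensor-to-controller transmissions")
```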
Okamoto, Takuma; Sakaguchi, Atsushi
2017-03-01
Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness has been compared with conventional acoustic energy difference maximization and presented by computer simulations. To demonstrate the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least-squares-based pressure matching, using an actually implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window more accurately controlled the bright and dark zones than the conventional methods.
Wu, Jinlu
2013-01-01
Laboratory education can play a vital role in developing a learner's autonomy and scientific inquiry skills. In an innovative, mutation-based learning (MBL) approach, students were instructed to redesign a teacher-designed standard experimental protocol by a "mutation" method in a molecular genetics laboratory course. Students could choose to delete, add, reverse, or replace certain steps of the standard protocol to explore questions of interest to them in a given experimental scenario. They wrote experimental proposals to address their rationales and hypotheses for the "mutations"; conducted experiments in parallel, according to both standard and mutated protocols; and then compared and analyzed results to write individual lab reports. Various autonomy-supportive measures were provided in the entire experimental process. Analyses of student work and feedback suggest that students using the MBL approach 1) spend more time discussing experiments, 2) use more scientific inquiry skills, and 3) find the increased autonomy afforded by MBL more enjoyable than do students following regimented instructions in a conventional "cookbook"-style laboratory. Furthermore, the MBL approach does not incur an obvious increase in labor and financial costs, which makes it feasible for easy adaptation and implementation in a large class.
Cutting the Wires: Modularization of Cellular Networks for Experimental Design
Lang, Moritz; Summers, Sean; Stelling, Jörg
2014-01-01
Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264
Design of nucleic acid sequences for DNA computing based on a thermodynamic approach
Tanaka, Fumiaki; Kameda, Atsushi; Yamamoto, Masahito; Ohuchi, Azuma
2005-01-01
We have developed an algorithm for designing multiple sequences of nucleic acids that have a uniform melting temperature between the sequence and its complement and that do not hybridize non-specifically with each other based on the minimum free energy (ΔGmin). Sequences that satisfy these constraints can be utilized in computations, various engineering applications such as microarrays, and nano-fabrications. Our algorithm is a random generate-and-test algorithm: it generates a candidate sequence randomly and tests whether the sequence satisfies the constraints. The novelty of our algorithm is that the filtering method uses a greedy search to calculate ΔGmin. This effectively excludes inappropriate sequences before ΔGmin is calculated, thereby reducing computation time drastically when compared with an algorithm without the filtering. Experimental results in silico showed the superiority of the greedy search over the traditional approach based on the hamming distance. In addition, experimental results in vitro demonstrated that the experimental free energy (ΔGexp) of 126 sequences correlated well with ΔGmin (|R| = 0.90) than with the hamming distance (|R| = 0.80). These results validate the rationality of a thermodynamic approach. We implemented our algorithm in a graphic user interface-based program written in Java. PMID:15701762
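A heavily simplified generate-and-test sketch in the spirit of the algorithm described above, with the Wallace-rule melting temperature and a longest-common-run check standing in for the minimum-free-energy (ΔGmin) calculation used by the authors.

```python
# Minimal random generate-and-test sketch (heavily simplified stand-in for the
# thermodynamic, free-energy-based filtering described in the paper).
import random

BASES = "ACGT"

def wallace_tm(seq):
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def longest_common_run(a, b):
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            best = max(best, k)
    return best

def generate_set(n_seqs=10, length=20, tm_target=60, tm_tol=2, max_run=7, seed=0):
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_seqs:
        cand = "".join(rng.choice(BASES) for _ in range(length))
        if abs(wallace_tm(cand) - tm_target) > tm_tol:
            continue                     # reject: melting temperature not uniform
        if longest_common_run(cand, revcomp(cand)) >= max_run:
            continue                     # reject: strongly self-complementary
        if any(longest_common_run(cand, revcomp(s)) >= max_run for s in accepted):
            continue                     # reject: could cross-hybridize with accepted sequences
        accepted.append(cand)
    return accepted

for s in generate_set():
    print(s, wallace_tm(s))
```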
A Problem-Based Approach to Elastic Wave Propagation: The Role of Constraints
ERIC Educational Resources Information Center
Fazio, Claudio; Guastella, Ivan; Tarantino, Giovanni
2009-01-01
A problem-based approach to the teaching of mechanical wave propagation, focused on observation and measurement of wave properties in solids and on modelling of these properties, is presented. In particular, some experimental results, originally aimed at measuring the propagation speed of sound waves in metallic rods, are used in order to deepen…
ERIC Educational Resources Information Center
Jenkins, Craig
2015-01-01
This paper is a comparative quantitative evaluation of an approach to teaching poetry in the subject domain of English that employs a "guided discovery" pedagogy using computer-based microworlds. It uses a quasi-experimental design in order to measure performance gains in computational thinking and poetic thinking following a…
ERIC Educational Resources Information Center
Lee, Chien-I; Yang, Ya-Fei; Mai, Shin-Yi
2016-01-01
Web-based peer assessment has been considered an important process for learning. However, students may not offer constructive feedback due to lack of expertise knowledge. Therefore, this study proposed a scaffolded assessment approach accordingly. To evaluate the effectiveness of the proposed approach, the quasi-experimental design was employed to…
Learning Biology through Innovative Curricula: A Comparison of Game- and Nongame-Based Approaches
ERIC Educational Resources Information Center
Sadler, Troy D.; Romine, William L.; Menon, Deepika; Ferdig, Richard E.; Annetta, Leonard
2015-01-01
This study explored student learning in the context of innovative biotechnology curricula and the effects of gaming as a central element of the learning experience. The quasi-experimentally designed study compared learning outcomes between two curricular approaches: One built around a computer-based game, and the other built around a narrative…
A Cue-Based Approach to the Acquisition of Grammatical Gender in Russian
ERIC Educational Resources Information Center
Rodina, Yulia; Westergaard, Marit
2012-01-01
This article discusses the acquisition of gender in Russian, focusing on some exceptional subclasses of nouns that display a mismatch between semantics and morphology. Experimental results from twenty-five Russian-speaking monolinguals (age 2 ; 6-4 ; 0) are presented and, within a cue-based approach to language acquisition, we argue that children…
Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng
2016-01-01
Motivation: Identifying drug–target interactions is an important task in drug discovery. To reduce heavy time and financial cost in experimental way, many computational approaches have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug–target interactions of new candidate drugs or targets. Methods: Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique in the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by nicely combining the advantages of the two different types of methods. That is, DrugE-Rank uses LTR, for which multiple well-known similarity-based methods can be used as components of ensemble learning. Results: The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. Availability: http://datamining-iip.fudan.edu.cn/service/DrugE-Rank Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307615
Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng
2016-06-15
Identifying drug-target interactions is an important task in drug discovery. To reduce heavy time and financial cost in experimental way, many computational approaches have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug-target interactions of new candidate drugs or targets. Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique in the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by nicely combining the advantages of the two different types of methods. That is, DrugE-Rank uses LTR, for which multiple well-known similarity-based methods can be used as components of ensemble learning. The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. http://datamining-iip.fudan.edu.cn/service/DrugE-Rank zhusf@fudan.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
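A rough sketch of the ensemble idea (synthetic data; a plain gradient-boosting classifier stands in for the learning-to-rank model, and the three similarity scores are hypothetical component predictors rather than the ones used in DrugE-Rank).

```python
# Rough sketch: scores produced by several similarity-based predictors become
# the feature vector of a candidate drug-target pair, and a supervised model
# ranks candidate targets for a new drug. Data and predictors are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n_pairs = 600

# Hypothetical component scores (e.g. chemical-similarity, sequence-similarity
# and network-based predictors); interacting pairs tend to score higher.
labels = rng.integers(0, 2, size=n_pairs)                  # 1 = known interaction
scores = rng.normal(loc=labels[:, None] * 0.8, scale=1.0, size=(n_pairs, 3))

model = GradientBoostingClassifier(random_state=0).fit(scores, labels)

# Rank five candidate targets for a new drug by the ensemble's predicted probability.
candidates = rng.normal(loc=0.3, scale=1.0, size=(5, 3))
ranking = np.argsort(-model.predict_proba(candidates)[:, 1])
print("candidate targets ranked:", ranking.tolist())
```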
Thirunathan, Praveena; Arnz, Patrik; Husny, Joeska; Gianfrancesco, Alessandro; Perdana, Jimmy
2018-03-01
Accurate description of moisture diffusivity is key to precisely understand and predict moisture transfer behaviour in a matrix. Unfortunately, measuring moisture diffusivity is not trivial, especially at low moisture values and/or elevated temperatures. This paper presents a novel experimental procedure to accurately measure moisture diffusivity based on a thermogravimetric approach. The procedure is capable of measuring diffusivity even at elevated temperatures (>70°C) and low moisture values (>1%). Diffusivity was extracted from experimental data based on the "regular regime approach". The approach was tailored to determine diffusivity from thin films and from poly-dispersed powdered samples. Subsequently, measured diffusivity was validated by comparing it to available literature data, showing good agreement. The ability of this approach to accurately measure diffusivity over a wider range of temperatures provides better insight into the temperature dependency of diffusivity. Thus, this approach can be crucial to ensuring good accuracy of moisture transfer description/prediction, especially when elevated temperatures are involved. Copyright © 2017 Elsevier Ltd. All rights reserved.
Laing, Nigel G
2008-01-01
Currently a multiplicity of experimental approaches to therapy for genetic muscle diseases is being investigated. These include replacement of the missing gene, manipulation of the gene message, repair of the mutation, upregulation of an alternative gene and pharmacological interventions targeting a number of systems. A number of these approaches are in current clinical trials. There is considerable anticipation that perhaps more than one of the approaches will finally prove of clinical benefit, but there are many voices of caution. No matter which approaches might ultimately prove effective, there is a consensus that for most benefit to the patients it will be necessary to start treatment as early as possible. A consensus is also developing that the only way to do this is to implement population-based newborn screening to identify affected children shortly after birth. Population-based newborn screening is currently practised in very few places in the world and it brings with it implications for prevention rather than cure of genetic muscle diseases.
Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G
2012-06-15
An integrated experimental and modeling approach for the design of high productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required in order to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the best design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (or approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.
Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar
2017-09-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
ERIC Educational Resources Information Center
Ilter, Ilhan
2014-01-01
In this research, an experimental study was carried out in social studies 4th grade students to develop students' conceptual achievement and motivation to succeed academically. The study aims to investigate the effectiveness of project-based learning (PBL) in social studies. A quasi-experimental research design (pre- and posttest) was used in the…
Cho, Kwang-Hyun; Choo, Sang-Mok; Wellstead, Peter; Wolkenhauer, Olaf
2005-08-15
We propose a unified framework for the identification of functional interaction structures of biomolecular networks in a way that leads to a new experimental design procedure. In developing our approach, we have built upon previous work. Thus we begin by pointing out some of the restrictions associated with existing structure identification methods and point out how these restrictions may be eased. In particular, existing methods use specific forms of experimental algebraic equations with which to identify the functional interaction structure of a biomolecular network. In our work, we employ an extended form of these experimental algebraic equations which, while retaining their merits, also overcome some of their disadvantages. Experimental data are required in order to estimate the coefficients of the experimental algebraic equation set associated with the structure identification task. However, experimentalists are rarely provided with guidance on which parameters to perturb, and to what extent, to perturb them. When a model of network dynamics is required then there is also the vexed question of sample rate and sample time selection to be resolved. Supplying some answers to these questions is the main motivation of this paper. The approach is based on stationary and/or temporal data obtained from parameter perturbations, and unifies the previous approaches of Kholodenko et al. (PNAS 99 (2002) 12841-12846) and Sontag et al. (Bioinformatics 20 (2004) 1877-1886). By way of demonstration, we apply our unified approach to a network model which cannot be properly identified by existing methods. Finally, we propose an experiment design methodology, which is not limited by the amount of parameter perturbations, and illustrate its use with an in numero example.
Experimental College Physics Course Based on Ausubel's Learning Theory.
ERIC Educational Resources Information Center
Moreira, Marco Antonio
1978-01-01
Compares the Ausubelian approach and the traditional one to the content organization of an introductory course in electromagnetism. States the differences between these approaches in terms of the student's ability to apply, relate, and differentiate electromagnetic concepts. (GA)
Persson, Ann-Sofie; Alderborn, Göran
2018-04-01
The objective was to present a hybrid approach to predict the strength-pressure relationship (SPR) of tablets using common compression parameters and a single measurement of tablet tensile strength. Experimental SPR were derived for six pharmaceutical powders with brittle and ductile properties and compared to predicted SPR based on a three-stage approach. The prediction was based on the Kawakita b⁻¹ parameter and the in-die Heckel yield stress, an estimate of maximal tensile strength, and a parameter proportionality factor α. Three values of α were used to investigate the influence of the parameter on the SPR. The experimental SPR could satisfactorily be described by the three-stage model; however, for sodium bicarbonate the tensile strength plateau could not be observed experimentally. The shape of the predicted SPR was to a minor extent influenced by the Kawakita b⁻¹ but the width of the linear region was highly influenced by α. An increased α increased the width of the linear region and thus also the maximal predicted tablet tensile strength. Furthermore, the correspondence between experimental and predicted SPR was influenced by the α value and satisfactory predictions were in general obtained for α = 4.1, indicating the predictive potential of the hybrid approach. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
2012-07-01
developed a microscope-based, offset Helmholtz coil system with a custom-designed microcontroller. We have developed a microfabrication approach for... implemented an experimental model system using ferromagnetic beads. We have applied direct and frequency-based magnetic fields for controlling magnetotactic... fields. Expanded Accomplishments: We have developed a microscope-based, offset Helmholtz coil system with a custom-designed microcontroller. To be
Hypnosis, behavioral theory, and smoking cessation.
Covino, N A; Bottari, M
2001-04-01
Although nicotine replacement and other pharmacological treatments head the list of popular interventions for smoking cessation, approaches based on psychology can also assist smokers. Hypnosis, suggestion, and behavior therapies have been offered to patients and studied experimentally for several decades. Although no single psychological approach has been found to be superior to others, psychological interventions contribute significantly to successful treatment outcome in smoking cessation. This article describes common hypnotic and behavioral approaches to smoking cessation and critically reviews some of the findings from clinical and experimental research studies. The authors also offer suggestions regarding treatment and future research.
Hydrodynamic cavitation: from theory towards a new experimental approach
NASA Astrophysics Data System (ADS)
Lucia, Umberto; Gervino, Gianpiero
2009-09-01
Hydrodynamic cavitation is analysed by a global thermodynamics principle following an approach based on the maximum irreversible entropy variation that has already given promising results for open systems and has been successfully applied in specific engineering problems. In this paper, we present a new phenomenological method to evaluate the conditions inducing cavitation. We think this method could be useful in the design of turbomachinery and related technologies: it represents both an original physical approach to cavitation and an economic saving in planning, because the theoretical analysis could allow engineers to reduce the experimental tests and the costs of the design process.
An integrated approach to model strain localization bands in magnesium alloys
NASA Astrophysics Data System (ADS)
Baxevanakis, K. P.; Mo, C.; Cabal, M.; Kontsos, A.
2018-02-01
Strain localization bands (SLBs) that appear at early stages of deformation of magnesium alloys have been recently associated with heterogeneous activation of deformation twinning. Experimental evidence has demonstrated that such "Lüders-type" band formations dominate the overall mechanical behavior of these alloys resulting in sigmoidal type stress-strain curves with a distinct plateau followed by pronounced anisotropic hardening. To evaluate the role of SLB formation on the local and global mechanical behavior of magnesium alloys, an integrated experimental/computational approach is presented. The computational part is developed based on custom subroutines implemented in a finite element method that combine a plasticity model with a stiffness degradation approach. Specific inputs from the characterization and testing measurements to the computational approach are discussed while the numerical results are validated against such available experimental information, confirming the existence of load drops and the intensification of strain accumulation at the time of SLB initiation.
Validation of the thermal challenge problem using Bayesian Belief Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFarland, John; Swiler, Laura Painton
The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and model uncertainties. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Monte Carlo Markov Chain sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
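As a toy illustration of the Bayes' factor used as the validation metric, the sketch below computes marginal likelihoods for two candidate models of a one-dimensional response by grid integration rather than the MCMC sampling of the BBN; the data and priors are invented.

```python
# Toy Bayes-factor sketch: the marginal likelihood of the data under each
# candidate model is computed by integrating likelihood x prior over a grid,
# and their ratio is the Bayes factor. All values are synthetic.
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(loc=0.4, scale=1.0, size=20)     # "experimental" measurements

def marginal_likelihood(data, prior_mean, prior_sd):
    grid = np.linspace(-5.0, 5.0, 2001)
    dg = grid[1] - grid[0]
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2) / (prior_sd * np.sqrt(2 * np.pi))
    loglike = np.array([np.sum(-0.5 * (data - m) ** 2 - 0.5 * np.log(2 * np.pi)) for m in grid])
    return float(np.sum(np.exp(loglike) * prior) * dg)

# Model 1: the computational model predicts a mean response of 0 (tight prior).
# Model 2: a diffuse alternative allowing any mean response.
m1 = marginal_likelihood(data, prior_mean=0.0, prior_sd=0.1)
m2 = marginal_likelihood(data, prior_mean=0.0, prior_sd=2.0)
print("Bayes factor (model 1 vs model 2):", m1 / m2)
```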
NASA Astrophysics Data System (ADS)
Guy, N.; Seyedi, D. M.; Hild, F.
2018-06-01
The work presented herein aims at characterizing and modeling fracturing (i.e., initiation and propagation of cracks) in a clay-rich rock. The analysis is based on two experimental campaigns. The first one relies on a probabilistic analysis of crack initiation considering Brazilian and three-point flexural tests. The second one involves digital image correlation to characterize crack propagation. A nonlocal damage model based on stress regularization is used for the simulations. Two thresholds, both based on regularized stress fields, are considered. They are determined from the experimental campaigns performed on Lower Watrous rock. The results obtained with the proposed approach compare favorably with the experimental results.
Adaptive identification of vessel's added moments of inertia with program motion
NASA Astrophysics Data System (ADS)
Alyshev, A. S.; Melnikov, V. G.
2018-05-01
In this paper, we propose a new experimental method for determining the moments of inertia of the ship model. The paper gives a brief review of existing methods, a description of the proposed method and experimental stand, test procedures and calculation formulas and experimental results. The proposed method is based on the energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reactive wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.
Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea
2016-08-11
Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
ERIC Educational Resources Information Center
Yeldham, Michael
2016-01-01
This quasi-experimental study compared a strategies approach to second language listening instruction with an interactive approach, one combining a roughly equal balance of strategies and bottom-up skills. The participants were lower-intermediate-level Taiwanese university EFL learners, who were taught for 22 hours over one and a half semesters.…
Angular velocity estimation from measurement vectors of star tracker.
Liu, Hai-bo; Yang, Jun-cai; Yi, Wen-jun; Wang, Jiong-qi; Yang, Jian-kun; Li, Xiu-jian; Tan, Ji-chun
2012-06-01
In most spacecraft, there is a need to know the craft's angular rate. Approaches with least squares and an adaptive Kalman filter are proposed for estimating the angular rate directly from the star tracker measurements. In these approaches, only knowledge of the vector measurements and sampling interval is required. The designed adaptive Kalman filter can filter out noise without information of the dynamic model and inertia dyadic. To verify the proposed estimation approaches, simulations based on the orbit data of the challenging minisatellite payload (CHAMP) satellite and experimental tests with night-sky observation are performed. Both the simulations and experimental testing results have demonstrated that the proposed approach performs well in terms of accuracy, robustness, and performance.
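A hedged sketch of the least-squares part of this idea (not the paper's full algorithm, which also includes an adaptive Kalman filter): since a star direction is fixed in the inertial frame, its body-frame derivative satisfies db/dt = b x omega, so finite-differenced unit vectors from two frames give an overdetermined linear system for omega.

```python
# Least-squares angular rate from star-tracker vector measurements (illustrative).
import numpy as np

def skew(v):
    """Matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_omega(b_prev, b_curr, dt):
    """b_prev, b_curr: (N, 3) unit vectors from two consecutive star-tracker frames."""
    A = np.vstack([skew(0.5 * (bp + bc)) for bp, bc in zip(b_prev, b_curr)])
    y = ((b_curr - b_prev) / dt).reshape(-1)
    omega, *_ = np.linalg.lstsq(A, y, rcond=None)
    return omega

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    omega_true = np.array([0.01, -0.02, 0.005])           # rad/s
    dt = 0.25                                             # sampling interval [s]
    b0 = rng.normal(size=(6, 3))
    b0 /= np.linalg.norm(b0, axis=1, keepdims=True)       # six tracked star directions
    # Propagate one step: db/dt = b x omega (small-angle integration)
    b1 = b0 + dt * np.cross(b0, omega_true)
    b1 /= np.linalg.norm(b1, axis=1, keepdims=True)
    print(estimate_omega(b0, b1, dt))
```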
An Information Retrieval Approach for Robust Prediction of Road Surface States.
Park, Jae-Hyung; Kim, Kwanho
2017-01-28
Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions and the provisioning of such information to drivers in advance have recently been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach estimates the current state of the road surface from similar, previously observed instances retrieved with a given similarity function. Next, the estimated state is calibrated using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with the conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
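An illustrative sketch of the two-stage scheme described above, retrieval of similar past radar signatures followed by temporal smoothing; the feature values, class labels and parameters are hypothetical.

```python
# Illustrative retrieval-plus-calibration sketch (not the authors' system).
import numpy as np
from collections import deque, Counter

class RoadStateEstimator:
    def __init__(self, k=5, window=7):
        self.k, self.window = k, window
        self.features, self.labels = [], []       # previously observed instances
        self.recent = deque(maxlen=window)        # recent raw estimates

    def add_instance(self, feature_vec, label):
        self.features.append(np.asarray(feature_vec, dtype=float))
        self.labels.append(label)

    def estimate(self, feature_vec):
        x = np.asarray(feature_vec, dtype=float)
        # Similarity function: (negative) Euclidean distance to stored instances.
        dists = np.linalg.norm(np.array(self.features) - x, axis=1)
        nearest = np.argsort(dists)[: self.k]
        raw = Counter(self.labels[i] for i in nearest).most_common(1)[0][0]
        # Calibration step: majority vote over the recent estimates (a simple
        # moving-average analogue for categorical states).
        self.recent.append(raw)
        return Counter(self.recent).most_common(1)[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    est = RoadStateEstimator()
    for _ in range(200):                          # enrolment of labelled radar signatures
        est.add_instance(rng.normal(0, 1, 4), "dry")
        est.add_instance(rng.normal(2, 1, 4), "wet")
        est.add_instance(rng.normal(4, 1, 4), "icy")
    print([est.estimate(rng.normal(2, 1, 4)) for _ in range(5)])
```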
Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.
Gapsys, Vytautas; de Groot, Bert L
2017-12-12
Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html .
Experimental and AI-based numerical modeling of contaminant transport in porous media.
Nourani, Vahid; Mousavi, Shahram; Sadikoglu, Fahreddin; Singh, Vijay P
2017-10-01
This study developed a new hybrid artificial intelligence (AI)-meshless approach for modeling contaminant transport in porous media. The key innovation of the proposed approach is that both black-box and physically-based models are combined for modeling contaminant transport. The effectiveness of the approach was evaluated using experimental and real-world data. Artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) were calibrated to predict temporal contaminant concentrations (CCs), and the effect of noisy and de-noised data on the model performance was evaluated. Then, considering the predicted CCs at test points (TPs, in the experimental study) and piezometers (in Myandoab plain) as interior conditions, the multiquadric radial basis function (MQ-RBF), as a meshless approach which solves the partial differential equation (PDE) of contaminant transport in porous media, was employed to estimate the CC values at any point within the study area where there was no TP or piezometer. Optimal values of the dispersion coefficient in the advection-dispersion PDE and shape coefficient of MQ-RBF were determined using the imperialist competitive algorithm. In temporal contaminant transport modeling, de-noised data enhanced the performance of ANN and ANFIS methods in terms of the determination coefficient, up to 6 and 5%, respectively, in the experimental study and up to 39 and 18%, respectively, in the field study. Results showed that the ANFIS-meshless model outperformed the ANN-meshless model by up to 2% and 13% in the experimental and field studies, respectively. Copyright © 2017. Published by Elsevier B.V.
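The meshless ingredient can be illustrated with plain multiquadric RBF interpolation of concentrations known at a few observation points. This is only a sketch: the study collocates the advection-dispersion PDE and tunes the shape coefficient with the imperialist competitive algorithm, whereas here the shape parameter is a fixed constant and the values are invented.

```python
import numpy as np

def mq_rbf_fit(points, values, c=1.0):
    """Fit multiquadric RBF weights, with basis phi(r) = sqrt(r^2 + c^2)."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.linalg.solve(np.sqrt(r**2 + c**2), values)

def mq_rbf_eval(points, weights, query, c=1.0):
    """Evaluate the fitted RBF surface at new locations."""
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return np.sqrt(r**2 + c**2) @ weights

# Concentrations known at test points / piezometers, estimated elsewhere.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cc = np.array([3.2, 2.1, 2.8, 1.5])                 # illustrative values only
w = mq_rbf_fit(pts, cc, c=0.5)
print(mq_rbf_eval(pts, w, np.array([[0.5, 0.5]]), c=0.5))
```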
ERIC Educational Resources Information Center
Uyanik, Gökhan
2016-01-01
The purpose of this study was to examine the effect of learning cycle approach-based teaching on academic achievement, attitude, motivation and retention in a primary school 4th grade science lesson. A pretest-posttest quasi-experimental design was used. The study was conducted on a total of 65 students studying in two different…
ERIC Educational Resources Information Center
Fazio, C.; Guastella, I.; Tarantino, G.
2007-01-01
In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it and on modelling of the experimental results by using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities have been built in the…
Loukova, Galina V; Milov, Alexey A; Vasiliev, Vladimir P; Minkin, Vladimir I
2016-07-21
For metal-based compounds, the ground- and excited-state dipole moments and the difference thereof are, for the first time, obtained both experimentally and theoretically using solvatochromic equations and DFT/B3LYP/QZVP calculations. The approach is suggested to be promising and easily accessible, and may be universally applicable for elucidating the electronic properties of metal-based compounds.
ERIC Educational Resources Information Center
Motohashi, Yutaka; Kaneko, Yoshihiro; Sasaki, Hisanaga
2007-01-01
A community-based intervention study for suicide prevention was conducted in six towns (total population 43,964) in Akita Prefecture of Japan according to a quasi-experimental design to reduce suicide rates in rural towns. Public awareness raising activities using a health promotion approach emphasizing the empowerment of residents and civic…
A Game-Based Learning Approach to Improving Students' Learning Achievements in a Nutrition Course
ERIC Educational Resources Information Center
Yien, Jui-Mei; Hung, Chun-Ming; Hwang, Gwo-Jen; Lin, Yueh-Chiao
2011-01-01
The aim of this study was to explore the influence of applying a game-based learning approach to nutrition education. The quasi-experimental nonequivalent-control group design was adopted in a four-week learning activity. The participants included sixty-six third graders in two classes of an elementary school. One of the classes was assigned to be…
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least squared error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
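A toy illustration of the random-walk Metropolis flavor of MCMC calibration described above, applied to a one-compartment elimination model rather than the bromate PBTK model; the prior, noise level, and data are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, t):
    """Toy 'internal dose' model: one-compartment exponential elimination."""
    return 10.0 * np.exp(-k * t)

t_obs = np.linspace(0.5, 8, 12)
y_obs = model(0.4, t_obs) + rng.normal(0, 0.3, t_obs.size)   # synthetic data

def log_post(k, sigma=0.3):
    """Log posterior = lognormal prior on k plus Gaussian likelihood."""
    if k <= 0:
        return -np.inf
    log_prior = -0.5 * ((np.log(k) - np.log(0.5)) / 0.5) ** 2
    log_lik = -0.5 * np.sum(((y_obs - model(k, t_obs)) / sigma) ** 2)
    return log_prior + log_lik

# Random-walk Metropolis: the posterior for k is built up sample by sample.
k, chain = 0.5, []
for _ in range(20000):
    prop = k + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)

post = np.array(chain[5000:])                 # discard burn-in
print(post.mean(), np.percentile(post, [2.5, 97.5]))
```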
Avazmohammadi, Reza; Li, David S; Leahy, Thomas; Shih, Elizabeth; Soares, João S; Gorman, Joseph H; Gorman, Robert C; Sacks, Michael S
2018-02-01
Knowledge of the complete three-dimensional (3D) mechanical behavior of soft tissues is essential in understanding their pathophysiology and in developing novel therapies. Despite significant progress made in experimentation and modeling, a complete approach for the full characterization of soft tissue 3D behavior remains elusive. A major challenge is the complex architecture of soft tissues, such as myocardium, which endows them with strongly anisotropic and heterogeneous mechanical properties. Available experimental approaches for quantifying the 3D mechanical behavior of myocardium are limited to preselected planar biaxial and 3D cuboidal shear tests. These approaches fall short in pursuing a model-driven approach that operates over the full kinematic space. To address these limitations, we took the following approach. First, based on a kinematical analysis and using a given strain energy density function (SEDF), we obtained an optimal set of displacement paths based on the full 3D deformation gradient tensor. We then applied this optimal set to obtain novel experimental data from a 1-cm cube of post-infarcted left ventricular myocardium. Next, we developed an inverse finite element (FE) simulation of the experimental configuration embedded in a parameter optimization scheme for estimation of the SEDF parameters. Notable features of this approach include: (i) enhanced determinability and predictive capability of the estimated parameters following an optimal design of experiments, (ii) accurate simulation of the experimental setup and transmural variation of local fiber directions in the FE environment, and (iii) application of all displacement paths to a single specimen to minimize testing time so that tissue viability could be maintained. Our results indicated that, in contrast to the common approach of conducting preselected tests and choosing an SEDF a posteriori, the optimal design of experiments, integrated with a chosen SEDF and full 3D kinematics, leads to a more robust characterization of the mechanical behavior of myocardium and higher predictive capabilities of the SEDF. The methodology proposed and demonstrated herein will ultimately provide a means to reliably predict tissue-level behaviors, thus facilitating organ-level simulations for efficient diagnosis and evaluation of potential treatments. While applied to myocardium, such developments are also applicable to characterization of other types of soft tissues.
Mridula, Meenu R; Nair, Ashalatha S; Kumar, K Satheesh
2018-02-01
In this paper, we compared the efficacy of observation based modeling approach using a genetic algorithm with the regular statistical analysis as an alternative methodology in plant research. Preliminary experimental data on in vitro rooting was taken for this study with an aim to understand the effect of charcoal and naphthalene acetic acid (NAA) on successful rooting and also to optimize the two variables for maximum result. Observation-based modelling, as well as traditional approach, could identify NAA as a critical factor in rooting of the plantlets under the experimental conditions employed. Symbolic regression analysis using the software deployed here optimised the treatments studied and was successful in identifying the complex non-linear interaction among the variables, with minimalistic preliminary data. The presence of charcoal in the culture medium has a significant impact on root generation by reducing basal callus mass formation. Such an approach is advantageous for establishing in vitro culture protocols as these models will have significant potential for saving time and expenditure in plant tissue culture laboratories, and it further reduces the need for specialised background.
A Novel Particle Swarm Optimization Approach for Grid Job Scheduling
NASA Astrophysics Data System (ADS)
Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith
This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed novel approach is more efficient than the PSO approach reported in the literature.
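A hedged sketch of a continuous-encoding PSO for this kind of scheduling problem: each particle holds one real value per job that is decoded to a machine index, and the fitness mixes makespan and flowtime. The inertia and acceleration constants, the decoding rule, and the objective weighting are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
n_jobs, n_machines = 20, 4
runtime = rng.uniform(1, 10, size=n_jobs)            # job lengths (illustrative)

def fitness(position):
    """Decode continuous positions to machine indices; combine makespan and flowtime."""
    assign = np.floor(position).astype(int) % n_machines
    loads = np.zeros(n_machines)
    flowtime = 0.0
    for j in np.argsort(runtime):                     # shortest-job-first per machine
        loads[assign[j]] += runtime[j]
        flowtime += loads[assign[j]]                  # completion time of job j
    return loads.max() + 0.1 * flowtime               # weights are arbitrary here

swarm = rng.uniform(0, n_machines, size=(30, n_jobs))
vel = np.zeros_like(swarm)
pbest = swarm.copy()
pbest_f = np.array([fitness(p) for p in swarm])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, *swarm.shape))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = np.clip(swarm + vel, 0, n_machines - 1e-9)
    f = np.array([fitness(p) for p in swarm])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = swarm[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best combined objective:", pbest_f.min())
```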
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the least number of experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled a rapid generation of a process model.
ERIC Educational Resources Information Center
Elshirbini Abdel-fattah Al-Ashrii, Ismail Ibrahim
2011-01-01
This study aimed to examine the effectiveness of using a suggested program based on integrating the direct and indirect approaches on developing Strategic Competence skills of EFL secondary school students. The study adopted the experimental design. One group was an experimental group (using the suggested program) and another group worked as the…
NASA Astrophysics Data System (ADS)
Zhu, Meng-Hua; Liu, Liang-Gang; You, Zhong; Xu, Ao-Ao
2009-03-01
In this paper, a heuristic approach based on Slavic's peak searching method has been employed to estimate the width of peak regions for background removal. Synthetic and experimental data are used to test this method. Using the peak regions estimated by the proposed method across the whole spectrum, we find it simple and effective enough to be used together with the Statistics-sensitive Nonlinear Iterative Peak-Clipping method.
An Evaluation of Material Properties Using EMA and FEM
NASA Astrophysics Data System (ADS)
Ďuriš, Rastislav; Labašová, Eva
2016-12-01
The main goal of the paper is the determination of material properties from experimentally measured natural frequencies. A combination of two approaches to structural dynamics testing was applied: the experimental measurements of natural frequencies were performed by Experimental Modal Analysis (EMA) and the numerical simulations were carried out by Finite Element Analysis (FEA). Optimization methods were used to determine the values of density and elasticity modulus of a specimen based on the experimental results.
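The inverse step can be sketched as a least-squares fit of a modulus to measured natural frequencies. For brevity the sketch uses the closed-form cantilever-beam formula instead of an FE model, assumes the density is known (frequencies alone cannot separate the modulus from density in this formula), and the measured values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Cantilever beam of rectangular cross-section (illustrative geometry).
L, b, h = 0.30, 0.02, 0.005                     # m
A, I = b * h, b * h**3 / 12.0
betaL = np.array([1.875, 4.694, 7.855])         # first three bending mode constants
rho = 7850.0                                    # assumed known density (kg/m^3)

def natural_freqs(E):
    """Closed-form bending natural frequencies of a cantilever (Hz)."""
    return (betaL / L) ** 2 * np.sqrt(E * I / (rho * A)) / (2 * np.pi)

f_measured = np.array([46.1, 289.0, 809.5])     # e.g. from EMA (illustrative)

def residual(x):
    return natural_freqs(x[0]) - f_measured

sol = least_squares(residual, x0=[100e9], bounds=(1e9, 500e9))
print("identified E = %.1f GPa" % (sol.x[0] / 1e9))
```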
Formulation of an experimental substructure model using a Craig-Bampton based transmission simulator
Kammer, Daniel C.; Allen, Matthew S.; Mayes, Randall L.
2015-09-26
An experimental–analytical substructuring approach is attractive when there is motivation to replace one or more system subcomponents with an experimental model. This experimentally derived substructure can then be coupled to finite element models of the rest of the structure to predict the system response. The transmission simulator method couples a fixture to the component of interest during a vibration test in order to improve the experimental model for the component. The transmission simulator is then subtracted from the tested system to produce the experimental component. This method reduces ill-conditioning by imposing a least squares fit of constraints between substructure modal coordinates to connect substructures, instead of directly connecting physical interface degrees of freedom. This paper presents an alternative means of deriving the experimental substructure model, in which a Craig–Bampton representation of the transmission simulator is created and subtracted from the experimental measurements. The corresponding modal basis of the transmission simulator is described by the fixed-interface modes, rather than the free modes that were used in the original approach. Moreover, these modes do a better job of representing the shape of the transmission simulator as it responds within the experimental system, leading to more accurate results using fewer modes. The new approach is demonstrated using a simple finite element model based example with a redundant interface.
An alternative approach based on artificial neural networks to study controlled drug release.
Reis, Marcus A A; Sinisterra, Rubén D; Belchior, Jadson C
2004-02-01
An alternative methodology based on artificial neural networks is proposed to be a complementary tool to other conventional methods to study controlled drug release. Two systems are used to test the approach; namely, hydrocortisone in a biodegradable matrix and rhodium (II) butyrate complexes in a bioceramic matrix. Two well-established mathematical models are used to simulate different release profiles as a function of fundamental properties; namely, diffusion coefficient (D), saturation solubility (C(s)), drug loading (A), and the height of the device (h). The models were tested, and the results show that these fundamental properties can be predicted after learning the experimental or model data for controlled drug release systems. The neural network results obtained after the learning stage can be considered to quantitatively predict ideal experimental conditions. Overall, the proposed methodology was shown to be efficient for ideal experiments, with a relative average error of <1% in both tests. This approach can be useful for the experimental analysis to simulate and design efficient controlled drug-release systems. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association
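A toy version of the idea: generate Higuchi-type release profiles for known property values, train a small neural network to map a sampled profile back to those properties, and query it with an unseen profile. The model form, parameter ranges, and use of scikit-learn's MLPRegressor are assumptions for illustration, not the authors' network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.linspace(0.5, 24, 20)                    # sampling times (h)

def release_profile(D, Cs, A=50.0):
    """Higuchi-type cumulative release per unit area: Q = sqrt(D*Cs*(2A - Cs)*t)."""
    return np.sqrt(D * Cs * (2 * A - Cs) * t)

# Training set over random (D, Cs) pairs in assumed ranges.
D_vals = rng.uniform(1e-6, 1e-5, 300)
Cs_vals = rng.uniform(1.0, 10.0, 300)
X = np.array([release_profile(D, Cs) for D, Cs in zip(D_vals, Cs_vals)])
y = np.column_stack([D_vals * 1e6, Cs_vals])    # rescale D for easier training

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, y)

# "Unknown" experiment: recover D and Cs from its measured profile.
profile = release_profile(5e-6, 4.0) + rng.normal(0, 0.01, t.size)
D_pred, Cs_pred = net.predict(profile.reshape(1, -1))[0]
print("D approx %.2e, Cs approx %.2f" % (D_pred * 1e-6, Cs_pred))
```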
Laboratory Based Case Studies: Closer to the Real World
ERIC Educational Resources Information Center
Dinan, Frank J.
2005-01-01
Case-based laboratories offer students the chance to approximate real science. Based on interesting stories that pose problems requiring experimental solutions, they avoid the cookbook approach characteristic of traditional undergraduate laboratory instruction. Instead, case-based laboratories challenge students to develop, as much as possible,…
Ferber, Julia; Schneider, Gudrun; Havlik, Linda; Heuft, Gereon; Friederichs, Hendrik; Schrewe, Franz-Bernhard; Schulz-Steinel, Andrea; Burgmer, Markus
2014-01-01
To improve the synergy of established methods of teaching, the Department of Psychosomatics and Psychotherapy, University Hospital Münster, developed a web-based e-learning tool using video clips of standardized patients. The effect of this blended-learning approach was evaluated. A multiple-choice test was performed by a naive (without the e-learning tool) and an experimental (with the tool) cohort of medical students to test the groups' expertise in psychosomatics. In addition, participants' satisfaction with the new tool was evaluated (numeric rating scale of 0-10). The experimental cohort was more satisfied with the curriculum and more interested in psychosomatics. Furthermore, the experimental cohort scored significantly better in the multiple-choice test. The new tool proved to be an important addition to the classical curriculum as a blended-learning approach which improves students' satisfaction and knowledge in psychosomatics.
Frequency domain surface EMG sensor fusion for estimating finger forces.
Potluri, Chandrasekhar; Kumar, Parmod; Anugolu, Madhavi; Urfer, Alex; Chiu, Steve; Naidu, D; Schoen, Marco P
2010-01-01
Extracting or estimating skeletal hand/finger forces using surface electromyographic (sEMG) signals poses many challenges due to cross-talk, noise, and temporally and spatially modulated signal characteristics. Normal sEMG measurements are based on single-sensor data. In this paper, array sensors are used along with a proposed sensor fusion scheme that results in a simple Multi-Input-Single-Output (MISO) transfer function. Experimental data are used along with system identification to find this MISO system. A Genetic Algorithm (GA) approach is employed to optimize the characteristics of the MISO system. The proposed fusion-based approach is tested experimentally and indicates improvement in finger/hand force estimation.
Rodriguez, Blanca; Carusi, Annamaria; Abi-Gerges, Najah; Ariga, Rina; Britton, Oliver; Bub, Gil; Bueno-Orovio, Alfonso; Burton, Rebecca A B; Carapella, Valentina; Cardone-Noott, Louie; Daniels, Matthew J; Davies, Mark R; Dutta, Sara; Ghetti, Andre; Grau, Vicente; Harmer, Stephen; Kopljar, Ivan; Lambiase, Pier; Lu, Hua Rong; Lyon, Aurore; Minchole, Ana; Muszkiewicz, Anna; Oster, Julien; Paci, Michelangelo; Passini, Elisa; Severi, Stefano; Taggart, Peter; Tinker, Andy; Valentin, Jean-Pierre; Varro, Andras; Wallman, Mikael; Zhou, Xin
2016-09-01
Both biomedical research and clinical practice rely on complex datasets for the physiological and genetic characterization of human hearts in health and disease. Given the complexity and variety of approaches and recordings, there is now growing recognition of the need to embed computational methods in cardiovascular medicine and science for analysis, integration and prediction. This paper describes a Workshop on Computational Cardiovascular Science that created an international, interdisciplinary and inter-sectorial forum to define the next steps for a human-based approach to disease supported by computational methodologies. The main ideas highlighted were (i) a shift towards human-based methodologies, spurred by advances in new in silico, in vivo, in vitro, and ex vivo techniques and the increasing acknowledgement of the limitations of animal models. (ii) Computational approaches complement, expand, bridge, and integrate in vitro, in vivo, and ex vivo experimental and clinical data and methods, and as such they are an integral part of human-based methodologies in pharmacology and medicine. (iii) The effective implementation of multi- and interdisciplinary approaches, teams, and training combining and integrating computational methods with experimental and clinical approaches across academia, industry, and healthcare settings is a priority. (iv) The human-based cross-disciplinary approach requires experts in specific methodologies and domains, who also have the capacity to communicate and collaborate across disciplines and cross-sector environments. (v) This new translational domain for human-based cardiology and pharmacology requires new partnerships supported financially and institutionally across sectors. Institutional, organizational, and social barriers must be identified, understood and overcome in each specific setting. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Cardiology.
Wu, Jinlu
2013-01-01
Laboratory education can play a vital role in developing a learner's autonomy and scientific inquiry skills. In an innovative, mutation-based learning (MBL) approach, students were instructed to redesign a teacher-designed standard experimental protocol by a “mutation” method in a molecular genetics laboratory course. Students could choose to delete, add, reverse, or replace certain steps of the standard protocol to explore questions of interest to them in a given experimental scenario. They wrote experimental proposals to address their rationales and hypotheses for the “mutations”; conducted experiments in parallel, according to both standard and mutated protocols; and then compared and analyzed results to write individual lab reports. Various autonomy-supportive measures were provided in the entire experimental process. Analyses of student work and feedback suggest that students using the MBL approach 1) spend more time discussing experiments, 2) use more scientific inquiry skills, and 3) find the increased autonomy afforded by MBL more enjoyable than do students following regimented instructions in a conventional “cookbook”-style laboratory. Furthermore, the MBL approach does not incur an obvious increase in labor and financial costs, which makes it feasible for easy adaptation and implementation in a large class. PMID:24006394
Cutting the wires: modularization of cellular networks for experimental design.
Lang, Moritz; Summers, Sean; Stelling, Jörg
2014-01-07
Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
A CNN based neurobiology inspired approach for retinal image quality assessment.
Mahapatra, Dwarikanath; Roy, Pallab K; Sedai, Suman; Garnavi, Rahil
2016-08-01
Retinal image quality assessment (IQA) algorithms use different hand-crafted features for training classifiers without considering the working of the human visual system (HVS) which plays an important role in IQA. We propose a convolutional neural network (CNN) based approach that determines image quality using the underlying principles behind the working of the HVS. CNNs provide a principled approach to feature learning and hence higher accuracy in decision making. Experimental results demonstrate the superior performance of our proposed algorithm over competing methods.
Endobiogeny: a global approach to systems biology (part 1 of 2).
Lapraz, Jean-Claude; Hedayat, Kamyar M
2013-01-01
Endobiogeny is a global systems approach to human biology that may offer an advancement in clinical medicine based in scientific principles of rigor and experimentation and the humanistic principles of individualization of care and alleviation of suffering with minimization of harm. Endobiogeny is neither a movement away from modern science nor an uncritical embracing of pre-rational methods of inquiry but a synthesis of quantitative and qualitative relationships reflected in a systems-approach to life and based on new mathematical paradigms of pattern recognition.
Translating standards into practice - one Semantic Web API for Gene Expression.
Deus, Helena F; Prud'hommeaux, Eric; Miller, Michael; Zhao, Jun; Malone, James; Adamusiak, Tomasz; McCusker, Jim; Das, Sudeshna; Rocca Serra, Philippe; Fox, Ronan; Marshall, M Scott
2012-08-01
Sharing and describing experimental results unambiguously with sufficient detail to enable replication of results is a fundamental tenet of scientific research. In today's cluttered world of "-omics" sciences, data standards and standardized use of terminologies and ontologies for biomedical informatics play an important role in reporting high-throughput experiment results in formats that can be interpreted by both researchers and analytical tools. Increasing adoption of Semantic Web and Linked Data technologies for the integration of heterogeneous and distributed health care and life sciences (HCLSs) datasets has made the reuse of standards even more pressing; dynamic semantic query federation can be used for integrative bioinformatics when ontologies and identifiers are reused across data instances. We present here a methodology to integrate the results and experimental context of three different representations of microarray-based transcriptomic experiments: the Gene Expression Atlas, the W3C BioRDF task force approach to reporting Provenance of Microarray Experiments, and the HSCI blood genomics project. Our approach does not attempt to improve the expressivity of existing standards for genomics but, instead, to enable integration of existing datasets published from microarray-based transcriptomic experiments. SPARQL Construct is used to create a posteriori mappings of concepts and properties and linking rules that match entities based on query constraints. We discuss how our integrative approach can encourage reuse of the Experimental Factor Ontology (EFO) and the Ontology for Biomedical Investigations (OBIs) for the reporting of experimental context and results of gene expression studies. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Savelsbergh, Elwin R.; Ferguson-Hessler, Monica G. M.; de Jong, Ton
An approach to teaching problem-solving based on using the computer software Mathematica is applied to the study of electrostatics and is compared with the normal approach to the module. Learning outcomes for both approaches were not significantly different. The experimental course successfully addressed a number of misconceptions. Students in the…
ERIC Educational Resources Information Center
Graeber, Mary
The typical approach to the teaching of an elementary school science methods course for undergraduate students was compared with an experimental approach based upon activities appearing in the Conceptually Oriented Program in Elementary Science (COPES) teacher's guides. The typical approach was characterized by a coverage of many topics and a…
Classification of cancerous cells based on the one-class problem approach
NASA Astrophysics Data System (ADS)
Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert
1996-03-01
One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. It is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. The results show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
EPR-based material modelling of soils
NASA Astrophysics Data System (ADS)
Faramarzi, Asaad; Alani, Amir M.
2013-04-01
In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced for modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models and it is shown that EPR-based models can provide a better prediction for the behaviour of soils. The main benefits of using EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within a unified environment of an EPR model) and do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified. As the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of EPR-based constitutive models is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, and therefore the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
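The flavor of EPR-style model building can be sketched by enumerating a few candidate power terms of strain, fitting coefficients by least squares, and keeping the best-scoring structure. A real EPR search evolves the term structure with a genetic algorithm; that search is replaced here by exhaustive enumeration on invented pseudo-triaxial data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
strain = np.linspace(0.001, 0.1, 50)
stress = 120 * strain / (1 + 8 * strain) + rng.normal(0, 0.05, strain.size)  # pseudo triaxial data

# Candidate basis terms built from powers of strain (EPR would evolve these).
powers = [0.5, 1.0, 1.5, 2.0]
best = None
for combo in itertools.combinations(powers, 2):
    X = np.column_stack([strain ** p for p in combo] + [np.ones_like(strain)])
    coef, *_ = np.linalg.lstsq(X, stress, rcond=None)
    sse = np.sum((X @ coef - stress) ** 2)
    if best is None or sse < best[0]:
        best = (sse, combo, coef)

sse, combo, coef = best
print("selected terms: strain^%s and strain^%s" % combo)
print("fitted coefficients:", np.round(coef, 3))
```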
New approach in the quantum statistical parton distribution
NASA Astrophysics Data System (ADS)
Sohaily, Sozha; Vaziri (Khamedi), Mohammad
2017-12-01
An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help to understand the structure of partons. The longitudinal portion of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determine the statistical variables exactly, without fitting and fixing parameters, is surveyed. Analytic expressions of the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. The agreement with experimental data provides robust confirmation of the simple statistical model presented.
The CTD2 Center at Emory has developed a new NanoLuc®-based protein-fragment complementation assay (NanoPCA) which allows the detection of novel protein-protein interactions (PPI). NanoPCA allows the study of PPI dynamics with reversible interactions.
ERIC Educational Resources Information Center
Jack, Gladys Uzezi
2017-01-01
This study investigated the effect of a learning cycle constructivist-based approach on secondary school students' academic achievement and their attitude towards chemistry. The design used was a pre-test, post-test non-randomized control group quasi-experimental research design. The design consisted of two instructional groups (learning cycle…
ERIC Educational Resources Information Center
Kalkan, Melek; Ersanli, Ercumend
2008-01-01
The aim of this study is to investigate the effects of the marriage enrichment program based on the cognitive-behavioral approach on levels of marital adjustment of individuals. The experimental and control group of this research was totally composed of 30 individuals. A pre-test post-test research model with control group was used in this…
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation is not adequately considered. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Moreover, current research on moving object detection from UAV aerial images mostly depends on either frame differencing or a segmentation approach alone. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only a specific area for the moving object rather than searching the whole frame. At each stage of the proposed scheme, the fusion of DMM and SUED extracts moving objects faithfully. Experimental results demonstrate the validity of the proposed methodology.
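A hedged sketch of pairing frame differencing with edge-guided dilation to obtain candidate moving-object masks. The thresholds, the Sobel edge map, and the dilation settings are assumptions standing in for the DMM search windows and the SUED procedure described above.

```python
import numpy as np
from scipy import ndimage

def moving_object_mask(prev, curr, diff_thresh=25, edge_thresh=30):
    """Frame difference, keep high-intensity-change pixels, then dilate
    along strong edges of the current frame to recover object extent."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    motion = diff > diff_thresh                      # motion-based pixel intensity test

    # Sobel gradient magnitude as a cheap edge map of the current frame.
    gx = ndimage.sobel(curr.astype(float), 0)
    gy = ndimage.sobel(curr.astype(float), 1)
    edges = np.hypot(gx, gy) > edge_thresh

    # Edge-constrained dilation: grow motion seeds, but only across edge pixels.
    grown = ndimage.binary_dilation(motion, iterations=3, mask=motion | edges)
    return ndimage.binary_fill_holes(grown)

rng = np.random.default_rng(5)
prev = rng.integers(0, 50, (120, 160)).astype(np.uint8)
curr = prev.copy()
curr[40:60, 70:100] += 120                            # a synthetic moving blob
labels, n = ndimage.label(moving_object_mask(prev, curr))
print("detected candidate objects:", n)
```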
Nekhay, Olexandr; Arriaza, Manuel; Boerboom, Luc
2009-07-01
The study presents an approach that combined objective information such as sampling or experimental data with subjective information such as expert opinions. This combined approach was based on the Analytic Network Process method. It was applied to evaluate soil erosion risk and overcomes one of the drawbacks of USLE/RUSLE soil erosion models, namely that they do not consider interactions among soil erosion factors. Another advantage of this method is that it can be used if there are insufficient experimental data. The lack of experimental data can be compensated for through the use of expert evaluations. As an example of the proposed approach, the risk of soil erosion was evaluated in olive groves in Southern Spain, showing the potential of the ANP method for modelling a complex physical process like soil erosion.
A Practical Engineering Approach to Predicting Fatigue Crack Growth in Riveted Lap Joints
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Piascik, Robert S.; Newman, James C., Jr.
1999-01-01
An extensive experimental database has been assembled from very detailed teardown examinations of fatigue cracks found in rivet holes of fuselage structural components. Based on this experimental database, a comprehensive analysis methodology was developed to predict the onset of widespread fatigue damage in lap joints of fuselage structure. Several computer codes were developed with specialized capabilities to conduct the various analyses that make up the comprehensive methodology. Over the past several years, the authors have interrogated various aspects of the analysis methods to determine the degree of computational rigor required to produce numerical predictions with acceptable engineering accuracy. This study led to the formulation of a practical engineering approach to predicting fatigue crack growth in riveted lap joints. This paper describes the practical engineering approach and compares predictions with the results from several experimental studies.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust in the nozzle section and at the nozzle lip of the Space Transportation Systems Engine (STME), the potential burning of the turbine exhaust in the base region has caused tremendous concern. Two conventional approaches have been considered for predicting the base environment: (1) the empirical approach, and (2) the experimental approach. The empirical approach uses a combination of data correlations and semi-theoretical calculations. It works best for linear problems, simple physics and geometry. However, it is highly suspect when complex geometry and flow physics are involved, especially when the subject is outside the historical database. The experimental approach is often used to establish a database for engineering analysis. However, it is qualitative at best for base flow problems. Other criticisms include the inability to simulate the forebody boundary layer correctly, the interference effect from tunnel walls, and the inability to scale all pertinent parameters. Furthermore, there is a contention that the information extrapolated from subscale tests with combustion is not conservative. One potential alternative to the conventional methods is computational fluid dynamics (CFD), which has none of the above restrictions and is becoming more feasible due to maturing algorithms and advancing computer technology. It provides more details of the flowfield and is only limited by computer resources. However, it has its share of criticisms as a predictive tool for the base environment. One major concern is that CFD has not been extensively tested for base flow problems. It is therefore imperative that CFD be assessed and benchmarked satisfactorily for base flows. In this study, the turbulent base flowfield of an experimental investigation of a four-engine clustered nozzle is numerically benchmarked using a pressure-based CFD method. Since cold air was the medium, accurate prediction of the base pressure distributions at high altitudes is the primary goal. Other factors which may influence the numerical results, such as the effects of grid density, turbulence model, differencing scheme, and boundary conditions, are also addressed. Preliminary results of the computed base pressure agreed reasonably well with the measurements. Basic base flow features such as the reverse jet, wall jet, recompression shock, and static pressure field in the plane of impingement have been captured.
Multi-objective experimental design for (13)C-based metabolic flux analysis.
Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel
2015-10-01
(13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture, however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $ 120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design should stimulate its application within the field of (13)C-based metabolic flux analysis. Copyright © 2015 Elsevier Inc. All rights reserved.
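The D-criterion screening mentioned above can be illustrated on a toy two-flux model: compute the Fisher information for each candidate tracer mixture from finite-difference sensitivities and keep the mixture with the largest determinant. The measurement model and noise level are invented; a real application would use the full isotopomer balance equations.

```python
import numpy as np

def measurement_model(fractions, fluxes):
    """Toy labeling model: predicted isotopomer measurements as a smooth
    function of the tracer mixture and two free flux parameters."""
    f12, fU = fractions                      # shares of 1,2-13C2 and U-13C glucose
    v1, v2 = fluxes
    return np.array([f12 * v1 + 0.2 * fU * v2,
                     fU * v1 * v2,
                     (1 - f12 - fU) * v1 + fU])

def d_criterion(fractions, fluxes=(1.0, 0.5), sigma=0.01, h=1e-6):
    """det of the Fisher information J^T J / sigma^2, with J a
    finite-difference sensitivity matrix of measurements w.r.t. fluxes."""
    J = np.empty((3, 2))
    base = measurement_model(fractions, fluxes)
    for i in range(2):
        d = np.zeros(2)
        d[i] = h
        J[:, i] = (measurement_model(fractions, np.add(fluxes, d)) - base) / h
    return np.linalg.det(J.T @ J / sigma**2)

# Screen candidate mixtures (fraction of 1,2-13C2 vs uniformly labeled glucose).
grid = [(a, b) for a in np.linspace(0, 1, 21)
               for b in np.linspace(0, 1, 21) if a + b <= 1]
best = max(grid, key=d_criterion)
print("most informative mixture (f_1,2-13C2, f_U-13C):", best)
```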
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
Laser-Based Trespassing Prediction in Restrictive Environments: A Linear Approach
Cheein, Fernando Auat; Scaglia, Gustavo
2012-01-01
Stationary range laser sensors for intruder monitoring, restricted space violation detection and workspace determination are extensively used in risky environments. In this work we present a linear approach for predicting the presence of moving agents before they trespass a laser-based restricted space. Our approach is based on a Taylor series expansion of the detected objects' movements, which makes our proposal suitable for embedded applications. In the experiments (carried out in different scenarios) presented herein, our proposal shows 100% effectiveness in predicting trespassing situations. Several implementation results and statistical analyses showing the performance of our proposal are included in this work.
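A minimal sketch of the Taylor-expansion idea: estimate velocity and acceleration from the last few laser fixes, extrapolate the position over a short horizon, and flag a trespass if the forecast lands inside the restricted region (simplified here to a rectangle). The second-order truncation and the finite-difference estimates are assumptions, not the authors' exact formulation.

```python
import numpy as np

def predict_position(track, dt_hist, horizon):
    """Second-order Taylor forecast x(t+h) = x + v*h + 0.5*a*h^2, with v and a
    estimated by finite differences from the last three detections."""
    p0, p1, p2 = np.asarray(track[-3:], dtype=float)
    v = (p2 - p1) / dt_hist
    a = (p2 - 2 * p1 + p0) / dt_hist**2
    return p2 + v * horizon + 0.5 * a * horizon**2

def will_trespass(track, dt_hist, horizon, zone):
    (xmin, ymin), (xmax, ymax) = zone              # restricted rectangle
    x, y = predict_position(track, dt_hist, horizon)
    return xmin <= x <= xmax and ymin <= y <= ymax

# Object approaching the restricted area at roughly constant velocity.
track = [(0.0, 0.0), (0.4, 0.1), (0.8, 0.2)]       # last three laser fixes (m)
print(will_trespass(track, dt_hist=0.1, horizon=0.5, zone=((2.0, 0.0), (4.0, 2.0))))
```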
Decentralized stormwater management is based on the dispersal of stormwater management practices (SWMP) throughout a watershed to manage stormwater runoff volume and potentially restore natural hydrologic processes. This approach to stormwater management is increasingly popular b...
NASA Astrophysics Data System (ADS)
Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.
2014-09-01
For a mathematical model built on physical measurements, it is possible to determine the influence of those measurements on the final solution and its accuracy. In classical approaches, however, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology called the Orthogonal Least Squares method. Previously published kinetics of the reforming process are divergent among themselves. To obtain the most probable values of the kinetic parameters and enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method incorporates all the experimental results into the mathematical model, which becomes overdetermined (internally contradictory), as the number of equations is greater than the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty associated with all the variables in the system. In this paper, the reaction rate was evaluated after its pre-determination by preliminary calculation based on the experimental results obtained over a nickel/yttria-stabilized zirconia catalyst.
ERIC Educational Resources Information Center
Sen, Ceylan; Sezen Vekli, Gülsah
2016-01-01
The aim of this study is to determine the influence of an inquiry-based teaching approach on pre-service science teachers' laboratory self-efficacy perceptions and scientific process skills. A quasi-experimental pre-test-post-test control group design was used in this research. The sample of this study included…
Artificial intelligence systems based on texture descriptors for vaccine development.
Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra
2011-02-01
The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine standard approaches based on amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.
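A hedged end-to-end sketch of the texture-descriptor idea: build a small numeric matrix from per-residue property values, extract a uniform local binary pattern histogram, and train an SVM. The property table and the synthetic labels are invented; they only show the pipeline, not the paper's encoding or datasets.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Illustrative per-residue property rows (a real encoding would use physico-chemical indices).
PROPS = {aa: rng.uniform(0, 1, 8) for aa in "ACDEFGHIKLMNPQRSTVWY"}

def peptide_matrix(seq):
    """Stack one property vector per residue -> a (len(seq) x 8) 'image'."""
    return (np.array([PROPS[aa] for aa in seq]) * 255).astype(np.uint8)

def lbp_histogram(seq, n_points=8, radius=1):
    lbp = local_binary_pattern(peptide_matrix(seq), n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return hist

# Synthetic binders vs non-binders, only to exercise the pipeline end to end.
alphabet = list("ACDEFGHIKLMNPQRSTVWY")
peptides = ["".join(rng.choice(alphabet, 9)) for _ in range(200)]
labels = rng.integers(0, 2, 200)

X = np.array([lbp_histogram(p) for p in peptides])
clf = SVC(kernel="rbf").fit(X[:150], labels[:150])
print("held-out accuracy (random labels, so near 0.5):", clf.score(X[150:], labels[150:]))
```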
Evaluation of Brazed Joints Using Failure Assessment Diagram
NASA Technical Reports Server (NTRS)
Flom, Yury
2012-01-01
A fitness-for-service approach was used to perform structural analysis of brazed joints consisting of several base metal / filler metal combinations. Failure Assessment Diagrams (FADs) based on tensile and shear stress ratios were constructed and experimentally validated. It was shown that such FADs can provide a conservative estimate of safe combinations of stresses in the brazed joints. Based on this approach, Margins of Safety (MS) of brazed joints subjected to multi-axial loading conditions can be evaluated.
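A minimal sketch of the stress-ratio idea behind such a diagram follows. The elliptical interaction envelope, the allowable stresses and the applied loads are assumed purely for illustration; the actual FAD envelopes in the NASA work were constructed and validated from test data.

```python
import numpy as np

def fad_assessment(sigma, tau, sigma_allow, tau_allow):
    """Locate a brazed-joint stress state on a tensile/shear failure assessment
    diagram and return a margin of safety.  An elliptical interaction envelope
    (R_t^2 + R_s^2 = 1) is assumed here purely for illustration."""
    R_t = sigma / sigma_allow          # tensile stress ratio
    R_s = tau / tau_allow              # shear stress ratio
    r = np.hypot(R_t, R_s)             # distance from the origin in ratio space
    margin = 1.0 / r - 1.0 if r > 0 else np.inf   # MS > 0 means inside the envelope
    return R_t, R_s, margin

# Hypothetical joint: 40 MPa tension and 25 MPa shear against 120 MPa / 70 MPa allowables.
print(fad_assessment(40.0, 25.0, 120.0, 70.0))
```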
Optical quantum memory based on electromagnetically induced transparency.
Ma, Lijun; Slattery, Oliver; Tang, Xiao
2017-04-01
Electromagnetically induced transparency (EIT) is a promising approach to implement quantum memory in quantum communication and quantum computing applications. In this paper, following a brief overview of the main approaches to quantum memory, we provide details of the physical principle and theory of quantum memory based specifically on EIT. We discuss the key technologies for implementing quantum memory based on EIT and review important milestones, from the first experimental demonstration to current applications in quantum information systems.
Knowledge-based fragment binding prediction.
Tang, Grace W; Altman, Russ B
2014-04-01
Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the bound ligand with 74% precision and 82% recall on average. For many protein targets, it identifies high-scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or as refinement criteria for creating target-specific compound libraries for experimental or computational screening.
Conceptual information processing: A robust approach to KBS-DBMS integration
NASA Technical Reports Server (NTRS)
Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond
1987-01-01
Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivate the investigation of information processing paradigms. One such paradigm is concept-based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.
Empirical predictions of hypervelocity impact damage to the space station
NASA Technical Reports Server (NTRS)
Rule, W. K.; Hayashida, K. B.
1991-01-01
A family of user-friendly, DOS PC-based Microsoft BASIC programs written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft is described. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple-style bumper and the pressure wall. Predictions are based on data sets of experimental results obtained from simulating debris impacts on spacecraft using light gas guns on Earth. A module of the program facilitates the creation of the database of experimental results that are used by the damage prediction modules of the code. The user has the choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall. One prediction module is based on fitting low-order polynomials through subsets of the experimental data. Another prediction module fits functions based on nondimensional parameters through the data. The last prediction technique is a unique approach that is based on weighting the experimental data according to the distance from the design point.
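Two of the three prediction strategies described (low-order polynomial fits and distance-weighted use of the test data) can be sketched generically as below. The impact data, variable names and query point are hypothetical, and the original programs were written in Microsoft BASIC rather than Python.

```python
import numpy as np

# Hypothetical light-gas-gun data: impact velocity (km/s) vs. pressure-wall crater depth (mm).
velocity = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
depth = np.array([0.8, 1.3, 2.1, 3.2, 4.6])

# Module 1: low-order polynomial fit through the experimental data.
poly = np.polynomial.Polynomial.fit(velocity, depth, deg=2)
print("polynomial prediction at 5.5 km/s:", poly(5.5))

# Module 3: weight the experimental data by distance from the design point.
def distance_weighted(x_query, x_data, y_data, power=2.0):
    d = np.abs(x_data - x_query)
    if np.any(d == 0):
        return y_data[d == 0][0]
    w = 1.0 / d**power
    return np.sum(w * y_data) / np.sum(w)

print("distance-weighted prediction at 5.5 km/s:", distance_weighted(5.5, velocity, depth))
```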
Experimental demonstration of a measurement-based realisation of a quantum channel
NASA Astrophysics Data System (ADS)
McCutcheon, W.; McMillan, A.; Rarity, J. G.; Tame, M. S.
2018-03-01
We introduce and experimentally demonstrate a method for realising a quantum channel using the measurement-based model. Using a photonic setup and modifying the basis of single-qubit measurements on a four-qubit entangled cluster state, representative channels are realised for the case of a single qubit in the form of amplitude and phase damping channels. The experimental results match the theoretical model well, demonstrating the successful performance of the channels. We also show how other types of quantum channels can be realised using our approach. This work highlights the potential of the measurement-based model for realising quantum channels which may serve as building blocks for simulations of realistic open quantum systems.
Ionospheric very low frequency transmitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, Spencer P.
2015-02-15
The theme of this paper is to establish a reliable ionospheric very low frequency (VLF) transmitter, which is also broadband. Two approaches are studied that generate VLF waves in the ionosphere. The first, classic approach employs a ground-based HF heater to directly modulate the high-latitude ionospheric, or auroral, electrojet. In the classic approach, the intensity-modulated HF heater induces an alternating current in the electrojet, which serves as a virtual antenna to transmit VLF waves. The spatial and temporal variations of the electrojet impact the reliability of the classic approach. The second, beat-wave approach also employs a ground-based HF heater; however, in this approach, the heater operates in a continuous wave mode at two HF frequencies separated by the desired VLF frequency. Theories for both approaches are formulated, calculations are performed with numerical model simulations, and the calculations are compared to experimental results. Theory for the classic approach shows that an HF heater wave, intensity-modulated at VLF, modulates the electron-temperature-dependent electrical conductivity of the ionospheric electrojet, which, in turn, induces an ac electrojet current. Thus, the electrojet becomes a virtual VLF antenna. The numerical results show that the radiation intensity of the modulated electrojet decreases with an increase in VLF radiation frequency. Theory for the beat-wave approach shows that the VLF radiation intensity depends upon the HF heater intensity rather than the electrojet strength, and yet this approach can also modulate the electrojet when present. HF heater experiments were conducted for both the intensity-modulated and beat-wave approaches. VLF radiations were generated and the experimental results confirm the numerical simulations. Theory and experimental results both show that in the absence of the electrojet, VLF radiation from the F-region is generated via the beat-wave approach. Additionally, the beat-wave approach generates VLF radiations over a larger frequency band than the modulated electrojet.
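The beat-wave idea, two carriers separated by the desired VLF frequency producing a difference-frequency component through a nonlinear response, can be demonstrated with a simple spectral sketch. The carrier frequencies and sample rate below are scaled down from real HF heater values so the FFT stays small, and the quadratic term is only a stand-in for the plasma's nonlinearity.

```python
import numpy as np

# Scaled-down illustration: two "HF" carriers separated by the desired "VLF" frequency.
fs = 200_000.0                     # sample rate (Hz)
t = np.arange(0, 0.1, 1.0 / fs)
f1, f2 = 40_000.0, 42_000.0        # carriers; difference frequency = 2 kHz
wave = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# A quadratic nonlinearity generates sum and difference frequencies.
mixed = wave**2
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1.0 / fs)

band = (freqs > 100) & (freqs < 10_000)          # look well below the HF carriers
peak = freqs[band][np.argmax(spectrum[band])]
print(f"strongest low-frequency component: {peak:.0f} Hz")   # expect about 2000 Hz
```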
Sensitivity and systematics of calorimetric neutrino mass experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nucciotti, A.; Cremonesi, O.; Ferri, E.
2009-12-16
A large calorimetric neutrino mass experiment using thermal detectors is expected to play a crucial role in the challenge of directly assessing the neutrino mass. We discuss and compare here two approaches for the estimation of the experimental sensitivity of such an experiment. The first method uses an analytic formulation and readily provides a close estimate over a wide range of experimental configurations. The second method is based on a Monte Carlo technique and is more precise and reliable. The Monte Carlo approach is then exploited to study some sources of systematic uncertainties peculiar to calorimetric experiments. Finally, the tools are applied to investigate the optimal experimental configuration of the MARE project.
NASA Astrophysics Data System (ADS)
Stindt, A.; Andrade, M. A. B.; Albrecht, M.; Adamowski, J. C.; Panne, U.; Riedel, J.
2014-01-01
A novel method for predicting the sound pressure distribution in acoustic levitators is based on a matrix representation of the Rayleigh integral. This method allows for a fast calculation of the acoustic field within the resonator. To make sure that the underlying assumptions and simplifications are justified, this approach was tested by a direct comparison to experimental data. The experimental sound pressure distributions were recorded by spatially highly resolved, frequency-selective microphone scanning. To emphasize the general applicability of the two approaches, the comparative studies were conducted for four different resonator geometries. In all cases, the results show excellent agreement, demonstrating the accuracy of the matrix method.
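The matrix form of the Rayleigh integral can be sketched as a single matrix-vector product mapping discretised surface velocities to field pressures. The piston geometry, drive frequency and medium properties below are illustrative, and the published method additionally accounts for the resonator's reflecting geometry, which is not modelled here.

```python
import numpy as np

rho0, c = 1.21, 343.0                 # air density (kg/m^3), speed of sound (m/s)
f = 40_000.0                          # drive frequency (Hz)
k = 2 * np.pi * f / c

# Discretise a 1 cm square piston (z = 0 plane) into surface elements.
n = 20
xs, ys = np.meshgrid(np.linspace(-5e-3, 5e-3, n), np.linspace(-5e-3, 5e-3, n))
src = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(n * n)])
dS = (10e-3 / n) ** 2
v = np.full(n * n, 0.1)               # uniform normal velocity amplitude (m/s)

# Field points along the piston axis.
field = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(1e-3, 50e-3, 50)])

# Rayleigh integral in matrix form: p = M v, with
# M_ij = (j * rho0 * c * k / (2*pi)) * exp(-j*k*R_ij) / R_ij * dS
R = np.linalg.norm(field[:, None, :] - src[None, :, :], axis=2)
M = (1j * rho0 * c * k / (2 * np.pi)) * np.exp(-1j * k * R) / R * dS
p = M @ v
print("axial |p| (Pa), first few points:", np.abs(p[:5]))
```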
Raevsky, O; Andreeva, E; Raevskaja, O; Skvortsov, V; Schaper, K
2005-01-01
QSPR analyses of the solubility in water of 558 vapors, 786 liquids and 2045 solid organic neutral chemicals and drugs are presented. Simultaneous consideration of H-bond acceptor and donor factors leads to a good description of the solubility of vapors and liquids. A volume-related term was found to have an essential negative contribution to the solubility of liquids. Consideration of polarizability, H-bond acceptor and donor factors and indicators for a few functional groups, as well as the experimental solubility values of structurally nearest neighbors, yielded good correlations for liquids. The application of Yalkowsky's "General Solubility Equation" to 1063 solid chemicals and drugs resulted in a correlation of experimental vs. calculated log S values with only modest statistical criteria. Two approaches to derive predictive models for the solubility of solid chemicals and drugs were tested. The first approach was based on the QSPR for liquids together with indicator variables for different functional groups. Furthermore, a calculation of enthalpies for intermolecular complexes in crystal lattices, based on new H-bond potentials, was carried out for the better consideration of essential solubility-decreasing effects in the solid state, as compared with the liquid state. The second approach was based on a combination of similarity considerations and traditional QSPR. Both approaches lead to high-quality predictions with average absolute errors on the level of experimental log S determination.
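Yalkowsky's General Solubility Equation referred to above has a compact closed form, log S = 0.5 - 0.01 (MP - 25) - log Kow, which is easy to evaluate. The sketch below uses a hypothetical compound; it is not one of the study's data points.

```python
def general_solubility_equation(melting_point_c, log_kow):
    """Yalkowsky's General Solubility Equation:
    log S = 0.5 - 0.01 * (MP - 25) - log Kow,
    with S the molar aqueous solubility, MP the melting point in Celsius
    (the melting-point term is dropped for liquids, MP <= 25 C), and
    log Kow the octanol-water partition coefficient."""
    return 0.5 - 0.01 * max(melting_point_c - 25.0, 0.0) - log_kow

# Hypothetical solid drug-like compound: MP = 160 C, log Kow = 2.5.
print("estimated log S:", general_solubility_equation(160.0, 2.5))  # approx -3.35
```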
Integrated PK-PD and agent-based modeling in oncology.
Wang, Zhihui; Butner, Joseph D; Cristini, Vittorio; Deisboeck, Thomas S
2015-04-01
Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed.
Approaching the Limit in Atomic Spectrochemical Analysis.
ERIC Educational Resources Information Center
Hieftje, Gary M.
1982-01-01
To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…
Fault Detection for Automotive Shock Absorber
NASA Astrophysics Data System (ADS)
Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis
2015-11-01
Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of a multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
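The parameter-identification idea can be sketched in its simplest form: estimate the damper coefficient from measured force and relative velocity over a sliding window and flag a multiplicative fault when the estimate drifts from nominal. The linear damper model, signal values, window size and threshold below are hypothetical simplifications of a real semi-active shock absorber.

```python
import numpy as np

rng = np.random.default_rng(0)

c_nominal = 1500.0                      # nominal damping coefficient (N s/m)
fault_at = 500                          # sample index where a multiplicative fault appears
n = 1000

v_rel = np.sin(2 * np.pi * 1.5 * np.arange(n) * 0.01)          # relative velocity (m/s)
c_true = np.where(np.arange(n) < fault_at, c_nominal, 0.6 * c_nominal)
force = c_true * v_rel + 20.0 * rng.standard_normal(n)          # measured damper force (N)

# Least-squares estimate of the damping coefficient over a sliding window.
window, threshold = 100, 0.2
for k in range(window, n + 1, window):
    v_w, f_w = v_rel[k - window:k], force[k - window:k]
    c_hat = np.dot(v_w, f_w) / np.dot(v_w, v_w)                 # one-parameter least squares
    faulty = abs(c_hat - c_nominal) / c_nominal > threshold
    print(f"samples {k - window:4d}-{k:4d}: c_hat = {c_hat:7.1f}  fault = {faulty}")
```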
Robust video copy detection approach based on local tangent space alignment
NASA Astrophysics Data System (ADS)
Nie, Xiushan; Qiao, Qianping
2012-04-01
We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The idea is motivated by the fact that video content is becoming richer and its dimensionality higher, and this high dimensionality offers no natural tools for video analysis and understanding. The proposed approach reduces the dimensionality of video content using LTSA and then generates video fingerprints in a low-dimensional space for video copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
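A rough sketch of the pipeline, LTSA-reduced frame features used as fingerprints, then sliding-window matching, is given below using scikit-learn's LTSA implementation. The random frame features, the noisy query clip and the window logic are stand-ins; the published method uses real video descriptors and a dynamic window rather than this fixed-length scan.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)

# Stand-in per-frame feature vectors for a reference video (e.g., colour histograms).
ref_frames = rng.random((300, 64))

# LTSA reduces the high-dimensional frame features to compact fingerprints.
ltsa = LocallyLinearEmbedding(n_neighbors=15, n_components=3, method="ltsa")
ref_fp = ltsa.fit_transform(ref_frames)

# A query clip copied from frames 100-149 with mild distortion.
query_frames = ref_frames[100:150] + 0.01 * rng.standard_normal((50, 64))
query_fp = ltsa.transform(query_frames)

# Sliding-window fingerprint matching: find the offset with the smallest mean distance.
def best_match(ref, query):
    costs = [np.mean(np.linalg.norm(ref[i:i + len(query)] - query, axis=1))
             for i in range(len(ref) - len(query) + 1)]
    return int(np.argmin(costs)), min(costs)

print("best matching offset:", best_match(ref_fp, query_fp))   # expect an offset near 100
```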
Sexual Violence Prevention through Bystander Education: An Experimental Evaluation
ERIC Educational Resources Information Center
Banyard, Victoria L.; Moynihan, Mary M.; Plante, Elizabethe G.
2007-01-01
The current study used an experimental design to evaluate a sexual violence prevention program based on a community of responsibility model that teaches women and men how to intervene safely and effectively in cases of sexual violence before, during, and after incidents with strangers, acquaintances, or friends. It approaches both women and men as…
Instream-Flow Analysis for the Luquillo Experimental Forest, Puerto Rico: Methods and Analysis
F.N. Scatena; S.L. Johnson
2001-01-01
This study develops two habitat-based approaches for evaluating instream-flow requirements within the Luquillo Experimental Forest in northeastern Puerto Rico. The analysis is restricted to instream-flow requirements in upland streams dominated by the common communities of freshwater decapods. In headwater streams, pool volume was the most consistent factor...
The use of experimental data in an MTR-type nuclear reactor safety analysis
NASA Astrophysics Data System (ADS)
Day, Simon E.
Reactivity-initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs, in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve-fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer-term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model-based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core.
Wu, Zujian; Pang, Wei; Coghill, George M
Computational modelling of biochemical systems based on top-down and bottom-up approaches has been well studied over the last decade. In this research, after illustrating how to generate atomic components from a set of given reactants and two user-predefined component patterns, we propose an integrative top-down and bottom-up modelling approach for stepwise qualitative exploration of interactions among reactants in biochemical systems. An evolution strategy is applied to the top-down modelling approach to compose models, and simulated annealing is employed in the bottom-up modelling approach to explore potential interactions based on models constructed from the top-down modelling process. Both the top-down and bottom-up approaches support stepwise modular addition or subtraction for the model evolution. Experimental results indicate that our modelling approach is able to learn the relationships among biochemical reactants qualitatively. In addition, hidden reactants of the target biochemical system can be obtained by generating complex reactants in corresponding composed models. Moreover, qualitatively learned models with inferred reactants and alternative topologies can be used for further wet-lab experimental investigations by biologists of interest, which may result in a better understanding of the system.
A fast method for optical simulation of flood maps of light-sharing detector modules.
Shi, Han; Du, Dong; Xu, JianFeng; Moses, William W; Peng, Qiyu
2015-12-01
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. We present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up the photon tracking in detector modules constructed with a polished finish and specular reflector materials. We simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200-600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and using specular reflector materials.
Supervised Learning Based Hypothesis Generation from Biomedical Literature.
Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei
2015-01-01
Nowadays, the amount of biomedical literature is growing at an explosive speed, and much useful knowledge in this literature remains undiscovered. Researchers can form biomedical hypotheses by mining these works. In this paper, we propose a supervised learning based approach to generate hypotheses from biomedical literature. This approach splits the traditional processing of hypothesis generation with the classic ABC model into an AB model and a BC model, which are constructed with a supervised learning method. Compared with concept co-occurrence and grammar engineering-based approaches like SemRep, machine learning based models usually achieve better performance in information extraction (IE) from texts. Then, by combining the two models, the approach reconstructs the ABC model and generates biomedical hypotheses from the literature. The experimental results on the three classic Swanson hypotheses show that our approach outperforms the SemRep system.
NASA Astrophysics Data System (ADS)
Saadeddin, Kamal; Abdel-Hafez, Mamoun F.; Jaradat, Mohammad A.; Jarrah, Mohammad Amin
2013-12-01
In this paper, a low-cost navigation system that fuses the measurements of the inertial navigation system (INS) and the global positioning system (GPS) receiver is developed. First, the system's dynamics are obtained based on a vehicle's kinematic model. Second, the INS and GPS measurements are fused using an extended Kalman filter (EKF) approach. Subsequently, an artificial intelligence based approach for the fusion of INS/GPS measurements is developed based on an Input-Delayed Adaptive Neuro-Fuzzy Inference System (IDANFIS). Experimental tests are conducted to demonstrate the performance of the two sensor fusion approaches. It is found that the use of the proposed IDANFIS approach achieves a reduction in the integration development time and an improvement in the estimation accuracy of the vehicle's position and velocity compared to the EKF based approach.
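The structure of the INS/GPS fusion described above, high-rate inertial propagation corrected by low-rate GPS position fixes, can be sketched with a one-dimensional linear Kalman filter. The published system uses a full vehicle kinematic model, an extended filter and an IDANFIS alternative; the state dimensions, noise levels and trajectory below are hypothetical.

```python
import numpy as np

dt, gps_every = 0.01, 100              # INS at 100 Hz, GPS update every 1 s
F = np.array([[1.0, dt], [0.0, 1.0]])  # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])             # GPS measures position only
Q = 0.05 * B @ B.T                     # process noise from accelerometer errors
R = np.array([[4.0]])                  # GPS position variance (m^2)

x, P = np.zeros((2, 1)), np.eye(2)
rng = np.random.default_rng(2)
true_pos, true_vel = 0.0, 0.0

for k in range(1000):
    accel = 0.2                                        # true constant acceleration (m/s^2)
    true_vel += accel * dt
    true_pos += true_vel * dt
    a_meas = accel + 0.1 * rng.standard_normal()       # noisy INS accelerometer reading

    # Prediction with the inertial measurement.
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q

    # GPS correction at the lower rate.
    if k % gps_every == 0:
        z = true_pos + 2.0 * rng.standard_normal()
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P

print("estimated [pos, vel]:", x.ravel(), " true:", [true_pos, true_vel])
```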
van Steenbergen, Henk; Bocanegra, Bruno R
2016-12-01
In a recent letter, Plant (2015) reminded us that proper calibration of our laboratory experiments is important for the progress of psychological science. Therefore, carefully controlled laboratory studies are argued to be preferred over Web-based experimentation, in which timing is usually more imprecise. Here we argue that there are many situations in which the timing of Web-based experimentation is acceptable and that online experimentation provides a very useful and promising complementary toolbox to available lab-based approaches. We discuss examples in which stimulus calibration or calibration against response criteria is necessary and situations in which this is not critical. We also discuss how online labor markets, such as Amazon's Mechanical Turk, allow researchers to acquire data in more diverse populations and to test theories along more psychological dimensions. Recent methodological advances that have produced more accurate browser-based stimulus presentation are also discussed. In our view, online experimentation is one of the most promising avenues to advance replicable psychological science in the near future.
Analysing photonic structures in plants
Vignolini, Silvia; Moyroud, Edwige; Glover, Beverley J.; Steiner, Ullrich
2013-01-01
The outer layers of a range of plant tissues, including flower petals, leaves and fruits, exhibit an intriguing variation of microscopic structures. Some of these structures include ordered periodic multilayers and diffraction gratings that give rise to interesting optical appearances. The colour arising from such structures is generally brighter than pigment-based colour. Here, we describe the main types of photonic structures found in plants and discuss the experimental approaches that can be used to analyse them. These experimental approaches allow identification of the physical mechanisms producing structural colours with a high degree of confidence. PMID:23883949
A new scenario-based approach to damage detection using operational modal parameter estimates
NASA Astrophysics Data System (ADS)
Hansen, J. B.; Brincker, R.; López-Aenlle, M.; Overgaard, C. F.; Kloborg, K.
2017-09-01
In this paper, a vibration-based damage localization and quantification method, based on natural frequencies and mode shapes, is presented. The proposed technique is inspired by a damage assessment methodology based solely on the sensitivity of mass-normalized, experimentally determined mode shapes. The present method differs by being based on modal data extracted by means of Operational Modal Analysis (OMA) combined with a reasonable Finite Element (FE) representation of the test structure and implemented in a scenario-based framework. Besides a review of the basic methodology, this paper addresses fundamental theoretical as well as practical considerations which are crucial to the applicability of a given vibration-based damage assessment configuration. Lastly, the technique is demonstrated on an experimental test case using automated OMA. Both the numerical study and the experimental test case presented in this paper are restricted to perturbations concerning mass change.
ERIC Educational Resources Information Center
Memis, Esra Kabatas
2016-01-01
The aim of this study was to investigate the effect of the university-level application of an Argument-Based Inquiry Approach, as compared to the traditional laboratory teaching method, on the ability of students to learn about optics and to demonstrate critical thinking. In this quasi-experimental study, pretest-posttest scores and CCDTI were…
ERIC Educational Resources Information Center
Demircioglu, Hulya; Demircioglu, Gokhan; Calik, Muammar
2009-01-01
We investigated the effect of the context-based approach on 9th grade students' conceptions of the Periodic Table. Within a nonequivalent pretest-posttest control group design the study was conducted with 80 grade 9 students (aged 15-16) drawn from two classes (39 and 41 students) in a high school in Turkey. The experimental group was exposed to…
Syta, A; Bowen, C R; Kim, H A; Rysak, A; Litak, G
The use of bistable laminates is a potential approach to realizing broadband piezoelectric-based energy harvesting systems. In this paper, the dynamic response of a piezoelectric material attached to a bistable laminate plate is examined based on the experimentally generated voltage time series. The system was subjected to harmonic excitations and exhibited single-well and snap-through vibrations of both periodic and chaotic character. To identify the dynamics of the system response, we examined the frequency spectrum, bifurcation diagrams, phase portraits, and the 0-1 test.
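The 0-1 test mentioned above (Gottwald-Melbourne) can be sketched in its basic correlation form, as below. A chaotic logistic-map series stands in for the measured piezoelectric voltage, and the number of test frequencies and the scaling choices are assumptions of this sketch rather than the paper's settings.

```python
import numpy as np

def zero_one_test(x, n_c=50, seed=3):
    """Gottwald-Melbourne 0-1 test for chaos (basic correlation version).
    Returns K close to 1 for chaotic dynamics and close to 0 for regular dynamics."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    x = x - np.mean(x)                 # remove the mean so bounded oscillations do not dominate
    N = len(x)
    ncut = N // 10
    j = np.arange(1, N + 1)
    n = np.arange(1, ncut + 1)
    K = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(x * np.cos(c * j))       # translation variables
        q = np.cumsum(x * np.sin(c * j))
        M = np.array([np.mean((p[m:] - p[:-m]) ** 2 + (q[m:] - q[:-m]) ** 2) for m in n])
        K.append(np.corrcoef(n, M)[0, 1])      # growth rate of the mean square displacement
    return np.median(K)

# Stand-in "voltage" series: the logistic map in its chaotic regime.
x = np.empty(3000)
x[0] = 0.3
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])

print("K =", round(zero_one_test(x), 3))   # expect a value close to 1
```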
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
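The key device, folding the GP surrogate's predictive variance into the likelihood so that approximation error widens rather than biases the posterior, can be sketched with a toy one-parameter source-identification problem. The forward model, prior bounds, noise level and sampler settings below are all hypothetical stand-ins for the groundwater transport simulation used in the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def forward_model(s):
    """Stand-in for an expensive groundwater transport simulation:
    maps a source strength to a concentration at one observation well."""
    return 1.0 - np.exp(-s)

true_source, obs_sigma = 1.3, 0.05
observation = forward_model(true_source) + obs_sigma * rng.standard_normal()

# Build a cheap GP surrogate from a handful of original-model runs.
design = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(design, forward_model(design.ravel()))

def log_post(s):
    if not 0.0 <= s <= 3.0:                      # uniform prior bounds
        return -np.inf
    mu, std = gp.predict(np.array([[s]]), return_std=True)
    var = obs_sigma**2 + std[0] ** 2             # inflate the variance by the surrogate error
    return -0.5 * (observation - mu[0]) ** 2 / var - 0.5 * np.log(var)

# Plain Metropolis sampling against the surrogate-based posterior.
samples, s = [], 1.5
lp = log_post(s)
for _ in range(5000):
    s_new = s + 0.1 * rng.standard_normal()
    lp_new = log_post(s_new)
    if np.log(rng.random()) < lp_new - lp:
        s, lp = s_new, lp_new
    samples.append(s)

print("posterior mean source strength:", np.mean(samples[1000:]))  # expect about 1.3
```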
Modeling changes in biomass composition during microwave-based alkali pretreatment of switchgrass.
Keshwani, Deepak R; Cheng, Jay J
2010-01-01
This study used two different approaches to model changes in biomass composition during microwave-based pretreatment of switchgrass: kinetic modeling using a time-dependent rate coefficient, and a Mamdani-type fuzzy inference system. In both modeling approaches, the dielectric loss tangent of the alkali reagent and pretreatment time were used as predictors for changes in amounts of lignin, cellulose, and xylan during the pretreatment. Training and testing data sets for development and validation of the models were obtained from pretreatment experiments conducted using 1-3% w/v NaOH (sodium hydroxide) and pretreatment times ranging from 5 to 20 min. The kinetic modeling approach for lignin and xylan gave comparable results for training and testing data sets, and the differences between the predictions and experimental values were within 2%. The kinetic modeling approach for cellulose was not as effective, and the differences were within 5-7%. The time-dependent rate coefficients of the kinetic models estimated from experimental data were consistent with the heterogeneity of individual biomass components. The Mamdani-type fuzzy inference was shown to be an effective approach to model the pretreatment process and yielded predictions with less than 2% deviation from the experimental values for lignin and with less than 3% deviation from the experimental values for cellulose and xylan. The entropies of the fuzzy outputs from the Mamdani-type fuzzy inference system were calculated to quantify the uncertainty associated with the predictions. Results indicate that there is no significant difference between the entropies associated with the predictions for lignin, cellulose, and xylan. It is anticipated that these models could be used in process simulations of bioethanol production from lignocellulosic materials.
Zou, Cunlu; Ladroue, Christophe; Guo, Shuixia; Feng, Jianfeng
2010-06-21
Reverse-engineering approaches such as Bayesian network inference, ordinary differential equations (ODEs) and information theory are widely applied to deriving causal relationships among different elements such as genes, proteins, metabolites, neurons, brain areas and so on, based upon multi-dimensional spatial and temporal data. There are several well-established reverse-engineering approaches to explore causal relationships in a dynamic network, such as ordinary differential equations (ODE), Bayesian networks, information theory and Granger Causality. Here we focused on Granger causality both in the time and frequency domain and in local and global networks, and applied our approach to experimental data (genes and proteins). For a small gene network, Granger causality outperformed all the other three approaches mentioned above. A global protein network of 812 proteins was reconstructed, using a novel approach. The obtained results fitted well with known experimental findings and predicted many experimentally testable results. In addition to interactions in the time domain, interactions in the frequency domain were also recovered. The results on the proteomic data and gene data confirm that Granger causality is a simple and accurate approach to recover the network structure. Our approach is general and can be easily applied to other types of temporal data.
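Time-domain pairwise Granger causality reduces to comparing a restricted autoregressive model of one series with a full model that also includes lags of the other, via an F-test. The synthetic series below (x driving y at one lag), the lag order and the noise levels are illustrative; the study's frequency-domain and global-network analyses are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic example: x Granger-causes y with a one-step lag.
n, lag = 500, 2
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.2 * rng.standard_normal()

def granger_f_test(x, y, p):
    """F-test of whether lags of x improve the AR(p) prediction of y."""
    T = len(y) - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    X_r = np.column_stack([np.ones(T), lags_y])                 # restricted model
    X_f = np.column_stack([np.ones(T), lags_y, lags_x])         # full model
    rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - X_f @ np.linalg.lstsq(X_f, Y, rcond=None)[0]) ** 2)
    F = ((rss_r - rss_f) / p) / (rss_f / (T - 2 * p - 1))
    return F, stats.f.sf(F, p, T - 2 * p - 1)

print("x -> y:", granger_f_test(x, y, lag))   # expect a large F, tiny p-value
print("y -> x:", granger_f_test(y, x, lag))   # expect no significant causality
```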
Evans, Travis C; Britton, Jennifer C
2018-09-01
Abnormal threat-related attention in anxiety disorders is most commonly assessed and modified using the dot-probe paradigm; however, poor psychometric properties of reaction-time measures may contribute to inconsistencies across studies. Typically, standard attention measures are derived using average reaction times obtained in experimentally defined conditions. However, current approaches based on experimentally defined conditions are limited. In this study, the psychometric properties of a novel response-based computation approach to analyze dot-probe data are compared to standard measures of attention. A total of 148 adults (19.19 ± 1.42 years, 84 women) completed a standardized dot-probe task including threatening and neutral faces. We generated both standard and response-based measures of attention bias, attentional orientation, and attentional disengagement. We compared overall internal consistency, number of trials necessary to reach internal consistency, test-retest reliability (n = 72), and criterion validity obtained using each approach. Compared to standard attention measures, response-based measures demonstrated uniformly high levels of internal consistency with relatively few trials and varying improvements in test-retest reliability. Additionally, response-based measures demonstrated specific evidence of anxiety-related associations above and beyond both standard attention measures and other confounds. Future studies are necessary to validate this approach in clinical samples. Response-based attention measures demonstrate superior psychometric properties compared to standard attention measures, which may improve the detection of anxiety-related associations and treatment-related changes in clinical samples. Copyright © 2018 Elsevier Ltd. All rights reserved.
Effectiveness of inquiry-based learning in an undergraduate exercise physiology course.
Nybo, Lars; May, Michael
2015-06-01
The present study was conducted to investigate the effects of changing a laboratory physiology course for undergraduate students from a traditional step-by-step guided structure to an inquiry-based approach. With this aim in mind, quantitative and qualitative evaluations of learning outcomes (individual subject-specific tests and group interviews) were performed for a laboratory course in cardiorespiratory exercise physiology that was conducted in one year with a traditional step-by-step guided manual (traditional course) and in the next year with an inquiry-based structure (I-based course). The I-based course was a guided inquiry course where students had to design the experimental protocol and conduct their own study on the basis of certain predefined criteria (i.e., they should evaluate respiratory responses to submaximal and maximal exercise and provide indirect and direct measures of aerobic exercise capacity). The results indicated that the overall time spent on the experimental course as well as self-evaluated learning outcomes were similar across groups. However, students in the I-based course used more time in preparation (102 ± 5 min) than students in the traditional course (42 ± 3 min, P < 0.05), and 65 ± 5% of students in the I-based course searched for additional literature before experimentation compared with only 2 ± 1% of students in the traditional course. Furthermore, students in the I-based course achieved a higher (P < 0.05) average score on the quantitative test (45 ± 3%) compared with students in the traditional course (31 ± 4%). Although students were unfamiliar with cardiorespiratory exercise physiology and the experimental methods before the course, it appears that an inquiry-based approach rather than one that provides students with step-by-step instructions may benefit learning outcomes in a laboratory physiology course. Copyright © 2015 The American Physiological Society.
An approach to achieve progress in spacecraft shielding
NASA Astrophysics Data System (ADS)
Thoma, K.; Schäfer, F.; Hiermaier, S.; Schneider, E.
2004-01-01
Progress in shield design against space debris can be achieved only when a combined approach based on several tools is used. This approach depends on the combined application of advanced numerical methods, specific material models and experimental determination of input parameters for these models. Examples of experimental methods for material characterization are given, covering the range from quasi-static to very high strain rates for materials like Nextel and carbon fiber-reinforced materials. Mesh-free numerical methods have extraordinary capabilities in the simulation of extreme material behaviour, including complete failure with phase changes, combined with shock wave phenomena and the interaction with structural components. In this paper, the benefits of combining numerical methods, material modelling and detailed experimental studies for shield design are demonstrated. The following examples are given: (1) Development of a material model for Nextel and Kevlar-Epoxy to enable numerical simulation of hypervelocity impacts on complex heavy protection shields for the International Space Station. (2) The influence of projectile shape on the protection performance of Whipple Shields and how experimental problems in accelerating such shapes can be overcome by systematic numerical simulation. (3) The benefits of using metallic foams in "sandwich bumper shields" for spacecraft and how to approach systematic characterization of such materials.
Technology Project Learning versus Lab Experimentation
ERIC Educational Resources Information Center
Waks, S.; Sabag, N.
2004-01-01
The Project-Based Learning (PBL) approach enables the student to construct knowledge in his/her own way. Piaget, the founder of constructivism, saw the development of intelligence as a process involving the relationship between brain maturity and individual experience. The technology PBL (TPBL) approach confronts the student with a personal…
An examination of fuel particle heating during fire spread
Jack D. Cohen; Mark A. Finney
2010-01-01
Recent high intensity wildfires and our demonstrated inability to control extreme fire behavior suggest a need for alternative approaches for preventing wildfire disasters. Current fire spread models are not sufficiently based on a basic understanding of fire spread processes to provide more effective management alternatives. An experimental and theoretical approach...
Inside and outside: Teacher-Researcher Collaboration
ERIC Educational Resources Information Center
Herrenkohl, Leslie Rupert; Kawasaki, Keiko; DeWater, Lezlie Salvatore
2010-01-01
In this paper, we discuss our approach to teacher-researcher collaboration and how it is similar and different from other models of teacher collaboration. Our approach to collaboration employed design experimentation (Brown, 1992; Design Based Research Collective, 2003) as a central method since it yields important findings for teachers'…
Assessing Financial Education Methods: Principles vs. Rules-of-Thumb Approaches
ERIC Educational Resources Information Center
Skimmyhorn, William L.; Davies, Evan R.; Mun, David; Mitchell, Brian
2016-01-01
Despite thousands of programs and tremendous public and private interest in improving financial decision-making, little is known about how best to teach financial education. Using an experimental approach, the authors estimated the effects of two different education methodologies (principles-based and rules-of-thumb) on the knowledge,…
Hmiel, A.; Winey, J. M.; Gupta, Y. M.; ...
2016-05-23
Accurate theoretical calculations of the nonlinear elastic response of strong solids (e.g., diamond) constitute a fundamental and important scientific need for understanding the response of such materials and for exploring the potential synthesis and design of novel solids. However, without corresponding experimental data, it is difficult to select between predictions from different theoretical methods. Recently the complete set of third-order elastic constants (TOECs) for diamond was determined experimentally, and the validity of various theoretical approaches to calculate the same may now be assessed. We report on the use of density functional theory (DFT) methods to calculate the six third-order elastic constants of diamond. Two different approaches based on homogeneous deformations were used: (1) an energy-strain fitting approach using a prescribed set of deformations, and (2) a longitudinal stress-strain fitting approach using uniaxial compressive strains along the [100], [110], and [111] directions, together with calculated pressure derivatives of the second-order elastic constants. The latter approach provides a direct comparison to the experimental results. The TOECs calculated using the energy-strain approach differ significantly from the measured TOECs. In contrast, calculations using the longitudinal stress-uniaxial strain approach show good agreement with the measured TOECs and match the experimental values significantly better than the TOECs reported in previous theoretical studies. Lastly, our results on diamond have demonstrated that, with proper analysis procedures, first-principles calculations can indeed be used to accurately calculate the TOECs of strong solids.
Injecting Inquiry-Oriented Modules into Calculus
ERIC Educational Resources Information Center
Shelton, Therese
2017-01-01
Implementing inquiry-based modules within a course can be effective and enable instructor experimentation, without completely transforming an entire course. For instructors new to inquiry-based learning (IBL), we state hallmarks of the practice and point out the merits of strong IBL communities. An inquiry-based approach may alleviate some current…
Unsupervised chunking based on graph propagation from bilingual corpus.
Zhu, Ling; Wong, Derek F; Chao, Lidia S
2014-01-01
This paper presents a novel approach for an unsupervised shallow parsing model trained on the unannotated Chinese text of a parallel Chinese-English corpus. In this approach, no information from the Chinese side is applied. The exploitation of graph-based label propagation for bilingual knowledge transfer, along with the use of the projected labels as features in the unsupervised model, contributes to a better performance. The experimental comparisons with state-of-the-art algorithms show that the proposed approach is able to achieve impressively higher accuracy in terms of F-score.
Analysis of the Temperature and Strain-Rate Dependences of Strain Hardening
NASA Astrophysics Data System (ADS)
Kreyca, Johannes; Kozeschnik, Ernst
2018-01-01
A classical constitutive modeling-based Ansatz for the impact of thermal activation on the stress-strain response of metallic materials is compared with the state parameter-based Kocks-Mecking model. The predicted functional dependencies suggest that, in the first approach, only the dislocation storage mechanism is a thermally activated process, whereas, in the second approach, only the mechanism of dynamic recovery is. In contradiction to each of these individual approaches, our analysis and comparison with experimental evidence show that thermal activation contributes to both dislocation generation and annihilation.
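The state-parameter (Kocks-Mecking) description referred to above can be sketched by integrating the dislocation-density evolution law and converting density to flow stress via the Taylor equation. The coefficients below are generic, illustrative values for an fcc metal, not the fitted parameters of the study, and the thermal-activation dependence of k1 and k2 is omitted.

```python
import numpy as np

# Illustrative Kocks-Mecking single-state-variable model:
#   d(rho)/d(eps) = k1 * sqrt(rho) - k2 * rho        (storage minus dynamic recovery)
#   sigma = sigma0 + alpha * M * mu * b * sqrt(rho)  (Taylor equation)
k1, k2 = 3.0e8, 12.0            # storage (1/m) and recovery (-) coefficients
alpha, M = 0.3, 3.06            # Taylor constant, Taylor factor
mu, b = 26.0e9, 2.86e-10        # shear modulus (Pa), Burgers vector (m)
sigma0 = 20.0e6                 # friction stress (Pa)

eps = np.linspace(0.0, 0.3, 3001)
d_eps = eps[1] - eps[0]
rho = np.empty_like(eps)
rho[0] = 1.0e12                 # initial dislocation density (1/m^2)
for i in range(1, len(eps)):
    drho = k1 * np.sqrt(rho[i - 1]) - k2 * rho[i - 1]
    rho[i] = rho[i - 1] + drho * d_eps        # explicit Euler integration

sigma = sigma0 + alpha * M * mu * b * np.sqrt(rho)
print("flow stress at 5%, 15%, 30% strain (MPa):",
      np.round(sigma[[500, 1500, 3000]] / 1e6, 1))
```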
A Model-Based Joint Identification of Differentially Expressed Genes and Phenotype-Associated Genes
Seo, Minseok; Shin, Su-kyung; Kwon, Eun-Young; Kim, Sung-Eun; Bae, Yun-Jung; Lee, Seungyeoun; Sung, Mi-Kyung; Choi, Myung-Sook; Park, Taesung
2016-01-01
Over the last decade, many analytical methods and tools have been developed for microarray data. The detection of differentially expressed genes (DEGs) among different treatment groups is often a primary purpose of microarray data analysis. In addition, association studies investigating the relationship between genes and a phenotype of interest such as survival time are also popular in microarray data analysis. Phenotype association analysis provides a list of phenotype-associated genes (PAGs). However, it is sometimes necessary to identify genes that are both DEGs and PAGs. We consider the joint identification of DEGs and PAGs in microarray data analyses. The first approach we used was a naïve approach that detects DEGs and PAGs separately and then identifies the genes in an intersection of the list of PAGs and DEGs. The second approach we considered was a hierarchical approach that detects DEGs first and then chooses PAGs from among the DEGs or vice versa. In this study, we propose a new model-based approach for the joint identification of DEGs and PAGs. Unlike the previous two-step approaches, the proposed method simultaneously identifies genes that are both DEGs and PAGs. This method uses standard regression models but adopts a different null hypothesis from ordinary regression models, which allows us to perform joint identification in one step. The proposed model-based methods were evaluated using experimental data and simulation studies. The proposed methods were used to analyze a microarray experiment in which the main interest lies in detecting genes that are both DEGs and PAGs, where DEGs are identified between two diet groups and PAGs are associated with four phenotypes reflecting the expression of leptin, adiponectin, insulin-like growth factor 1, and insulin. The model-based approaches provided a larger number of genes that are both DEGs and PAGs than other methods. Simulation studies showed that they have more power than other methods. Through analysis of data from experimental microarrays and simulation studies, the proposed model-based approach was shown to provide a more powerful result than the naïve approach and the hierarchical approach. Since our approach is model-based, it is very flexible and can easily handle different types of covariates. PMID:26964035
Oseev, Aleksandr; Lucklum, Ralf; Zubtsov, Mikhail; Schmidt, Marc-Peter; Mukhin, Nikolay V; Hirsch, Soeren
2017-09-23
The current work demonstrates a novel surface acoustic wave (SAW) based phononic crystal sensor approach that allows the integration of a velocimetry-based sensor concept into single chip integrated solutions, such as Lab-on-a-Chip devices. The introduced sensor platform merges advantages of ultrasonic velocimetry analytic systems and a microacoustic sensor approach. It is based on the analysis of structural resonances in a periodic composite arrangement of microfluidic channels confined within a liquid analyte. Completed theoretical and experimental investigations show the ability to utilize periodic structure localized modes for the detection of volumetric properties of liquids and prove the efficacy of the proposed sensor concept.
Distributed memory parallel Markov random fields using graph partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinemann, C.; Perciano, T.; Ushizima, D.
Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
A Hybrid Approach to Clinical Question Answering
2014-11-01
participation in TREC, we submitted a single run using a hybrid Natural Language Processing (NLP)-driven approach to accomplish the given task. Evaluation re... for the CDS track uses a variety of NLP-based techniques to address the clinical questions provided. We present a description of our approach, and ... discuss our experimental setup, results and evaluation in the subsequent sections. 2 Description of Our Approach: Our hybrid NLP-driven method presents a
Dementia Research: Populations, Progress, Problems, and Predictions.
Hunter, Sally; Smailagic, Nadja; Brayne, Carol
2018-05-16
Alzheimer's disease (AD) is a clinicopathologically defined syndrome leading to cognitive impairment. Following the recent failures of amyloid-based randomized controlled trials to change the course of AD, there are growing calls for a re-evaluation of basic AD research. Epidemiology offers one approach to integrating the available evidence. Here we examine relationships between evidence from population-based, clinicopathological studies of brain aging and a range of hypotheses from all areas of AD research. We identify various problems, including a lack of systematic approach to measurement of clinical and neuropathological factors associated with dementia in experimental and clinical settings, poor understanding of the strengths and weaknesses of different observational and experimental designs, a lack of clarity in relation to disease definitions from the clinical, neuropathological, and molecular perspectives, inadequate characterization of brain aging in the human population, difficulties in translation between laboratory-based and population-based evidence bases, and a lack of communication between different sections of the dementia research community. Population studies highlight complexity and predict that therapeutic approaches based on single disease features will not be successful. Better characterization of brain aging in the human population is urgently required to select biomarkers and therapeutic targets that are meaningful to human disease. The generation of detailed and reliable evidence must be addressed before progress toward therapeutic interventions can be made.
Nonlinear Reduced-Order Simulation Using An Experimentally Guided Modal Basis
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam
2012-01-01
A procedure is developed for using nonlinear experimental response data to guide the modal basis selection in a nonlinear reduced-order simulation. The procedure entails using nonlinear acceleration response data to first identify proper orthogonal modes. Special consideration is given to cases in which some of the desired response data is unavailable. Bases consisting of linear normal modes are then selected to best represent the experimentally determined transverse proper orthogonal modes and either experimentally determined in-plane proper orthogonal modes or the special case of numerically computed in-plane companions. The bases are subsequently used in nonlinear modal reduction and dynamic response simulations. The experimental data used in this work is simulated to allow some practical considerations, such as the availability of in-plane response data and non-idealized test conditions, to be explored. Comparisons of the nonlinear reduced-order simulations are made with the surrogate experimental data to demonstrate the effectiveness of the approach.
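Proper orthogonal modes of the kind used above are commonly extracted as the left singular vectors of a response snapshot matrix. The sketch below builds a synthetic two-mode response in place of measured accelerations; the sensor count, frequencies and noise level are assumptions of this sketch, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "measured" responses: 8 sensors, 2000 time samples, two dominant modes.
t = np.linspace(0.0, 2.0, 2000)
x = np.linspace(0.0, 1.0, 8)
phi1 = np.sin(np.pi * x)                  # first spatial mode shape
phi2 = np.sin(2 * np.pi * x)              # second spatial mode shape
snapshots = (np.outer(phi1, 1.0 * np.sin(2 * np.pi * 12 * t)) +
             np.outer(phi2, 0.3 * np.sin(2 * np.pi * 31 * t)) +
             0.02 * rng.standard_normal((8, len(t))))

# Proper orthogonal modes are the left singular vectors of the snapshot matrix.
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = S**2 / np.sum(S**2)
print("energy captured by the first two POMs:", np.round(energy[:2], 3))
print("alignment of POM 1 with mode shape 1:",
      round(abs(np.dot(U[:, 0], phi1)) / np.linalg.norm(phi1), 3))
```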
Xiao, Ruiyang; Gao, Lingwei; Wei, Zongsu; Spinney, Richard; Luo, Shuang; Wang, Donghong; Dionysiou, Dionysios D; Tang, Chong-Jian; Yang, Weichun
2017-12-01
Advanced oxidation processes (AOPs) based on the formation of free radicals at ambient temperature and pressure are effective for treating endocrine disrupting chemicals (EDCs) in waters. In this study, we systematically investigated the degradation kinetics of bisphenol A (BPA), a representative EDC, by the hydroxyl radical (OH) with a combination of experimental and theoretical approaches. The second-order rate constant (k) of BPA with OH was experimentally determined to be 7.2 ± 0.34 × 10^9 M^-1 s^-1 at pH 7.55. We also calculated the thermodynamic and kinetic behaviors for the bimolecular reactions by density functional theory (DFT) using the M05-2X method with the 6-311++G** basis set and the solvation model based on density (SMD). The results revealed that H-abstraction on the phenol group is the most favorable pathway for OH. The theoretical k value corrected by the Collins-Kimball approach was determined to be 1.03 × 10^10 M^-1 s^-1, which is in reasonable agreement with the experimental observation. These results are of fundamental and practical importance in understanding the chemical interactions between OH and BPA, and aid further AOP design for treating EDCs during wastewater treatment processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
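For readers unfamiliar with the diffusion correction named above, a hedged sketch of the generic Collins-Kimball combination of the activation-controlled (TST/DFT-derived) and diffusion-limited rate constants; the symbols k_act and k_d are generic choices, not necessarily the authors' notation.

```latex
% Collins-Kimball correction: the observed rate constant combines the
% activation-controlled constant k_act with the diffusion-limited constant k_d.
\[
  \frac{1}{k_{\mathrm{obs}}} = \frac{1}{k_{\mathrm{act}}} + \frac{1}{k_{d}}
  \quad\Longleftrightarrow\quad
  k_{\mathrm{obs}} = \frac{k_{\mathrm{act}}\,k_{d}}{k_{\mathrm{act}} + k_{d}}
\]
```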
NASA Astrophysics Data System (ADS)
Tam, Jun Hui; Ong, Zhi Chao; Ismail, Zubaidah; Ang, Bee Chin; Khoo, Shin Yee
2018-05-01
The demand for composite materials is increasing due to their great superiority in material properties, e.g., lightweight, high strength and high corrosion resistance. As a result, the invention of composite materials of diverse properties is becoming prevalent, thus leading to the development of material identification methods for composite materials. Conventional identification methods are destructive, time-consuming and costly. Therefore, an accurate identification approach is proposed to circumvent these drawbacks, involving the use of a Frequency Response Function (FRF) error function defined by the correlation discrepancy between experimental and Finite-Element generated FRFs. A square E-glass epoxy composite plate is investigated under several different configurations of boundary conditions. It is notable that the experimental FRFs are used as the correlation reference, such that, during computation, the predicted FRFs are continuously updated with reference to the experimental FRFs until a solution is reached. The final identified elastic properties, namely the in-plane elastic moduli, Ex and Ey, the in-plane shear modulus, Gxy, and the major Poisson's ratio, vxy, of the composite plate are subsequently compared to the benchmark parameters as well as with those obtained using a modal-based approach. Compared with the modal-based approach, the proposed method is found to yield relatively better results. This can be explained by the direct employment of raw data in the proposed method, which avoids errors that might be incurred during the stage of modal extraction.
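A minimal sketch of the iterative FRF-matching idea described above, assuming scipy is available; `fe_frf` is a dummy stand-in for a finite-element FRF solver parameterized by (Ex, Ey, Gxy, vxy) and is an illustrative assumption, not the authors' implementation.

```python
# Sketch: update elastic properties so FE-predicted FRFs match measured FRFs.
import numpy as np
from scipy.optimize import least_squares

def fe_frf(props, freqs):
    """Dummy surrogate for an FE solver: predicted FRF values for
    elastic properties props = [Ex, Ey, Gxy, vxy]."""
    Ex, Ey, Gxy, vxy = props
    return (Ex * np.sin(freqs / 100) + Ey * np.cos(freqs / 120)
            + Gxy * np.sin(freqs / 80) + 1e9 * vxy * np.cos(freqs / 60)) / 1e9

freqs = np.linspace(10.0, 500.0, 200)
frf_measured = fe_frf([72e9, 70e9, 5e9, 0.3], freqs)   # surrogate "experimental" reference

def residual(props):
    # Discrepancy between predicted and experimental FRFs drives the update
    return fe_frf(props, freqs) - frf_measured

x0 = np.array([60e9, 60e9, 4e9, 0.25])                 # initial property estimates
sol = least_squares(residual, x0, method="lm")
print("identified properties:", sol.x)
```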
Finite Element Method-Based Kinematics and Closed-Loop Control of Soft, Continuum Manipulators.
Bieze, Thor Morales; Largilliere, Frederick; Kruszewski, Alexandre; Zhang, Zhongkai; Merzouki, Rochdi; Duriez, Christian
2018-06-01
This article presents a modeling methodology and experimental validation for soft manipulators to obtain forward kinematic model (FKM) and inverse kinematic model (IKM) under quasi-static conditions (in the literature, these manipulators are usually classified as continuum robots. However, their main characteristic of interest in this article is that they create motion by deformation, as opposed to the classical use of articulations). It offers a way to obtain the kinematic characteristics of this type of soft robots that is suitable for offline path planning and position control. The modeling methodology presented relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on finite element method with a numerical optimization based on Lagrange multipliers to obtain FKM and IKM. To reduce the dimension of the problem, at each step, a projection of the model to the constraint space (gathering actuators, sensors, and end-effector) is performed to obtain the smallest possible number of mathematical equations to be solved. This methodology is applied to obtain the kinematics of two different manipulators with complex structural geometry. An experimental comparison is also performed on one of the robots, between two other geometric approaches and the approach that is showcased in this article. A closed-loop controller based on a state estimator is proposed. The controller is experimentally validated and its robustness is evaluated using the Lyapunov stability method.
Reproducibility and quantitation of amplicon sequencing-based detection
Zhou, Jizhong; Wu, Liyou; Deng, Ye; Zhi, Xiaoyang; Jiang, Yi-Huei; Tu, Qichao; Xie, Jianping; Van Nostrand, Joy D; He, Zhili; Yang, Yunfeng
2011-01-01
To determine the reproducibility and quantitation of the amplicon sequencing-based detection approach for analyzing microbial community structure, a total of 24 microbial communities from a long-term global change experimental site were examined. Genomic DNA obtained from each community was used to amplify 16S rRNA genes with two or three barcode tags as technical replicates in the presence of a small quantity (0.1% wt/wt) of genomic DNA from Shewanella oneidensis MR-1 as the control. The technical reproducibility of the amplicon sequencing-based detection approach is quite low, with an average operational taxonomic unit (OTU) overlap of 17.2%±2.3% between two technical replicates, and 8.2%±2.3% among three technical replicates, which is most likely due to problems associated with random sampling processes. Such variations in technical replicates could have substantial effects on estimating β-diversity but less on α-diversity. A high variation was also observed in the control across different samples (for example, 66.7-fold for the forward primer), suggesting that the amplicon sequencing-based detection approach could not be quantitative. In addition, various strategies were examined to improve the comparability of amplicon sequencing data, such as increasing biological replicates, and removing singleton sequences and less-representative OTUs across biological replicates. Finally, as expected, various statistical analyses with preprocessed experimental data revealed clear differences in the composition and structure of microbial communities between warming and non-warming, or between clipping and non-clipping. Taken together, these results suggest that amplicon sequencing-based detection is useful in analyzing microbial community structure even though it is not reproducible and quantitative. However, great caution should be taken in experimental design and data interpretation when the amplicon sequencing-based detection approach is used for quantitative analysis of the β-diversity of microbial communities. PMID:21346791
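As a concrete illustration of the replicate-overlap statistic reported above, a small sketch of how the shared-OTU percentage between technical replicates might be computed; the toy OTU tables and the shared-over-union convention are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: percentage of OTUs shared between technical replicates of one sample.
def otu_overlap(rep_a, rep_b):
    """rep_a, rep_b: dicts mapping OTU id -> read count for two technical replicates."""
    observed_a = {otu for otu, n in rep_a.items() if n > 0}
    observed_b = {otu for otu, n in rep_b.items() if n > 0}
    shared = observed_a & observed_b
    union = observed_a | observed_b
    return 100.0 * len(shared) / len(union) if union else 0.0

rep1 = {"OTU1": 12, "OTU2": 3, "OTU3": 0, "OTU4": 7}
rep2 = {"OTU1": 9,  "OTU2": 0, "OTU3": 5, "OTU4": 2}
print(f"OTU overlap: {otu_overlap(rep1, rep2):.1f}%")
```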
Dimensional Comparisons: An Experimental Approach to the Internal/External Frame of Reference Model.
ERIC Educational Resources Information Center
Moller, Jens; Koller, Olaf
2001-01-01
Three experimental studies investigated psychological processes underlying the effects of achievement in one domain on self-perceived competence in another. In Study 1, high achievement in one domain led to lower self-perceived competence in the other. Study 2 showed inverse effects on self-perceived competence based on achievement feedback.…
A Musical Approach to Reading Fluency: An Experimental Study in First-Grade Classrooms
ERIC Educational Resources Information Center
Leguizamon, Daniel F.
2010-01-01
The purpose of this quantitative, quasi-experimental study was to investigate the relationship between Kodaly-based music instruction and reading fluency in first-grade classrooms. Reading fluency and overall reading achievement were measured for 109 participants at the mid-point of the academic year, pre- and post-treatment. Tests were carried out to…
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
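A hedged sketch of the kind of dilution-series check described above: fit observed counts against the dilution fraction with a zero-intercept model and report simple precision and proportionality indicators. The specific metrics used here (replicate CV, R² of the proportional fit) are illustrative assumptions, not the exact statistical analysis of the paper.

```python
# Sketch: evaluate precision and proportionality of a cell counting method
# from a dilution series with replicate observations.
import numpy as np

dilution = np.array([1.0, 0.75, 0.5, 0.25])              # target dilution fractions
counts = np.array([                                        # replicate counts per dilution
    [1.02e6, 0.98e6, 1.01e6],
    [0.77e6, 0.74e6, 0.76e6],
    [0.49e6, 0.52e6, 0.50e6],
    [0.26e6, 0.24e6, 0.25e6],
])

means = counts.mean(axis=1)
cv = counts.std(axis=1, ddof=1) / means                    # precision per dilution level

slope = (dilution @ means) / (dilution @ dilution)         # least-squares fit through origin
pred = slope * dilution
ss_res = np.sum((means - pred) ** 2)
ss_tot = np.sum((means - means.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                                  # proportionality indicator

print("replicate CV per level:", np.round(cv, 3))
print(f"proportional-fit R^2:  {r2:.4f}")
```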
Rathnayaka, C M; Karunasena, H C P; Senadeera, W; Gu, Y T
2018-03-14
Numerical modelling has gained popularity in many science and engineering streams due to its economic feasibility and advanced analytical features compared to conventional experimental and theoretical models. Food drying is one of the areas where numerical modelling is increasingly applied to improve drying process performance and product quality. This investigation applies a three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) and Coarse-Grained (CG) numerical approach to predict the morphological changes of different categories of food-plant cells such as apple, grape, potato and carrot during drying. To validate the model predictions, experimental findings from in-house experimental procedures (for apple) and literature sources (for grape, potato and carrot) have been utilised. The subsequent comparison indicates that the model predictions are in reasonable agreement with the experimental findings, both qualitatively and quantitatively. In this numerical model, a higher computational accuracy has been maintained by limiting the consistency error below 1% for all four cell types. The proposed meshfree-based approach is well-equipped to predict the morphological changes of plant cellular structure over a wide range of moisture contents (10% to 100% dry basis). Compared to the previous 2-D meshfree-based models developed for plant cell drying, the proposed model can draw more useful insights on the morphological behaviour due to its 3-D nature. In addition, the proposed computational modelling approach has a high potential to be used as a comprehensive tool in many other tissue morphology related investigations.
Arithmetic Circuit Verification Based on Symbolic Computer Algebra
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Homma, Naofumi; Aoki, Takafumi; Higuchi, Tatsuo
This paper presents a formal approach to verify arithmetic circuits using symbolic computer algebra. Our method describes arithmetic circuits directly with high-level mathematical objects based on weighted number systems and arithmetic formulae. Such a circuit description can be effectively verified by polynomial reduction techniques using Gröbner bases. In this paper, we describe how symbolic computer algebra can be used to describe and verify arithmetic circuits. The advantageous effects of the proposed approach are demonstrated through experimental verification of some arithmetic circuits such as a multiply-accumulator and an FIR filter. The results show that the proposed approach is capable of verifying practical arithmetic circuits.
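A small sketch of the polynomial-reduction idea, assuming SymPy; the half-adder example and the Boolean-field constraints are illustrative choices, not the weighted-number-system formulation used in the paper.

```python
# Sketch: verify a half adder by reducing its specification modulo the ideal
# generated by the circuit's polynomial description (zero remainder => correct).
from sympy import symbols, groebner, reduced

a, b, s, c = symbols("a b s c")

circuit = [
    s - (a + b - 2*a*b),   # sum output: XOR written as a polynomial
    c - a*b,               # carry output: AND
    a**2 - a,              # Boolean constraints on the inputs
    b**2 - b,
]
spec = (a + b) - (2*c + s)  # arithmetic specification: a + b = 2*carry + sum

G = groebner(circuit, s, c, a, b, order="lex")
_, remainder = reduced(spec, list(G), s, c, a, b, order="lex")
print("circuit correct:", remainder == 0)   # expected: True
```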
Experimental design and quantitative analysis of microbial community multiomics.
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis
2017-11-30
Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.
Laboratory test methods for combustion stability properties of solid propellants
NASA Technical Reports Server (NTRS)
Strand, L. D.; Brown, R. S.
1992-01-01
An overview is presented of experimental methods for determining the combustion-stability properties of solid propellants. The methods are generally based on either the temporal response to an initial disturbance or on external methods for generating the required oscillations. The size distribution of condensed-phase combustion products is characterized by means of the experimental approaches. The 'T-burner' approach is shown to assist in the derivation of pressure-coupled driving contributions and particle damping in solid-propellant rocket motors. Other techniques examined include the rotating-valve apparatus, the impedance tube, the modulated throat-acoustic damping burner, and the magnetic flowmeter. The paper shows that experimental methods do not exist for measuring the interactions between acoustic velocity oscillations and burning propellant.
CFD Modeling of a CFB Riser Using Improved Inlet Boundary Conditions
NASA Astrophysics Data System (ADS)
Peng, B. T.; Zhang, C.; Zhu, J. X.; Qi, X. B.
2010-03-01
A computational fluid dynamics (CFD) model based on the Eulerian-Eulerian approach coupled with granular kinetic theory was adopted to investigate the hydrodynamics and flow structures in a circulating fluidized bed (CFB) riser column. A new approach to specifying the inlet boundary conditions was proposed in this study to simulate gas-solids flow in CFB risers more accurately. Simulation results were compared with the experimental data, and good agreement between the numerical results and experimental data was observed under different operating conditions, which indicates the effectiveness and accuracy of the CFD model with the proposed inlet boundary conditions. The results also illustrate a clear core-annulus structure in the CFB riser under all operating conditions, both experimentally and numerically.
Adaptive Control of Four-Leg VSC Based DSTATCOM in Distribution System
NASA Astrophysics Data System (ADS)
Singh, Bhim; Arya, Sabha Raj
2014-01-01
This work discusses the experimental performance of a four-leg Distribution Static Compensator (DSTATCOM) using an adaptive filter based approach. The approach is used to estimate reference supply currents by extracting the fundamental active power components of three-phase distorted load currents. This control algorithm is implemented on an assembled DSTATCOM for harmonics elimination, neutral current compensation and load balancing under nonlinear loads. Experimental results are discussed, and it is observed that the DSTATCOM is an effective solution, performing satisfactorily under load dynamics.
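A minimal sketch of one common adaptive-filter scheme, an LMS weight update tracking the fundamental in-phase component of a distorted load current; the LMS form, step size, and signal names are assumptions for illustration and not necessarily the exact algorithm implemented on the DSTATCOM.

```python
# Sketch: LMS-style extraction of the fundamental active (in-phase) component
# of a distorted load current, used to build the reference supply current.
import numpy as np

fs, f0 = 10_000, 50                        # sampling rate (Hz), fundamental frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
unit_inphase = np.sin(2 * np.pi * f0 * t)  # unit template in phase with the supply voltage

# Distorted load current: fundamental plus 5th and 7th harmonics
i_load = (10 * np.sin(2 * np.pi * f0 * t)
          + 2 * np.sin(2 * np.pi * 5 * f0 * t)
          + 1 * np.sin(2 * np.pi * 7 * f0 * t))

w, mu = 0.0, 0.02                          # adaptive weight and LMS step size
for k in range(len(t)):
    est = w * unit_inphase[k]              # estimated fundamental active component
    err = i_load[k] - est
    w = w + mu * err * unit_inphase[k]     # LMS weight update

print(f"estimated fundamental amplitude: {w:.2f} (true value: 10)")
```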
An Approach to the Evaluation of Hypermedia.
ERIC Educational Resources Information Center
Knussen, Christina; And Others
1991-01-01
Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…
EXPERIMENTAL AND THEORETICAL EVALUATIONS OF OBSERVATIONAL-BASED TECHNIQUES
Observational Based Methods (OBMs) can be used by EPA and the States to develop reliable ozone control approaches. OBMs use actual measured concentrations of ozone, its precursors, and other indicators to determine the most appropriate strategy for ozone control. The usual app...
Streamlined approaches that use in vitro experimental data to predict chemical toxicokinetics (TK) are increasingly being used to perform risk-based prioritization based upon dosimetric adjustment of high-throughput screening (HTS) data across thousands of chemicals. However, ass...
Stone-Weiss, Nicholas; Pierce, Eric M; Youngman, Randall E; Gulbiten, Ozgur; Smith, Nicholas J; Du, Jincheng; Goel, Ashutosh
2018-01-01
The past decade has witnessed a significant upsurge in the development of borate and borosilicate based resorbable bioactive glasses owing to their faster degradation rate in comparison to their silicate counterparts. However, due to our lack of understanding about the fundamental science governing the aqueous corrosion of these glasses, most of the borate/borosilicate based bioactive glasses reported in the literature have been designed by a "trial-and-error" approach. With an ever-increasing demand for their application in treating a broad spectrum of non-skeletal health problems, it is becoming increasingly difficult to design advanced glass formulations using the same conventional approach. Therefore, a paradigm shift from the "trial-and-error" approach to a "materials-by-design" approach is required to develop new generations of bioactive glasses with controlled release of functional ions tailored for specific patients and disease states, whereby material functions and properties can be predicted from first principles. Realizing this goal, however, requires a thorough understanding of the complex sequence of reactions that control the dissolution kinetics of bioactive glasses and the structural drivers that govern them. While there is a considerable amount of literature published on chemical dissolution behavior and apatite-forming ability of potentially bioactive glasses, the majority of this literature has been produced on silicate glass chemistries using different experimental and measurement protocols. It follows that inter-comparison of different datasets reveals inconsistencies between experimental groups. There are also some major experimental challenges or choices that need to be carefully navigated to unearth the mechanisms governing the chemical degradation behavior and kinetics of boron-containing bioactive glasses, and to accurately determine the composition-structure-property relationships. In order to address these challenges, a simplified borosilicate based model melt-quenched bioactive glass system has been studied to depict the impact of thermal history on its molecular structure and dissolution behavior in water. It has been shown that the methodology of quenching of the glass melt impacts the dissolution rate of the studied glasses by 1.5×-3× depending on the changes induced in their molecular structure due to variation in thermal history. Further, a recommendation has been made to study dissolution behavior of bioactive glasses using a surface-area-of-sample to volume-of-solution (SA/V) approach instead of the currently followed mass-of-sample to volume-of-solution approach. The structural and chemical dissolution data obtained from bioactive glasses following the approach presented in this paper can be used to develop the structural descriptors and potential energy functions over a broad range of bioactive glass compositions. Realizing the goal of designing third generation bioactive glasses requires a thorough understanding of the complex sequence of reactions that control their rate of degradation (in physiological fluids) and the structural drivers that control them. In this article, we have highlighted some major experimental challenges and choices that need to be carefully navigated in order to unearth the mechanisms governing the chemical dissolution behavior of borosilicate based bioactive glasses.
The proposed experimental approach allows us to gain a new level of conceptual understanding about the composition-structure-property relationships in these glass systems, which can be applied to attain a significant leap in designing borosilicate based bioactive glasses with controlled dissolution rates tailored for specific patient and disease states. Copyright © 2017 Acta Materialia Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shrestha, K.; Gofryk, K.
2018-04-01
We have designed and developed a new experimental setup, based on the 3ω method, to measure thermal conductivity, heat capacity, and electrical resistivity of a variety of samples in a broad temperature range (2-550 K) and under magnetic fields up to 9 T. The validity of this method is tested by measuring various types of metallic (copper, platinum, and constantan) and insulating (SiO2) materials, which have a wide range of thermal conductivity values (1-400 W m-1 K-1). We have successfully employed this technique for measuring the thermal conductivity of two actinide single crystals: uranium dioxide and uranium nitride. This new experimental approach for studying nuclear materials will help us to advance reactor fuel development and understanding. We have also shown that this experimental setup can be adapted to the Physical Property Measurement System (Quantum Design) environment and/or other cryocooler systems.
Computational Approaches to Nucleic Acid Origami.
Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo
2015-10-12
Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible. PMID:22164117
Studying light-harvesting models with superconducting circuits.
Potočnik, Anton; Bargerbos, Arno; Schröder, Florian A Y N; Khan, Saeed A; Collodo, Michele C; Gasparinetti, Simone; Salathé, Yves; Creatore, Celestino; Eichler, Christopher; Türeci, Hakan E; Chin, Alex W; Wallraff, Andreas
2018-03-02
The process of photosynthesis, the main source of energy in the living world, converts sunlight into chemical energy. The high efficiency of this process is believed to be enabled by an interplay between the quantum nature of molecular structures in photosynthetic complexes and their interaction with the environment. Investigating these effects in biological samples is challenging due to their complex and disordered structure. Here we experimentally demonstrate a technique for studying photosynthetic models based on superconducting quantum circuits, which complements existing experimental, theoretical, and computational approaches. We demonstrate a high degree of freedom in design and experimental control of our approach based on a simplified three-site model of a pigment protein complex with realistic parameters scaled down in energy by a factor of 10^5. We show that the excitation transport between quantum-coherent sites disordered in energy can be enabled through the interaction with environmental noise. We also show that the efficiency of the process is maximized for structured noise resembling intramolecular phononic environments found in photosynthetic complexes.
Bauer, Sebastian; van Alphen, Natascha; Becker, Albert; Chiocchetti, Andreas; Deichmann, Ralf; Deller, Thomas; Freiman, Thomas; Freitag, Christine M; Gehrig, Johannes; Hermsen, Anke M; Jedlicka, Peter; Kell, Christian; Klein, Karl Martin; Knake, Susanne; Kullmann, Dimitri M; Liebner, Stefan; Norwood, Braxton A; Omigie, Diana; Plate, Karlheinz; Reif, Andreas; Reif, Philipp S; Reiss, Yvonne; Roeper, Jochen; Ronellenfitsch, Michael W; Schorge, Stephanie; Schratt, Gerhard; Schwarzacher, Stephan W; Steinbach, Joachim P; Strzelczyk, Adam; Triesch, Jochen; Wagner, Marlies; Walker, Matthew C; von Wegner, Frederic; Rosenow, Felix
2017-11-01
Despite the availability of more than 15 new "antiepileptic drugs", the proportion of patients with pharmacoresistant epilepsy has remained constant at about 20-30%. Furthermore, no disease-modifying treatments shown to prevent the development of epilepsy following an initial precipitating brain injury or to reverse established epilepsy have been identified to date. This is likely in part due to the polyetiologic nature of epilepsy, which in turn requires personalized medicine approaches. Recent advances in imaging, pathology, genetics, and epigenetics have led to new pathophysiological concepts and the identification of monogenic causes of epilepsy. In the context of these advances, the First International Symposium on Personalized Translational Epilepsy Research (1st ISymPTER) was held in Frankfurt on September 8, 2016, to discuss novel approaches and future perspectives for personalized translational research. These included new developments and ideas in a range of experimental and clinical areas such as deep phenotyping, quantitative brain imaging, EEG/MEG-based analysis of network dysfunction, tissue-based translational studies, innate immunity mechanisms, microRNA as treatment targets, functional characterization of genetic variants in human cell models and rodent organotypic slice cultures, personalized treatment approaches for monogenic epilepsies, blood-brain barrier dysfunction, therapeutic focal tissue modification, computational modeling for target and biomarker identification, and cost analysis in (monogenic) disease and its treatment. This report on the meeting proceedings is aimed at stimulating much needed investments of time and resources in personalized translational epilepsy research. This Part II includes the experimental and translational approaches and a discussion of the future perspectives, while the diagnostic methods, EEG network analysis, biomarkers, and personalized treatment approaches were addressed in Part I [1]. Copyright © 2017. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli
This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram), and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description included composition ranges typical for coating alloys and, hence, allows for prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad - for more rapid discovery and development of new materials.
Petukh, Marharyta; Li, Minghui; Alexov, Emil
2015-07-01
A new methodology termed Single Amino Acid Mutation based change in Binding free Energy (SAAMBE) was developed to predict the changes of the binding free energy caused by mutations. The method utilizes 3D structures of the corresponding protein-protein complexes and takes advantage of both sequence- and structure-based approaches. The method has two components: a MM/PBSA-based component, and an additional set of statistical terms delivered from statistical investigation of physico-chemical properties of protein complexes. While the approach is a rigid-body approach and does not explicitly consider plausible conformational changes caused by binding, the effect of conformational changes, including changes away from the binding interface, on electrostatics is mimicked with amino acid-specific dielectric constants. This provides significant improvement of SAAMBE predictions, as indicated by a better match against experimentally determined binding free energy changes over 1300 mutations in 43 proteins. The final benchmarking resulted in a very good agreement with experimental data (correlation coefficient 0.624), while the algorithm is fast enough to allow for large-scale calculations (the average time is less than a minute per mutation).
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e.: 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulty in the convergence of 3D algorithms can discourage the use of this technique to obtain information on source depth and intensity. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions to obtain the source depth and its intensity using a pixel-based fit of source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method to obtain the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the source depth of Cerenkov luminescence with a simple and flexible procedure.
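A hedged sketch of a per-pixel multispectral depth fit, assuming a simple exponential attenuation model I(lambda, d) = I0(lambda) * exp(-mu_eff(lambda) * d); the attenuation coefficients and the two-parameter model are illustrative assumptions rather than the exact forward model used in the paper.

```python
# Sketch: per-pixel source depth from multispectral Cerenkov images,
# assuming I(lambda, d) = I0(lambda) * exp(-mu_eff(lambda) * d).
import numpy as np
from scipy.optimize import curve_fit

wavelengths = np.array([600.0, 640.0, 680.0, 720.0])   # nm, acquisition bands
mu_eff = np.array([2.0, 1.6, 1.3, 1.1])                 # 1/cm, assumed effective attenuation

def model(mu, depth, amplitude):
    return amplitude * np.exp(-mu * depth)

def fit_pixel(intensities):
    """intensities: measured signal in one pixel for each spectral band."""
    popt, _ = curve_fit(model, mu_eff, intensities, p0=[0.5, intensities.max()],
                        bounds=([0.0, 0.0], [5.0, np.inf]))
    depth, amplitude = popt
    return depth, amplitude

# Synthetic test: a source buried at 0.8 cm
true_depth, true_amp = 0.8, 1000.0
measured = model(mu_eff, true_depth, true_amp) * (1 + 0.02 * np.random.randn(len(mu_eff)))
print(fit_pixel(measured))   # ~ (0.8, 1000)
```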
NASA Astrophysics Data System (ADS)
Avitabile, P.; O'Callahan, J.
2003-07-01
Inclusion of rotational effects is critical for the accuracy of the predicted system characteristics in almost all system modelling studies. However, experimentally derived information for the description of one or more of the components of the system will generally not have any rotational effects included in the description of the component. The lack of rotational effects has long affected the results from any system model development, whether using a modal-based approach or an impedance-based approach. Several new expansion processes are described herein for the development of the FRFs needed for impedance-based system models. These techniques expand experimentally derived mode shapes, residual modes from the modal parameter estimation process and FRFs directly, to allow for the inclusion of the necessary rotational dofs. The FRFs involving translational-to-rotational dofs are developed, as well as the rotational-to-rotational dofs. Examples are provided to show the use of these techniques.
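A minimal numpy sketch of a modal expansion of the kind referred to above (a SEREP-style pseudo-inverse projection from measured translational dofs to the full set including rotational dofs); the matrix names and sizes are illustrative assumptions, not the specific expansion formulations developed in the paper.

```python
# Sketch: expand responses measured at translational dofs to unmeasured
# (e.g., rotational) dofs using an analytical mode-shape matrix.
import numpy as np

rng = np.random.default_rng(0)

n_full, n_meas, n_modes = 12, 6, 4
Phi_full = rng.standard_normal((n_full, n_modes))   # FE mode shapes at all dofs
meas_dofs = np.arange(n_meas)                        # dofs where data were measured
Phi_meas = Phi_full[meas_dofs, :]                    # mode-shape partition at measured dofs

# Measured response (one frequency line / time sample) at translational dofs
x_meas = Phi_meas @ rng.standard_normal(n_modes)

# SEREP-style expansion: modal coordinates via pseudo-inverse, then map to all dofs
q = np.linalg.pinv(Phi_meas) @ x_meas
x_expanded = Phi_full @ q                            # includes the unmeasured (rotational) dofs

print(x_expanded.shape)   # (12,)
```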
Wahman, David G; Speitel, Gerald E; Katz, Lynn E
2017-11-21
Chloramine chemistry is complex, with a variety of reactions occurring in series and parallel and many that are acid or base catalyzed, resulting in numerous rate constants. Bromide presence increases system complexity even further with possible bromamine and bromochloramine formation. Therefore, techniques for parameter estimation must address this complexity through thoughtful experimental design and robust data analysis approaches. The current research outlines a rational basis for constrained data fitting using Brønsted theory, application of the microscopic reversibility principle to reversible acid or base catalyzed reactions, and characterization of the relative significance of parallel reactions using fictive product tracking. This holistic approach was used on a comprehensive and well-documented data set for bromamine decomposition, allowing new interpretations of existing data by revealing that a previously published reaction scheme was not robust; it was not able to describe monobromamine or dibromamine decay outside of the conditions for which it was calibrated. The current research's simplified model (3 reactions, 17 constants) represented the experimental data better than the previously published model (4 reactions, 28 constants). A final model evaluation was conducted based on representative drinking water conditions to determine a minimal model (3 reactions, 8 constants) applicable for drinking water conditions.
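For context on the two constraints named above, a hedged sketch of the generic forms typically used: a Brønsted-type linear free-energy relation for acid-catalyzed rate constants and the microscopic-reversibility link between forward and reverse rate constants; the symbols are generic, not the paper's notation.

```latex
% Bronsted-type relation linking a catalyzed rate constant to catalyst acidity,
% and microscopic reversibility tying forward/reverse constants to the equilibrium constant.
\[
  \log k_{\mathrm{HA}} = -\alpha\,\mathrm{p}K_{a,\mathrm{HA}} + C ,
  \qquad
  \frac{k_{\mathrm{f}}}{k_{\mathrm{r}}} = K_{\mathrm{eq}}
\]
```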
Cai, Meng-Qiang; Wang, Zhou-Xiang; Liang, Juan; Wang, Yan-Kun; Gao, Xu-Zhen; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian
2017-08-01
The scheme for generating vector optical fields should have not only high efficiency but also flexibility for satisfying the requirements of various applications. However, in general, high efficiency and flexibility are not compatible. Here we present and experimentally demonstrate a solution to directly, flexibly, and efficiently generate vector vortex optical fields (VVOFs) with a reflective phase-only liquid crystal spatial light modulator (LC-SLM) based on the optical birefringence of liquid crystal molecules. To generate the VVOFs, this approach needs in principle only a half-wave plate, an LC-SLM, and a quarter-wave plate. This approach has some advantages, including a simple experimental setup, good flexibility, and high efficiency, making the approach very promising in applications where higher power is needed. This approach has a generation efficiency of 44.0%, which is much higher than the 1.1% of the common-path interferometric approach.
Furedy, John J
2003-11-01
The differential/experimental distinction that Cronbach specified is important because any adequate account of psychological phenomena requires the recognition of the validity of both approaches, and a meaningful melding of the two. This paper suggests that Pavlov's work in psychology, based on earlier traditions of inquiry that can be traced back to the pre-Socratics, provides a potential way of achieving this melding, although such features as systematic rather than anecdotal methods of observation need to be added. Pavlov's methodological behaviorist approach is contrasted with metaphysical behaviorism (as exemplified explicitly in Watson and Skinner, and implicitly in the computer-metaphorical, information-processing explanations employed by current "cognitive" psychology). A common feature of the metaphysical approach is that individual-differences variables like sex are essentially ignored, or relegated to ideological categories such as the treatment of sex as merely a "social construction." Examples of research both before and after the "cognitive revolution" are presented where experimental and differential methods are melded, and individual differences are treated as phenomena worthy of investigation rather than as nuisance factors that merely add to experimental error.
Metainference: A Bayesian inference method for heterogeneous systems.
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.
Metacognitive Skills Development: A Web-Based Approach in Higher Education
ERIC Educational Resources Information Center
Shen, Chun-Yi; Liu, Hsiu-Chuan
2011-01-01
Although there were studies that presented the applications of metacognitive skill training, the research on web-based metacognitive skills training are few. The purpose of this study is to design a web-based learning environment and further examine the effect of the web-based training. A pretest-posttest quasi-experimental design was used in this…
Effects of a Format-based Second Language Teaching Method in Kindergarten.
ERIC Educational Resources Information Center
Uilenburg, Noelle; Plooij, Frans X.; de Glopper, Kees; Damhuis, Resi
2001-01-01
Focuses on second language teaching with a format-based method. The differences between a format-based teaching method and a standard approach used as treatments in a quasi-experimental, non-equivalent control group are described in detail. Examines whether the effects of a format-based teaching method and a standard foreign language method differ…
Descriptive vs. mechanistic network models in plant development in the post-genomic era.
Davila-Velderrain, J; Martinez-Garcia, J C; Alvarez-Buylla, E R
2015-01-01
Network modeling is now a widespread practice in systems biology, as well as in integrative genomics, and it constitutes a rich and diverse scientific research field. A conceptually clear understanding of the reasoning behind the main existing modeling approaches, and their associated technical terminologies, is required to avoid confusion and accelerate the transition towards the undeniably necessary, more quantitative, multidisciplinary approach to biology. Herein, we focus on two main network-based modeling approaches that are commonly used depending on the information available and the intended goals: inference-based methods and system dynamics approaches. As far as data-based network inference methods are concerned, they enable the discovery of potential functional influences among molecular components. On the other hand, experimentally grounded network dynamical models have been shown to be perfectly suited for the mechanistic study of developmental processes. How do these two perspectives relate to each other? In this chapter, we describe and compare both approaches and then apply them to a given specific developmental module. Along with the step-by-step practical implementation of each approach, we also focus on discussing their respective goals, utility, assumptions, and associated limitations. We use the gene regulatory network (GRN) involved in Arabidopsis thaliana Root Stem Cell Niche patterning as our illustrative example. We show that descriptive models based on functional genomics data can provide important background information consistent with experimentally supported functional relationships integrated in mechanistic GRN models. The rationale of analysis and modeling can be applied to any other well-characterized functional developmental module in multicellular organisms, like plants and animals.
Xiawa Wu; Robert J. Moon; Ashlie Martini
2013-01-01
The elastic modulus of cellulose Iβ in the axial and transverse directions was obtained from atomistic simulations using both the standard uniform deformation approach and a complementary approach based on nanoscale indentation. This allowed comparisons between the methods and closer connectivity to experimental measurement techniques. A reactive...
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
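A hedged sketch of the similarity measure described above: represent each symbol's sample points as a 2D kernel density and compare the densities with a discretized Kullback-Leibler divergence. The grid evaluation and the use of scipy's Gaussian KDE are assumptions for illustration, not the paper's exact implementation.

```python
# Sketch: similarity between two graphic symbols via 2D kernel densities
# compared with a discretized Kullback-Leibler divergence.
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence_2d(points_p, points_q, grid_n=64):
    """points_*: (2, N) arrays of sample points (e.g., symbol contour pixels)."""
    kde_p, kde_q = gaussian_kde(points_p), gaussian_kde(points_q)
    xs = np.linspace(-3, 3, grid_n)
    xx, yy = np.meshgrid(xs, xs)
    grid = np.vstack([xx.ravel(), yy.ravel()])
    p = kde_p(grid) + 1e-12
    q = kde_q(grid) + 1e-12
    p, q = p / p.sum(), q / q.sum()           # normalize on the evaluation grid
    return float(np.sum(p * np.log(p / q)))   # KL(P || Q); smaller = more similar

rng = np.random.default_rng(1)
symbol_a = rng.standard_normal((2, 400)) * 0.5
symbol_b = rng.standard_normal((2, 400)) * 0.5 + 0.2
print(f"KL divergence: {kl_divergence_2d(symbol_a, symbol_b):.3f}")
```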
ERIC Educational Resources Information Center
Mitra, Jay
2017-01-01
This article explores the development of a comprehensive and systemic approach to entrepreneurship education at a research-intensive university in the United Kingdom. The exploration is based on two key conceptual challenges: (a) taking entrepreneurship to mean something more than new business creation and (b) differentiating between…
Evaluating the Effectiveness of Remedial Reading Courses at Community Colleges: A Quantitative Study
ERIC Educational Resources Information Center
Lavonier, Nicole
2014-01-01
The present study evaluated the effectiveness of two instructional approaches for remedial reading courses at a community college. The instructional approaches were strategic reading and traditional, textbook-based instruction. The two research questions that guided the quantitative, quasi-experimental study were: (a) what is the effect of…
Novel Computational Approaches to Drug Discovery
NASA Astrophysics Data System (ADS)
Skolnick, Jeffrey; Brylinski, Michal
2010-01-01
New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.
Modeling an alkaline electrolysis cell through reduced-order and loss-estimate approaches
NASA Astrophysics Data System (ADS)
Milewski, Jaroslaw; Guandalini, Giulio; Campanari, Stefano
2014-12-01
The paper presents two approaches to the mathematical modeling of an Alkaline Electrolyzer Cell. The presented models were compared and validated against available experimental results taken from a laboratory test and against literature data. The first modeling approach is based on the analysis of estimated losses due to the different phenomena occurring inside the electrolytic cell, and requires careful calibration of several specific parameters (e.g. those related to the electrochemical behavior of the electrodes) some of which could be hard to define. An alternative approach is based on a reduced-order equivalent circuit, resulting in only two fitting parameters (electrodes specific resistance and parasitic losses) and calculation of the internal electric resistance of the electrolyte. Both models yield satisfactory results with an average error limited below 3% vs. the considered experimental data and show the capability to describe with sufficient accuracy the different operating conditions of the electrolyzer; the reduced-order model could be preferred thanks to its simplicity for implementation within plant simulation tools dealing with complex systems, such as electrolyzers coupled with storage facilities and intermittent renewable energy sources.
Experimental Evaluation of Unicast and Multicast CoAP Group Communication
Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet
2016-01-01
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
X-ray natural widths, level widths and Coster-Kronig transition probabilities
NASA Astrophysics Data System (ADS)
Papp, T.; Campbell, J. L.; Varga, D.
1997-01-01
A critical review is given for the K-N7 atomic level widths. The experimental level widths were collected from x-ray photoelectron spectroscopy (XPS), x-ray emission spectroscopy (XES), x-ray spectra fluoresced by synchrotron radiation, and photoelectrons from x-ray absorption (PAX). There are only limited atomic number ranges for a few atomic levels where data are available from more than one source. Generally the experimental level widths have large scatter compared to the reported error bars. The experimental data are compared with the recent tabulation of Perkins et al. and of Ohno et al. Ohno et al. performed a many body approach calculation for limited atomic number ranges and have obtained reasonable agreement with the experimental data. Perkins et al. presented a tabulation covering the K-Q1 shells of all atoms, based on extensions of the Scofield calculations for radiative rates and extensions of the Chen calculations for non-radiative rates. The experimental data are in disagreement with this tabulation, in excess of a factor of two in some cases. A short introduction to the experimental Coster-Kronig transition probabilities is presented. It is our opinion that the different experimental approaches result in systematically different experimental data.
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.
2007-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic License Plate Recognition (ALPR) systems. Several car MMR systems have been proposed in the literature. However, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We have shown that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that, in general, the 2D-LDA based algorithm surpasses the performance of the PCA based approach.
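A minimal numpy sketch of the 2D-LDA projection idea (image-level between-class and within-class scatter matrices, with the leading generalized eigenvectors as projection directions); the array shapes and the toy data are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: two-dimensional LDA on image matrices for make/model recognition.
import numpy as np

def two_d_lda(images, labels, n_components=5):
    """images: (N, H, W) grayscale images; labels: (N,) class ids.
    Returns a (W, n_components) projection matrix applied on the right: X @ P."""
    classes = np.unique(labels)
    global_mean = images.mean(axis=0)
    H, W = global_mean.shape
    Sb = np.zeros((W, W))
    Sw = np.zeros((W, W))
    for c in classes:
        Xc = images[labels == c]
        mc = Xc.mean(axis=0)
        diff = mc - global_mean
        Sb += len(Xc) * diff.T @ diff        # between-class image scatter
        for x in Xc:
            d = x - mc
            Sw += d.T @ d                    # within-class image scatter
    # Generalized eigenproblem Sb p = lambda Sw p, solved via pinv(Sw) @ Sb
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

rng = np.random.default_rng(0)
images = rng.random((40, 32, 48))            # toy stand-in for car ROI images
labels = np.repeat(np.arange(8), 5)
P = two_d_lda(images, labels)
features = images @ P                        # (40, 32, n_components) feature matrices
print(features.shape)
```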
NASA Astrophysics Data System (ADS)
Pegu, David; Deb, Jyotirmoy; Saha, Sandip Kumar; Paul, Manoj Kumar; Sarkar, Utpal
2018-05-01
In this work, we have synthesized a new coumarin Schiff base molecule, viz., 6-(4-n-heptyloxybenzyoloxy)-2-hydroxybenzylidene)amino)-2H-chromen-2-one, and characterized its structural, electronic and spectroscopic properties experimentally and theoretically. The theoretical analysis of the UV-visible absorption spectra reflects a red shift in the absorption maximum in comparison to the experimental results. Most of the vibrational assignments of the infrared and Raman spectra predicted using the density functional theory approach match well with the experimental findings. Further, the chemical reactivity analysis confirms that the solvent strongly affects the reactivity of the studied compound. The large hyperpolarizability value of the compound indicates that the system exhibits significant nonlinear optical features and thus points to its potential for designing materials with high nonlinear activity.
Modeling of circulating fluidized beds for post-combustion carbon capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, A.; Shadle, L.; Miller, D.
2011-01-01
A compartment-based model for a circulating fluidized bed reactor has been developed based on experimental observations of riser hydrodynamics. The model uses a cluster-based approach to describe the two-phase behavior of circulating fluidized beds. Fundamental mass balance equations have been derived to describe the movement of both gas and solids through the system. Additional work is being performed to develop the correlations required to describe the hydrodynamics of the system. Initial testing of the model with experimental data shows promising results and highlights the importance of including end effects within the model.
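A hedged sketch of what a compartment-wise solids mass balance can look like, assuming a simple chain of well-mixed compartments along the riser; the exchange coefficients and the first-order outflow law are illustrative assumptions, not the correlations or cluster physics of the actual model.

```python
# Sketch: solids mass balance for a chain of riser compartments,
# dm_i/dt = F_in,i - F_out,i with F_out,i = k_i * m_i (well-mixed compartments).
import numpy as np
from scipy.integrate import solve_ivp

n = 5                        # number of compartments along the riser
k = np.full(n, 2.0)          # 1/s, assumed compartment exchange coefficients
feed = 1.0                   # kg/s, solids feed to the bottom compartment

def balances(t, m):
    f_out = k * m                                    # outflow from each compartment
    f_in = np.concatenate(([feed], f_out[:-1]))      # inflow = feed or upstream outflow
    return f_in - f_out

sol = solve_ivp(balances, (0.0, 10.0), y0=np.zeros(n), max_step=0.05)
print("steady-state holdup per compartment:", np.round(sol.y[:, -1], 3))  # ~ feed/k
```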
Patterns recognition of electric brain activity using artificial neural networks
NASA Astrophysics Data System (ADS)
Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.
2017-04-01
An approach is presented for recognizing various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. We construct an artificial neural network based on a perceptron architecture and demonstrate that, after training, it reliably identifies cube-recognition processes, for example, left- or right-oriented Necker cubes with different edge intensities, and is effective for pattern recognition in the experimental EEG.
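A minimal sketch of the perceptron-classification step, assuming scikit-learn; the synthetic feature vectors and labels stand in for EEG-derived features and the two perceptual states, and are not the authors' data or architecture.

```python
# Sketch: perceptron-style classification of EEG feature vectors into two
# perceptual states (e.g., left- vs right-oriented Necker cube interpretation).
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for band-power features extracted from EEG epochs
n_epochs, n_features = 400, 16
X = rng.standard_normal((n_epochs, n_features))
y = (X[:, :4].sum(axis=1) > 0).astype(int)      # toy label: "left" vs "right" percept

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = Perceptron(max_iter=1000, tol=1e-3).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```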
Inquiry-based experiments for large-scale introduction to PCR and restriction enzyme digests.
Johanson, Kelly E; Watt, Terry J
2015-01-01
Polymerase chain reaction and restriction endonuclease digest are important techniques that should be included in all Biochemistry and Molecular Biology laboratory curriculums. These techniques are frequently taught at an advanced level, requiring many hours of student and faculty time. Here we present two inquiry-based experiments that are designed for introductory laboratory courses and combine both techniques. In both approaches, students must determine the identity of an unknown DNA sequence, either a gene sequence or a primer sequence, based on a combination of PCR product size and restriction digest pattern. The experimental design is flexible, and can be adapted based on available instructor preparation time and resources, and both approaches can accommodate large numbers of students. We implemented these experiments in our courses with a combined total of 584 students and have an 85% success rate. Overall, students demonstrated an increase in their understanding of the experimental topics, ability to interpret the resulting data, and proficiency in general laboratory skills. © 2015 The International Union of Biochemistry and Molecular Biology.
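A small sketch of how the expected PCR-product size and restriction pattern for a candidate sequence might be computed in silico, assuming Biopython; the primer pair, enzyme choice, and template sequence are hypothetical examples, not the actual unknowns used in the course.

```python
# Sketch: predict PCR product length and restriction fragment sizes for a
# candidate gene sequence, to compare against a student's gel results.
from Bio.Seq import Seq
from Bio.Restriction import EcoRI

template = Seq("ATGGAATTCAAGGCTTTAGCGGAATTCTTGACCGGTCATGCCATGAAGCTTGGA" * 3)
fwd_primer = "ATGGAATTCAAGG"
rev_primer_site = "AAGCTTGGA"            # site matched by the reverse primer on the top strand

start = str(template).find(fwd_primer)
end = str(template).rfind(rev_primer_site) + len(rev_primer_site)
product = template[start:end]
print("PCR product length:", len(product), "bp")

cut_positions = EcoRI.search(product)    # positions of EcoRI cut sites (1-based)
print("EcoRI cut positions:", cut_positions)

# Fragment sizes implied by the digest
edges = [0] + [p - 1 for p in cut_positions] + [len(product)]
fragments = [b - a for a, b in zip(edges[:-1], edges[1:])]
print("digest fragment sizes:", fragments)
```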
ERIC Educational Resources Information Center
Turnšek, Nada
2013-01-01
The present study is based on a quasi-experimental research design and presents the results of an evaluation of Antidiscrimination and Diversity Training that took place at the Faculty of Education in Ljubljana, rooted in the anti-bias approach to educating diversity and equality issues (Murray & Urban, 2012). The experimental group included…
Defense Science Board Task Force Report on Next-Generation Unmanned Undersea Systems
2016-10-01
active learning occurs in an environment that extends beyond choreographed demonstrations designed to validate pre-determined hypotheses. Finally, when... 4 OPNAV N99 should coordinate a broad-based design, development, and experimental effort to bypass traditional limitations for unmanned undersea... approaches that could facilitate rapid experimentation, operational demonstration of capabilities, and deployment of initial capabilities that show
"No Excuses" Charter Schools: A Meta-Analysis of the Experimental Evidence on Student Achievement
ERIC Educational Resources Information Center
Cheng, Albert; Hitt, Collin; Kisida, Brian; Mills, Jonathan N.
2017-01-01
Many of the most well-known charter schools in the United States use a "No Excuses" approach. We conduct the first meta-analysis of the achievement impacts of No Excuses charter schools, focusing on experimental, lottery-based studies. We estimate that No Excuses charter schools increase student math and literacy achievement by 0.25 and 0.17,…
Austen, Emily J.; Weis, Arthur E.
2016-01-01
Our understanding of selection through male fitness is limited by the resource demands and indirect nature of the best available genetic techniques. Applying complementary, independent approaches to this problem can help clarify evolution through male function. We applied three methods to estimate selection on flowering time through male fitness in experimental populations of the annual plant Brassica rapa: (i) an analysis of mating opportunity based on flower production schedules, (ii) genetic paternity analysis, and (iii) a novel approach based on principles of experimental evolution. Selection differentials estimated by the first method disagreed with those estimated by the other two, indicating that mating opportunity was not the principal driver of selection on flowering time. The genetic and experimental evolution methods exhibited striking agreement overall, but a slight discrepancy between the two suggested that negative environmental covariance between age at flowering and male fitness may have contributed to phenotypic selection. Together, the three methods enriched our understanding of selection on flowering time, from mating opportunity to phenotypic selection to evolutionary response. The novel experimental evolution method may provide a means of examining selection through male fitness when genetic paternity analysis is not possible. PMID:26911957
Early stage breast cancer detection by means of time-domain ultra-wide band sensing
NASA Astrophysics Data System (ADS)
Zanoon, T. F.; Abdullah, M. Z.
2011-11-01
The interest in the use of ultra-wide band (UWB) impulses for medical imaging, particularly early stage breast cancer detection, is driven by safety advantages, super-resolution capability, significant dielectric contrast between tumours and their surrounding tissues, patient convenience and low operating costs. However, inversion algorithms leading to recovery of the dielectric profile are complex in their nature, and vulnerable to noisy experimental conditions and environments. In this paper, we present a simplified yet robust gradient-based iterative image reconstruction technique to solve the nonlinear inverse scattering problem. The calculation is based on the Polak-Ribière approach, while Broyden's formula is used to update the gradient in an iterative scheme. To validate this approach, both numerical and experimental results are presented. Animal-derived biological targets in the form of chicken skin, beef and salted butter are used to construct an experimental breast phantom, while vegetable oil is used as a background medium. UWB transceivers in the form of biconical antennas contour the breast, forming a full-view scanning geometry over a frequency range of 0-5 GHz. Results indicate the feasibility of experimental detection of millimetre-scaled targets.
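A generic sketch of a Polak-Ribière conjugate-gradient iteration, the optimization scheme named above; the forward scattering model, the Broyden gradient update and the line search used in the paper are not reproduced, and the toy quadratic objective is purely illustrative.

    import numpy as np

    def polak_ribiere_cg(grad, x0, n_iter=200, step=1e-2):
        """Nonlinear conjugate gradient with the Polak-Ribiere (PR+) update."""
        x = x0.copy()
        g_old = grad(x)
        d = -g_old
        for _ in range(n_iter):
            x = x + step * d                       # fixed step in place of a line search
            g_new = grad(x)
            beta = g_new @ (g_new - g_old) / (g_old @ g_old + 1e-12)
            d = -g_new + max(beta, 0.0) * d        # PR+ restart safeguard
            g_old = g_new
        return x

    # Toy quadratic objective 0.5*||A x - b||^2 standing in for the data-misfit functional.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    grad = lambda x: A.T @ (A @ x - b)
    print("approximate minimizer:", polak_ribiere_cg(grad, np.zeros(2)))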
NASA Astrophysics Data System (ADS)
Dür, Wolfgang; Lamprecht, Raphael; Heusler, Stefan
2017-07-01
A long-range quantum communication network is among the most promising applications of emerging quantum technologies. We discuss the potential of such a quantum internet for the secure transmission of classical and quantum information, as well as theoretical and experimental approaches and recent advances towards realizing them. We illustrate the concepts involved, such as error correction, teleportation and quantum repeaters, and consider an approach to this topic based on catchy visualizations as a context-based, modern treatment of quantum theory at the high-school level.
Experimental measurements of motion cue effects on STOL approach tasks
NASA Technical Reports Server (NTRS)
Ringland, R. F.; Stapleford, R. L.
1972-01-01
An experimental program to investigate the effects of motion cues on STOL approach is presented. The simulator used was the Six-Degrees-of-Freedom Motion Simulator (S.01) at Ames Research Center of NASA which has ±2.7 m travel longitudinally and laterally and ±2.5 m travel vertically. Three major experiments, characterized as tracking tasks, were conducted under fixed and moving base conditions: (1) A simulated IFR approach of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA), (2) a simulated VFR task with the same aircraft, and (3) a single-axis task having only linear acceleration as the motion cue. Tracking performance was measured in terms of the variances of several motion variables, pilot vehicle describing functions, and pilot commentary.
A fast method for optical simulation of flood maps of light-sharing detector modules
Shi, Han; Du, Dong; Xu, JianFeng; Moses, William W.; Peng, Qiyu
2016-01-01
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. We present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up the photon tracking in detector modules constructed with polished finish and specular reflector materials. We simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200–600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and using specular reflector materials. PMID:27660376
Harmonic wavelet packet transform for on-line system health diagnosis
NASA Astrophysics Data System (ADS)
Yan, Ruqiang; Gao, Robert X.
2004-07-01
This paper presents a new approach to on-line health diagnosis of mechanical systems, based on the wavelet packet transform. Specifically, signals acquired from vibration sensors are decomposed into sub-bands by means of the discrete harmonic wavelet packet transform (DHWPT). Based on the Fisher linear discriminant criterion, features in the selected sub-bands are then used as inputs to three classifiers (Nearest Neighbor rule-based and two Neural Network-based) for system health condition assessment. Experimental results have confirmed that, compared to the conventional approach where statistical parameters from raw signals are used, the presented approach enabled a higher signal-to-noise ratio for more effective and intelligent use of the sensory information, thus leading to more accurate system health diagnosis.
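A small sketch of the sub-band feature idea using PyWavelets; the standard wavelet packet here is only a stand-in for the harmonic wavelet packet transform (DHWPT) of the paper, and the vibration signals are simulated.

    import numpy as np
    import pywt

    def subband_energies(signal, level=3, wavelet="db4"):
        """Energy of each wavelet-packet sub-band (stand-in for the DHWPT sub-bands)."""
        wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
        return np.array([np.sum(np.square(node.data)) for node in wp.get_level(level)])

    def fisher_score(feats_a, feats_b):
        """Per-sub-band Fisher criterion between two machine-condition classes."""
        mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
        return (mu_a - mu_b) ** 2 / (feats_a.var(0) + feats_b.var(0) + 1e-12)

    rng = np.random.default_rng(1)
    healthy = np.array([subband_energies(rng.normal(size=1024)) for _ in range(20)])
    faulty = np.array([subband_energies(1.5 * rng.normal(size=1024)) for _ in range(20)])
    print("sub-bands ranked by Fisher score:", np.argsort(fisher_score(healthy, faulty))[::-1])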
A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Horst; Laurischkat, Roman; Zhu Junhong
One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.
Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V
2015-12-01
Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters that are developed in a lab scale lyophilizer to commercial scale without a loss in product quality. This process is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combination of computational and experimental approaches to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally either by manometric temperature measurement (MTM) or sublimation tests and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients will vary at different lyophilization scales; hence, we present an approach to use appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict commercial scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; and because of its simple and minimalistic nature, it is also a less capital-intensive path with minimal use of expensive drug substance/active material.
1981-08-01
electro-optic effect is investigated both theoretically and experimentally. The theoretical approach is based upon W.A. Harrison's 'Bond-Orbital Model'. The separate electronic and lattice contributions to the second-order electro-optic susceptibility are examined within the context of this model and formulae which can accommodate any crystal structure are presented. In addition, a method for estimating the lattice response to a low frequency (dc) electric field is outlined. Finally, experimental measurements of the electro-
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonior, Jason D; Hu, Zhen; Guo, Terry N.
This letter presents an experimental demonstration of software-defined-radio-based wireless tomography using computer-hosted radio devices called Universal Software Radio Peripheral (USRP). This experimental brief follows our vision and previous theoretical study of wireless tomography that combines wireless communication and RF tomography to provide a novel approach to remote sensing. Automatic data acquisition is performed inside an RF anechoic chamber. Semidefinite relaxation is used for phase retrieval, and the Born iterative method is utilized for imaging the target. Experimental results are presented, validating our vision of wireless tomography.
A data driven control method for structure vibration suppression
NASA Astrophysics Data System (ADS)
Xie, Yangmin; Wang, Chao; Shi, Hang; Shi, Junwei
2018-02-01
High radio-frequency space applications have motivated continuous research on vibration suppression of large space structures in both academia and industry. This paper introduces a novel data driven control method to suppress vibrations of flexible structures and experimentally validates the suppression performance. Unlike model-based control approaches, the data driven control method designs a controller directly from the input-output test data of the structure, without requiring parametric dynamics and hence free of system modeling. It utilizes the discrete frequency response obtained via spectral analysis and formulates a non-convex optimization problem to obtain optimized controller parameters with a predefined controller structure. This approach is then experimentally applied to an end-driven flexible beam-mass structure. The experimental results show that the presented method can achieve competitive disturbance rejection compared to a model-based mixed sensitivity controller under the same design criterion, but with a much lower controller order and less design effort, demonstrating that the proposed data driven control method is an effective approach for vibration suppression of flexible structures.
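A minimal sketch of the data-driven tuning idea: controller parameters of a fixed structure are optimized directly against frequency-response samples, with no parametric plant model. The plant response, weighting and PID structure below are illustrative assumptions, not the controller structure used in the paper.

    import numpy as np
    from scipy.optimize import minimize

    w = np.logspace(-1, 2, 200)                        # test frequencies, rad/s
    G = 1.0 / ((1j * w) ** 2 + 0.05 * 1j * w + 4.0)    # synthetic "measured" flexible-mode response

    def cost(p):
        kp, ki, kd = p
        C = kp + ki / (1j * w) + kd * 1j * w           # PID controller frequency response
        S = 1.0 / (1.0 + G * C)                        # sensitivity at the measured frequencies
        return np.max(np.abs(S) / (1.0 + w / 50.0))    # weighted peak sensitivity (non-convex)

    res = minimize(cost, x0=[1.0, 1.0, 0.1], method="Nelder-Mead")
    print("tuned PID gains:", res.x)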
Homeopathic Prevention and Management of Epidemic Diseases.
Jacobs, Jennifer
2018-05-12
Homeopathy has been used to treat epidemic diseases since the time of Hahnemann, who used Belladonna to treat scarlet fever. Since then, several approaches using homeopathy for epidemic diseases have been proposed, including individualization, combination remedies, genus epidemicus, and isopathy. The homeopathic research literature was searched to find examples of each of these approaches and to evaluate which were effective. There is good experimental evidence for each of these approaches. While individualization is the gold standard, it is impractical to use on a widespread basis. Combination remedies can be effective but must be based on the symptoms of a given epidemic in a specific location. Treatment with genus epidemicus can also be successful if based on data from many practitioners. Finally, isopathy shows promise and might be more readily accepted by mainstream medicine due to its similarity to vaccination. Several different homeopathic methods can be used to treat epidemic diseases. The challenge for the future is to refine these approaches and to build on the knowledge base with additional rigorous trials. If and when conventional medicine runs out of options for treating epidemic diseases, homeopathy could be seen as an attractive alternative, but only if there is viable experimental evidence of its success. The Faculty of Homeopathy.
Jalali, Farzad; Hasani, Alireza; Hashemi, Seyedeh Fatemeh; Kimiaei, Seyed Ali; Babaei, Ali
2018-06-01
Depression is one of the most common mental disorders in prisons. People living with HIV are more likely to develop psychological difficulties when compared with the general population. This study aims to determine the efficacy of cognitive group therapy based on a schema-focused approach in reducing depression in prisoners living with HIV. The design of this study was between-groups (or "independent measures"). It was conducted with a pretest, a posttest, and a waiting list control group. The research population comprised all prisoners living with HIV in a men's prison in Iran. Based on voluntary participation, screening, and inclusion criteria, 42 prisoners living with HIV participated in this study. They were randomly assigned to an experimental group (21 prisoners) and a waiting list control group (21 prisoners). The experimental group received 11 sessions of schema-focused cognitive group therapy, while the waiting list control group received the treatment after the completion of the study. The groups were evaluated in terms of depression. ANCOVA models were employed to test the study hypotheses. Collated results indicated that depression was reduced among prisoners in the experimental group. Schema therapy (ST) could thus reduce depression among prisoners living with HIV/AIDS.
Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio
2013-08-01
The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials, although accurate spectrum information about the source-detector system is required. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Automatic negotiation: playing the domain instead of the opponent
NASA Astrophysics Data System (ADS)
Erez, Eden S.; Zuckerman, Inon; Hermel, Dror
2017-05-01
An automated negotiator is an intelligent agent whose task is to reach the best possible agreement. We explore a novel approach to developing a negotiation strategy, a 'domain-based approach'. Specifically, we use two domain parameters, reservation value and discount factor, to cluster the domain into different regions, in each of which we employ a heuristic strategy based on the notions of temporal flexibility and bargaining strength. Following the presentation of our cognitive and formal models, we show in an extensive experimental study that an agent based on that approach wins against the top agents of the automated negotiation competition of 2012 and 2013, and attained the second place in 2014.
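A toy sketch of the domain-based idea described above; the thresholds on reservation value and discount factor, and the strategy labels, are hypothetical and only illustrate how the two domain parameters could select a bidding stance.

    def pick_strategy(reservation_value: float, discount_factor: float) -> str:
        """Map the two domain parameters to a region with its own bidding heuristic."""
        patient = discount_factor > 0.7      # little time pressure
        strong = reservation_value > 0.5     # good fallback, i.e. bargaining strength
        if patient and strong:
            return "hold firm, concede late"
        if patient and not strong:
            return "concede slowly toward the reservation value"
        if not patient and strong:
            return "open high, concede quickly near the deadline"
        return "concede early to secure an agreement"

    print(pick_strategy(0.6, 0.9))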
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
An active learning framework for SVM-based and boosting-based rank learning is proposed. Our approach suggests sampling based on maximizing the estimated loss differential over unlabeled data. Experimental results on two benchmark corpora show that the proposed model substantially reduces the labeling effort and rapidly achieves superior performance, with as much as 30% relative improvement over margin-based sampling.
Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A
2006-03-09
We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach, which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the nonaqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models.
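For orientation, a minimal sketch of an idealized titration curve for independent basic sites computed from the Henderson-Hasselbalch relation; the pKa values are placeholders, and the sketch ignores the site-site interactions and solvent effects that the paper's EDMC and constant-pH MD treatments are designed to capture.

    import numpy as np

    def mean_protonation(ph, pka_values):
        """Average number of protonated amine sites at a given pH (independent sites)."""
        ph = np.asarray(ph, dtype=float)
        return sum(1.0 / (1.0 + 10.0 ** (ph - pka)) for pka in pka_values)

    ph_grid = np.linspace(2.0, 12.0, 101)
    curve = mean_protonation(ph_grid, pka_values=[8.9, 9.6, 10.1, 10.4, 10.7])  # placeholder pKa values
    print(curve[::20])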
He, Wei; Yurkevich, Igor V; Canham, Leigh T; Loni, Armando; Kaplan, Andrey
2014-11-03
We develop an analytical model based on the WKB approach to evaluate the experimental results of the femtosecond pump-probe measurements of the transmittance and reflectance obtained on thin membranes of porous silicon. The model allows us to retrieve a pump-induced nonuniform complex dielectric function change along the membrane depth. We show that the model fitting to the experimental data requires a minimal number of fitting parameters while still complying with the restriction imposed by the Kramers-Kronig relation. The developed model has a broad range of applications for experimental data analysis and practical implementation in the design of devices involving a spatially nonuniform dielectric function, such as in biosensing, wave-guiding, solar energy harvesting, photonics and electro-optical devices.
NASA Astrophysics Data System (ADS)
Celli, Paolo; Gonella, Stefano
2015-08-01
In this letter, we discuss a versatile, fully reconfigurable experimental platform for the investigation of phononic phenomena in metamaterial architectures. The approach revolves around the use of 3D laser vibrometry to reconstruct global and local wavefield features in specimens obtained through simple arrangements of LEGO® bricks on a thin baseplate. The agility by which it is possible to reconfigure the brick patterns into a nearly endless spectrum of topologies makes this an effective approach for rapid experimental proof of concept, as well as a powerful didactic tool, in the arena of phononic crystals and metamaterials engineering. We use our platform to provide a compelling visual illustration of important spatial wave manipulation effects (waveguiding and seismic isolation), and to elucidate fundamental dichotomies between Bragg-based and locally resonant bandgap mechanisms.
Adaptation of reference volumes for correlation-based digital holographic particle tracking
NASA Astrophysics Data System (ADS)
Hesseling, Christina; Peinke, Joachim; Gülker, Gerd
2018-04-01
Numerically reconstructed reference volumes tailored to the particle images are used for particle position detection by means of three-dimensional correlation. After a first tracking of these positions, the experimentally recorded particle images are retrieved as a posteriori knowledge about the particle images in the system. This knowledge is used for a further refinement of the detected positions. A transparent description of the individual algorithm steps, including the results retrieved with experimental data, completes the paper. The work employs extraordinarily small particles, smaller than the pixel pitch of the camera sensor. It is the first approach known to the authors that combines numerical knowledge about particle images with particle images retrieved from the experimental system in an iterative particle tracking approach for digital holographic particle tracking velocimetry.
NASA Astrophysics Data System (ADS)
Kuntman, Ertan; Canillas, Adolf; Arteaga, Oriol
2017-11-01
Experimental Mueller matrices contain a certain amount of uncertainty in their elements, and these uncertainties can create difficulties for decomposition methods based on analytic solutions. In an earlier paper [1], we proposed a decomposition method for depolarizing Mueller matrices by using certain symmetry conditions. However, because of experimental error, that method creates over-determined systems with non-unique solutions. Here we propose to use a least-squares minimization approach in order to improve the accuracy of our results. In this method, we take into account the number of independent parameters of the corresponding symmetry and the rank constraints on the component matrices to decide on our fitting model. This approach is illustrated with experimental Mueller matrices that include material media with different Mueller symmetries.
Three-dimensional broadband omnidirectional acoustic ground cloak
NASA Astrophysics Data System (ADS)
Zigoneanu, Lucian; Popa, Bogdan-Ioan; Cummer, Steven A.
2014-04-01
The control of sound propagation and reflection has always been the goal of engineers involved in the design of acoustic systems. A recent design approach based on coordinate transformations, which is applicable to many physical systems, together with the development of a new class of engineered materials called metamaterials, has opened the road to the unconstrained control of sound. However, the ideal material parameters prescribed by this methodology are complex and challenging to obtain experimentally, even using metamaterial design approaches. Not surprisingly, experimental demonstration of devices obtained using transformation acoustics is difficult, and has been implemented only in two-dimensional configurations. Here, we demonstrate the design and experimental characterization of an almost perfect three-dimensional, broadband, and, most importantly, omnidirectional acoustic device that renders a region of space three wavelengths in diameter invisible to sound.
NASA Astrophysics Data System (ADS)
Song, Huixu; Shi, Zhaoyao; Chen, Hongfang; Sun, Yanqiang
2018-01-01
This paper presents a novel experimental approach and a simple model, based on relative-motion reasoning, for verifying that the spherical mirror of a laser tracking system can lessen the effect of rotation errors of the gimbal mount axes. Sufficient material and evidence are provided to support the claim that this simple model can replace the complex optical system in a laser tracking system. The experimental approach and model interchange the kinematic relationship between the spherical mirror and the gimbal mount axes in the laser tracking system. With the gimbal mount axes held fixed, their rotation error motions are replaced by spatial micro-displacements of the spherical mirror. These motions are simulated by driving the spherical mirror along the optical axis and the vertical direction with a precision positioning platform. The effect on the laser ranging measurement accuracy of the displacement caused by the rotation errors of the gimbal mount axes can then be recorded from the output of a laser interferometer. The experimental results show that the laser ranging measurement error caused by the rotation errors is less than 0.1 μm if the radial and axial error motions are under 10 μm. The method based on relative-motion reasoning not only simplifies the experimental procedure but also shows that the spherical mirror is able to reduce the effect of rotation errors of the gimbal mount axes in the laser tracking system.
Armutlu, Pelin; Ozdemir, Muhittin E; Uney-Yuksektepe, Fadime; Kavakli, I Halil; Turkay, Metin
2008-10-03
A priori analysis of the activity of drugs on the target protein by computational approaches can be useful in narrowing down drug candidates for further experimental tests. Currently, there are a large number of computational methods that predict the activity of drugs on proteins. In this study, we approach the activity prediction problem as a classification problem and aim to improve the classification accuracy by introducing an algorithm that combines partial least squares regression with a mixed-integer programming based hyper-boxes classification method, where drug molecules are classified as low active or high active with regard to their binding activity (IC50 values) on target proteins. We also aim to determine the most significant molecular descriptors for the drug molecules. We first apply our approach by analyzing the activities of widely known inhibitor datasets, including Acetylcholinesterase (ACHE), Benzodiazepine Receptor (BZR), Dihydrofolate Reductase (DHFR), and Cyclooxygenase-2 (COX-2), with known IC50 values. The results at this stage proved that our approach consistently gives better classification accuracies compared to 63 other reported classification methods, such as SVM and Naïve Bayes, and we were able to predict the experimentally determined IC50 values with a worst-case accuracy of 96%. To further test the applicability of this approach, we first created a dataset for Cytochrome P450 C17 inhibitors and then predicted their activities with 100% accuracy. Our results indicate that this approach can be utilized to predict the inhibitory effects of inhibitors based on their molecular descriptors. This approach will not only enhance the drug discovery process, but also save time and resources.
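A sketch of the first stage only, assuming synthetic descriptor data: partial least squares regression followed by a simple activity threshold. The mixed-integer hyper-box classifier from the paper is not reproduced here.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 30))                        # descriptors of 120 hypothetical inhibitors
    pic50 = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=120)
    y_class = (pic50 > np.median(pic50)).astype(int)      # high- vs low-active labels

    pls = PLSRegression(n_components=5).fit(X, pic50)
    scores = pls.predict(X).ravel()
    pred_class = (scores > np.median(scores)).astype(int)
    print("training classification accuracy:", (pred_class == y_class).mean())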
Numerical and experimental study of electron-beam coatings with modifying particles FeB and FeTi
NASA Astrophysics Data System (ADS)
Kryukova, Olga; Kolesnikova, Kseniya; Gal'chenko, Nina
2016-07-01
An experimental study is presented of wear-resistant composite coatings based on titanium borides, synthesized during electron-beam welding from thermo-reacting powder components composed of a boron-containing mixture. A model of the electron-beam coating process with modifying particles of boron and titanium, based on physical-chemical transformations, is proposed. The dissolution process is described on the basis of a formal kinetic approach. The result of the numerical solution is the phase and chemical composition of the coating under nonequilibrium conditions, which is one of the important characteristics of the coating formed during electron beam processing. Qualitative agreement of the numerical calculations with the experimental data is shown.
Practical application of computer programs for supersonic combustion
NASA Technical Reports Server (NTRS)
Groves, F. R., Jr.
1972-01-01
Experimental data were interpreted using two supersonic combustion computer programs. The P1 program is based on a conventional boundary layer treatment of the mixing of concentric gas streams and complete combustion chemistry. The H1 program is based on a modified boundary layer approach which accounts for radial pressure gradients in the flow and also incorporates a finite rate chemistry calculation. The objective of the investigation was to compare the experimental data with theoretical predictions of the two programs with special emphasis on the prediction of radial pressure gradients by the H1 program. A test of the H1 program was also desired through comparison with the experimental data and with the P1 program.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
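A crude stand-in for the idea of parallel design-point selection under bounded parameter uncertainty: sample the parameter box, simulate a toy response, and pick the measurement times where the predicted dynamics are most uncertain. The sparse-grid and scenario-tree machinery of the actual method is not reproduced, and the model and bounds are illustrative.

    import numpy as np

    def toy_model(t, k, a):
        return a * (1.0 - np.exp(-k * t))        # hypothetical saturating response

    rng = np.random.default_rng(0)
    t_candidates = np.linspace(0.1, 10.0, 50)                     # candidate measurement times
    params = rng.uniform([0.1, 0.5], [2.0, 2.0], size=(500, 2))   # samples from the bounded (k, a) box

    predictions = np.array([toy_model(t_candidates, k, a) for k, a in params])
    spread = predictions.std(axis=0)             # uncertainty of the predicted dynamics per time point
    design = t_candidates[np.argsort(spread)[-4:]]                # four parallel design points
    print("suggested measurement times:", np.sort(design))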
Brookes, Emre; Cao, Weiming; Demeler, Borries
2010-02-01
We report a model-independent analysis approach for fitting sedimentation velocity data which permits simultaneous determination of shape and molecular weight distributions for mono- and polydisperse solutions of macromolecules. Our approach allows for heterogeneity in the frictional domain, providing a more faithful description of the experimental data for cases where frictional ratios are not identical for all components. Because of increased accuracy in the frictional properties of each component, our method also provides more reliable molecular weight distributions in the general case. The method is based on a fine grained two-dimensional grid search over s and f/f (0), where the grid is a linear combination of whole boundary models represented by finite element solutions of the Lamm equation with sedimentation and diffusion parameters corresponding to the grid points. A Monte Carlo approach is used to characterize confidence limits for the determined solutes. Computational algorithms addressing the very large memory needs for a fine grained search are discussed. The method is suitable for globally fitting multi-speed experiments, and constraints based on prior knowledge about the experimental system can be imposed. Time- and radially invariant noise can be eliminated. Serial and parallel implementations of the method are presented. We demonstrate with simulated and experimental data of known composition that our method provides superior accuracy and lower variance fits to experimental data compared to other methods in use today, and show that it can be used to identify modes of aggregation and slow polymerization.
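A schematic version of the two-dimensional grid idea: basis signals on an (s, f/f0) grid are fitted to the experimental boundary as a non-negative linear combination. A Gaussian profile is used as a hypothetical stand-in for the finite element Lamm-equation solutions, and the grid is far coarser than in the actual method.

    import numpy as np
    from scipy.optimize import nnls

    radii = np.linspace(5.9, 7.2, 300)

    def basis_profile(s, ff0, r):
        """Hypothetical stand-in for a simulated whole-boundary model at one (s, f/f0) grid point."""
        center = 6.0 + 0.1 * s
        width = 0.02 * ff0
        return np.exp(-0.5 * ((r - center) / width) ** 2)

    s_grid = np.linspace(1.0, 10.0, 20)
    ff0_grid = np.linspace(1.0, 4.0, 10)
    basis = np.array([basis_profile(s, f, radii) for s in s_grid for f in ff0_grid]).T

    rng = np.random.default_rng(0)
    data = basis[:, 45] + 0.5 * basis[:, 120] + rng.normal(scale=0.01, size=radii.size)
    weights, _ = nnls(basis, data)               # non-negative weights over the grid
    print("grid points with non-zero weight:", np.flatnonzero(weights > 1e-3))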
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Gregory M.; Patel, Shrayesh N.; Pemmaraju, C. D.
The electronic structure and molecular orientation of semiconducting polymers in thin films determine their ability to transport charge. Methods based on near-edge X-ray absorption fine structure (NEXAFS) spectroscopy can be used to probe both the electronic structure and microstructure of semiconducting polymers in both crystalline and amorphous films. However, it can be challenging to interpret NEXAFS spectra on the basis of experimental data alone, and accurate, predictive calculations are needed to complement experiments. Here, we show that first-principles density functional theory (DFT) can be used to model NEXAFS spectra of semiconducting polymers and to identify the nature of transitions in complicated NEXAFS spectra. Core-level X-ray absorption spectra of a set of semiconducting polymers were calculated using the excited electron and core-hole (XCH) approach based on constrained-occupancy DFT. A comparison of calculations on model oligomers and periodic structures with experimental data revealed the requirements for accurate prediction of NEXAFS spectra of both conjugated homopolymers and donor–acceptor polymers. The NEXAFS spectra predicted by the XCH approach were applied to study molecular orientation in donor–acceptor polymers using experimental spectra and revealed the complexity of using carbon edge spectra in systems with large monomeric units. The XCH approach has sufficient accuracy in predicting experimental NEXAFS spectra of polymers that it should be considered for design and analysis of measurements using soft X-ray techniques, such as resonant soft X-ray scattering and scanning transmission X-ray microscopy.
Accelerated gradient based diffuse optical tomographic image reconstruction.
Biswas, Samir Kumar; Rajan, K; Vasu, R M
2011-01-01
We report fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The Newton-based MOBIIR algorithm, which is generally used, requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time many-fold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that the Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. The Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution. However, when the initial guess is far away from the true solution, the Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
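The core trick referenced above is Broyden's rank-one update, which refreshes an approximate Jacobian from successive forward solutions instead of recomputing it. A generic sketch on a small nonlinear system (not the diffusion-equation forward model) follows.

    import numpy as np

    def broyden_update(J, dx, df):
        """Rank-one Broyden update: J + ((df - J dx) dx^T) / (dx^T dx)."""
        return J + np.outer(df - J @ dx, dx) / (dx @ dx)

    def f(x):                                    # toy nonlinear forward map with a root at (1, 1)
        return np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])

    x = np.array([1.2, 1.2])
    J = np.eye(2)                                # cheap initial Jacobian guess
    for _ in range(20):
        dx = np.linalg.solve(J, -f(x))           # quasi-Newton step, no Jacobian re-evaluation
        if dx @ dx < 1e-30:
            break
        x_new = x + dx
        J = broyden_update(J, dx, f(x_new) - f(x))
        x = x_new
    print("converged estimate:", x)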
ERIC Educational Resources Information Center
Byun, Tara McAllister; Hitchcock, Elaine R.; Ferron, John
2017-01-01
Purpose: Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of…
Can a Multimedia Tool Help Students' Learning Performance in Complex Biology Subjects?
ERIC Educational Resources Information Center
Koseoglu, Pinar; Efendioglu, Akin
2015-01-01
The aim of the present study was to determine the effects of multimedia-based biology teaching (Mbio) and teacher-centered biology (TCbio) instruction approaches on learners' biology achievements, as well as their views towards learning approaches. During the research process, an experimental design with two groups, TCbio (n = 22) and Mbio (n =…
ERIC Educational Resources Information Center
Kyriakides, L.; Christoforidou, M.; Panayiotou, A.; Creemers, B. P. M.
2017-01-01
The dynamic approach (DA) suggests that professional development should be differentiated to meet teachers' individual needs while engaging participants into systematic and guided critical reflection. Previous experimental studies demonstrated that one-year interventions based on the DA have a positive impact on teacher effectiveness. The study…
ERIC Educational Resources Information Center
Munier, Valerie; Merle, Helene
2009-01-01
The present study takes an interdisciplinary mathematics-physics approach to the acquisition of the concept of angle by children in Grades 3-5. This paper first presents the theoretical framework we developed, then we analyse the concept of angle and the difficulties pupils have with it. Finally, we report three experimental physics-based teaching…
Evidence-Based Medicine and Child Mental Health Services: A Broad Approach to Evaluation is Needed.
ERIC Educational Resources Information Center
McGuire, Jacqueline Barnes; And Others
1997-01-01
Describes quasi-experimental designs to be used as alternatives to randomized controlled trials in decisions concerning clinical practice and policy-making in the child mental health field. Highlights importance of taking a systems-level approach to evaluation, and describes ways in which qualitative outcomes measures can be used to sensitively…
Structure-based multiscale approach for identification of interaction partners of PDZ domains.
Tiwari, Garima; Mohanty, Debasisa
2014-04-28
PDZ domains are peptide recognition modules which mediate specific protein-protein interactions and are known to have a complex specificity landscape. We have developed a novel structure-based multiscale approach which identifies crucial specificity determining residues (SDRs) of PDZ domains from explicit solvent molecular dynamics (MD) simulations on PDZ-peptide complexes and uses these SDRs in combination with knowledge-based scoring functions for proteome-wide identification of their interaction partners. Multiple explicit solvent simulations ranging from 5 to 50 ns in duration have been carried out on 28 PDZ-peptide complexes with known binding affinities. MM/PBSA binding energy values calculated from these simulations show a correlation coefficient of 0.755 with the experimental binding affinities. On the basis of the SDRs of PDZ domains identified by MD simulations, we have developed a simple scoring scheme for evaluating binding energies for PDZ-peptide complexes using residue-based statistical pair potentials. This multiscale approach has been benchmarked on a mouse PDZ proteome array data set by calculating the binding energies for 217 different substrate peptides in the binding pockets of 64 different mouse PDZ domains. Receiver operating characteristic (ROC) curve analysis indicates that the area under the curve (AUC) value for binder vs nonbinder classification by our structure-based method is 0.780. Our structure-based method does not require experimental PDZ-peptide binding data for training.
Computational Biochemistry-Enzyme Mechanisms Explored.
Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias
2017-01-01
Understanding enzyme mechanisms is a major task to achieve in order to comprehend how living cells work. Recent advances in biomolecular research provide a huge amount of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of the enzymatic processes are often anticipated based on several hints from macroscopic experimental data. Computational biochemistry aims at the creation of a computational model of an enzyme in order to explain microscopic details of the catalytic process and reproduce or predict macroscopic experimental findings. Results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. In order to evaluate the mechanism of an enzyme, a structural model is constructed which can be analyzed by several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed by combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models to simulate enzyme catalysis. Furthermore, we review various approaches to characterize the enzyme mechanism both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Blackmer, Rachel; Hayes-Harb, Rachel
2016-01-01
We present a community-based research project aimed at identifying effective methods and materials for teaching English literacy skills to adult English as a second language emergent readers. We conducted a quasi-experimental study whereby we evaluated the efficacy of two approaches, one based on current practices at the English Skills Learning…
ERIC Educational Resources Information Center
Popescu, E.
2010-01-01
Personalized instruction is seen as a desideratum of today's e-learning systems. The focus of this paper is on those platforms that use learning styles as personalization criterion called learning style-based adaptive educational systems. The paper presents an innovative approach based on an integrative set of learning preferences that alleviates…
ERIC Educational Resources Information Center
Lang'at, Edwin K.
2014-01-01
Purpose and Method of Study: The purpose of this study was to investigate teachers' self-perceived readiness to teach school-based HIV/AIDS Awareness and Prevention education in Kenyan primary schools based on their knowledge, attitudes and instructional confidence. This research utilized a non-experimental quantitative approach with a…
Yoon, Bo Kyeong; Jackman, Joshua A.; Valle-González, Elba R.
2018-01-01
Antimicrobial lipids such as fatty acids and monoglycerides are promising antibacterial agents that destabilize bacterial cell membranes, causing a wide range of direct and indirect inhibitory effects. The goal of this review is to introduce the latest experimental approaches for characterizing how antimicrobial lipids destabilize phospholipid membranes within the broader scope of introducing current knowledge about the biological activities of antimicrobial lipids, testing strategies, and applications for treating bacterial infections. To this end, a general background on antimicrobial lipids, including structural classification, is provided along with a detailed description of their targeting spectrum and currently understood antibacterial mechanisms. Building on this knowledge, different experimental approaches to characterize antimicrobial lipids are presented, including cell-based biological and model membrane-based biophysical measurement techniques. Particular emphasis is placed on drawing out how biological and biophysical approaches complement one another and can yield mechanistic insights into how the physicochemical properties of antimicrobial lipids influence molecular self-assembly and concentration-dependent interactions with model phospholipid and bacterial cell membranes. Examples of possible therapeutic applications are briefly introduced to highlight the potential significance of antimicrobial lipids for human health and medicine, and to motivate the importance of employing orthogonal measurement strategies to characterize the activity profile of antimicrobial lipids. PMID:29642500
Tichy, Diana; Pickl, Julia Maria Anna; Benner, Axel; Sültmann, Holger
2017-03-31
The identification of microRNA (miRNA) target genes is crucial for understanding miRNA function. Many methods for the genome-wide miRNA target identification have been developed in recent years; however, they have several limitations including the dependence on low-confident prediction programs and artificial miRNA manipulations. Ago-RNA immunoprecipitation combined with high-throughput sequencing (Ago-RIP-Seq) is a promising alternative. However, appropriate statistical data analysis algorithms taking into account the experimental design and the inherent noise of such experiments are largely lacking.Here, we investigate the experimental design for Ago-RIP-Seq and examine biostatistical methods to identify de novo miRNA target genes. Statistical approaches considered are either based on a negative binomial model fit to the read count data or applied to transformed data using a normal distribution-based generalized linear model. We compare them by a real data simulation study using plasmode data sets and evaluate the suitability of the approaches to detect true miRNA targets by sensitivity and false discovery rates. Our results suggest that simple approaches like linear regression models on (appropriately) transformed read count data are preferable. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
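To make the contrast concrete, a small sketch comparing the two families of approaches on simulated counts: a negative binomial GLM on raw reads versus ordinary regression on log-transformed reads. The design (six control and six Ago-IP libraries) and the dispersion value are assumptions for illustration, not the plasmode setup used in the paper.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    group = np.repeat([0, 1], 6)                      # 6 control and 6 Ago-IP libraries (assumed design)
    mu = np.exp(3.0 + 1.2 * group)                    # true 1.2 log-fold enrichment
    counts = rng.negative_binomial(n=5, p=5.0 / (5.0 + mu))

    X = sm.add_constant(group.astype(float))
    nb_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
    lm_fit = sm.OLS(np.log1p(counts), X).fit()
    print("negative binomial enrichment estimate:", nb_fit.params[1])
    print("log-linear enrichment estimate:", lm_fit.params[1])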
Noise Prediction of NASA SR2 Propeller in Transonic Conditions
NASA Astrophysics Data System (ADS)
Gennaro, Michele De; Caridi, Domenico; Nicola, Carlo De
2010-09-01
In this paper we propose a numerical approach for noise prediction of high-speed propellers for Turboprop applications. It is based on a RANS approach for aerodynamic simulation coupled with the Ffowcs Williams-Hawkings (FW-H) Acoustic Analogy for propeller noise prediction. The test-case geometry adopted for this study is the 8-bladed NASA SR2 transonic cruise propeller, and simulated Sound Pressure Levels (SPL) have been compared with experimental data available from Wind Tunnel and Flight Tests for different microphone locations in a range of Mach numbers between 0.78 and 0.85 and rotational velocities between 7000 and 9000 rpm. Results show the ability of this approach to predict noise to within a few dB of experimental data. Moreover, corrections to be applied to the numerical acoustic results for comparison with Wind Tunnel and Flight Test experimental data are provided, together with computational grid requirements and guidelines for performing complete aerodynamic and aeroacoustic calculations at highly competitive computational cost.
Structural and Optical Properties Studies Of Ar2+ Ion Implanted Mn Deposited GaAs
NASA Astrophysics Data System (ADS)
A novel key-frame extraction approach for both video summary and video index.
Lei, Shaoshuai; Xie, Gang; Yan, Gaowei
2014-01-01
Existing key-frame extraction methods are largely oriented toward video summarization, while the use of key-frames for video indexing is ignored. This paper presents a novel key-frame extraction approach that serves both video summarization and video indexing. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure; then appropriate key-frames are extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.
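A minimal sketch of the SVD step, under assumed details (frame features stored as histogram rows, key-frame chosen along the dominant singular direction); this is illustrative, not the paper's exact algorithm.

```python
# Minimal sketch (assumed details, not the paper's exact algorithm): pick one
# key-frame per subshot as the frame whose feature vector is best represented
# by the subshot's dominant singular direction.
import numpy as np

def key_frame_index(subshot_features: np.ndarray) -> int:
    """subshot_features: (n_frames, n_features) matrix, e.g. color histograms."""
    # Center the features and take the SVD of the subshot matrix
    X = subshot_features - subshot_features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    # Projection of each frame onto the first right-singular vector
    scores = X @ vt[0]
    # The frame with the largest |projection| is taken as the key-frame
    return int(np.argmax(np.abs(scores)))

rng = np.random.default_rng(0)
subshot = rng.random((40, 64))            # 40 frames, 64-bin histograms (synthetic)
print("key-frame index:", key_frame_index(subshot))
```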
Norinder, U; Högberg, T
1992-04-01
The advantageous approach of using an experimentally designed training set as the basis for establishing a quantitative structure-activity relationship with good predictive capability is described. The training set was selected from a fractional factorial design scheme based on a principal component description of physico-chemical parameters of aromatic substituents. The derived model successfully predicts the activities of additional substituted benzamides of 6-methoxy-N-(4-piperidyl)salicylamide type. The major influence on activity of the 3-substituent is demonstrated.
ERIC Educational Resources Information Center
McDermott, Kathleen B.; Szpunar, Karl K.; Christ, Shawn E.
2009-01-01
In designing experiments to investigate retrieval of event memory, researchers choose between utilizing laboratory-based methods (in which to-be-remembered materials are presented to participants) and autobiographical approaches (in which the to-be-remembered materials are events from the participant's pre-experimental life). In practice, most…
NASA Astrophysics Data System (ADS)
Gerikh, Valentin; Kolosok, Irina; Kurbatsky, Victor; Tomin, Nikita
2009-01-01
The paper presents the results of experimental studies on the calculation of electricity prices in different price zones in Russia and Europe. The calculations are based on the intelligent software "ANAPRO", which implements approaches based on modern data analysis methods and artificial intelligence technologies.
Discussion Based Fish Bowl Strategy in Learning Psychology
ERIC Educational Resources Information Center
Singaravelu, G.
2007-01-01
The present study investigates the learning problems in psychology at the Master of Education (M.Ed.) level in Bharathiar University and examines the effectiveness of the Discussion Based Fish Bowl Strategy in learning psychology. A single-group experimental method was adopted for the study. Both qualitative and quantitative approaches were adopted for this study.…
A Clock Fingerprints-Based Approach for Wireless Transmitter Identification
NASA Astrophysics Data System (ADS)
Zhao, Caidan; Xie, Liang; Huang, Lianfen; Yao, Yan
Cognitive radio (CR) was proposed as one of the promising solutions for low spectrum utilization. However, security problems such as the primary user emulation (PUE) attack severely limit its applications. In this paper, we propose a clock fingerprints-based authentication approach to prevent PUE attacks in CR networks with the help of curve fitting and a classifier. An experimental setup was constructed using WLAN cards and software radio devices, and the results show that satisfactory identification of wireless transmitters can be achieved.
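One common way such clock fingerprints are estimated, sketched here purely as an assumption rather than the paper's method, is to fit a line to observed timestamp offsets and read the slope as the device's clock skew.

```python
# Illustrative sketch (assumed details, not the paper's method): estimate a
# transmitter's clock skew as the slope of a linear fit of observed timestamp
# offsets against receive time; the skew serves as a simple "clock fingerprint".
import numpy as np

rng = np.random.default_rng(4)
recv_time = np.linspace(0, 300, 400)                       # seconds at the receiver
true_skew_ppm = 23.0                                       # hypothetical device skew
offsets = true_skew_ppm * 1e-6 * recv_time + rng.normal(0, 5e-5, recv_time.size)

slope, intercept = np.polyfit(recv_time, offsets, 1)       # least-squares line
print(f"estimated clock skew: {slope * 1e6:.1f} ppm")
```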
NASA Astrophysics Data System (ADS)
Jabbour, Rabih E.; Wade, Mary; Deshpande, Samir V.; McCubbin, Patrick; Snyder, A. Peter; Bevilacqua, Vicky
2012-06-01
Mass spectrometry based proteomic approaches are showing promising capabilities in addressing various biological and biochemical issues. Outer membrane proteins (OMPs) are often associated with virulence in gram-negative pathogens and could prove to be excellent model biomarkers for strain level differentiation among bacteria. Whole cells and OMP extracts were isolated from pathogenic and non-pathogenic strains of Francisella tularensis, Burkholderia thailandensis, and Burkholderia mallei. OMP extracts were compared for their ability to differentiate and delineate the correct database organism to an experimental sample and for the degree of dissimilarity to the nearest-neighbor database strains. This study addresses the comparative experimental proteome analyses of OMPs vs. whole cell lysates on the strain-level discrimination among gram negative pathogenic and non-pathogenic strains.
Suboptimal LQR-based spacecraft full motion control: Theory and experimentation
NASA Astrophysics Data System (ADS)
Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.
2016-05-01
This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
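The core loop of an SDARE-style controller can be sketched as follows; the double-integrator dynamics, weights, and Euler propagation are placeholders for illustration, not the spacecraft simulator model.

```python
# Illustrative sketch of the general idea (not the authors' implementation):
# re-solve the Riccati equation at every sample time using the locally
# linearized dynamics, then apply the resulting LQR gain. Dynamics below are a
# placeholder double integrator, not the spacecraft model.
import numpy as np
from scipy.linalg import solve_continuous_are

Q = np.diag([10.0, 1.0])    # state weights (assumed)
R = np.array([[0.1]])       # control weight (assumed)
dt, x = 0.1, np.array([1.0, 0.0])

def linearize(x):
    """Placeholder linearization: returns (A, B) at the current state."""
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    return A, B

for _ in range(50):
    A, B = linearize(x)                        # real-time linearization
    P = solve_continuous_are(A, B, Q, R)       # SDARE solved at this sample
    K = np.linalg.solve(R, B.T @ P)            # sub-optimal LQR gain
    u = -K @ x
    x = x + dt * (A @ x + (B @ u).ravel())     # simple Euler propagation
print("final state:", x)
```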
A Robust Approach For Acoustic Noise Suppression In Speech Using ANFIS
NASA Astrophysics Data System (ADS)
Martinek, Radek; Kelnar, Michal; Vanus, Jan; Bilik, Petr; Zidek, Jan
2015-11-01
The authors of this article deal with the implementation of a combination of fuzzy system and artificial intelligence techniques in the application area of non-linear noise and interference suppression. The structure used is called an Adaptive Neuro Fuzzy Inference System (ANFIS). This system finds practical use mainly in audio telephone (mobile) communication in a noisy environment (transport, production halls, sports matches, etc.). Experimental methods based on the two-input adaptive noise cancellation concept are clearly outlined. Within the experiments carried out, the authors created, based on the ANFIS structure, a comprehensive system for adaptive suppression of unwanted background interference that occurs in audio communication and degrades the audio signal. The system designed has been tested on real voice signals. This article presents the investigation and comparison of three distinct approaches to noise cancellation in speech: LMS (least mean squares) and RLS (recursive least squares) adaptive filtering and ANFIS. A careful review of the literature indicated the importance of non-linear adaptive algorithms over linear ones in noise cancellation. It was concluded that the ANFIS approach had the overall best performance as it efficiently cancelled noise even in highly noise-degraded speech. Results were drawn from the successful experimentation; subjective tests were used to analyse the comparative performance, while objective tests were used to validate them. Implementation of the algorithms was experimentally carried out in Matlab to justify the claims and determine their relative performances.
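Of the three compared approaches, the LMS baseline is the simplest to illustrate; the sketch below shows the two-input adaptive noise cancellation concept on synthetic signals (the ANFIS structure itself is more involved and is not reproduced here).

```python
# Minimal sketch of two-input adaptive noise cancellation using a plain LMS
# filter (one of the baselines mentioned above). Signals here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
speech = np.sin(2 * np.pi * 0.01 * np.arange(n))            # stand-in "speech"
noise_ref = rng.standard_normal(n)                          # reference microphone
noise_in_primary = np.convolve(noise_ref, [0.6, 0.3, 0.1], mode="same")
primary = speech + noise_in_primary                         # speech + correlated noise

M, mu = 8, 0.01                                             # filter length, step size
w = np.zeros(M)
cleaned = np.zeros(n)
for k in range(M, n):
    x = noise_ref[k - M:k][::-1]        # reference tap vector
    y = w @ x                           # estimate of the noise in the primary channel
    e = primary[k] - y                  # error = cleaned speech estimate
    w += 2 * mu * e * x                 # LMS weight update
    cleaned[k] = e
print("residual noise power:", np.mean((cleaned[M:] - speech[M:]) ** 2))
```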
Identification of widespread adenosine nucleotide binding in Mycobacterium tuberculosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ansong, Charles; Ortega, Corrie; Payne, Samuel H.
The annotation of protein function is almost completely performed by in silico approaches. However, computational prediction of protein function is frequently incomplete and error prone. In Mycobacterium tuberculosis (Mtb), ~25% of all genes have no predicted function and are annotated as hypothetical proteins. This lack of functional information severely limits our understanding of Mtb pathogenicity. Current tools for experimental functional annotation are limited and often do not scale to entire protein families. Here, we report a generally applicable chemical biology platform to functionally annotate bacterial proteins by combining activity-based protein profiling (ABPP) and quantitative LC-MS-based proteomics. As an example of this approach for high-throughput protein functional validation and discovery, we experimentally annotate the families of ATP-binding proteins in Mtb. Our data experimentally validate prior in silico predictions of >250 ATPases and adenosine nucleotide-binding proteins, and reveal 73 hypothetical proteins as novel ATP-binding proteins. We identify adenosine cofactor interactions with many hypothetical proteins containing a diversity of unrelated sequences, providing a new and expanded view of adenosine nucleotide binding in Mtb. Furthermore, many of these hypothetical proteins are both unique to Mycobacteria and essential for infection, suggesting specialized functions in mycobacterial physiology and pathogenicity. Thus, we provide a generally applicable approach for high throughput protein function discovery and validation, and highlight several ways in which application of activity-based proteomics data can improve the quality of functional annotations to facilitate novel biological insights.
Mathematical modeling and computational prediction of cancer drug resistance.
Sun, Xiaoqiang; Hu, Bin
2017-06-23
Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
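As a toy example of one of the modelling styles listed above (an ODE model of cellular dynamics), the sketch below tracks sensitive and resistant tumour subpopulations under constant treatment; the rates and initial population sizes are invented for illustration.

```python
# Toy sketch of an ODE model of drug resistance dynamics: sensitive and
# resistant tumour subpopulations under a constant drug dose, with a small
# mutation flux from sensitive to resistant cells. Rates are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

growth_s, growth_r, kill, mutation = 0.05, 0.04, 0.08, 1e-4   # per day

def dynamics(t, y):
    S, R = y
    dS = growth_s * S - kill * S - mutation * S   # sensitive cells are killed by the drug
    dR = growth_r * R + mutation * S              # resistant cells escape the drug
    return [dS, dR]

sol = solve_ivp(dynamics, (0, 365), [1e9, 1e3], t_eval=[0, 90, 180, 365])
for t, S, R in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"day {t:>3.0f}: sensitive {S:.2e}, resistant {R:.2e}")
```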
Fast and efficient indexing approach for object recognition
NASA Astrophysics Data System (ADS)
Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi
1999-08-01
This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
Hou, Guangjin; Gupta, Rupal; Polenova, Tatyana; Vega, Alexander J
2014-02-01
Proton chemical shifts are a rich probe of structure and hydrogen bonding environments in organic and biological molecules. Until recently, measurements of 1H chemical shift tensors have been restricted to either solid systems with sparse proton sites or were based on the indirect determination of anisotropic tensor components from cross-relaxation and liquid-crystal experiments. We have introduced an MAS approach that permits site-resolved determination of CSA tensors of protons forming chemical bonds with labeled spin-1/2 nuclei in fully protonated solids with multiple sites, including organic molecules and proteins. This approach, originally introduced for the measurements of chemical shift tensors of amide protons, is based on three RN-symmetry-based experiments, from which the principal components of the 1H CS tensor can be reliably extracted by a simultaneous triple fit of the data. In this article, we expand our approach to a much more challenging system involving aliphatic and aromatic protons. We start with a review of the prior work on experimental NMR and computational quantum-chemical approaches for the measurement of 1H chemical shift tensors and for relating these to the electronic structures. We then present our experimental results on U-13C,15N-labeled histidine demonstrating that 1H chemical shift tensors can be reliably determined for the 1H-15N and 1H-13C spin pairs in cationic and neutral forms of histidine. Finally, we demonstrate that the experimental 1H(C) and 1H(N) chemical shift tensors are in agreement with Density Functional Theory calculations, therefore establishing the usefulness of our method for characterization of structure and hydrogen bonding environment in organic and biological solids.
Efficient clustering aggregation based on data fragments.
Wu, Ou; Hu, Weiming; Maybank, Stephen J; Zhu, Mingliang; Li, Bing
2012-06-01
Clustering aggregation, known as clustering ensembles, has emerged as a powerful technique for combining different clustering results to obtain a single better clustering. Existing clustering aggregation algorithms are applied directly to data points, in what is referred to as the point-based approach. The algorithms are inefficient if the number of data points is large. We define an efficient approach for clustering aggregation based on data fragments. In this fragment-based approach, a data fragment is any subset of the data that is not split by any of the clustering results. To establish the theoretical bases of the proposed approach, we prove that clustering aggregation can be performed directly on data fragments under two widely used goodness measures for clustering aggregation taken from the literature. Three new clustering aggregation algorithms are described. The experimental results obtained using several public data sets show that the new algorithms have lower computational complexity than three well-known existing point-based clustering aggregation algorithms (Agglomerative, Furthest, and LocalSearch); nevertheless, the new algorithms do not sacrifice the accuracy.
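The fragment idea itself is easy to sketch: a fragment is a maximal group of points that share the same cluster label in every input clustering. The snippet below computes fragments for three toy clusterings; it illustrates the definition only, not the aggregation algorithms themselves.

```python
# Minimal sketch of the fragment idea: a fragment is a maximal set of points that
# every input clustering keeps together, i.e. points sharing the same tuple of
# cluster labels across all clusterings. Aggregation can then operate on
# fragments instead of individual points.
from collections import defaultdict

# Three hypothetical clusterings of 8 points (labels are arbitrary integers)
clusterings = [
    [0, 0, 0, 1, 1, 1, 2, 2],
    [0, 0, 1, 1, 1, 2, 2, 2],
    [0, 0, 0, 0, 1, 1, 2, 2],
]

fragments = defaultdict(list)
for point in range(len(clusterings[0])):
    signature = tuple(c[point] for c in clusterings)   # label tuple across clusterings
    fragments[signature].append(point)

print(list(fragments.values()))   # e.g. [[0, 1], [2], [3], [4], [5], [6, 7]]
```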
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
Sugisaki, Kenji; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji
2017-11-15
Spin-orbit contributions to the zero-field splitting (ZFS) tensor (D^SO tensor) of M(III)(acac)3 complexes (M = V, Cr, Mn, Fe and Mo; acac = acetylacetonate anion) are evaluated by means of ab initio (a hybrid CASSCF/MRMP2) and DFT (Pederson-Khanna (PK) and natural orbital-based Pederson-Khanna (NOB-PK)) methods, focusing on the behaviour of DFT-based approaches to the D^SO tensors against the valence d-electron configurations of the transition metal ions in octahedral coordination. Both the DFT-based approaches reproduce trends in the D tensors. Significantly, the differences between the theoretical and experimental D (D = D_ZZ - (D_XX + D_YY)/2) values are smaller in NOB-PK than in PK, emphasising the usefulness of the natural orbital-based approach to the D tensor calculations of transition metal ion complexes. In the case of d2 and d4 electronic configurations, the D^SO(NOB-PK) values are considerably underestimated in the absolute magnitude, compared with the experimental ones. The D^SO tensor analysis based on the orbital region partitioning technique (ORPT) revealed that the D^SO contributions attributed to excitations from the singly occupied region (SOR) to the unoccupied region (UOR) are significantly underestimated in the DFT-based approaches for all the complexes under study. In the case of d3 and d5 configurations, the (SOR → UOR) excitations contribute in a nearly isotropic manner, which causes fortuitous error cancellations in the DFT-based D^SO values. These results indicate that more efforts to develop DFT frameworks should be directed towards the reproduction of quantitative D^SO tensors of transition metal complexes with various electronic configurations and local symmetries around metal ions.
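For readers unfamiliar with the convention quoted above, the axial and rhombic ZFS parameters follow directly from the principal values of the traceless D tensor; the numbers in this small sketch are invented.

```python
# Small sketch: axial and rhombic ZFS parameters from the principal values of a
# traceless D tensor, using the convention D = D_ZZ - (D_XX + D_YY)/2 quoted
# above (numbers are made up).
d_xx, d_yy, d_zz = -0.20, -0.35, 0.55        # hypothetical principal values (cm^-1)
D = d_zz - (d_xx + d_yy) / 2                 # axial parameter
E = (d_xx - d_yy) / 2                        # rhombic parameter
print(f"D = {D:.2f} cm^-1, E = {E:.3f} cm^-1, E/D = {E / D:.2f}")
```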
NASA Astrophysics Data System (ADS)
Fisher, Dahlia; Yaniawati, Poppy; Kusumah, Yaya Sukjaya
2017-08-01
This study aims to analyze the character of students taught with the CORE learning model using a metacognitive approach. The study uses a combined qualitative and quantitative (mixed-method) design with a concurrent embedded strategy. The research was conducted on two groups: an experimental group and a control group. The experimental group consists of students taught with the CORE learning model using a metacognitive approach, while the control group consists of students taught by conventional learning. The subjects of this research were seventh-grade students at one of the public junior high schools in Bandung. Based on this research, it is known that the characters of the students in CORE model learning through the metacognitive approach are: honest, hard-working, curious, conscientious, creative and communicative. Overall it can be concluded that CORE model learning is good for developing the character of junior high school students.
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Boiteau, Rene M.; Hoyt, David W.; Nicora, Carrie D.; ...
2018-01-17
Here, we introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters: accurate mass, MS/MS (MS2), and NMR in a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited for discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases.
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Hoyt, David W.; Nicora, Carrie D.; Kinmonth-Schultz, Hannah A.; Ward, Joy K.
2018-01-01
We introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters; accurate mass, MS/MS (MS2), and NMR into a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter out the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture, and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited to the discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases. PMID:29342073
Engelhardt, Benjamin; Kschischo, Maik; Fröhlich, Holger
2017-06-01
Ordinary differential equations (ODEs) are a popular approach to quantitatively model molecular networks based on biological knowledge. However, such knowledge is typically restricted. Wrongly modelled biological mechanisms as well as relevant external influence factors that are not included in the model are likely to manifest in major discrepancies between model predictions and experimental data. Finding the exact reasons for such observed discrepancies can be quite challenging in practice. In order to address this issue, we suggest a Bayesian approach to estimate hidden influences in ODE-based models. The method can distinguish between exogenous and endogenous hidden influences. Thus, we can detect wrongly specified as well as missed molecular interactions in the model. We demonstrate the performance of our Bayesian dynamic elastic-net with several ordinary differential equation models from the literature, such as human JAK-STAT signalling, information processing at the erythropoietin receptor, isomerization of liquid α-Pinene, G protein cycling in yeast and UV-B triggered signalling in plants. Moreover, we investigate a set of commonly known network motifs and a gene-regulatory network. Altogether our method supports the modeller in an algorithmic manner to identify possible sources of errors in ODE-based models on the basis of experimental data. © 2017 The Author(s).
Oh, Kenneth J; Cash, Kevin J; Plaxco, Kevin W
2006-11-01
While protein-polypeptide and nucleic acid-polypeptide interactions are of significant experimental interest, quantitative methods for the characterization of such interactions are often cumbersome. Here we describe a relatively simple means of optically monitoring such interactions using excimer-based peptide beacons (PBs). The design of PBs is based on the observation that, whereas short peptides are almost invariably unfolded and highly dynamic, they become rigid when complexed with macromolecular targets. Using this binding-induced folding to segregate two pyrene moieties and therefore inhibit excimer formation, we have produced PBs directed against both anti-HIV antibodies and the retroviral transactive response (TAR) RNA hairpin. For both polypeptides, target recognition is accompanied by a roughly 2-fold decrease in excimer emission, thus allowing the detection of their respective targets at concentrations of a few nanomolar. Because excimer emission requires the formation of a tight, precisely oriented pyrene dimer, even relatively trivial binding-induced segregation reduces fluorescence significantly. This suggests that the PB approach will be suitable for monitoring a wide range of peptide-macromolecule recognition events. Moreover, the synthesis of excimer-based PBs utilizes commercially available modified pyrenes in a simple and well-established protocol, making the approach well suited for routine laboratory applications.
Model based manipulator control
NASA Technical Reports Server (NTRS)
Petrosky, Lyman J.; Oppenheim, Irving J.
1989-01-01
The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.
Experimental design to evaluate directed adaptive mutation in Mammalian cells.
Bordonaro, Michael; Chiaro, Christopher R; May, Tobias
2014-12-09
We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. We performed the initial stages of characterizing our system and have limited preliminary data from several pilot experiments. Cell growth and DNA sequence data indicate that we have identified a cell clone that exhibits several suitable characteristics, although further study is required to identify a more optimal cell clone. The experimental approach is based on a quantum biological model of basis-dependent selection describing a novel mechanism of adaptive mutation. This project is currently inactive due to lack of funding. However, consistent with the objective of early reports, we describe a proposed study that has not produced publishable results, but is worthy of report because of the hypothesis, experimental design, and protocols. We outline the project's rationale and experimental design, with its strengths and weaknesses, to stimulate discussion and analysis, and lay the foundation for future studies in this field.
Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship-Mathematical Modeling.
Barnes, Sean L; Kasaie, Parastu; Anderson, Deverick J; Rubin, Michael
2016-11-01
Mathematical modeling is a valuable methodology used to study healthcare epidemiology and antimicrobial stewardship, particularly when more traditional study approaches are infeasible, unethical, costly, or time consuming. We focus on 2 of the most common types of mathematical modeling, namely compartmental modeling and agent-based modeling, which provide important advantages over observational and experimental approaches, such as shorter developmental timelines and opportunities for extensive experimentation. We summarize these advantages and disadvantages via specific examples and highlight recent advances in the methodology. A checklist is provided to serve as a guideline in the development of mathematical models in healthcare epidemiology and antimicrobial stewardship. Infect Control Hosp Epidemiol 2016;1-7.
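A compartmental model of the kind referred to above can be sketched in a few lines; the susceptible-colonized structure, rates, and unit size below are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of the compartmental idea (assumed parameters): a
# susceptible-colonized model of pathogen transmission in a hospital unit,
# integrated as an ODE system.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, N = 0.08, 0.05, 30.0     # transmission rate, decolonization rate, beds

def sis(t, y):
    S, C = y
    new_colonizations = beta * S * C / N
    return [-new_colonizations + gamma * C,
             new_colonizations - gamma * C]

sol = solve_ivp(sis, (0.0, 365.0), [29.0, 1.0], t_eval=np.linspace(0, 365, 6))
print(dict(zip(sol.t.round(), sol.y[1].round(2))))   # colonized patients over time
```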
Hartman, Joshua D; Beran, Gregory J O
2014-11-11
First-principles chemical shielding tensor predictions play a critical role in studying molecular crystal structures using nuclear magnetic resonance. Fragment-based electronic structure methods have dramatically improved the ability to model molecular crystal structures and energetics using high-level electronic structure methods. Here, a many-body expansion fragment approach is applied to the calculation of chemical shielding tensors in molecular crystals. First, the impact of truncating the many-body expansion at different orders and the role of electrostatic embedding are examined on a series of molecular clusters extracted from molecular crystals. Second, the ability of these techniques to assign three polymorphic forms of the drug sulfanilamide to the corresponding experimental (13)C spectra is assessed. This challenging example requires discriminating among spectra whose (13)C chemical shifts differ by only a few parts per million (ppm) across the different polymorphs. Fragment-based PBE0/6-311+G(2d,p) level chemical shielding predictions correctly assign these three polymorphs and reproduce the sulfanilamide experimental (13)C chemical shifts with 1 ppm accuracy. The results demonstrate that fragment approaches are competitive with the widely used gauge-invariant projector augmented wave (GIPAW) periodic density functional theory calculations.
A new technique for the characterization of chaff elements
NASA Astrophysics Data System (ADS)
Scholfield, David; Myat, Maung; Dauby, Jason; Fesler, Jonathon; Bright, Jonathan
2011-07-01
A new technique for the experimental characterization of electromagnetic chaff based on Inverse Synthetic Aperture Radar is presented. This technique allows for the characterization of as few as one filament of chaff in a controlled anechoic environment, allowing for stability and repeatability of experimental results. This approach allows for a deeper understanding of the fundamental phenomena of electromagnetic scattering from chaff through an incremental analysis approach. Chaff analysis can now begin with a single element and progress through the build-up of particles into pseudo-cloud structures. This controlled incremental approach is supported by an identical incremental modeling and validation process. Additionally, this technique has the potential to produce considerable savings in financial and schedule cost and provides a stable and repeatable experiment to aid model validation.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models.
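A heavily simplified sketch of the stimulus-selection intuition (not the full ADO machinery, which weights designs by expected information gain over model posteriors): among candidate gamble pairs, prefer the one where two competing models disagree most. The models, parameters, and stimuli below are toy assumptions.

```python
# Highly simplified sketch of diagnostic stimulus selection: pick the gamble pair
# where two competing choice models make the most different predictions.
import numpy as np

def p_choose_risky_eu(p, x, sure):
    """Expected-utility model with a logistic choice rule (toy parameters)."""
    return 1.0 / (1.0 + np.exp(-(p * x - sure)))

def p_choose_risky_pt(p, x, sure, gamma=0.6, alpha=0.88):
    """Prospect-theory-like model: probability weighting plus diminishing value."""
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return 1.0 / (1.0 + np.exp(-(w * x ** alpha - sure ** alpha)))

# Candidate stimuli: (probability, risky payoff, sure payoff)
designs = [(0.1, 100, 8), (0.5, 20, 10), (0.9, 12, 10), (0.05, 200, 12)]
diagnosticity = [abs(p_choose_risky_eu(p, x, s) - p_choose_risky_pt(p, x, s))
                 for p, x, s in designs]
best = designs[int(np.argmax(diagnosticity))]
print("most diagnostic gamble pair:", best)
```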
Metainference: A Bayesian inference method for heterogeneous systems
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300
Dudenko, Dmytro V; Williams, P Andrew; Hughes, Colan E; Antzutkin, Oleg N; Velaga, Sitaram P; Brown, Steven P; Harris, Kenneth D M
2013-06-13
We report a strategy for structure determination of organic materials in which complete solid-state nuclear magnetic resonance (NMR) spectral data is utilized within the context of structure determination from powder X-ray diffraction (XRD) data. Following determination of the crystal structure from powder XRD data, first-principles density functional theory-based techniques within the GIPAW approach are exploited to calculate the solid-state NMR data for the structure, followed by careful scrutiny of the agreement with experimental solid-state NMR data. The successful application of this approach is demonstrated by structure determination of the 1:1 cocrystal of indomethacin and nicotinamide. The 1H and 13C chemical shifts calculated for the crystal structure determined from the powder XRD data are in excellent agreement with those measured experimentally, notably including the two-dimensional correlation of 1H and 13C chemical shifts for directly bonded 13C-1H moieties. The key feature of this combined approach is that the quality of the structure determined is assessed both against experimental powder XRD data and against experimental solid-state NMR data, thus providing a very robust validation of the veracity of the structure.
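The agreement check described above reduces, in its simplest form, to comparing calculated and experimental shift lists; the sketch below computes an RMSD and maximum deviation for invented placeholder values, not the cocrystal data.

```python
# Small sketch of the validation step: quantify agreement between calculated and
# experimental 13C chemical shifts with a simple RMSD (values are placeholders).
import numpy as np

calc_shifts = np.array([170.2, 158.9, 131.4, 118.7, 56.3, 30.1])   # ppm, calculated
expt_shifts = np.array([169.8, 159.5, 130.9, 119.2, 55.8, 29.7])   # ppm, experimental

rmsd = np.sqrt(np.mean((calc_shifts - expt_shifts) ** 2))
max_dev = np.max(np.abs(calc_shifts - expt_shifts))
print(f"RMSD = {rmsd:.2f} ppm, max deviation = {max_dev:.2f} ppm")
```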
Görgen, Kai; Hebart, Martin N; Allefeld, Carsten; Haynes, John-Dylan
2017-12-27
Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven, for example, by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematic testing of experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail, and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance. Copyright © 2017 Elsevier Inc. All rights reserved.
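A minimal sketch of the Same Analysis Approach applied to simulated null data: run exactly the decoding pipeline intended for real data on signal-free data and verify that accuracies centre on chance. The pipeline choices below (linear SVM, 5-fold stratified cross-validation) are assumptions for illustration.

```python
# Minimal sketch of the "same analysis" check: run the decoding pipeline used
# for real data on null data with no signal; mean accuracy systematically away
# from chance flags a design/analysis mismatch.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
accuracies = []
for _ in range(100):                                   # many simulated null datasets
    X = rng.standard_normal((40, 50))                  # 40 "trials", 50 "voxels", pure noise
    y = np.repeat([0, 1], 20)                          # labels carry no signal
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    accuracies.append(cross_val_score(LinearSVC(), X, y, cv=cv).mean())
print("mean null accuracy:", np.mean(accuracies))      # should be close to 0.5
```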
DeSmitt, Holly J; Domire, Zachary J
2016-12-01
Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
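The parameter-recovery setup can be sketched with a toy two-muscle model: fit peak torques so the summed model torque matches a noisy strength curve, using a stochastic global optimizer as a stand-in for the genetic search. The moment-arm shapes and noise level are assumptions, not the paper's knee model.

```python
# Illustrative sketch of the parameter-recovery idea: fit two "muscle" parameters
# so the summed model torque matches a noisy simulated strength curve.
import numpy as np
from scipy.optimize import differential_evolution

angles = np.deg2rad(np.linspace(30, 90, 13))             # knee angles
true_params = np.array([120.0, 80.0])                    # peak torques of two muscles

def model_torque(params, angles):
    f1, f2 = params
    return f1 * np.sin(angles) + f2 * np.sin(2 * angles)  # toy moment-arm shapes

rng = np.random.default_rng(2)
measured = model_torque(true_params, angles) + rng.normal(0, 5, angles.size)

def cost(params):
    return np.sum((model_torque(params, angles) - measured) ** 2)

# Stochastic global search as a stand-in for the genetic algorithm
result = differential_evolution(cost, bounds=[(0, 300), (0, 300)], seed=0)
print("recovered parameters:", result.x.round(1), " true:", true_params)
```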
Weak nanoscale chaos and anomalous relaxation in DNA
NASA Astrophysics Data System (ADS)
Mazur, Alexey K.
2017-06-01
Anomalous nonexponential relaxation in hydrated biomolecules is commonly attributed to the complexity of the free-energy landscapes, similarly to polymers and glasses. It was found recently that the hydrogen-bond breathing of terminal DNA base pairs exhibits a slow power-law relaxation attributable to weak Hamiltonian chaos, with parameters similar to experimental data. Here, the relationship is studied between this motion and spectroscopic signals measured in DNA with a small molecular photoprobe inserted into the base-pair stack. To this end, the earlier computational approach in combination with an analytical theory is applied to the experimental DNA fragment. It is found that the intensity of breathing dynamics is strongly increased in the internal base pairs that flank the photoprobe, with anomalous relaxation quantitatively close to that in terminal base pairs. A physical mechanism is proposed to explain the coupling between the relaxation of base-pair breathing and the experimental response signal. It is concluded that the algebraic relaxation observed experimentally is very likely a manifestation of weakly chaotic dynamics of hydrogen-bond breathing in the base pairs stacked to the photoprobe and that the weak nanoscale chaos can represent a ubiquitous hidden source of nonexponential relaxation in ultrafast spectroscopy.
Model-Based Estimation of Knee Stiffness
Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J.
2013-01-01
During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step towards our ultimate goal of quantifying knee stiffness during gait. PMID:22801482
Physics-based enzyme design: predicting binding affinity and catalytic activity.
Sirin, Sarah; Pearlman, David A; Sherman, Woody
2014-12-01
Computational enzyme design is an emerging field that has yielded promising success stories, but where numerous challenges remain. Accurate methods to rapidly evaluate possible enzyme design variants could provide significant value when combined with experimental efforts by reducing the number of variants needed to be synthesized and speeding the time to reach the desired endpoint of the design. To that end, extending our computational methods to model the fundamental physical-chemical principles that regulate activity in a protocol that is automated and accessible to a broad population of enzyme design researchers is essential. Here, we apply a physics-based implicit solvent MM-GBSA scoring approach to enzyme design and benchmark the computational predictions against experimentally determined activities. Specifically, we evaluate the ability of MM-GBSA to predict changes in affinity for a steroid binder protein, catalytic turnover for a Kemp eliminase, and catalytic activity for α-Gliadin peptidase variants. Using the enzyme design framework developed here, we accurately rank the most experimentally active enzyme variants, suggesting that this approach could provide enrichment of active variants in real-world enzyme design applications. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ramazani, Ali; Mukherjee, Krishnendu; Prahl, Ulrich; Bleck, Wolfgang
2012-10-01
The flow behavior of dual-phase (DP) steels is modeled in a finite-element method (FEM) framework on the microscale, considering the effect of the microstructure through the representative volume element (RVE) approach. Two-dimensional RVEs were created from microstructures of experimentally obtained DP steels with various ferrite grain sizes. The flow behavior of the single phases was modeled through the dislocation-based work-hardening approach. The volume change during the austenite-to-martensite transformation was modeled, and the resultant prestrained areas in the ferrite were considered to be the storage place of transformation-induced, geometrically necessary dislocations (GNDs). The flow curves of DP steels with varying ferrite grain sizes, but constant martensite fractions, were obtained from the literature. The simulated flow curves that take the GNDs into account are in better agreement with the experimental flow curves than predictions that do not consider the GNDs. The experimental results obeyed the Hall-Petch relationship between yield stress and flow stress, and the simulations predicted this as well.
Ruel, Jean; Lachance, Geneviève
2010-01-01
This paper presents an experimental study of three bioreactor configurations. The bioreactor is intended to be used for the development of tissue-engineered heart valve substitutes. Therefore it must be able to reproduce physiological flow and pressure waveforms accurately. A detailed analysis of three bioreactor arrangements is presented using mathematical models based on the windkessel (WK) approach. First, a review of the many applications of this approach in medical studies enhances its fundamental nature and its usefulness. Then the models are developed with reference to the actual components of the bioreactor. This study emphasizes different conflicting issues arising in the design process of a bioreactor for biomedical purposes, where an optimization process is essential to reach a compromise satisfying all conditions. Two important aspects are the need for a simple system providing ease of use and long-term sterility, opposed to the need for an advanced (thus more complex) architecture capable of a more accurate reproduction of the physiological environment. Three classic WK architectures are analyzed, and experimental results enhance the advantages and limitations of each one. PMID:21977286
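A minimal sketch of the windkessel idea, assuming a two-element (resistance-compliance) arrangement and an illustrative half-sine ejection waveform; the parameter values are not those of the bioreactor.

```python
# Minimal sketch of a two-element windkessel (WK) model: arterial pressure driven
# by a prescribed flow waveform through a compliance C and a peripheral
# resistance R. Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

R, C, HR = 1.0, 1.5, 60.0          # mmHg*s/mL, mL/mmHg, beats per minute
period = 60.0 / HR

def inflow(t):
    """Half-sine ejection during the first third of each cardiac cycle."""
    phase = t % period
    return 300.0 * np.sin(np.pi * phase / (period / 3)) if phase < period / 3 else 0.0

def wk2(t, p):
    return [(inflow(t) - p[0] / R) / C]    # C dP/dt = Q_in - P/R

sol = solve_ivp(wk2, (0.0, 5 * period), [80.0], max_step=0.001)
print(f"pressure range: {sol.y[0][2000:].min():.1f}-{sol.y[0][2000:].max():.1f} mmHg")
```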
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which are based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Robotic Anterior and Midline Skull Base Surgery: Preclinical Investigations
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Bert W.; Weinstein, Gregory S.
Purpose: To develop a minimally invasive surgical technique to access the midline and anterior skull base using the optical and technical advantages of robotic surgical instrumentation. Methods and Materials: Ten experimental procedures focusing on approaches to the nasopharynx, clivus, sphenoid, pituitary sella, and suprasellar regions were performed on one cadaver and one live mongrel dog. Both the cadaver and canine procedures were performed in an approved training facility using the da Vinci Surgical Robot. For the canine experiments, a transoral robotic surgery (TORS) approach was used, and for the cadaver a newly developed combined cervical-transoral robotic surgery (C-TORS) approach was investigated and compared with standard TORS. The ability to access and dissect tissues within the various areas of the midline and anterior skull base was evaluated, and techniques to enhance visualization and instrumentation were developed. Results: Standard TORS approaches did not provide adequate access to the midline and anterior skull base; however, the newly developed C-TORS approach was successful in providing the surgical access to these regions of the skull base. Conclusion: Robotic surgery is an exciting minimally invasive approach to the skull base that warrants continued preclinical investigation and development.
NASA Astrophysics Data System (ADS)
Błaszczuk, Artur; Krzywański, Jarosław
2017-03-01
The interrelation between fuzzy logic and cluster renewal approaches for heat transfer modeling in a circulating fluidized bed (CFB) has been established based on local furnace data. The furnace data were measured in a 1296 t/h CFB boiler with a low level of flue gas recirculation. In the present study, the bed temperature and suspension density were treated as experimental variables along the furnace height. The measured bed temperature and suspension density varied in the ranges of 1131-1156 K and 1.93-6.32 kg/m³, respectively. Using the heat transfer coefficient for a commercial CFB combustor, two empirical heat transfer correlations were developed in terms of important operating parameters, including bed temperature and suspension density. The fuzzy logic results were found to be in good agreement with the corresponding experimental heat transfer data obtained based on the cluster renewal approach. The predicted bed-to-wall heat transfer coefficient covered ranges of 109-241 W/(m²K) and 111-240 W/(m²K) for the fuzzy logic and cluster renewal approaches, respectively. The divergence in calculated heat flux recovery along the furnace height between the fuzzy logic and cluster renewal approaches did not exceed ±2%.
A fast method for optical simulation of flood maps of light-sharing detector modules
Shi, Han; Du, Dong; Xu, JianFeng; ...
2015-09-03
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. Here, we present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up photon tracking in detector modules constructed with a polished finish and specular reflector materials. We also simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200-600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and specular reflector materials.
Inter-subject phase synchronization for exploratory analysis of task-fMRI.
Bolt, Taylor; Nomi, Jason S; Vij, Shruti G; Chang, Catie; Uddin, Lucina Q
2018-08-01
Analysis of task-based fMRI data is conventionally carried out using a hypothesis-driven approach, where blood-oxygen-level dependent (BOLD) time courses are correlated with a hypothesized temporal structure. In some experimental designs, this temporal structure can be difficult to define. In other cases, experimenters may wish to take a more exploratory, data-driven approach to detecting task-driven BOLD activity. In this study, we demonstrate the efficiency and power of an inter-subject synchronization approach for exploratory analysis of task-based fMRI data. Combining the tools of instantaneous phase synchronization and independent component analysis, we characterize whole-brain task-driven responses in terms of group-wise similarity in temporal signal dynamics of brain networks. We applied this framework to fMRI data collected during performance of a simple motor task and a social cognitive task. Analyses using an inter-subject phase synchronization approach revealed a large number of brain networks that dynamically synchronized to various features of the task, often not predicted by the hypothesized temporal structure of the task. We suggest that this methodological framework, along with readily available tools in the fMRI community, provides a powerful exploratory, data-driven approach for analysis of task-driven BOLD activity. Copyright © 2018 Elsevier Inc. All rights reserved.
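The instantaneous phase synchronization step described above can be illustrated with a short Python sketch: the analytic signal of each subject's network time course is obtained with a Hilbert transform, and group-wise synchrony is summarized as the modulus of the mean unit phasor across subjects. This is a minimal sketch under assumed data shapes; the study's actual preprocessing, ICA decomposition, and statistics are not reproduced here.

```python
# Minimal sketch: inter-subject phase synchronization of one network's time courses.
# Assumes each subject's time course is already band-limited and extracted
# (e.g., via group ICA); shapes and parameter values are illustrative only.
import numpy as np
from scipy.signal import hilbert

def inter_subject_phase_sync(timecourses):
    """timecourses: array (n_subjects, n_timepoints) for one brain network.
    Returns the time-resolved phase-locking value across subjects."""
    analytic = hilbert(timecourses, axis=1)      # analytic signal per subject
    phases = np.angle(analytic)                  # instantaneous phase
    # Group-level synchrony per time point: 1 = all subjects phase-locked, 0 = random.
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Toy usage: 20 "subjects" sharing a task-driven 0.05 Hz component plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 300, 0.5)                       # 300 s sampled at TR = 0.5 s
shared = np.sin(2 * np.pi * 0.05 * t)
data = shared + 0.8 * rng.standard_normal((20, t.size))
print(inter_subject_phase_sync(data).mean())     # well above chance level
```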
Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J
2006-06-01
QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that it is possible to calculate any partition coefficient in the system 'gas phase/octanol/water' by three different approaches: (1) from the experimental partition coefficients obtained in the corresponding two other subsystems; however, in many cases these data may not be available. A second option is therefore (2) a traditional QSPR analysis based on e.g. HYBOT descriptors (hydrogen bond acceptor and donor factors, SigmaCa and SigmaCd, together with polarisability alpha, a steric bulk effect descriptor) supplemented with substructural indicator variables. (3) A very promising approach is a combination of the similarity concept and QSPR based on HYBOT descriptors; in this approach, the observed partition coefficients of the structurally nearest neighbours of a compound of interest are used. In addition, contributions arising from differences in alpha, SigmaCa, and SigmaCd values between the compound of interest and its nearest neighbour(s) are considered. In this investigation, highly significant relationships were obtained by approaches (1) and (3) for the octanol/gas phase partition coefficient (log Log).
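Approach (1) exploits the thermodynamic triangle between the three partition coefficients: knowing any two determines the third. A minimal sketch of that bookkeeping follows; the sign convention assumed here (log L(octanol-gas) = log L(water-gas) + log P(octanol-water)) is a common one but is an assumption, since the abstract does not state which convention the authors use.

```python
# Sketch of approach (1): in the gas/octanol/water triangle, any one log
# partition coefficient follows from the other two. The additive convention
# used below is an assumption made for illustration.
def third_coefficient(log_ow=None, log_wg=None, log_og=None):
    """Supply exactly two of: log P(octanol-water), log L(water-gas), log L(octanol-gas)."""
    known = [v is not None for v in (log_ow, log_wg, log_og)]
    if sum(known) != 2:
        raise ValueError("exactly two coefficients must be supplied")
    if log_og is None:
        return log_ow + log_wg        # missing octanol-gas coefficient
    if log_ow is None:
        return log_og - log_wg        # missing octanol-water coefficient
    return log_og - log_ow            # missing water-gas coefficient

# Example: log P(octanol-water) = 2.0 and log L(water-gas) = 3.5
print(third_coefficient(log_ow=2.0, log_wg=3.5))   # -> 5.5 = log L(octanol-gas)
```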
Kopyt, Paweł; Celuch, Małgorzata
2007-01-01
A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical of microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver, while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against original experimental data as well as measurement results available in the literature.
The Effect of DBAE Approach on Teaching Painting of Undergraduate Art Students
ERIC Educational Resources Information Center
Hedayat, Mina; Kahn, Sabzali Musa; Honarvar, Habibeh; Bakar, Syed Alwi Syed Abu; Samsuddin, Mohd Effindi
2013-01-01
The aim of this study is to implement a new method of teaching painting which uses the Discipline-Based Art Education (DBAE) approach for the undergraduate art students at Tehran University. In the current study, the quasi-experimental method was used to test the hypothesis three times (pre, mid and post-tests). Thirty students from two classes…
A Mindful Approach to Teaching Emotional Intelligence to Undergraduate Students Online and in Person
ERIC Educational Resources Information Center
Cotler, Jami L.; DiTursi, Dan; Goldstein, Ira; Yates, Jeff; DelBelso, Deb
2017-01-01
In this paper we examine whether emotional intelligence (EI) can be taught online and, if so, what key variables influence the successful implementation of this online learning model. Using a 3 x 2 factorial quasi-experimental design, this mixed-methods study found that a team-based learning environment using a blended teaching approach, supported…
Huang, Wenwen; Ebrahimi, Davoud; Dinjaski, Nina; Tarakanova, Anna; Buehler, Markus J; Wong, Joyce Y; Kaplan, David L
2017-04-18
Tailored biomaterials with tunable functional properties are crucial for a variety of task-specific applications ranging from healthcare to sustainable, novel bio-nanodevices. To generate polymeric materials with predictive functional outcomes, exploiting designs from nature while morphing them toward non-natural systems offers an important strategy. Silks are Nature's building blocks and are produced by arthropods for a variety of uses that are essential for their survival. Due to the genetic control of encoded protein sequence, mechanical properties, biocompatibility, and biodegradability, silk proteins have been selected as prototype models to emulate for the tunable designs of biomaterial systems. The bottom up strategy of material design opens important opportunities to create predictive functional outcomes, following the exquisite polymeric templates inspired by silks. Recombinant DNA technology provides a systematic approach to recapitulate, vary, and evaluate the core structure peptide motifs in silks and then biosynthesize silk-based polymers by design. Post-biosynthesis processing allows for another dimension of material design by controlled or assisted assembly. Multiscale modeling, from the theoretical prospective, provides strategies to explore interactions at different length scales, leading to selective material properties. Synergy among experimental and modeling approaches can provide new and more rapid insights into the most appropriate structure-function relationships to pursue while also furthering our understanding in terms of the range of silk-based systems that can be generated. This approach utilizes nature as a blueprint for initial polymer designs with useful functions (e.g., silk fibers) but also employs modeling-guided experiments to expand the initial polymer designs into new domains of functional materials that do not exist in nature. The overall path to these new functional outcomes is greatly accelerated via the integration of modeling with experiment. In this Account, we summarize recent advances in understanding and functionalization of silk-based protein systems, with a focus on the integration of simulation and experiment for biopolymer design. Spider silk was selected as an exemplary protein to address the fundamental challenges in polymer designs, including specific insights into the role of molecular weight, hydrophobic/hydrophilic partitioning, and shear stress for silk fiber formation. To expand current silk designs toward biointerfaces and stimuli responsive materials, peptide modules from other natural proteins were added to silk designs to introduce new functions, exploiting the modular nature of silk proteins and fibrous proteins in general. The integrated approaches explored suggest that protein folding, silk volume fraction, and protein amino acid sequence changes (e.g., mutations) are critical factors for functional biomaterial designs. In summary, the integrated modeling-experimental approach described in this Account suggests a more rationally directed and more rapid method for the design of polymeric materials. It is expected that this combined use of experimental and computational approaches has a broad applicability not only for silk-based systems, but also for other polymer and composite materials.
Li, Linglong; Yang, Yaodong; Zhang, Dawei; ...
2018-03-30
Exploration of phase transitions and construction of associated phase diagrams are of fundamental importance for condensed matter physics and materials science alike, and remain the focus of extensive research for both theoretical and experimental studies. For the latter, comprehensive studies involving scattering, thermodynamics, and modeling are typically required. We present a new approach to data mining multiple realizations of collective dynamics, measured through piezoelectric relaxation studies, to identify the onset of a structural phase transition in nanometer-scale volumes, that is, the probed volume of an atomic force microscope tip. Machine learning is used to analyze the multidimensional data sets describing relaxation to voltage and thermal stimuli, producing the temperature-bias phase diagram for a relaxor crystal without the need to measure (or know) the order parameter. The suitability of the approach to determine the phase diagram is shown with simulations based on a two-dimensional Ising model. Finally, these results indicate that machine learning approaches can be used to determine phase transitions in ferroelectrics, providing a general, statistically significant, and robust approach toward determining the presence of critical regimes and phase boundaries.
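The general pattern of mining relaxation curves for a phase boundary can be sketched as follows: stack the curves measured on a grid of stimulus conditions, reduce them to a few features, and cluster the reduced representations so that the cluster boundary marks the transition. The stretched-exponential toy data and the PCA plus k-means pipeline below are assumptions for illustration, not the paper's actual workflow.

```python
# Sketch: cluster relaxation curves measured at different temperatures so that
# the change in cluster label estimates the transition temperature. Toy data
# and the PCA + k-means choice are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
t = np.linspace(0.01, 1.0, 100)                      # relaxation time axis
temps = np.linspace(300, 700, 25)

curves = []
for T in temps:
    beta = 0.4 if T < 500 else 0.9                   # different dynamics per phase
    curves.append(np.exp(-(t / 0.2) ** beta) + 0.02 * rng.standard_normal(t.size))
curves = np.array(curves)

features = PCA(n_components=3).fit_transform(curves)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
# The temperature at which the cluster label first flips estimates the transition.
print(temps[np.argmax(clusters != clusters[0])])
```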
NASA Astrophysics Data System (ADS)
Matoušek, Václav; Kesely, Mikoláš; Vlasák, Pavel
2018-06-01
The deposition velocity is an important operating parameter in the hydraulic transport of solid particles in pipelines. It represents the flow velocity at which transported particles start to settle out at the bottom of the pipe and are no longer transported. A number of predictive models have been developed to determine this threshold velocity for slurry flows of different solids fractions (fractions of different grain size and density). Most of the models consider flow in a horizontal pipe only; modelling approaches for inclined flows are extremely scarce, partly due to a lack of experimental information about the effect of pipe inclination on the slurry flow pattern and behaviour. We survey different approaches to the modelling of particle deposition in flowing slurry and discuss the mechanisms on which deposition-limit models are based. Furthermore, we analyse possibilities for incorporating the effect of flow inclination into the predictive models and select the most appropriate ones based on their ability to adapt the modelled deposition mechanisms to the conditions associated with flow inclination. The usefulness of the selected modelling approaches and their modifications is demonstrated by comparing model predictions with experimental results for inclined slurry flows from our own laboratory and from the literature.
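To give a concrete sense of a deposition-limit model, the following sketch evaluates a Durand-Condolios-type deposit velocity with a crude inclination correction. The specific models surveyed in the paper are not reproduced; both the functional form and the cosine correction are assumptions made only for demonstration.

```python
# Illustrative sketch only: a Durand-Condolios-type deposition-limit velocity
# with a naive inclination correction (not the paper's models).
import math

def deposit_velocity(F_L, D, rho_s, rho_f=1000.0, g=9.81, incline_deg=0.0):
    """F_L: empirical Durand factor (~0.8-1.5), D: pipe diameter [m],
    rho_s / rho_f: solid / fluid densities [kg/m^3]."""
    s = rho_s / rho_f                                   # relative solids density
    v_horizontal = F_L * math.sqrt(2.0 * g * D * (s - 1.0))
    # Assumption: only the bed-normal gravity component drives settling.
    return v_horizontal * math.sqrt(max(math.cos(math.radians(incline_deg)), 0.0))

# Sand slurry in a 100 mm pipe inclined 15 degrees upward.
print(deposit_velocity(F_L=1.1, D=0.1, rho_s=2650.0, incline_deg=15.0))
```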
Pivato, Alberto; Lavagnolo, Maria Cristina; Manachini, Barbara; Vanin, Stefano; Raga, Roberto; Beggio, Giovanni
2017-04-01
The Italian legislation on contaminated soils does not include Ecological Risk Assessment (ERA), and this deficiency has important consequences for the sustainable management of agricultural soils. The present research compares the results of two ERA procedures applied to agriculture: (i) one based on the "substance-based" approach and (ii) a second based on the "matrix-based" approach. In the former, the soil screening values (SVs) for individual substances were derived according to institutional foreign guidelines. In the latter, the SVs characterizing the whole matrix were derived by the authors by means of experimental work. The results indicate that the "matrix-based" approach can be efficiently implemented in the Italian legislation for the ERA of agricultural soils. This method, compared to the institutionalized "substance-based" approach, (i) is comparable in economic terms and in testing time, (ii) is site specific and assesses the real effect of the investigated soil on a battery of bioassays, (iii) accounts for phenomena that may radically modify the exposure of the organisms to the totality of contaminants, and (iv) can be considered sufficiently conservative.
Characterizing traffic under uncertain disruptions : an experimental approach.
DOT National Transportation Integrated Search
2013-03-01
The objective of the research is to study long-term traffic patterns under uncertain disruptions using data collected from human subjects who simultaneously make route choices in controlled PC-based laboratory experiments. Uncertain disruptions t...
A community-based health education analysis of an infectious disease control program in Nigeria.
Adeyanju, O M
1987-01-01
This descriptive study utilized the strategy of primary health care in program development, especially a community-based health education intervention approach, in the control of guinea-worm in rural communities of Nigeria. Two closely related rural communities in two states served as target groups. A committee system approach, nominal group process, interview methods, audio-visual aids, and health care volunteer training were the educational strategies employed in a control and experimental set-up. The PRECEDE model was applied in the analysis. Results show a significant control action on guinea-worm infestation in the experimental community and a tremendous achievement in preventive health education interventions through organized community participation/involvement and ultimate self-reliance and individual responsibility. A positive increase in health knowledge and attitude, examined through the interview method, and observable changes in health behavior were noticed. Wells were provided, drinking water was treated, and personal and community health promotion strategies were encouraged by all. The study has shown the effectiveness/efficacy of a community-based effort facilitated by a health educator.
Faye, Alexandrine; Jacquin-Courtois, Sophie; Osiurak, François
2018-03-01
The purpose of this study was to deepen our understanding of the cognitive bases of human tool use based on the technical reasoning hypothesis (i.e., the reasoning-based approach). This approach assumes that tool use is supported by the ability to reason about an object's physical properties (e.g., length, weight, strength, etc.) to perform mechanical actions (e.g., lever). In this framework, an important issue is to understand whether left-brain-damaged (LBD) individuals with tool-use deficits are still able to estimate the physical object's properties necessary to use the tool. Eleven LBD patients and 12 control participants performed 3 original experimental tasks: Use-Length (visual evaluation of the length of a stick to bring down a target), Visual-Length (to visually compare objects of different lengths) and Addition-Length (to visually compare added lengths). Participants were also tested on conventional tasks: Familiar Tool Use and Mechanical Problem-Solving (novel tools). LBD patients had more difficulties than controls on both conventional tasks. No significant differences were observed for the 3 experimental tasks. These results extend the reasoning-based approach, stressing that it might not be the representation of length that is impaired in LBD patients, but rather the ability to generate mechanical actions based on physical object properties. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Mental energy: Assessing the motivation dimension.
Barbuto, John E
2006-07-01
Content-based theories of motivation may best utilize the meta-theory of work motivation. Process-based theories may benefit most from adopting Locke and Latham's goal-setting approaches and measures. Decision-making theories should utilize the measurement approach operationalized by Ilgen et al. Sustained effort theories should utilize approaches similar to those used in numerous studies of intrinsic motivation, the measurement of which is typically observational or attitudinal. This paper explored the implications of the four approaches to studying motivation on the newly established model of mental energy. The approach taken for examining motivation informs the measurement of mental energy. Specific recommendations for each approach were developed and provided. As a result of these efforts, it will now be possible to diagnose, measure, and experimentally test for changes in human motivation, which is one of the three major components of mental energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celli, Paolo, E-mail: pcelli@umn.edu; Gonella, Stefano, E-mail: sgonella@umn.edu
2015-08-24
In this letter, we discuss a versatile, fully reconfigurable experimental platform for the investigation of phononic phenomena in metamaterial architectures. The approach revolves around the use of 3D laser vibrometry to reconstruct global and local wavefield features in specimens obtained through simple arrangements of LEGO® bricks on a thin baseplate. The agility by which it is possible to reconfigure the brick patterns into a nearly endless spectrum of topologies makes this an effective approach for rapid experimental proof of concept, as well as a powerful didactic tool, in the arena of phononic crystals and metamaterials engineering. We use our platform to provide a compelling visual illustration of important spatial wave manipulation effects (waveguiding and seismic isolation), and to elucidate fundamental dichotomies between Bragg-based and locally resonant bandgap mechanisms.
Sardo, Mariana; Siegel, Renée; Santos, Sérgio M; Rocha, João; Gomes, José R B; Mafra, Luis
2012-06-28
We present a complete set of experimental approaches for the NMR assignment of powdered tripeptide glutathione at natural isotopic abundance, based on J-coupling and dipolar NMR techniques combined with (1)H CRAMPS decoupling. To fully assign the spectra, two-dimensional (2D) high-resolution methods, such as (1)H-(13)C INEPT-HSQC/PRESTO heteronuclear correlations (HETCOR), (1)H-(1)H double-quantum (DQ), and (1)H-(14)N D-HMQC correlation experiments, have been used. To support the interpretation of the experimental data, periodic density functional theory calculations together with the GIPAW approach have been used to calculate the (1)H and (13)C chemical shifts. It is found that the shifts calculated with two popular plane wave codes (CASTEP and Quantum ESPRESSO) are in excellent agreement with the experimental results.
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
2017-11-01
Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
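The measurement variables named above (angular momentum and speed of the social-state trajectory) can be computed directly from time series of strategy frequencies. The sketch below uses discrete-time definitions about the centroid of the Rock-Paper-Scissors simplex; these definitions and the toy trajectory are illustrative assumptions, not the paper's exact estimators.

```python
# Sketch: angular momentum and speed of the social-state trajectory in an RPS
# experiment, where the social state is the vector of strategy fractions
# (x_R, x_P, x_S). Discrete-time definitions about the centroid are assumed.
import numpy as np

def trajectory_observables(x):
    """x: array (T, 3) of strategy fractions over T periods (rows sum to 1)."""
    centre = np.array([1.0, 1.0, 1.0]) / 3.0
    r = x[:-1] - centre                  # position relative to the centroid
    v = np.diff(x, axis=0)               # per-period velocity of the state
    L = np.cross(r, v)                   # angular momentum vector per step
    speed = np.linalg.norm(v, axis=1)    # per-period speed
    return L.mean(axis=0), speed.mean()

# Toy usage with a weakly cycling trajectory around the centroid.
t = np.arange(200)
cycle = 0.05 * np.column_stack([np.cos(0.2 * t), np.cos(0.2 * t - 2.09),
                                np.cos(0.2 * t + 2.09)])
states = cycle + 1.0 / 3.0
print(trajectory_observables(states))
```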
Cierniak, Robert; Lorent, Anna
2016-09-01
The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem, and in this way to demonstrate the superiority of this approach over ones recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology that is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rodionov, S. N.; Martin, J. H.
1999-07-01
A novel approach to climate forecasting on an interannual time scale is described. The approach is based on concepts and techniques from artificial intelligence and expert systems. The suitability of this approach to climate diagnostics and forecasting problems and its advantages compared with conventional forecasting techniques are discussed. The article highlights some practical aspects of the development of climatic expert systems (CESs) and describes an implementation of such a system for the North Atlantic (CESNA). Particular attention is paid to the content of CESNA's knowledge base and those conditions that make climatic forecasts one to several years in advance possible. A detailed evaluation of the quality of the experimental real-time forecasts made by CESNA for the winters of 1995-1996, 1996-1997 and 1997-1998 is presented.
Computational Prediction of Metabolism: Sites, Products, SAR, P450 Enzyme Dynamics, and Mechanisms
2012-01-01
Metabolism of xenobiotics remains a central challenge for the discovery and development of drugs, cosmetics, nutritional supplements, and agrochemicals. Metabolic transformations are frequently related to the incidence of toxic effects that may result from the emergence of reactive species, the systemic accumulation of metabolites, or by induction of metabolic pathways. Experimental investigation of the metabolism of small organic molecules is particularly resource demanding; hence, computational methods are of considerable interest to complement experimental approaches. This review provides a broad overview of structure- and ligand-based computational methods for the prediction of xenobiotic metabolism. Current computational approaches to address xenobiotic metabolism are discussed from three major perspectives: (i) prediction of sites of metabolism (SOMs), (ii) elucidation of potential metabolites and their chemical structures, and (iii) prediction of direct and indirect effects of xenobiotics on metabolizing enzymes, where the focus is on the cytochrome P450 (CYP) superfamily of enzymes, the cardinal xenobiotics metabolizing enzymes. For each of these domains, a variety of approaches and their applications are systematically reviewed, including expert systems, data mining approaches, quantitative structure–activity relationships (QSARs), and machine learning-based methods, pharmacophore-based algorithms, shape-focused techniques, molecular interaction fields (MIFs), reactivity-focused techniques, protein–ligand docking, molecular dynamics (MD) simulations, and combinations of methods. Predictive metabolism is a developing area, and there is still enormous potential for improvement. However, it is clear that the combination of rapidly increasing amounts of available ligand- and structure-related experimental data (in particular, quantitative data) with novel and diverse simulation and modeling approaches is accelerating the development of effective tools for prediction of in vivo metabolism, which is reflected by the diverse and comprehensive data sources and methods for metabolism prediction reviewed here. This review attempts to survey the range and scope of computational methods applied to metabolism prediction and also to compare and contrast their applicability and performance. PMID:22339582
ERIC Educational Resources Information Center
Kubinger, Klaus D.; Wiesflecker, Sabine; Steindl, Renate
2008-01-01
An interview guide for children and adolescents, which is based on systemic therapy, has recently been added to the collection of published instruments for psychological interviews. This article aims to establish the amount of information gained during a psychological investigation using the Systemic-based Interview Guide rather than an intuitive,…
Oligomerization of G protein-coupled receptors: computational methods.
Selent, J; Kaczor, A A
2011-01-01
Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role for predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies as well as discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not yield always consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method especially when guided by experimental constraints. Some disadvantages like limited receptor flexibility and non-consideration of the membrane environment have to be taken into account. Molecular dynamics simulation can overcome these drawbacks giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes for fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug target for diverse diseases, unveiling molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.
Bandwidth correction for LED chromaticity based on Levenberg-Marquardt algorithm
NASA Astrophysics Data System (ADS)
Huang, Chan; Jin, Shiqun; Xia, Guo
2017-10-01
Light emitting diodes (LEDs) are widely employed in industrial applications and scientific research. With a spectrometer, the chromaticity of an LED can be measured. However, chromaticity shift will occur due to the broadening effects of the spectrometer. In this paper, an approach to bandwidth correction for LED chromaticity based on the Levenberg-Marquardt algorithm is put forward. We compare the chromaticity of simulated LED spectra after bandwidth correction using the proposed method and the differential operator method. The experimental results show that the proposed approach achieves excellent performance in bandwidth correction, which proves the effectiveness of the approach. The method has also been tested on real blue LED spectra.
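One common way to apply Levenberg-Marquardt to bandwidth correction is to model the measured spectrum as the true (narrow) LED spectrum convolved with the spectrometer slit function and fit the true-spectrum parameters; the corrected chromaticity is then computed from the recovered spectrum. The Gaussian line shape, Gaussian slit function, and all parameter values below are assumptions for illustration and are not taken from the paper.

```python
# Sketch of bandwidth correction by Levenberg-Marquardt fitting: recover the
# narrow LED spectrum from a slit-broadened measurement. Gaussian shapes and
# all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

SLIT_FWHM = 10.0                                   # assumed instrument bandwidth, nm
wl = np.linspace(400, 500, 501)                    # wavelength grid, nm

def gaussian(x, amp, centre, fwhm):
    sigma = fwhm / 2.3548
    return amp * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

def broadened(x, amp, centre, fwhm):
    # Convolution of two Gaussians: widths add in quadrature.
    return gaussian(x, amp, centre, np.hypot(fwhm, SLIT_FWHM))

# Synthetic "measured" blue-LED peak: true FWHM 20 nm, broadened by the slit.
rng = np.random.default_rng(1)
measured = broadened(wl, 1.0, 450.0, 20.0) + 0.01 * rng.standard_normal(wl.size)

popt, _ = curve_fit(broadened, wl, measured, p0=[0.8, 455.0, 15.0], method="lm")
true_spectrum = gaussian(wl, *popt)                # bandwidth-corrected spectrum
print("recovered centre %.1f nm, FWHM %.1f nm" % (popt[1], popt[2]))
```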
Problem of lunar mascons: An alternative approach
NASA Astrophysics Data System (ADS)
Barenbaum, A. A.; Shpekin, M. I.
2018-01-01
The origin of lunar mascons is discussed on the basis of results from the orbital experimental exploration of the Moon by the Gravity Recovery and Interior Laboratory and the Lunar Reconnaissance Orbiter missions. The discussion is framed within the Galactocentric paradigm, which links processes in the Solar System and on its planets with the influence of the Galaxy. The article describes a new approach to the interpretation of the crater data, which takes into account the quasi-periodic bombardments of the Moon by galactic comets. We present a preliminary evaluation of the age of mascons, as well as of craters and mares on the Moon, based on this approach.
Mixture experiment methods in the development and optimization of microemulsion formulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furlanetto, Sandra; Cirri, Marzia; Piepel, Gregory F.
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing their solubility and dissolution properties to be improved. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil, and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1 v/v), 5% oil (Labrafac Hydro) and 17% aqueous (water). The stable region of MEs was identified using mixture experiment methods for the first time.
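Mixture experiment analyses of this kind are often carried out by fitting a Scheffé-type polynomial to the response over the constrained composition region. The sketch below fits a Scheffé quadratic model to transmittance data by least squares and predicts the response for a new blend; the design points and transmittance values are made up for illustration and are not the paper's 13-run design.

```python
# Sketch: fit a Scheffe quadratic mixture model for a 3-component microemulsion
# (aqueous, oil, surfactant/cosurfactant). Design points and %T values are
# hypothetical, used only to show the fitting step.
import numpy as np

def scheffe_quadratic(X):
    """X: (n, 3) component proportions (rows sum to 1).
    Returns the model matrix with columns x1, x2, x3, x1x2, x1x3, x2x3."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Hypothetical 13-run design (aqueous, oil, S/CoS) and transmittance responses.
X = np.array([[0.10, 0.05, 0.85], [0.20, 0.05, 0.75], [0.30, 0.05, 0.65],
              [0.10, 0.10, 0.80], [0.20, 0.10, 0.70], [0.30, 0.10, 0.60],
              [0.15, 0.07, 0.78], [0.25, 0.08, 0.67], [0.17, 0.05, 0.78],
              [0.22, 0.06, 0.72], [0.12, 0.08, 0.80], [0.28, 0.07, 0.65],
              [0.20, 0.07, 0.73]])
y = np.array([98, 95, 70, 96, 90, 55, 97, 82, 97, 93, 96, 63, 92.0])

beta, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)
new_blend = np.array([[0.17, 0.05, 0.78]])
pred = (scheffe_quadratic(new_blend) @ beta)[0]
print("predicted transmittance: %.1f %%" % pred)
```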
Visser, Lennart; Schoonenboom, Judith; Korthagen, Fred A J
2017-01-01
This study reports on the effect of a newly developed 4-week strengths-based training approach to overcome academic procrastination, given to first-year elementary teacher education students (N = 54). The training was based on a strengths-based approach, in which elements of the cognitive behavioral approach were also used. The purpose of the training was to promote awareness of the personal strengths of students who experience academic procrastination regularly and to teach them how to use their personal strengths in situations in which they usually tend to procrastinate. With a pretest-posttest control group design (two experimental groups: n = 31, control group: n = 23), the effect of the training on academic procrastination was studied after 1, 11, and 24 weeks. Results of a one-way analysis of covariance revealed a significant short-term effect of the training. In the long term (after 11 and 24 weeks), the scores for academic procrastination for the intervention groups remained stable, whereas the scores for academic procrastination for the control group decreased to the same level as those of the intervention groups. The findings of this study suggest that a strengths-based approach can be helpful to students at an early stage of their academic studies in initiating their individual process of dealing with academic procrastination. The findings for the long term show the importance of measuring the outcomes of an intervention not only shortly after the intervention but also in the long term. Further research is needed to find out how the short-term effect can be maintained in the long term.
Dinh, Duy; Tamine, Lynda; Boubekeur, Fatiha
2013-02-01
The aim of this work is to evaluate a set of indexing and retrieval strategies based on the integration of several biomedical terminologies on the available TREC Genomics collections for an ad hoc information retrieval (IR) task. We propose a multi-terminology based concept extraction approach for selecting the best concepts from free text by means of voting techniques. We instantiate this general approach on four terminologies (MeSH, SNOMED, ICD-10 and GO). We particularly focus on the effect of integrating terminologies into a biomedical IR process, and the utility of using voting techniques to combine the concepts extracted from each document in order to provide a list of unique concepts. Experimental studies conducted on the TREC Genomics collections show that the improvements achieved by our voting-based multi-terminology IR approach over the baseline are statistically significant. For example, tested on the 2005 TREC Genomics collection, our multi-terminology based IR approach provides an improvement rate of +6.98% in terms of MAP (mean average precision) (p<0.05) compared to the baseline. In addition, our experimental results show that document expansion using preferred terms, in combination with query expansion using terms from top-ranked expanded documents, improves biomedical IR effectiveness. We have evaluated several voting models for combining concepts issued from multiple terminologies. Through this study, we presented many factors affecting the effectiveness of a biomedical IR system, including term weighting, query expansion, and document expansion models. The appropriate combination of those factors could be useful to improve IR performance. Copyright © 2012 Elsevier B.V. All rights reserved.
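The voting step, combining ranked concept lists produced by several terminologies into one list per document, can be illustrated with a simple Borda-count fusion. The specific voting models evaluated in the paper are not reproduced here; the concept identifiers and rankings below are illustrative placeholders.

```python
# Sketch: Borda-count fusion of concept lists extracted from several
# terminologies for one document. Concept ids and rankings are hypothetical.
from collections import defaultdict

def borda_fuse(ranked_lists, top_k=5):
    """ranked_lists: dict terminology -> list of concept ids, best first."""
    scores = defaultdict(float)
    for concepts in ranked_lists.values():
        n = len(concepts)
        for rank, concept in enumerate(concepts):
            scores[concept] += n - rank          # higher score for better ranks
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

doc_concepts = {
    "MeSH":   ["D016158", "D009369", "D005796"],
    "SNOMED": ["363346000", "D009369", "D016158"],
    "GO":     ["GO:0006915", "D016158"],
}
print(borda_fuse(doc_concepts))   # concepts supported by several terminologies rank first
```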
The Development of a Fiber Optic Raman Temperature Measurement System for Rocket Flows
NASA Technical Reports Server (NTRS)
Degroot, Wim A.
1992-01-01
A fiber-optic Raman diagnostic system for H2/O2 rocket flows is currently under development. This system is designed for measurement of temperature and major species concentrations in the combustion chamber and part of the nozzle of a 100 Newton thrust rocket currently undergoing testing. This paper describes a measurement system based on the spontaneous Raman scattering phenomenon. An analysis of the principles behind the technique is given. Software is developed to measure temperature and major species concentrations by comparing theoretical Raman scattering spectra with experimentally obtained spectra. Equipment selection and the experimental approach are summarized. This experimental program is part of an ongoing effort to evaluate Navier-Stokes based analyses for this class of rocket.
Experimental confirmation of a PDE-based approach to design of feedback controls
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Silcox, R. J.; Metcalf, Vern L.
1995-01-01
Issues regarding the experimental implementation of partial differential equation based controllers are discussed in this work. While the motivating application involves the reduction of vibration levels for a circular plate through excitation of surface-mounted piezoceramic patches, the general techniques described here will extend to a variety of applications. The initial step is the development of a PDE model which accurately captures the physics of the underlying process. This model is then discretized to yield a vector-valued initial value problem. Optimal control theory is used to determine continuous-time voltages to the patches, and the approximations needed to facilitate discrete time implementation are addressed. Finally, experimental results demonstrating the control of both transient and steady state vibrations through these techniques are presented.
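The workflow outlined above (PDE model, discretization to a state-space system, optimal control for the patch voltages) can be sketched with a linear-quadratic regulator computed on a low-order modal model. The two-mode plate approximation and every numerical value below are toy assumptions, not the controller or model used in the paper.

```python
# Sketch: LQR feedback on a modal state-space discretization of a vibrating
# structure driven by a piezoceramic patch. Two modes and all numbers are
# illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

# States are [q1, q2, qdot1, qdot2] for two lightly damped modes.
omega = np.array([60.0, 150.0]) * 2 * np.pi       # modal frequencies [rad/s]
zeta = 0.01                                       # modal damping ratio
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag(omega**2), -np.diag(2 * zeta * omega)]])
B = np.array([[0.0], [0.0], [0.4], [0.7]])        # assumed patch influence per mode

Q = np.diag([1e4, 1e4, 1.0, 1.0])                 # penalize modal displacement
R = np.array([[1e-3]])                            # penalize control voltage

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                   # continuous-time law u = -K x
print("LQR gain:", K)

# Closed-loop eigenvalues confirm added damping relative to the open-loop modes.
print(np.linalg.eigvals(A - B @ K))
```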
Promoting students' conceptual understanding using STEM-based e-book
NASA Astrophysics Data System (ADS)
Komarudin, U.; Rustaman, N. Y.; Hasanah, L.
2017-05-01
This study aims to examine the effect of a Science, Technology, Engineering, and Mathematics (STEM) based e-book in promoting students' conceptual understanding of the lever system in the human body. The e-book used was the one published by the National Ministry of Science Education. The research was conducted as a quasi-experiment with a pretest-posttest design. The subjects consisted of two classes of 8th grade junior high school students in Pangkalpinang, Indonesia, divided into an experimental group (n = 34) and a control group (n = 32). The students in the experimental group were taught with the STEM-based e-book, while the control group learned with a non-STEM-based e-book. Data were collected with pretest and posttest instruments; the pretest and posttest were scored and then analyzed using descriptive statistics and independent t-tests. The independent samples t-test shows no significant difference in students' pretest scores between the control and experimental groups. However, there were significant differences in students' posttest scores and N-gain scores between the control and experimental groups (sig = 0.000, p < 0.005). The N-gain analysis shows that students in the experimental group performed better (mean = 66.03) than those in the control group (mean = 47.66) in answering conceptual understanding questions. Based on these results, it can be concluded that the STEM-based e-book has a positive impact in promoting students' understanding of the lever system in the human body. Therefore, this learning approach has potential as an alternative to trigger the enhancement of students' understanding in science.
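The analysis pipeline reported above (normalized gain per student followed by an independent-samples t-test between groups) is straightforward to reproduce. The sketch below uses made-up score arrays; only the procedure, not the data, mirrors the study.

```python
# Sketch of the analysis: Hake's normalized gain per student, then an
# independent-samples t-test between experimental and control groups.
# Score arrays are simulated placeholders.
import numpy as np
from scipy.stats import ttest_ind

def n_gain(pre, post, max_score=100.0):
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return 100.0 * (post - pre) / (max_score - pre)   # percent normalized gain

rng = np.random.default_rng(3)
pre_exp,  post_exp  = rng.normal(45, 8, 34), rng.normal(82, 7, 34)
pre_ctrl, post_ctrl = rng.normal(44, 8, 32), rng.normal(70, 8, 32)

gain_exp, gain_ctrl = n_gain(pre_exp, post_exp), n_gain(pre_ctrl, post_ctrl)
t, p = ttest_ind(gain_exp, gain_ctrl, equal_var=False)
print(f"mean N-gain: exp {gain_exp.mean():.1f}, ctrl {gain_ctrl.mean():.1f}, p = {p:.4f}")
```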
Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
ERIC Educational Resources Information Center
Dresel, Markus; Rindermann, Heiner
2011-01-01
Counseling instructors using evaluations made by their students has shown to be a fruitful approach to enhancing teaching quality. However, prior experimental studies are questionable in terms of external validity. Therefore, we conducted a non-experimental intervention study in which all of the courses offered by a specific department at a German…
An experimental and theoretical analysis of a foil-air bearing rotor system
NASA Astrophysics Data System (ADS)
Bonello, P.; Hassan, M. F. Bin
2018-01-01
Although there is considerable research on the experimental testing of foil-air bearing (FAB) rotor systems, only a small fraction has been correlated with simulations from a full nonlinear model that links the rotor, air film and foil domains, due to modelling complexity and computational burden. An approach for the simultaneous solution of the three domains as a coupled dynamical system, introduced by the first author and adopted by independent researchers, has recently demonstrated its capability to address this problem. This paper uses this approach, with further developments, in an experimental and theoretical study of a FAB-rotor test rig. The test rig is described in detail, including issues with its commissioning. The theoretical analysis uses a recently introduced modal-based bump foil model that accounts for interaction between the bumps and their inertia. The imposition of pressure constraints on the air film is found to delay the predicted onset of instability speed. The results lend experimental validation to a recent theoretically-based claim that the Gümbel condition may not be appropriate for a practical single-pad FAB. The satisfactory prediction of the salient features of the measured nonlinear behavior shows that the air film is indeed highly influential on the response, in contrast to an earlier finding.
Ahmad, Zaki Uddin; Chao, Bing; Konggidinata, Mas Iwan; Lian, Qiyu; Zappi, Mark E; Gang, Daniel Dianchen
2018-04-27
Numerous research works in the adsorption area have been based on experimental approaches. These approaches rely on a trial-and-error process and are extremely time consuming. Molecular simulation is a new tool that can be used to design and predict the performance of an adsorbent. This research proposes a simulation technique that can greatly reduce the time required to design an adsorbent. In this study, a new rhombic ordered mesoporous carbon (OMC) model is proposed and constructed with various pore sizes and oxygen contents using the Materials Visualizer module to optimize the structure of OMC for resorcinol adsorption. The specific surface area, pore volume, small-angle X-ray diffraction pattern, and resorcinol adsorption capacity were calculated with the Forcite and Sorption modules in the Materials Studio package. The simulation results were validated experimentally by synthesizing OMC with different pore sizes and oxygen contents prepared via the hard template method employing an SBA-15 silica scaffold. Boric acid was used as the pore-expanding reagent to synthesize OMC with different pore sizes (from 4.6 to 11.3 nm) and varying oxygen contents (from 11.9% to 17.8%). Based on the simulation and experimental validation, the optimal pore size was found to be 6 nm for maximum adsorption of resorcinol. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Wey, Thomas; Liu, Nan-Suey
2008-01-01
This paper first describes the fluid network approach recently implemented in the National Combustion Code (NCC) for the simulation of the transport of aerosols (volatile particles and soot) in particulate sampling systems. This network-based approach complements the other two approaches already in the NCC, namely, the lower-order temporal approach and the CFD-based approach. The accuracy and computational costs of these three approaches are then investigated in terms of their application to the prediction of particle losses through sample transmission and distribution lines. Their predictive capabilities are assessed by comparing the computed results with experimental data. The present work will help establish standard methodologies for measuring the size and concentration of particles in high-temperature, high-velocity jet engine exhaust. Furthermore, the present work also represents the first step of a long-term effort to validate physics-based tools for the prediction of aircraft particulate emissions.
Magnetism as indirect tool for carbon content assessment in nickel nanoparticles
NASA Astrophysics Data System (ADS)
Oumellal, Y.; Magnin, Y.; Martínez de Yuso, A.; Aguiar Hualde, J. M.; Amara, H.; Paul-Boncour, V.; Matei Ghimbeu, C.; Malouche, A.; Bichara, C.; Pellenq, R.; Zlotea, C.
2017-12-01
We report a combined experimental and theoretical study to ascertain carbon solubility in nickel nanoparticles embedded in a carbon matrix via the one-pot method. This original approach is based on the experimental characterization of the magnetic properties of Ni at room temperature and Monte Carlo simulations used to calculate the magnetization as a function of C content in Ni nanoparticles. Other commonly used experimental methods fail to provide an accurate chemical analysis of these types of nanoparticles. We could thus assess the C content within the Ni nanoparticles, which decreases from 8 to around 4 at.% with increasing synthesis temperature. This behavior could be related to the catalytic transformation of C dissolved in the Ni particles into graphite layers surrounding the particles at high temperature. The proposed approach is original and easy to implement experimentally since only magnetization measurements at room temperature are needed. Moreover, it can be extended to other types of magnetic nanoparticles that dissolve carbon.
Prediction of physical protein protein interactions
NASA Astrophysics Data System (ADS)
Szilágyi, András; Grimm, Vera; Arakaki, Adrián K.; Skolnick, Jeffrey
2005-06-01
Many essential cellular processes such as signal transduction, transport, cellular motion and most regulatory mechanisms are mediated by protein-protein interactions. In recent years, new experimental techniques have been developed to discover the protein-protein interaction networks of several organisms. However, the accuracy and coverage of these techniques have proven to be limited, and computational approaches remain essential both to assist in the design and validation of experimental studies and for the prediction of interaction partners and detailed structures of protein complexes. Here, we provide a critical overview of existing structure-independent and structure-based computational methods. Although these techniques have significantly advanced in the past few years, we find that most of them are still in their infancy. We also provide an overview of experimental techniques for the detection of protein-protein interactions. Although the developments are promising, false positive and false negative results are common, and reliable detection is possible only by taking a consensus of different experimental approaches. The shortcomings of experimental techniques affect both the further development and the fair evaluation of computational prediction methods. For an adequate comparative evaluation of prediction and high-throughput experimental methods, an appropriately large benchmark set of biophysically characterized protein complexes would be needed, but is sorely lacking.
Optimizing Experimental Design for Comparing Models of Brain Function
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-01-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
The Effect of Project Based Learning on Seventh Grade Students' Academic Achievement
ERIC Educational Resources Information Center
Kizkapan, Oktay; Bektas, Oktay
2017-01-01
The purpose of this study is to investigate whether there is a significant effect of project based learning approach on seventh grade students' academic achievement in the structure and properties of matter. In the study, according to the characteristics of quantitative research methods, pretest-posttest control group quasi-experimental design was…
Faculty Adaptation to an Experimental Curriculum.
ERIC Educational Resources Information Center
Moore-West, Maggi; And Others
The adjustment of medical school faculty members to a new curriculum, called problem-based learning, was studied. Nineteen faculty members who taught in both a lecture-based and tutorial program over 2 academic years were surveyed. Besides the teacher-centered approach, the other model of learning was student-centered and could be conducted in…
Effects of Cueing by a Pedagogical Agent in an Instructional Animation: A Cognitive Load Approach
ERIC Educational Resources Information Center
Yung, Hsin I.; Paas, Fred
2015-01-01
This study investigated the effects of a pedagogical agent that cued relevant information in a story-based instructional animation on the cardiovascular system. Based on cognitive load theory, it was expected that the experimental condition with the pedagogical agent would facilitate students to distinguish between relevant and irrelevant…
Improving Moral Reasoning among College Students: A Game-Based Learning Approach
ERIC Educational Resources Information Center
Huang, Wenyeh; Ho, Jonathan C.
2018-01-01
Considering a company's limited time and resources, an effective training method that improves employees' ability to make ethical decision is needed. Based on social cognitive theory, this study proposes that employing games in an ethics training program can help improve moral reasoning through actively engaging learners. The experimental design…
ERIC Educational Resources Information Center
Psycharis, Sarantos
2016-01-01
Computational experiment approach considers models as the fundamental instructional units of Inquiry Based Science and Mathematics Education (IBSE) and STEM Education, where the model take the place of the "classical" experimental set-up and simulation replaces the experiment. Argumentation in IBSE and STEM education is related to the…
NASA Astrophysics Data System (ADS)
Allman, Derek; Reiter, Austin; Bell, Muyinatu
2018-02-01
We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.
A metadata approach for clinical data management in translational genomics studies in breast cancer.
Papatheodorou, Irene; Crichton, Charles; Morris, Lorna; Maccallum, Peter; Davies, Jim; Brenton, James D; Caldas, Carlos
2009-11-30
In molecular profiling studies of cancer patients, experimental and clinical data are combined in order to understand the clinical heterogeneity of the disease: clinical information for each subject needs to be linked to tumour samples, macromolecules extracted, and experimental results. This may involve the integration of clinical data sets from several different sources: these data sets may employ different data definitions and some may be incomplete. In this work we employ semantic web techniques developed within the CancerGrid project, in particular the use of metadata elements and logic-based inference to annotate heterogeneous clinical information, integrate and query it. We show how this integration can be achieved automatically, following the declaration of appropriate metadata elements for each clinical data set; we demonstrate the practicality of this approach through application to experimental results and clinical data from five hospitals in the UK and Canada, undertaken as part of the METABRIC project (Molecular Taxonomy of Breast Cancer International Consortium). We describe a metadata approach for managing similarities and differences in clinical datasets in a standardized way that uses Common Data Elements (CDEs). We apply and evaluate the approach by integrating the five different clinical datasets of METABRIC.
Modeling the Structure of Helical Assemblies with Experimental Constraints in Rosetta.
André, Ingemar
2018-01-01
Determining high-resolution structures of proteins with helical symmetry can be challenging due to limitations in experimental data. In such instances, structure-based protein simulations driven by experimental data can provide a valuable approach for building models of helical assemblies. This chapter describes how the Rosetta macromolecular package can be used to model homomeric protein assemblies with helical symmetry in a range of modeling scenarios including energy refinement, symmetrical docking, comparative modeling, and de novo structure prediction. Data-guided structure modeling of helical assemblies with experimental information from electron density, X-ray fiber diffraction, solid-state NMR, and chemical cross-linking mass spectrometry is also described.
NASA Astrophysics Data System (ADS)
Su, Huaizhi; Li, Hao; Kang, Yeyuan; Wen, Zhiping
2018-02-01
Seepage is one of the key factors affecting levee safety. Seepage that is not detected and addressed promptly can lead to severe accidents such as seepage failure, slope instability, and even levee breach; more than 90 percent of levee breaches are caused by seepage. Accurately determining the saturation line is therefore essential for identifying seepage behavior, and the location of the saturation line also has a major impact on slope stability. Considering the structural characteristics and service conditions of levees, distributed optical fiber sensing technology is introduced to observe the saturation line in real time. The monitoring principle of the saturation line based on a distributed optical fiber temperature sensor system (DTS) is investigated. An experimental platform, consisting of the DTS, a heating system, a water-supply system, an auxiliary analysis system, and a levee model, is designed and constructed, and the saturation-line monitoring experiment is carried out on this platform. From the experimental results, the numerical relationship between moisture content and thermal conductivity in the porous medium is identified. A line-heat-source-based distributed optical fiber method for obtaining the thermal conductivity of a porous medium is developed, a DTS-based approach is proposed to monitor the saturation line in levees, and the embedment pattern of the optical fiber for monitoring the saturation line is presented.
Thiyagarajan, S; Geo, V Edwin; Martin, Leenus Jesu; Nagalingam, B
2018-03-22
This experimental study aims to mitigate harmful emissions from a CI engine using a bio-energy with carbon capture and storage (BECCS) approach. The engine used for this work is a single-cylinder CI engine with a rated power of 5.2 kW at a constant speed of 1500 rpm. The BECCS approach is a combination of plant-based biofuels and a carbon capture and storage (CCS) system. The investigation was done in four phases: (1) substituting diesel with Karanja oil methyl ester (KOME); (2) equal-volume blending of orange oil (ORG) with KOME; (3) 20% blending of n-butanol (B) with the KOME-ORG blend; and (4) a CCS system combining zeolite-based non-selective catalytic reduction (NSCR) and monoethanolamine (MEA)-based selective non-catalytic reduction (SNCR) with the KOME-ORG + B20 blend. The experimental results show that substituting diesel with KOME reduces smoke emission but increases NO and CO2 emissions. The KOME-ORG blend reduces CO2 and smoke emissions, with higher NO emission due to improved combustion. Compared with the sole combustion of KOME at full-load condition, the combination of KOME-ORG + B20 as biofuel with the zeolite-based post-combustion treatment system resulted in maximum reductions of NO, smoke, and CO2 emissions of 41%, 19%, and 15%, respectively.
Park, Chorong; Song, Misoon; Cho, Belong; Lim, Jaeyoung; Song, Wook; Chang, Heekyung; Park, Yeon-Hwan
2015-04-01
The purpose of this study was to develop a multi-disciplinary self-management intervention based on empowerment theory and to evaluate the effectiveness of the intervention for older adults with chronic illness. A randomized controlled trial design was used with 43 Korean older adults with chronic illness (experimental group = 22, control group = 21). The intervention consisted of two phases: (1) an 8-week multi-disciplinary, team-guided, group-based program of health education, exercise sessions, and individual empowerment counseling; (2) 16 weeks of self-help group activities, including weekly exercise and group discussion, to maintain acquired self-management and problem-solving skills. Baseline, 8-week, and 24-week assessments measured health empowerment, exercise self-efficacy, physical activity, and physical function. Health empowerment, physical activity, and physical function in the experimental group increased significantly compared to the control group over time. Exercise self-efficacy significantly increased in the experimental group over time, but there was no significant difference between the two groups. The self-management program based on empowerment theory improved health empowerment, physical activity, and physical function in older adults. The study findings suggest that a health empowerment strategy may be an effective approach for older adults with multiple chronic illnesses in terms of achieving a sense of control over their chronic illness and actively engaging in self-management.
Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram
2015-01-01
This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using Radio Frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired using a prototype ultrasound system from an artery mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparing with existing wall tracking algorithms. The experimental results show that the proposed method provides 20% reduction in the error margin compared to the existing approaches in tracking the arterial wall movement. This approach coupled with ultrasound system can be used to estimate the arterial compliance parameters required for screening of cardiovascular related disorders.
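No implementation details are given in the abstract; purely as an illustrative sketch of one way a Gaussian mixture could be used to localize and track a wall echo in RF data (not the authors' algorithm), one might fit a 1-D mixture to the echo envelope along depth and follow the dominant component across frames. All sizes and parameters below are invented.

```python
# Illustrative sketch only (not the published algorithm): estimate a wall-echo depth
# in each RF A-line by fitting a 1-D Gaussian mixture to depth samples drawn in
# proportion to the echo envelope, then track that depth across frames to form a
# distension waveform.
import numpy as np
from scipy.signal import hilbert
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def wall_depth(rf_line, n_components=2):
    envelope = np.abs(hilbert(rf_line))
    depths = np.arange(rf_line.size)
    # Sample depths with probability proportional to envelope amplitude so that
    # mixture components concentrate where echoes are strong.
    samples = rng.choice(depths, size=5000, p=envelope / envelope.sum())
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(samples.reshape(-1, 1))
    return gmm.means_[np.argmax(gmm.weights_), 0]        # depth of heaviest component

frames = [rng.standard_normal(2048) for _ in range(10)]  # placeholder RF lines
positions = np.array([wall_depth(f) for f in frames])
distension = positions - positions[0]                    # wall motion vs. frame 0
```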
Nanotubule and Tour Molecule Based Molecular Electronics: Suggestion for a Hybrid Approach
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Saini, Subhash (Technical Monitor)
1998-01-01
Recent experimental and theoretical attempts and results indicate two distinct broad pathways towards future molecular electronic devices and architectures. The first is the approach via Tour-type ladder molecules and their junctions, which can be fabricated with solution-phase chemical approaches. The second involves fullerenes or nanotubules and their junctions, which may have better conductance, switching, and amplifying characteristics but cannot be made through well-controlled and well-defined chemical means. A hybrid approach combining the two pathways to take advantage of the characteristics of both is suggested. The dimensions and scale of such devices would lie somewhere between isolated-molecule and nanotubule-based devices, but it may be possible to use self-assembly towards larger functional and logical units.
Bolker, Jessica; Brauckmann, Sabine
2015-06-01
The founding of the Journal of Experimental Zoology in 1904 was inspired by a widespread turn toward experimental biology in the 19th century. The founding editors sought to promote experimental, laboratory-based approaches, particularly in developmental biology. This agenda raised key practical and epistemological questions about how and where to study development: Does the environment matter? How do we know that a cell or embryo isolated to facilitate observation reveals normal developmental processes? How can we integrate descriptive and experimental data? R.G. Harrison, the journal's first editor, grappled with these questions in justifying his use of cell culture to study neural patterning. Others confronted them in different contexts: for example, F.B. Sumner insisted on the primacy of fieldwork in his studies on adaptation, but also performed breeding experiments using wild-collected animals. The work of Harrison, Sumner, and other early contributors exemplified both the power of new techniques, and the meticulous explanation of practice and epistemology that was marshaled to promote experimental approaches. A century later, experimentation is widely viewed as the standard way to study development; yet at the same time, cutting-edge "big data" projects are essentially descriptive, closer to natural history than to the approaches championed by Harrison et al. Thus, the original questions about how and where we can best learn about development are still with us. Examining their history can inform current efforts to incorporate data from experiment and description, lab and field, and a broad range of organisms and disciplines, into an integrated understanding of animal development. © 2015 Wiley Periodicals, Inc.
Zeilinger, Markus; Pichler, Florian; Nics, Lukas; Wadsak, Wolfgang; Spreitzer, Helmut; Hacker, Marcus; Mitterhauser, Markus
2017-12-01
Resolving the kinetic mechanisms of biomolecular interactions has become increasingly important in early-phase drug development. Since traditional in vitro methods are dose-dependent assessments, binding kinetics is usually overlooked. The present study aimed at the establishment of two novel experimental approaches for the assessment of binding affinity of both radiolabelled and non-labelled compounds targeting the A3R, based on high-resolution real-time data acquisition of radioligand-receptor binding kinetics. A novel time-resolved competition assay was developed and applied to determine the Ki of eight different A3R antagonists, using CHO-K1 cells stably expressing the hA3R. In addition, a new kinetic real-time cell-binding approach was established to quantify the rate constants kon and koff, as well as the Kd, of the A3R agonist [125I]-AB-MECA. Furthermore, lipophilicity measurements were conducted to control for influences of the physicochemical properties of the compounds used. Two novel real-time cell-binding approaches were successfully developed and established. Both experimental procedures were found to visualize the kinetic binding characteristics with high spatial and temporal resolution, resulting in reliable affinity values that are in good agreement with values previously reported using traditional methods. Taking into account the lipophilicity of the A3R antagonists, no influence on the experimental performance or the resulting affinity values was observed. Both kinetic binding approaches comprise tracer administration and subsequent binding to living cells expressing the target protein. The experiments therefore better resemble true in vivo physiological conditions and provide important markers of cellular feedback and biological response.
Applications of a formal approach to decipher discrete genetic networks.
Corblin, Fabien; Fanchon, Eric; Trilling, Laurent
2010-07-20
A growing demand for tools to assist the building and analysis of biological networks exists in systems biology. We argue that the use of a formal approach is relevant and applicable to address questions raised by biologists about such networks. Because the behaviour of these systems is complex, it is essential to exploit every bit of experimental information efficiently. In our approach, both the evolution rules and the partial knowledge about the structure and the behaviour of the network are formalized using a common constraint-based language. In this article our formal and declarative approach is applied to three biological applications. The software environment that we developed allows each application to be addressed specifically through a new class of biologically relevant queries. We show that we can describe the partial knowledge about a genetic network easily and in a formal manner. Moreover, we show that this environment, based on a constraint algorithmic approach, offers a wide variety of functionalities going beyond simple simulations, such as proof of consistency, model revision, prediction of properties, and search for minimal models relative to specified criteria. The formal approach proposed here deeply changes the way the exploration of genetic and biochemical networks proceeds, first by avoiding the usual trial-and-error procedure, and second by placing the emphasis on sets of solutions rather than a single solution arbitrarily chosen among many others. Lastly, the constraint approach promotes an integration of model and experimental data in a single framework.
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial-phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are modeled as impulses undergoing group-velocity dispersion while propagating along a multipath neural connection, and a mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs using chirp models is proposed, with a Particle Swarm Optimization algorithm used to optimize the model parameters, and features describing the latencies and amplitudes of SEPs are derived automatically. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that the chirp-based model parameters and the derived SEP features are significant in describing both anesthesia-level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. An implementation of the method in the MATLAB technical computing language is provided online.
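As a hedged illustration of the general idea (not the authors' MATLAB implementation), one can model a single SEP component as a Gaussian-windowed chirp with quadratic phase and fit its parameters by global optimization. SciPy's differential evolution is used below purely as a stand-in for the Particle Swarm Optimization described above, and the sampling rate, parameter bounds, and synthetic waveform are assumptions.

```python
# Illustrative sketch: fit a Gaussian-windowed, quadratic-phase chirp to an SEP-like
# waveform and read off latency/amplitude features from the fitted parameters.
import numpy as np
from scipy.optimize import differential_evolution

fs = 5000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.05, 1.0 / fs)              # 50 ms epoch

def chirp_model(p, t):
    amp, t0, width, f0, k, phi0 = p
    tau = t - t0
    envelope = amp * np.exp(-(tau / width) ** 2)
    phase = phi0 + 2 * np.pi * (f0 * tau + 0.5 * k * tau ** 2)
    return envelope * np.cos(phase)

def fit_sep(sep, t):
    cost = lambda p: np.sum((sep - chirp_model(p, t)) ** 2)
    bounds = [(0, 50), (0.0, 0.05), (1e-3, 0.02), (10, 500), (-5e4, 5e4), (-np.pi, np.pi)]
    result = differential_evolution(cost, bounds, seed=0)   # stand-in for PSO
    amp, t0 = result.x[0], result.x[1]
    return {"latency_s": t0, "amplitude": amp, "params": result.x}

# Example on a synthetic SEP-like waveform with added noise:
true = chirp_model([10, 0.02, 0.005, 120, 2e3, 0.0], t)
features = fit_sep(true + 0.5 * np.random.randn(t.size), t)
```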
Billoud, Bernard; Jouanno, Émilie; Nehr, Zofia; Carton, Baptiste; Rolland, Élodie; Chenivesse, Sabine; Charrier, Bénédicte
2015-01-01
Mutagenesis is the only process by which unpredicted biological gene functions can be identified. Although several macroalgal developmental mutants have been generated, their causal mutations were never identified, because the necessary experimental conditions had not been assembled at the time. Today, progress in macroalgal genomics and judicious choices of suitable genetic models make mutated-gene identification possible. This article presents a comparative study of two methods aimed at identifying a genetic locus in the brown alga Ectocarpus siliculosus: positional cloning and Next-Generation Sequencing (NGS)-based mapping. Once the necessary preliminary experimental tools were gathered, we tested both analyses on an Ectocarpus morphogenetic mutant. We show how combining the two methods results in a narrower localization. Advantages and drawbacks of these two approaches, as well as potential transfer to other macroalgae, are discussed.
A community computational challenge to predict the activity of pairs of compounds.
Bansal, Mukesh; Yang, Jichen; Karan, Charles; Menden, Michael P; Costello, James C; Tang, Hao; Xiao, Guanghua; Li, Yajuan; Allen, Jeffrey; Zhong, Rui; Chen, Beibei; Kim, Minsoo; Wang, Tao; Heiser, Laura M; Realubit, Ronald; Mattioli, Michela; Alvarez, Mariano J; Shen, Yao; Gallahan, Daniel; Singer, Dinah; Saez-Rodriguez, Julio; Xie, Yang; Stolovitzky, Gustavo; Califano, Andrea
2014-12-01
Recent therapeutic successes have renewed interest in drug combinations, but experimental screening approaches are costly and often identify only small numbers of synergistic combinations. The DREAM consortium launched an open challenge to foster the development of in silico methods to computationally rank 91 compound pairs, from the most synergistic to the most antagonistic, based on gene-expression profiles of human B cells treated with individual compounds at multiple time points and concentrations. Using scoring metrics based on experimental dose-response curves, we assessed 32 methods (31 community-generated approaches and SynGen), four of which performed significantly better than random guessing. We highlight similarities between the methods. Although the accuracy of predictions was not optimal, we find that computational prediction of compound-pair activity is possible, and that community challenges can be useful to advance the field of in silico compound-synergy prediction.
Assessing noninferiority in a three-arm trial using the Bayesian approach.
Ghosh, Pulak; Nathoo, Farouk; Gönen, Mithat; Tiwari, Ram C
2011-07-10
Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. As a result, methodology for designing and analyzing such trials is required, and developing new methods for such analysis is an important area of statistical research. The three-arm trial consists of a placebo, a reference and an experimental treatment, and simultaneously tests the superiority of the reference over the placebo along with comparing this reference to an experimental treatment. In this paper, we consider the analysis of non-inferiority trials using Bayesian methods which incorporate both parametric as well as semi-parametric models. The resulting testing approach is both flexible and robust. The benefit of the proposed Bayesian methods is assessed via simulation, based on a study examining home-based blood pressure interventions. Copyright © 2011 John Wiley & Sons, Ltd.
He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei
2012-06-25
Most computational algorithms focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. Recent analysis indicates, however, that experimentally detected protein complexes generally contain core-attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. GSM-CA improves the prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure they produce does not provide adequate information to identify whether a network has a modular structure or not. In order to speed up the computation, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure that traverses all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate than other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant, and the algorithm reduces the computational time significantly while keeping high prediction accuracy.
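The abstract gives only the outline of the greedy, single-pass, edge-weight-driven procedure; the sketch below is a rough, hedged illustration of that general idea using a union-find merge over edges sorted by weight. The threshold and toy edge list are assumptions, and this is not the published GSM-FC algorithm.

```python
# Rough sketch in the spirit of a greedy, edge-weight-driven module separation:
# visit each weighted edge exactly once in descending weight order and merge its
# endpoints into one module whenever the weight passes an assumed threshold.
from collections import defaultdict

def greedy_modules(edges, threshold=0.5):
    """edges: iterable of (u, v, weight) tuples."""
    parent = {}
    for u, v, _ in edges:
        parent.setdefault(u, u)
        parent.setdefault(v, v)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]            # path halving
            x = parent[x]
        return x

    for u, v, w in sorted(edges, key=lambda e: -e[2]):   # each edge visited once
        if w >= threshold:
            parent[find(u)] = find(v)                    # merge endpoint modules

    modules = defaultdict(set)
    for node in parent:
        modules[find(node)].add(node)
    return list(modules.values())

# Toy protein-interaction edges weighted by interaction confidence:
edges = [("A", "B", 0.9), ("B", "C", 0.8), ("C", "D", 0.2), ("D", "E", 0.7)]
print(greedy_modules(edges))   # e.g. [{'A', 'B', 'C'}, {'D', 'E'}]
```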
Modeling the Residual Strength of a Fibrous Composite Using the Residual Daniels Function
NASA Astrophysics Data System (ADS)
Paramonov, Yu.; Cimanis, V.; Varickis, S.; Kleinhofs, M.
2016-09-01
The concept of a residual Daniels function (RDF) is introduced. Together with the concept of a Daniels sequence, the RDF is used for estimating the residual (after some preliminary fatigue loading) static strength of a unidirectional fibrous composite (UFC) and its S-N curve on the basis of test data. Usually, the residual strength is analyzed on the basis of a known S-N curve; in our work, an inverse approach is used in which the S-N curve is derived from an analysis of the residual strength. This approach gives a good qualitative description of the process of decreasing residual strength and explains the existence of the fatigue limit. The estimates of the parameters of the corresponding regression model can be interpreted as estimates of the parameters of the local strength of the UFC components. In order to approach quantitative experimental estimates of the fatigue life, some ideas based on the mathematics of semi-Markovian processes are employed. Satisfactory results are obtained in processing experimental data on the fatigue life and residual strength of glass/epoxy laminates.
Demystifying the cytokine network: Mathematical models point the way.
Morel, Penelope A; Lee, Robin E C; Faeder, James R
2017-10-01
Cytokines provide the means by which immune cells communicate with each other and with parenchymal cells. There are over one hundred cytokines and many exist in families that share receptor components and signal transduction pathways, creating complex networks. Reductionist approaches to understanding the role of specific cytokines, through the use of gene-targeted mice, have revealed further complexity in the form of redundancy and pleiotropy in cytokine function. Creating an understanding of the complex interactions between cytokines and their target cells is challenging experimentally. Mathematical and computational modeling provides a robust set of tools by which complex interactions between cytokines can be studied and analyzed, in the process creating novel insights that can be further tested experimentally. This review will discuss and provide examples of the different modeling approaches that have been used to increase our understanding of cytokine networks. This includes discussion of knowledge-based and data-driven modeling approaches and the recent advance in single-cell analysis. The use of modeling to optimize cytokine-based therapies will also be discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nonlinear viscoelastic characterization of polymer materials using a dynamic-mechanical methodology
NASA Technical Reports Server (NTRS)
Strganac, Thomas W.; Payne, Debbie Flowers; Biskup, Bruce A.; Letton, Alan
1995-01-01
Polymer materials retrieved from LDEF exhibit nonlinear constitutive behavior; thus the authors present a method to characterize nonlinear viscoelastic behavior using measurements from dynamic (oscillatory) mechanical tests. Frequency-derived measurements are transformed into time-domain properties providing the capability to predict long term material performance without a lengthy experimentation program. Results are presented for thin-film high-performance polymer materials used in the fabrication of high-altitude scientific balloons. Predictions based upon a linear test and analysis approach are shown to deteriorate for moderate to high stress levels expected for extended applications. Tests verify that nonlinear viscoelastic response is induced by large stresses. Hence, an approach is developed in which the stress-dependent behavior is examined in a manner analogous to modeling temperature-dependent behavior with time-temperature correspondence and superposition principles. The development leads to time-stress correspondence and superposition of measurements obtained through dynamic mechanical tests. Predictions of material behavior using measurements based upon linear and nonlinear approaches are compared with experimental results obtained from traditional creep tests. Excellent agreement is shown for the nonlinear model.
An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.
Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun
2017-09-01
The selection of swarm leaders (i.e., the personal best and global best) is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and then each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
Elmendorf, Sarah C; Henry, Gregory H R; Hollister, Robert D; Fosaa, Anna Maria; Gould, William A; Hermanutz, Luise; Hofgaard, Annika; Jónsdóttir, Ingibjörg S; Jónsdóttir, Ingibjörg I; Jorgenson, Janet C; Lévesque, Esther; Magnusson, Borgþór; Molau, Ulf; Myers-Smith, Isla H; Oberbauer, Steven F; Rixen, Christian; Tweedie, Craig E; Walker, Marilyn D; Walker, Marilyn
2015-01-13
Inference about future climate change impacts typically relies on one of three approaches: manipulative experiments, historical comparisons (broadly defined to include monitoring the response to ambient climate fluctuations using repeat sampling of plots, dendroecology, and paleoecology techniques), and space-for-time substitutions derived from sampling along environmental gradients. Potential limitations of all three approaches are recognized. Here we address the congruence among these three main approaches by comparing the degree to which tundra plant community composition changes (i) in response to in situ experimental warming, (ii) with interannual variability in summer temperature within sites, and (iii) over spatial gradients in summer temperature. We analyzed changes in plant community composition from repeat sampling (85 plant communities in 28 regions) and experimental warming studies (28 experiments in 14 regions) throughout arctic and alpine North America and Europe. Increases in the relative abundance of species with a warmer thermal niche were observed in response to warmer summer temperatures using all three methods; however, effect sizes were greater over broad-scale spatial gradients relative to either temporal variability in summer temperature within a site or summer temperature increases induced by experimental warming. The effect sizes for change over time within a site and with experimental warming were nearly identical. These results support the view that inferences based on space-for-time substitution overestimate the magnitude of responses to contemporary climate warming, because spatial gradients reflect long-term processes. In contrast, in situ experimental warming and monitoring approaches yield consistent estimates of the magnitude of response of plant communities to climate warming.
Li, Wei; Liu, Jian Guo; Zhu, Ning Hua
2015-04-15
We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.
A survey and analysis of experimental hydrogen sensors
NASA Technical Reports Server (NTRS)
Hunter, Gary W.
1992-01-01
In order to ascertain the applicability of hydrogen sensors to aerospace applications, a survey was conducted of promising experimental point-contact hydrogen sensors and their operation was analyzed. The techniques discussed are metal-oxide-semiconductor or MOS based sensors, catalytic resistor sensors, acoustic wave detectors, and pyroelectric detectors. All of these sensors depend on the interaction of hydrogen with Pd or a Pd-alloy. It is concluded that no single technique will meet the needs of aerospace applications but a combination of approaches is necessary. The most promising combination is an MOS based sensor with a catalytic resistor.
A systems-based food safety evaluation: an experimental approach.
Higgins, Charles L; Hartfield, Barry S
2004-11-01
Food establishments are complex systems with inputs, subsystems, underlying forces that affect the system, outputs, and feedback. Building on past exploration of the hazard analysis critical control point concept and Ludwig von Bertalanffy's General Systems Theory, the National Park Service (NPS) is attempting to translate these ideas into a realistic field assessment of food service establishments and to use information gathered by these methods in efforts to improve food safety. Over the course of the last two years, an experimental systems-based methodology has been drafted, developed, and tested by the NPS Public Health Program. This methodology is described in this paper.
Wimmer, Lena; Bellingrath, Silja; von Stockhausen, Lisa
2016-01-01
The present paper reports a pilot study which tested cognitive effects of mindfulness practice in a theory-driven approach. Thirty-four fifth graders received either a mindfulness training which was based on the mindfulness-based stress reduction approach (experimental group), a concentration training (active control group), or no treatment (passive control group). Based on the operational definition of mindfulness by Bishop et al. (2004), effects on sustained attention, cognitive flexibility, cognitive inhibition, and data-driven as opposed to schema-based information processing were predicted. These abilities were assessed in a pre-post design by means of a vigilance test, a reversible figures test, the Wisconsin Card Sorting Test, a Stroop test, a visual search task, and a recognition task of prototypical faces. Results suggest that the mindfulness training specifically improved cognitive inhibition and data-driven information processing.
Biophysics of cadherin adhesion.
Leckband, Deborah; Sivasankar, Sanjeevi
2012-01-01
Since the identification of cadherins and the publication of the first crystal structures, the mechanism of cadherin adhesion, and the underlying structural basis have been studied with a number of different experimental techniques, different classical cadherin subtypes, and cadherin fragments. Earlier studies based on biophysical measurements and structure determinations resulted in seemingly contradictory findings regarding cadherin adhesion. However, recent experimental data increasingly reveal parallels between structures, solution binding data, and adhesion-based biophysical measurements that are beginning to both reconcile apparent differences and generate a more comprehensive model of cadherin-mediated cell adhesion. This chapter summarizes the functional, structural, and biophysical findings relevant to cadherin junction assembly and adhesion. We emphasize emerging parallels between findings obtained with different experimental approaches. Although none of the current models accounts for all of the available experimental and structural data, this chapter discusses possible origins of apparent discrepancies, highlights remaining gaps in current knowledge, and proposes challenges for further study.
Solar energy program evaluation: an introduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
deLeon, P.
The Program Evaluation Methodology provides an overview of the practice and methodology of program evaluation and defines more precisely the evaluation techniques and methodologies that would be most appropriate to government organizations which are actively involved in the research, development, and commercialization of solar energy systems. Formal evaluation cannot be treated as a single methodological approach for assessing a program. There are four basic types of evaluation designs - the pre-experimental design; the quasi-experimental design based on time series; the quasi-experimental design based on comparison groups; and the true experimental design. This report is organized to first introduce the role and issues of evaluation. This is to provide a set of issues to organize the subsequent sections detailing the national solar energy programs. Then, these two themes are integrated by examining the evaluation strategies and methodologies tailored to fit the particular needs of the various individual solar energy programs. (MCW)
Achieving optimal SERS through enhanced experimental design.
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston
2016-01-01
One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.
Choi, Bryan; Asselin, Nicholas; Pettit, Catherine C; Dannecker, Max; Machan, Jason T; Merck, Derek L; Merck, Lisa H; Suner, Selim; Williams, Kenneth A; Jay, Gregory D; Kobayashi, Leo
2016-12-01
Effective resuscitation of out-of-hospital cardiac arrest (OHCA) patients is challenging. Alternative resuscitative approaches using electromechanical adjuncts may improve provider performance. Investigators applied simulation to study the effect of an experimental automation-assisted, goal-directed OHCA management protocol on EMS providers' resuscitation performance relative to standard protocols and equipment. Two-provider teams (an EMT-B paired with an EMT-I/C/P) were randomized to the control or experimental group. Each team engaged in 3 simulations: a baseline simulation (standard roles); a repeat simulation (standard roles); and an abbreviated repeat simulation (reversed roles, i.e., the basic life support provider performing ALS tasks). Control teams used standard OHCA protocols and equipment (with a high-performance cardiopulmonary resuscitation training intervention); for the second and third simulations, experimental teams performed chest compression, defibrillation, airway, pulmonary ventilation, vascular access, medication, and transport tasks with the goal-directed protocol and resuscitation-automating devices. Video recorders and simulator logs collected resuscitation data. The 10 control and 10 experimental teams comprised 20 EMT-Bs, 1 EMT-I, 8 EMT-Cs, and 11 EMT-Ps; the study groups were not fully matched. Both groups performed chest compressions and ventilations suboptimally at baseline. In their second simulations, control teams performed similarly except for reduced on-scene time, while experimental teams improved their chest compressions (P=0.03), pulmonary ventilations (P<0.01), and medication administration (P=0.02); changes in their performance of chest compression, defibrillation, airway, and transport tasks did not attain significance against the control teams' changes. Experimental teams maintained their performance improvements during the reversed-role simulations. This simulation-based investigation into OHCA resuscitation revealed considerable variability and improvable deficiencies in small EMS teams. Goal-directed, automation-assisted OHCA management augmented the performance of select resuscitation bundle elements without comprehensive improvement.
Nanotechnology Review: Molecular Electronics to Molecular Motors
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Saini, Subhash (Technical Monitor)
1998-01-01
Reviewing the status of current approaches and future projections, as already published in scientific journals and books, the talk will summarize the direction in which computational and experimental nanotechnologies are progressing. Examples of nanotechnological approaches to the design and simulation of carbon-nanotube-based molecular electronic and mechanical devices will be presented, and the concepts of nanotube-based gears and motors will be discussed. This is a non-technical review talk covering long-term, precompetitive basic research drawn from already published material that has been presented to many US scientific meeting audiences.
Data Driven Model Development for the Supersonic Semispan Transport (S(sup 4)T)
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2011-01-01
We investigate two common approaches to model development for robust control synthesis in the aerospace community: reduced-order aeroservoelastic modelling based on structural finite-element and computational fluid dynamics aerodynamic models, and a data-driven system identification procedure. Analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data using a system identification approach shows that it is possible to estimate a model at a fixed Mach number which is parsimonious and robust across varying dynamic pressures.
A design approach for systems based on magnetic pulse compression.
Kumar, D Durga Praveen; Mitra, S; Senthil, K; Sharma, D K; Rajan, Rehim N; Sharma, Archana; Nagesh, K V; Chakravarthy, D P
2008-04-01
A design approach is presented that gives the optimum number of stages in a magnetic pulse compression circuit and the gain per stage. The limitation on the maximum gain per stage is discussed. Total system volume is minimized by considering the energy storage capacitor volume and magnetic core volume at each stage. At the end of the paper, the design of a magnetic pulse compression based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed along with its experimental results.
Applications of the CRISPR-Cas9 system in cancer biology
Sánchez-Rivera, Francisco J.; Jacks, Tyler
2015-01-01
The prokaryotic type II clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 system is rapidly revolutionizing the field of genetic engineering, allowing researchers to alter the genomes of a large variety of organisms with relative ease. Experimental approaches based on this versatile technology have the potential to transform the field of cancer genetics. Here we review current approaches based on CRISPR-Cas9 for functional studies of cancer genes, with emphasis on its applicability for the development of next-generation models of human cancer.
Vrahatis, Aristidis G; Rapti, Angeliki; Sioutas, Spyros; Tsakalidis, Athanasios
2017-01-01
In the era of Systems Biology and the growing flow of omics experimental data from high-throughput techniques, experimentalists need more precise pathway-based tools to unravel the inherent complexity of diseases and biological processes. Subpathway-based approaches are the emerging generation of pathway-based analysis, elucidating biological mechanisms from the perspective of local topologies within a complex pathway network. To this end, we developed PerSub, a graph-based algorithm which detects subpathways perturbed by a complex disease. The perturbations are imprinted in differentially expressed and co-expressed subpathways as recorded by RNA-seq experiments. Our algorithm is applied to data obtained from a real experimental study, and the identified subpathways provide biological evidence for brain aging.
ERIC Educational Resources Information Center
Besson, Ugo; Borghi, Lidia; De Ambrosis, Anna; Mascheretti, Paolo
2010-01-01
We have developed a teaching-learning sequence (TLS) on friction based on a preliminary study involving three dimensions: an analysis of didactic research on the topic, an overview of usual approaches, and a critical analysis of the subject, considered also in its historical development. We found that the usual presentations mostly do not take…
An Agent-based Approach to Evaluating the Impact of Technologies on C2
2006-06-01
from field experimentation and current military doctrine were identified for the evaluation of SPEYES technologies, which we aligned with field test...and procedures (TTPs). However, the introduction of new technologies to support C2 significantly impacts performance and effectiveness of military ...addressed various challenges of Military Operations in Urban Terrain (MOUT). Our novel approach combined the strengths of field assessment with
On the question of instabilities upstream of cylindrical bodies
NASA Technical Reports Server (NTRS)
Morkovin, M. V.
1979-01-01
In an attempt to understand the unsteady vortical phenomena in perturbed stagnation regions of cylindrical bodies, a critical review of the theoretical and experimental evidence was made. Current theory is revealed to be incomplete, incorrect, or inapplicable to the phenomena observed experimentally. The formalistic approach via the principle of exchange of instabilities should most likely be replaced by a forced-disturbance approach. Also, many false conclusions were reached by ignoring that treatment of the base and perturbed flows in Hiemenz coordinate eta is asymptotic in nature. Almost surely the techniques of matched asymptotic expansions are expected to be used to capture correctly the diffusive and vorticity amplifying processes of the disturbances regarding the mean-flow boundary layer and outer potential field as eta and y/diameter approach infinity. The serious uncertainties in the experiments are discussed in detail.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models.
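The full ADO machinery (a parameter grid per model and a mutual-information design criterion) is beyond an abstract; as a loose, hedged illustration of the adaptive idea only, the sketch below keeps a posterior over two toy expected-utility models and, on each trial, presents the gamble pair over which the models disagree most before updating the posterior with the observed choice. The models, stimulus grid, and disagreement criterion are assumptions, not the paper's method.

```python
# Crude illustration of adaptive stimulus selection for model discrimination.
import numpy as np

rng = np.random.default_rng(0)

def eu_model(alpha):
    """Expected-utility chooser with power utility x**alpha; returns P(choose gamble A)."""
    def p_choose_A(stim):
        (pA, xA), (pB, xB) = stim
        euA, euB = pA * xA ** alpha, pB * xB ** alpha
        return 1.0 / (1.0 + np.exp(-(euA - euB)))        # logistic choice rule
    return p_choose_A

models = [eu_model(0.5), eu_model(1.0)]                  # two candidate models
posterior = np.array([0.5, 0.5])

# Candidate stimuli: pairs of (probability, payoff) gambles.
stimuli = [((0.8, 20), (0.3, 60)), ((0.5, 40), (0.5, 45)), ((0.9, 10), (0.1, 95))]

for trial in range(10):
    preds = np.array([[m(s) for s in stimuli] for m in models])
    disagreement = np.abs(preds[0] - preds[1])           # proxy for design utility
    best = int(np.argmax(disagreement))                  # most diagnostic stimulus
    choice_A = rng.random() < preds[0, best]             # simulate a model-0 participant
    likelihood = preds[:, best] if choice_A else 1.0 - preds[:, best]
    posterior = posterior * likelihood                   # Bayesian posterior update
    posterior /= posterior.sum()
```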
NASA Astrophysics Data System (ADS)
Wu, Yu; Zhang, Hongpeng
2017-12-01
A new microfluidic chip is presented to enhance the sensitivity of a micro inductive sensor, and an approach to coil inductance change calculation is introduced for metal particle detection in lubrication oil. Electromagnetic knowledge is used to establish a mathematical model of an inductive sensor for metal particle detection, and the analytic expression of coil inductance change is obtained by a magnetic vector potential. Experimental verification is carried out. The results show that copper particles 50-52 µm in diameter have been detected; the relative errors between the theoretical and experimental values are 7.68% and 10.02% at particle diameters of 108-110 µm and 50-52 µm, respectively. The approach presented here can provide a theoretical basis for an inductive sensor in metal particle detection in oil and other areas of application.
NASA Astrophysics Data System (ADS)
Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello
2018-03-01
An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver. The evaluated algorithms are model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic approach with a quadratic programming solver is used for the MPC approach. A nonconvex optimization problem results from the IDVD approach, and a nonlinear programming solver is used. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and by the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and time to dock. The experiments have been conducted in a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.
A novel approach to the analysis of squeezed-film air damping in microelectromechanical systems
NASA Astrophysics Data System (ADS)
Yang, Weilin; Li, Hongxia; Chatterjee, Aveek N.; Elfadel, Ibrahim (Abe) M.; Ender Ocak, Ilker; Zhang, TieJun
2017-01-01
Squeezed-film damping (SFD) is a phenomenon that significantly affects the performance of micro-electro-mechanical systems (MEMS). The total damping force in MEMS mainly includes the viscous and elastic damping forces, and the quality factor (Q factor) is usually used to evaluate damping in MEMS. In this work, we measure the Q factor of a resonator through experiments over a wide range of pressure levels. Experimental characterizations of MEMS have limitations, however, because it is difficult to conduct experiments at very high vacuum and hard to differentiate the damping mechanisms from the overall Q factor measurements. On the other hand, classical theoretical analysis of SFD is restricted to strong assumptions and simple geometries. In this paper, a novel numerical approach based on lattice Boltzmann simulations is proposed to investigate SFD in MEMS. Our method considers the dynamics of the squeezed air flow as well as fluid-solid interactions in MEMS. It is demonstrated that the Q factor can be predicted directly by numerical simulation, and our simulation results agree well with experimental data. Factors that influence SFD, such as pressure, oscillation amplitude, and driving frequency, are investigated separately. Furthermore, viscous and elastic damping forces are quantitatively compared based on comprehensive simulation. The proposed numerical approach, together with experimental characterization, reveals the underlying physics of squeezed-film air damping in MEMS.
Autonomy in Science Education: A Practical Approach in Attitude Shifting Towards Science Learning
NASA Astrophysics Data System (ADS)
Jalil, Pasl A.; Abu Sbeih, M. Z.; Boujettif, M.; Barakat, R.
2009-12-01
This work describes a 2-year study of teaching school science based on stimulating higher thinking levels in learning science using a highly student-centred and constructivist learning approach. We sought to shift and strengthen students' positive attitudes towards science learning, self-efficacy towards invention, and achievement. Focusing on an important aspect of students' positive attitude towards learning, we investigated their preference (like/dislike) for independent study with minimal or no teacher interference, which leads to increased learning autonomy. The main research was conducted on elementary school students; 271 students from grade level one (G1; 6 years old) to grade level four (G4; 10 years old) participated in this study. As a result of this study, it was found that: (1) 73% of the students preferred minimal or no explanation at all, favoring to be left with the challenge of finding out what to do, compared to 20% of the control group, indicating a positive attitude shift in their learning approaches. (2) The experimental group achieved slightly more (9.5% difference) than the control group on a knowledge-comprehension-level exam; however, the experimental group scored much higher (63% difference) on challenging exams which required higher thinking levels. (3) The same trend was also observed in self-efficacy toward invention, where 82% of the experimental group saw themselves as possible inventors compared to 37% of the control group.
Optical microwave filter based on spectral slicing by use of arrayed waveguide gratings.
Pastor, Daniel; Ortega, Beatriz; Capmany, José; Sales, Salvador; Martinez, Alfonso; Muñoz, Pascual
2003-10-01
We have experimentally demonstrated a new optical signal processor based on the use of arrayed waveguide gratings. The structure exploits the concept of spectral slicing combined with the use of an optical dispersive medium. The approach presents increased flexibility from previous slicing-based structures in terms of tunability, reconfiguration, and apodization of the samples or coefficients of the transversal optical filter.
ERIC Educational Resources Information Center
Liu, YuFing
2013-01-01
This paper applies a quasi-experimental research method to compare the difference in students' approaches to learning and their learning achievements between the group that follows the problem based learning (PBL) teaching method with computer support and the group that follows the non-PBL teaching methods. The study sample consisted of 68 junior…
NASA Astrophysics Data System (ADS)
Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.
2017-03-01
In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as the main equipment. The experimental results show significant differences in the behaviour of the plant components, mainly in terms of energy use, for each implemented technique. The effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria for selecting an appropriate stochastic predictive controller.
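As a hedged toy illustration of the multi-scenario idea only (the paper's controllers, plant models, and constraints are not reproduced here), the sketch below optimizes a battery schedule against several renewable-generation scenarios while forcing the first control move to be identical across scenarios. All dynamics, bounds, and scenario values are invented.

```python
# Toy multi-scenario MPC sketch: one first move shared across generation scenarios.
import cvxpy as cp
import numpy as np

horizon, n_scenarios = 6, 3
renewable = np.array([[2.0, 1.5, 1.0, 0.5, 1.0, 1.5],     # scenario forecasts (kW)
                      [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
                      [0.5, 0.8, 1.2, 1.6, 2.0, 2.4]])
load, soc0, dt = 1.5, 5.0, 1.0                              # kW, kWh, h

u = cp.Variable((n_scenarios, horizon))                     # battery charge power (kW)
cost, constraints = 0, []
for s in range(n_scenarios):
    soc = soc0
    for k in range(horizon):
        soc = soc + dt * u[s, k]                            # simple integrator battery
        grid = load - renewable[s, k] + u[s, k]             # power drawn from the grid
        constraints += [soc >= 1.0, soc <= 10.0, cp.abs(u[s, k]) <= 3.0]
        cost += cp.square(grid)                             # penalize grid use
# Non-anticipativity: the first move must be identical in every scenario.
constraints += [u[s, 0] == u[0, 0] for s in range(1, n_scenarios)]
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
first_move = u.value[0, 0]                                  # control actually applied
```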
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Han-Wei; Rode, Johann C.; Choudhary, Prateek
2014-01-21
The DC current gain in In0.53Ga0.47As/InP double-heterojunction bipolar transistors is computed based on a drift-diffusion model, and is compared with experimental data. Even in the absence of other scaling effects, lateral diffusion of electrons to the base Ohmic contacts causes a rapid reduction in DC current gain as the emitter junction width and emitter-base contact spacing are reduced. The simulation and experimental data are compared in order to examine the effect of carrier lateral diffusion on current gain. The impact on current gain due to device scaling and approaches to increase current gain are discussed.
A Cancer Gene Selection Algorithm Based on the K-S Test and CFS.
Su, Qiang; Wang, Yina; Jiang, Xiaobing; Chen, Fuxue; Lu, Wen-Cong
2017-01-01
To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test and then uses CFS to select genes from those retained by the K-S test. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of these gene selection algorithms on 5 gene expression datasets demonstrate that, based on accuracy, the performance of the new K-S and CFS-based algorithm is better than that of the K-S test, CFS, mRMR, and ReliefF algorithms, showing it to be an effective and promising approach.
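As a hedged sketch of the two-stage pipeline described above (not the authors' code), one might filter genes with a two-sample K-S test, apply a simple greedy CFS-style filter to the survivors, and score the final subset with an SVM. The dataset, significance threshold, and the particular merit function are assumptions.

```python
# Two-stage gene selection sketch: K-S filter, then a greedy CFS-like filter, then SVM.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ks_filter(X, y, alpha=0.05):
    """Keep genes whose two-class distributions differ by a two-sample K-S test."""
    return [g for g in range(X.shape[1])
            if ks_2samp(X[y == 0, g], X[y == 1, g]).pvalue < alpha]

def cfs_greedy(X, y, candidates, k=20):
    """Greedy CFS-style merit: high gene-class relevance, low gene-gene redundancy."""
    selected = []
    relevance = {g: abs(np.corrcoef(X[:, g], y)[0, 1]) for g in candidates}
    while candidates and len(selected) < k:
        def merit(g):
            redundancy = np.mean([abs(np.corrcoef(X[:, g], X[:, s])[0, 1])
                                  for s in selected]) if selected else 0.0
            return relevance[g] - redundancy
        best = max(candidates, key=merit)
        selected.append(best)
        candidates = [g for g in candidates if g != best]
    return selected

# Placeholder expression matrix: 60 samples x 500 genes with binary class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)
genes = cfs_greedy(X, y, ks_filter(X, y))
if genes:
    print(cross_val_score(SVC(kernel="linear"), X[:, genes], y, cv=5).mean())
```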
A support vector machine based control application to the experimental three-tank system.
Iplikci, Serdar
2010-07-01
This paper presents a support vector machine (SVM) approach to generalized predictive control (GPC) of multiple-input multiple-output (MIMO) nonlinear systems. SVMs' higher generalization potential and avoidance of local minima motivated us to employ them for modeling MIMO systems. Based on the SVM model, detailed and compact formulations for calculating the predictions and gradient information used in computing the optimal control action are given in the paper. The proposed MIMO SVM-based GPC method has been verified on an experimental three-tank liquid level control system. Experimental results show that the proposed method can handle the control task successfully for different reference trajectories. Moreover, a detailed discussion of data gathering, model selection, and the effects of the control parameters is given. © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
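As a loose single-input single-output illustration of the idea (the paper handles the MIMO case and derives analytical gradients from the SVM model, which are not reproduced here), the sketch below identifies a one-step SVR model of a toy plant and minimizes a GPC-style cost over the horizon with a generic numerical optimizer. The plant, horizon, and weights are assumptions.

```python
# SVM-based predictive-control sketch: SVR one-step model + receding-horizon cost.
import numpy as np
from sklearn.svm import SVR
from scipy.optimize import minimize

# Identify a one-step model y[k+1] = f(y[k], u[k]) from input-output data of a toy plant.
rng = np.random.default_rng(0)
u_data = rng.uniform(-1, 1, 300)
y_data = np.zeros(301)
for k in range(300):
    y_data[k + 1] = 0.8 * y_data[k] + 0.4 * np.tanh(u_data[k])   # unknown "true" plant
model = SVR(kernel="rbf", C=10.0).fit(np.column_stack([y_data[:-1], u_data]), y_data[1:])

def gpc_cost(u_seq, y0, ref, lam=0.05):
    y, cost = y0, 0.0
    for u in u_seq:                                   # roll the SVR model forward
        y = model.predict([[y, u]])[0]
        cost += (ref - y) ** 2 + lam * u ** 2         # tracking error + control effort
    return cost

horizon, y_now, setpoint = 5, 0.0, 0.6
result = minimize(gpc_cost, np.zeros(horizon), args=(y_now, setpoint),
                  bounds=[(-1, 1)] * horizon)
u_apply = result.x[0]                                 # receding horizon: apply first move
```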
Student use of model-based reasoning when troubleshooting an electronic circuit
NASA Astrophysics Data System (ADS)
Lewandowski, Heather; Stetzer, Mackenzie; van de Bogart, Kevin; Dounas-Frazer, Dimitri
2016-03-01
Troubleshooting systems is an integral part of experimental physics in both research and educational settings. Accordingly, ability to troubleshoot is an important learning goal for undergraduate physics lab courses. We investigate students' model-based reasoning on a troubleshooting task using data collected in think-aloud interviews during which pairs of students from two institutions attempted to diagnose and repair a malfunctioning circuit. Our analysis scheme was informed by the Experimental Modeling Framework, which describes physicists' use of mathematical and conceptual models when reasoning about experimental systems. We show that system and subsystem models were crucial for the evaluation of repairs to the circuit and played an important role in some troubleshooting strategies. Finally, drawing on data from interviews with electronics instructors from a broad range of institution types, we outline recommendations for model-based approaches to teaching and learning troubleshooting skills.
Student use of model-based reasoning when troubleshooting an electric circuit
NASA Astrophysics Data System (ADS)
Dounas-Frazer, Dimitri
2016-05-01
Troubleshooting systems is an integral part of experimental physics in both research and educational settings. Accordingly, ability to troubleshoot is an important learning goal for undergraduate physics lab courses. We investigate students' model-based reasoning on a troubleshooting task using data collected in think-aloud interviews during which pairs of students from two institutions attempted to diagnose and repair a malfunctioning circuit. Our analysis scheme was informed by the Experimental Modeling Framework, which describes physicists' use of mathematical and conceptual models when reasoning about experimental systems. We show that system and subsystem models were crucial for the evaluation of repairs to the circuit and played an important role in some troubleshooting strategies. Finally, drawing on data from interviews with electronics instructors from a broad range of institution types, we outline recommendations for model-based approaches to teaching and learning troubleshooting skills.
An approach for cooling by solar energy
NASA Astrophysics Data System (ADS)
Rabeih, S. M.; Wahhab, M. A.; Asfour, H. M.
The present investigation is concerned with the possibility of operating a household refrigerator on solar energy instead of gas fuel. The currently employed heating system is to be replaced by a solar collector with an absorption area of two square meters. Attention is given to the required changes in the generator design, the solar parameters at the refrigerator installation site, the mathematical approach for the thermal analysis of the solar collector, the development of a computer program for evaluating the important parameters, the experimental test rig, and the measurement of the experimental parameters. A description is given of the optimum operating conditions obtained for the considered system.
Yoo, Illhoi; Hu, Xiaohua; Song, Il-Yeol
2007-11-27
A huge amount of biomedical textual information has been produced and collected in MEDLINE for decades. To make better use of the biomedical information in this free text, document clustering and text summarization are used together as a solution to the text information overload problem. In this paper, we introduce a coherent graph-based semantic clustering and summarization approach for biomedical literature. Our extensive experimental results show that the approach achieves a 45% improvement in cluster quality and a 72% improvement in clustering reliability, in terms of misclassification index, over Bisecting K-means, a leading document clustering approach. In addition, our approach provides concise but rich text summaries in key concepts and sentences. Our coherent biomedical literature clustering and summarization approach, which takes advantage of ontology-enriched graphical representations, significantly improves the quality of document clusters and the understandability of documents through summaries.
Yoo, Illhoi; Hu, Xiaohua; Song, Il-Yeol
2007-01-01
Background A huge amount of biomedical textual information has been produced and collected in MEDLINE for decades. To make better use of the biomedical information in this free text, document clustering and text summarization are used together as a solution to the text information overload problem. In this paper, we introduce a coherent graph-based semantic clustering and summarization approach for biomedical literature. Results Our extensive experimental results show that the approach achieves a 45% improvement in cluster quality and a 72% improvement in clustering reliability, in terms of misclassification index, over Bisecting K-means, a leading document clustering approach. In addition, our approach provides concise but rich text summaries in key concepts and sentences. Conclusion Our coherent biomedical literature clustering and summarization approach, which takes advantage of ontology-enriched graphical representations, significantly improves the quality of document clusters and the understandability of documents through summaries. PMID:18047705
2012-10-01
Based on the limited work done, the best reported ORR chalcogenide electrocatalysts for PEMFC applications can be ranked, with MoRuSe-based materials at the top. A further concern for PEMFC catalysts is the durability of the catalyst particles: the particle size distribution tends to shift toward larger particles during operation. These considerations inform the design of new materials for applications in PEMFCs.
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.; Spry, David J.; Chen, Liangyu
2015-01-01
This work reports a theoretical and experimental study of 4H-SiC JFET threshold voltage as a function of substrate body bias, device position on the wafer, and temperature from 25 C (298K) to 500 C (773K). Based on these results, an alternative approach to SPICE circuit simulation of body effect for SiC JFETs is proposed.
Locally-Based Kernel PLS Smoothing for Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
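A minimal sketch of kernel-based PLS smoothing, under the assumption that ordinary PLS applied to a centered RBF kernel matrix serves as a stand-in for the dual-form kernel PLS used in the paper; the data, kernel width, and number of components are illustrative.

```python
# Hedged sketch: smoothing a noisy 1-D curve with kernel PLS, i.e. ordinary PLS
# regression applied to a centered RBF kernel matrix (a common dual-form shortcut).
# The data, kernel width, and number of components are assumptions; the paper's
# locally-based variant additionally weights regions flagged by prior knowledge.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

x = np.linspace(0, 1, 200)[:, None]
y = np.sin(6 * x).ravel() + 0.2 * np.random.default_rng(2).normal(size=200)

K = rbf_kernel(x, x, gamma=50.0)          # kernel matrix in place of raw features
K_centered = K - K.mean(axis=0) - K.mean(axis=1, keepdims=True) + K.mean()

pls = PLSRegression(n_components=8, scale=False).fit(K_centered, y)
y_smooth = pls.predict(K_centered).ravel()
print("residual std:", np.std(y - y_smooth))
```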
Le, Long N; Jones, Douglas L
2018-03-01
Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
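A minimal sketch of template-based detection with plain dynamic time warping on 1-D sequences; the paper's tensorial DTW on time-frequency representations is not reproduced, and the template and test clips below are synthetic.

```python
# Hedged sketch of the template idea with plain dynamic time warping on 1-D feature
# sequences; the paper uses a tensorial DTW on time-frequency representations, and
# the sequences below are synthetic stand-ins.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 4 * np.pi, 80))            # learned clean template (assumed)
clip = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.3 * np.random.default_rng(6).normal(size=100)
noise = np.random.default_rng(7).normal(size=100)

# Template-based classification: the clip closer to the template is the detection.
print("call distance:", dtw_distance(template, clip))
print("noise distance:", dtw_distance(template, noise))
```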
He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai
2017-07-01
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate a large labeled training corpus with minimal manual effort. However, the labeled training corpus may contain many false-positive examples, which hurt the performance of relation extraction. Moreover, traditional feature-based distantly supervised approaches rely on human-designed features produced with natural language processing tools, which can also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve a better data representation for relation extraction without manually designed features, performs distant supervision rather than fully supervised relation extraction, and utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
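A minimal sketch of the word-level attention component over an LSTM encoder, written in PyTorch; the dimensions, vocabulary, and scoring function are assumptions, and the instance-level attention over sentence bags is not shown.

```python
# Hedged sketch of word-level attention over an LSTM encoder for relation
# classification. Vocabulary size, dimensions, and the scoring function are
# assumptions; the paper additionally uses instance-level attention over the
# bag of sentences produced by distant supervision.
import torch
import torch.nn as nn

class AttnLSTMRelation(nn.Module):
    def __init__(self, vocab=5000, emb=100, hidden=128, n_rel=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # word-level attention scores
        self.out = nn.Linear(2 * hidden, n_rel)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))        # (batch, seq_len, 2*hidden)
        alpha = torch.softmax(self.attn(h), dim=1)
        sent = (alpha * h).sum(dim=1)             # attention-weighted sentence vector
        return self.out(sent)

logits = AttnLSTMRelation()(torch.randint(0, 5000, (4, 30)))
print(logits.shape)  # torch.Size([4, 10])
```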
Classification-Based Spatial Error Concealment for Visual Communications
NASA Astrophysics Data System (ADS)
Chen, Meng; Zheng, Yefeng; Wu, Min
2006-12-01
In an error-prone transmission environment, error concealment is an effective technique to reconstruct the damaged visual content. Due to large variations of image characteristics, different concealment approaches are necessary to accommodate the different nature of the lost image content. In this paper, we address this issue and propose using classification to integrate the state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.
Microtomography imaging of an isolated plant fiber: a digital holographic approach.
Malek, Mokrane; Khelfa, Haithem; Picart, Pascal; Mounier, Denis; Poilâne, Christophe
2016-01-20
This paper describes a method for optical projection tomography for the 3D in situ characterization of micrometric plant fibers. The proposed approach is based on digital holographic microscopy, the holographic capability being convenient for compensating for the runout of the fiber during rotation. The setup requires telecentric alignment to prevent changes in the optical magnification, and calibration results confirm good experimental adjustment. Amplitude images are obtained from the set of recorded and digitally processed holograms. Refocusing of blurred images and correction of both runout and jitter are carried out to obtain appropriate amplitude images. The 3D data related to the plant fiber are computed from the set of images using dedicated numerical processing. Experimental results reveal the internal and external shapes of the plant fiber. These experimental results constitute the first attempt to obtain 3D data of a flax fiber, about 12 μm × 17 μm in apparent diameter, with a full-field optical tomography approach using light in the visible range.
Preface of the special issue quantum foundations: information approach
2016-01-01
This special issue is based on the contributions of a group of top experts in quantum foundations and quantum information and probability. It enlightens a number of interpretational, mathematical and experimental problems of quantum theory. PMID:27091161
Designing and Implementing a Constructivist Chemistry Laboratory Program.
ERIC Educational Resources Information Center
Blakely, Alan
2000-01-01
Describes a constructivist chemistry laboratory approach based on students' personal experiences where students had the opportunity to develop their own experimental processes. Points out both the fruitfulness and difficulties of using a graduate student as a teaching assistant. (YDS)
ERIC Educational Resources Information Center
Scarr, Sandra
1995-01-01
Argues that Gottlieb rejects population sampling and statistical analyses of distributions as he proposes that his experimental brand of mechanistic science is the only legitimate approach to developmental research. Maintains that Gottlieb exaggerates developmental uncertainty, based on his own research with extreme environmental manipulations.…
Numerical and experimental validation of a particle Galerkin method for metal grinding simulation
NASA Astrophysics Data System (ADS)
Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng
2018-03-01
In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with an establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass the prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.
Multi-criteria decision making approaches for quality control of genome-wide association studies.
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-03-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of errors. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking into account at the same time these criteria and the experimenter's preferences. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on the Multi-Criteria Decision Making theory. We have applied our method on a real dataset composed by 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they lead to rationalize and make explicit the experimenter's choices thus providing more reproducible results.
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to obtain a suitable description of the pKa change with mobile phase composition. In the present study, this preliminary experimental work has been simplified. The analyte pKa values are calculated through equations whose coefficients vary depending on the functional group. This new approach also required further simplifications regarding the retention of the totally neutral and totally ionized species. After the simplifications were applied, new predictions were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
Meier, Matthias; Jakub, Zdeněk; Balajka, Jan; Hulva, Jan; Bliem, Roland; Thakur, Pardeep K.; Lee, Tien-Lin; Franchini, Cesare; Schmid, Michael; Diebold, Ulrike; Allegretti, Francesco; Parkinson, Gareth S.
2018-01-01
Accurately modelling the structure of a catalyst is a fundamental prerequisite for correctly predicting reaction pathways, but a lack of clear experimental benchmarks makes it difficult to determine the optimal theoretical approach. Here, we utilize the normal incidence X-ray standing wave (NIXSW) technique to precisely determine the three dimensional geometry of Ag1 and Cu1 adatoms on Fe3O4(001). Both adatoms occupy bulk-continuation cation sites, but with a markedly different height above the surface (0.43 ± 0.03 Å (Cu1) and 0.96 ± 0.03 Å (Ag1)). HSE-based calculations accurately predict the experimental geometry, but the more common PBE + U and PBEsol + U approaches perform poorly. PMID:29334395
Tool use and affordance: Manipulation-based versus reasoning-based approaches.
Osiurak, François; Badets, Arnaud
2016-10-01
Tool use is a defining feature of the human species. Therefore, a fundamental issue is to understand the cognitive bases of human tool use. Given that people cannot use tools without manipulating them, proponents of the manipulation-based approach have argued that tool use might be supported by the simulation of past sensorimotor experiences, also sometimes called affordances. Meanwhile, however, evidence has accumulated demonstrating the critical role of mechanical knowledge in tool use (i.e., the reasoning-based approach). The major goal of the present article is to examine the validity of the assumptions derived from the manipulation-based versus the reasoning-based approach. To do so, we identified 3 key issues on which the 2 approaches differ, namely, (a) the reference frame issue, (b) the intention issue, and (c) the action domain issue. These different issues will be addressed in light of studies in experimental psychology and neuropsychology that have provided valuable contributions to the topic (i.e., tool-use interaction, orientation effect, object-size effect, utilization behavior and anarchic hand, tool use and perception, apraxia of tool use, transport vs. use actions). To anticipate our conclusions, the reasoning-based approach seems to be promising for understanding the current literature, even if it is not fully satisfactory because a certain number of findings are easier to interpret with regard to the manipulation-based approach. A new avenue for future research might be to develop a framework accommodating both approaches, thereby shedding a new light on the cognitive bases of human tool use and affordances. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Wang, Tzu-Hua
2010-01-01
This research combines the idea of cake format dynamic assessment defined by Sternberg and Grigorenko (2001) and the "graduated prompt approach" proposed by Campione and Brown (1985, 1987) to develop a multiple-choice Web-based dynamic assessment system. This research adopts a quasi-experimental design to…
Problem-Based Learning to Foster Deep Learning in Preservice Geography Teacher Education
ERIC Educational Resources Information Center
Golightly, Aubrey; Raath, Schalk
2015-01-01
In South Africa, geography education students' approach to deep learning has received little attention. Therefore the purpose of this one-shot experimental case study was to evaluate the extent to which first-year geography education students used deep or surface learning in an embedded problem-based learning (PBL) format. The researchers measured…
The Impact of Task-Based Approach on Vocabulary Learning in ESP Courses
ERIC Educational Resources Information Center
Sarani, Abdullah; Sahebi, Leila Farzaneh
2012-01-01
This study investigates the teaching of vocabulary in ESP courses within the paradigm of task-based language teaching, concentrating on Persian literature students at Birjand University in Iran. Two homogenous groups of students who were taking their ESP courses participated in the study as a control and an experimental group. A teacher-made test…
ERIC Educational Resources Information Center
Wu, Jinlu
2013-01-01
Laboratory education can play a vital role in developing a learner's autonomy and scientific inquiry skills. In an innovative, mutation-based learning (MBL) approach, students were instructed to redesign a teacher-designed standard experimental protocol by a "mutation" method in a molecular genetics laboratory course. Students could…
Culturally Based Math Education as a Way to Improve Alaska Native Students' Math Performance.
ERIC Educational Resources Information Center
Lipka, Jerry; Adams, Barbara
2004-01-01
Culturally based instruction has long been touted as a preferred approach to improving the academic performance of American Indian and Alaska Native (AI/AN) students. However, there has been scant research to support this conjecture, particularly when quantitative data and quasi-experimental designs are included. The results of this…
NASA Astrophysics Data System (ADS)
Chen, Tian-Yu; Chen, Yang; Yang, Hu-Jiang; Xiao, Jing-Hua; Hu, Gang
2018-03-01
Massive amounts of data have now been accumulated across a wide range of fields, and analyzing existing data to extract as much useful information as possible has become a central issue in interdisciplinary research. Often, the output data of a system are measurable while the dynamic structures producing these data are hidden; studies that reveal system structures by analyzing available data, i.e., system reconstructions, are therefore among the most important tasks of information extraction. In the past, most work in this area was based on theoretical analyses and numerical verification, and direct analyses of experimental data are very rare. In physical science, most analyses of experimental setups have been based on the first principles of physical laws, i.e., so-called top-down analyses. In this paper, we conducted an experiment on the "Boer resonant instrument for forced vibration" (BRIFV) and inferred the dynamic structure of the experimental setup purely from analysis of the measurable experimental data, i.e., by applying a bottom-up strategy. The dynamics of the experimental setup are strongly nonlinear and chaotic, and the setup is subject to inevitable noise. We propose using high-order correlation computations to treat the nonlinear dynamics and two-time correlations to treat the noise effects. By applying these approaches, we have successfully reconstructed the structure of the experimental setup, and the dynamic system reconstructed from the measured data reproduces the experimental results well over a wide range of parameters.
Electronic properties of a molecular system with Platinum
NASA Astrophysics Data System (ADS)
Ojeda, J. H.; Medina, F. G.; Becerra-Alonso, David
2017-10-01
The electronic properties of a finite homogeneous molecule, trans-platinum-linked oligo(tetraethenylethenes), are studied. This system is composed of individual units such as benzene rings, platinum, phosphorus, and sulfur. Electron transport through this system is studied by placing the molecule between metal contacts to control the current through the molecular system. We model the molecule with a tight-binding approach and calculate the transport properties using the Landauer-Büttiker formalism and the Fisher-Lee relation, based on a semi-analytic Green's function method within a real-space renormalization approach. Our results show significant agreement with experimental measurements.
High-Throughput Experimental Approach Capabilities
NREL's high-throughput experimental approach capabilities in materials science include combinatorial sputtering chambers for chalcogenide and oxysulfide materials and for nitrides and oxynitrides, among other capabilities.
Quantifying Astronaut Tasks: Robotic Technology and Future Space Suit Design
NASA Technical Reports Server (NTRS)
Newman, Dava
2003-01-01
The primary aim of this research effort was to advance the current understanding of astronauts' capabilities and limitations in space-suited EVA by developing models of the constitutive and compatibility relations of a space suit, based on experimental data gained from human test subjects as well as a 12 degree-of-freedom human-sized robot, and utilizing these fundamental relations to estimate a human factors performance metric for space suited EVA work. The three specific objectives are to: 1) Compile a detailed database of torques required to bend the joints of a space suit, using realistic, multi- joint human motions. 2) Develop a mathematical model of the constitutive relations between space suit joint torques and joint angular positions, based on experimental data and compare other investigators' physics-based models to experimental data. 3) Estimate the work envelope of a space suited astronaut, using the constitutive and compatibility relations of the space suit. The body of work that makes up this report includes experimentation, empirical and physics-based modeling, and model applications. A detailed space suit joint torque-angle database was compiled with a novel experimental approach that used space-suited human test subjects to generate realistic, multi-joint motions and an instrumented robot to measure the torques required to accomplish these motions in a space suit. Based on the experimental data, a mathematical model is developed to predict joint torque from the joint angle history. Two physics-based models of pressurized fabric cylinder bending are compared to experimental data, yielding design insights. The mathematical model is applied to EVA operations in an inverse kinematic analysis coupled to the space suit model to calculate the volume in which space-suited astronauts can work with their hands, demonstrating that operational human factors metrics can be predicted from fundamental space suit information.
A discrete element method-based approach to predict the breakage of coal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Varun; Sun, Xin; Xu, Wei
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.
An improved advertising CTR prediction approach based on the fuzzy deep neural network
Gao, Shu; Li, Mingjiang
2018-01-01
Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to input raw data from advertising datasets. Next, fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) with the unsupervised method layer by layer. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness in advertising click log datasets with noise. PMID:29727443
Automated Analysis of siRNA Screens of Virus Infected Cells Based on Immunofluorescence Microscopy
NASA Astrophysics Data System (ADS)
Matula, Petr; Kumar, Anil; Wörz, Ilka; Harder, Nathalie; Erfle, Holger; Bartenschlager, Ralf; Eils, Roland; Rohr, Karl
We present an image analysis approach as part of a high-throughput microscopy screening system based on cell arrays for the identification of genes involved in Hepatitis C and Dengue virus replication. Our approach comprises: cell nucleus segmentation, quantification of virus replication level in cells, localization of regions with transfected cells, cell classification by infection status, and quality assessment of an experiment. The approach is fully automatic and has been successfully applied to a large number of cell array images from screening experiments. The experimental results show a good agreement with the expected behavior of positive as well as negative controls and encourage the application to screens from further high-throughput experiments.
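A minimal sketch of two of the listed steps, nucleus segmentation and per-cell quantification of the virus marker, using scikit-image on synthetic arrays; the thresholds and the infection cutoff are assumptions.

```python
# Hedged sketch of the per-image steps: segment nuclei in the nucleus channel and
# quantify virus replication as the mean marker intensity per cell. Synthetic
# arrays stand in for the screen images; thresholds are assumptions.
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.measure import label, regionprops

rng = np.random.default_rng(3)
dapi = gaussian(rng.random((256, 256)), sigma=3)        # stand-in nucleus channel
virus = gaussian(rng.random((256, 256)), sigma=3)       # stand-in virus marker channel

nuclei = label(dapi > threshold_otsu(dapi))             # nucleus segmentation
levels = [r.mean_intensity for r in regionprops(nuclei, intensity_image=virus)]

infected = [lvl > 0.5 for lvl in levels]                # per-cell infection call (assumed cutoff)
print(f"{sum(infected)} of {len(levels)} cells classified as infected")
```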
A discrete element method-based approach to predict the breakage of coal
Gupta, Varun; Sun, Xin; Xu, Wei; ...
2017-08-05
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all the parameters of the model with lower order to be identified and by definition, provides the model with the required steady-state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.
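A minimal sketch of the optimization-based reduction idea: a second-order model with the DC gain fixed by construction is fitted to the step response of a higher-order system; differential evolution stands in for the Co-Operation of Biology-Related Algorithms used in the paper.

```python
# Hedged sketch of the optimization-based reduction idea: fit a second-order model to
# the step response of a higher-order LTI system, fixing the DC gain so the reduced
# model keeps the required steady state. Differential evolution stands in for the
# Co-Operation of Biology-Related Algorithms used in the paper.
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

full = signal.TransferFunction([2.0], [1.0, 6.0, 11.0, 6.0])   # 3rd-order example system
t, y_full = signal.step(full)
dc_gain = 2.0 / 6.0                                            # steady-state value to preserve

def cost(p):
    a1, a0 = p
    reduced = signal.TransferFunction([dc_gain * a0], [1.0, a1, a0])  # same DC gain by construction
    _, y_red = signal.step(reduced, T=t)
    return np.sum((y_full - y_red) ** 2)

res = differential_evolution(cost, bounds=[(0.1, 20.0), (0.1, 20.0)], seed=0)
print("reduced denominator coefficients:", res.x)
```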
Sensorless Load Torque Estimation and Passivity Based Control of Buck Converter Fed DC Motor
Kumar, S. Ganesh; Thilagar, S. Hosimin
2015-01-01
Passivity based control of DC motor in sensorless configuration is proposed in this paper. Exact tracking error dynamics passive output feedback control is used for stabilizing the speed of Buck converter fed DC motor under various load torques such as constant type, fan type, propeller type, and unknown load torques. Under load conditions, sensorless online algebraic approach is proposed, and it is compared with sensorless reduced order observer approach. The former produces better response in estimating the load torque. Sensitivity analysis is also performed to select the appropriate control variables. Simulation and experimental results fully confirm the superiority of the proposed approach suggested in this paper. PMID:25893208
An improved advertising CTR prediction approach based on the fuzzy deep neural network.
Jiang, Zilong; Gao, Shu; Li, Mingjiang
2018-01-01
Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to input raw data from advertising datasets. Next, fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) with the unsupervised method layer by layer. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness in advertising click log datasets with noise.
Bajard, Agathe; Chabaud, Sylvie; Cornu, Catherine; Castellan, Anne-Charlotte; Malik, Salma; Kurbatova, Polina; Volpert, Vitaly; Eymard, Nathalie; Kassai, Behrouz; Nony, Patrice
2016-01-01
The main objective of our work was to compare different randomized clinical trial (RCT) experimental designs in terms of power, accuracy of the estimation of treatment effect, and number of patients receiving active treatment using in silico simulations. A virtual population of patients was simulated and randomized in potential clinical trials. Treatment effect was modeled using a dose-effect relation for quantitative or qualitative outcomes. Different experimental designs were considered, and performances between designs were compared. One thousand clinical trials were simulated for each design based on an example of modeled disease. According to simulation results, the number of patients needed to reach 80% power was 50 for crossover, 60 for parallel or randomized withdrawal, 65 for drop the loser (DL), and 70 for early escape or play the winner (PW). For a given sample size, each design had its own advantage: low duration (parallel, early escape), high statistical power and precision (crossover), and higher number of patients receiving the active treatment (PW and DL). Our approach can help to identify the best experimental design, population, and outcome for future RCTs. This may be particularly useful for drug development in rare diseases, theragnostic approaches, or personalized medicine. Copyright © 2016 Elsevier Inc. All rights reserved.
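A minimal sketch of the simulation idea for one of the designs: the power of a parallel two-arm trial is estimated by repeatedly drawing virtual patients and testing the treatment effect; the effect size, variance, and sample sizes are illustrative assumptions.

```python
# Hedged sketch of the simulation idea: estimate the power of a parallel two-arm
# design by repeatedly drawing virtual patients and testing the treatment effect.
# Effect size, variance, and sample size are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

def simulated_power(n_per_arm, effect=0.6, n_trials=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, n_per_arm)
        active = rng.normal(effect, 1.0, n_per_arm)
        if ttest_ind(active, control).pvalue < alpha:
            hits += 1
    return hits / n_trials

for n in (20, 35, 50):
    print(f"n = {n} per arm -> power ~ {simulated_power(n):.2f}")
```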
NASA Astrophysics Data System (ADS)
Chen, H.
2018-06-01
This paper concerns the β-phase depletion kinetics of a thermally sprayed free-standing CoNiCrAlY (Co-31.7 pct Ni-20.8 pct Cr-8.1 pct Al-0.5 pct Y, all in wt pct) coating alloy. An analytical β-phase depletion model based on the precipitate free zone growth kinetics was developed to calculate the β-phase depletion kinetics during isothermal oxidation. This approach, which accounts for the molar volume of the alloy, the interfacial energy of the γ/β interface, and the Al concentration at γ/γ + β boundary, requires the Al concentrations in the β-phase depletion zone as the input rather than the oxidation kinetics at the oxide/coating interface. The calculated β-phase depletion zones derived from the current model were compared with experimental results. It is shown that the calculated β-phase depletion zones using the current model are in reasonable agreement with those obtained experimentally. The constant compositional terms used in the model are likely to cause the discrepancies between the model predictions and experimental results. This analytical approach, which shows a reasonable correlation with experimental results, demonstrates a good reliability in the fast evaluation on lifetime prediction of MCrAlY coatings.
NASA Astrophysics Data System (ADS)
Chen, H.
2018-03-01
This paper concerns the β-phase depletion kinetics of a thermally sprayed free-standing CoNiCrAlY (Co-31.7 pct Ni-20.8 pct Cr-8.1 pct Al-0.5 pct Y, all in wt pct) coating alloy. An analytical β-phase depletion model based on the precipitate free zone growth kinetics was developed to calculate the β-phase depletion kinetics during isothermal oxidation. This approach, which accounts for the molar volume of the alloy, the interfacial energy of the γ/β interface, and the Al concentration at γ/γ + β boundary, requires the Al concentrations in the β-phase depletion zone as the input rather than the oxidation kinetics at the oxide/coating interface. The calculated β-phase depletion zones derived from the current model were compared with experimental results. It is shown that the calculated β-phase depletion zones using the current model are in reasonable agreement with those obtained experimentally. The constant compositional terms used in the model are likely to cause the discrepancies between the model predictions and experimental results. This analytical approach, which shows a reasonable correlation with experimental results, demonstrates a good reliability in the fast evaluation on lifetime prediction of MCrAlY coatings.
Analytical and experimental study of sleeper SAT S 312 in slab track Sateba system
NASA Astrophysics Data System (ADS)
Guigou-Carter, C.; Villot, M.; Guillerme, B.; Petit, C.
2006-06-01
In this paper, a simple prediction tool based on a two-dimensional model is developed for a slab track system composed of two rails with rail pads, sleepers with sleeper pads, and a concrete base slab. The track and the slab are considered as infinite beams with bending stiffness, loss factor and mass per unit length. The track system is represented by its impedance per unit length of track and the ground by its line input impedance calculated using a two-dimensional elastic half-space ground model based on the wave approach. Damping of each track component is modelled as hysteretic damping and is taken into account by using a complex stiffness. The unsprung mass of the vehicle is considered as a concentrated mass at the excitation point on the rail head. The effect of the dynamic stiffness of the sleeper pads on the vibration isolation is studied in detail, the vibration isolation provided by the track system being quantified by an insertion gain in dB per one-third octave band. The second part of this paper presents an experimental test rig used to measure the dynamic stiffness of the sleeper pads on a full width section of the track (two rails). The experimental set-up is submitted to vertical as well as horizontal static loads (via hydraulic jacks) and an electrodynamic shaker is used for dynamic excitation of the system. The determination of the dynamic stiffness of the sleeper pads is based on the approach called the "direct method". The limitations of the experimental set-up are discussed. The measurement results for one type of sleeper pad are presented.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of the hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral data set is used. Experimental results show that the proposed approach using the GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
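A minimal sketch of the spectral part of such a pipeline, PCA followed by an SVM on synthetic spectra; the marker-based MSF spatial refinement and the actual Pavia University data are not included.

```python
# Hedged sketch of the spectral part of the pipeline: reduce the spectral dimension
# with PCA and classify pixels with an SVM. Synthetic spectra stand in for the Pavia
# University scene; the spatial refinement with a marker-based MSF is not shown.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_pixels, n_bands, n_classes = 3000, 103, 5
labels = rng.integers(0, n_classes, n_pixels)
spectra = rng.normal(size=(n_pixels, n_bands)) + labels[:, None] * 0.5   # class-dependent offset

features = PCA(n_components=15).fit_transform(spectra)
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.5, random_state=0)
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("pixel-wise overall accuracy:", svm.score(X_te, y_te))
```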
Landmark-based elastic registration using approximating thin-plate splines.
Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H
2001-06-01
We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
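A minimal sketch of the approximating (regularized) thin-plate spline idea using SciPy's RBFInterpolator, where a nonzero smoothing term relaxes exact interpolation to tolerate isotropic landmark error; the landmarks are synthetic and the anisotropic case is not covered.

```python
# Hedged sketch: approximating (regularized) thin-plate spline warps with SciPy's
# RBFInterpolator, where the smoothing term relaxes exact landmark interpolation to
# tolerate localization error. Landmarks are synthetic; anisotropic errors and the
# semi-automatic landmark extraction of the paper are not covered.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
src = rng.uniform(0, 100, size=(20, 2))                 # landmarks in the source image
dst = src + rng.normal(scale=1.5, size=src.shape)       # noisy corresponding landmarks

# smoothing=0 gives interpolating TPS; smoothing>0 gives the approximating variant.
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=5.0)

grid = np.stack(np.meshgrid(np.arange(0, 100, 10), np.arange(0, 100, 10)), axis=-1).reshape(-1, 2)
print(warp(grid)[:3])                                    # warped grid positions
```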
A Novel Approach for Creating Activity-Aware Applications in a Hospital Environment
NASA Astrophysics Data System (ADS)
Bardram, Jakob E.
Context-aware and activity-aware computing has been proposed as a way to adapt the computer to the user’s ongoing activity. However, deductively moving from physical context - like location - to establishing human activity has proved difficult. This paper proposes a novel approach to activity-aware computing. Instead of inferring activities, this approach enables the user to explicitly model their activity, and then use sensor-based events to create, manage, and use these computational activities adjusted to a specific context. This approach was crafted through a user-centered design process in collaboration with a hospital department. We propose three strategies for activity-awareness: context-based activity matching, context-based activity creation, and context-based activity adaptation. We present the implementation of these strategies and present an experimental evaluation of them. The experiments demonstrate that rather than considering context as information, context can be a relational property that links ’real-world activities’ with their ’computational activities’.
Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R
2017-01-01
A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes in to account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices.
Macy, Jonathan T; Chassin, Laurie; Presson, Clark C; Sherman, Jeffrey W
2015-02-01
Implicit attitudes have been shown to predict smoking behaviors. Therefore, an important goal is the development of interventions to change these attitudes. This study assessed the effects of a web-based intervention on implicit attitudes toward smoking and receptivity to smoking-related information. Smokers (N = 284) were recruited to a two-session web-based study. In the first session, baseline data were collected. Session two contained the intervention, which consisted of assignment to the experimental or control version of an approach-avoidance task and assignment to an anti-smoking or control public service announcement (PSA), and post-intervention measures. Among smokers with less education and with plans to quit, implicit attitudes were more negative for those who completed the approach-avoidance task. Smokers with more education who viewed the anti-smoking PSA and completed the approach-avoidance task spent more time reading smoking-related information. An approach-avoidance task is a potentially feasible strategy for changing implicit attitudes toward smoking and increasing receptivity to smoking-related information.
Macy, Jonathan T.; Chassin, Laurie; Presson, Clark C.; Sherman, Jeffrey W.
2014-01-01
Implicit attitudes have been shown to predict smoking behaviors. Therefore, an important goal is the development of interventions to change these attitudes. This study assessed the effects of a web-based intervention on implicit attitudes toward smoking and receptivity to smoking-related information. Smokers (N=284) were recruited to a two-session web-based study. In the first session, baseline data were collected. Session two contained the intervention, which consisted of assignment to the experimental or control version of an approach-avoidance task and assignment to an anti-smoking or control public service announcement (PSA), and post-intervention measures. Among smokers with less education and with plans to quit, implicit attitudes were more negative for those who completed the approach-avoidance task. Smokers with more education who viewed the anti-smoking PSA and completed the approach-avoidance task spent more time reading smoking-related information. An approach-avoidance task is a potentially feasible strategy for changing implicit attitudes toward smoking and increasing receptivity to smoking-related information. PMID:25059750
Hierarchical virtual screening approaches in small molecule drug discovery.
Kumar, Ashutosh; Zhang, Kam Y J
2015-01-01
Virtual screening has played a significant role in the discovery of small molecule inhibitors of therapeutic targets in last two decades. Various ligand and structure-based virtual screening approaches are employed to identify small molecule ligands for proteins of interest. These approaches are often combined in either hierarchical or parallel manner to take advantage of the strength and avoid the limitations associated with individual methods. Hierarchical combination of ligand and structure-based virtual screening approaches has received noteworthy success in numerous drug discovery campaigns. In hierarchical virtual screening, several filters using ligand and structure-based approaches are sequentially applied to reduce a large screening library to a number small enough for experimental testing. In this review, we focus on different hierarchical virtual screening strategies and their application in the discovery of small molecule modulators of important drug targets. Several virtual screening studies are discussed to demonstrate the successful application of hierarchical virtual screening in small molecule drug discovery. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrido, J. M.; Algaba, J.; Blas, F. J., E-mail: felipe@uhu.es
2016-04-14
We have determined the interfacial properties of tetrahydrofuran (THF) from direct simulation of the vapor-liquid interface. The molecules are modeled using six different molecular models, three of them based on the united-atom approach and the other three based on a coarse-grained (CG) approach. In the first case, THF is modeled using the transferable parameters potential functions approach proposed by Chandrasekhar and Jorgensen [J. Chem. Phys. 77, 5073 (1982)] and a new parametrization of the TraPPE force fields for cyclic alkanes and ethers [S. J. Keasler et al., J. Phys. Chem. B 115, 11234 (2012)]. In both cases, dispersive and coulombic intermolecular interactions are explicitly taken into account. In the second case, THF is modeled as a single sphere, a diatomic molecule, and a ring formed from three Mie monomers according to the SAFT-γ Mie top-down approach [V. Papaioannou et al., J. Chem. Phys. 140, 054107 (2014)]. Simulations were performed in the molecular dynamics canonical ensemble and the vapor-liquid surface tension is evaluated from the normal and tangential components of the pressure tensor along the simulation box. In addition to the surface tension, we have also obtained density profiles, coexistence densities, critical temperature, density, and pressure, and interfacial thickness as functions of temperature, paying special attention to the comparison between the estimations obtained from different models and literature experimental data. The simulation results obtained from the three CG models as described by the SAFT-γ Mie approach are able to predict accurately the vapor-liquid phase envelope of THF, in excellent agreement with estimations obtained from TraPPE model and experimental data in the whole range of coexistence. However, Chandrasekhar and Jorgensen model presents significant deviations from experimental results. We also compare the predictions for surface tension as obtained from simulation results for all the models with experimental data. The three CG models predict reasonably well (but only qualitatively) the surface tension of THF, as a function of temperature, from the triple point to the critical temperature. On the other hand, only the TraPPE united-atoms models are able to predict accurately the experimental surface tension of the system in the whole temperature range.
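A minimal sketch of the mechanical route to the surface tension from the pressure tensor, gamma = (Lz/2)(<Pzz> - (<Pxx> + <Pyy>)/2) for a slab with two interfaces normal to z; the sample values are made up.

```python
# Hedged sketch of the mechanical route to the surface tension used in such
# simulations: gamma = (Lz/2) * (<Pzz> - (<Pxx> + <Pyy>)/2) for a slab with two
# interfaces normal to z. The pressure-tensor samples below are made-up numbers.
import numpy as np

Lz = 120.0                                         # box length normal to the interface (angstrom)
pxx = np.array([55.2, 54.8, 55.5])                 # time-averaged pressure components (bar)
pyy = np.array([55.0, 55.3, 54.9])
pzz = np.array([60.1, 59.8, 60.4])

gamma = 0.5 * Lz * (pzz.mean() - 0.5 * (pxx.mean() + pyy.mean()))
print(f"surface tension ~ {gamma:.1f} (bar * angstrom; convert units as needed)")
```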
Broken symmetry dielectric resonators for high quality factor Fano metasurfaces
Campione, Salvatore; Liu, Sheng; Basilio, Lorena I.; ...
2016-10-25
We present a new approach to dielectric metasurface design that relies on a single resonator per unit cell and produces robust, high quality factor Fano resonances. Our approach utilizes symmetry breaking of highly symmetric resonator geometries, such as cubes, to induce couplings between the otherwise orthogonal resonator modes. In particular, we design perturbations that couple “bright” dipole modes to “dark” dipole modes whose radiative decay is suppressed by local field effects in the array. Our approach is widely scalable from the near-infrared to radio frequencies. We first unravel the Fano resonance behavior through numerical simulations of a germanium resonator-based metasurface that achieves a quality factor of ~1300 at ~10.8 μm. Then, we present two experimental demonstrations operating in the near-infrared (~1 μm): a silicon-based implementation that achieves a quality factor of ~350; and a gallium arsenide-based structure that achieves a quality factor of ~600, the highest near-infrared quality factor experimentally demonstrated to date with this kind of metasurface. Importantly, large electromagnetic field enhancements appear within the resonators at the Fano resonant frequencies. Here, we envision that combining high quality factor, high field enhancement resonances with nonlinear and active/gain materials such as gallium arsenide will lead to new classes of active optical devices.
Broken symmetry dielectric resonators for high quality factor Fano metasurfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Liu, Sheng; Basilio, Lorena I.
We present a new approach to dielectric metasurface design that relies on a single resonator per unit cell and produces robust, high quality factor Fano resonances. Our approach utilizes symmetry breaking of highly symmetric resonator geometries, such as cubes, to induce couplings between the otherwise orthogonal resonator modes. In particular, we design perturbations that couple “bright” dipole modes to “dark” dipole modes whose radiative decay is suppressed by local field effects in the array. Our approach is widely scalable from the near-infrared to radio frequencies. We first unravel the Fano resonance behavior through numerical simulations of a germanium resonator-based metasurface that achieves a quality factor of ~1300 at ~10.8 μm. Then, we present two experimental demonstrations operating in the near-infrared (~1 μm): a silicon-based implementation that achieves a quality factor of ~350; and a gallium arsenide-based structure that achieves a quality factor of ~600, the highest near-infrared quality factor experimentally demonstrated to date with this kind of metasurface. Importantly, large electromagnetic field enhancements appear within the resonators at the Fano resonant frequencies. Here, we envision that combining high quality factor, high field enhancement resonances with nonlinear and active/gain materials such as gallium arsenide will lead to new classes of active optical devices.
Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths
NASA Astrophysics Data System (ADS)
Drotleff, K.; Liewald, M.
2017-09-01
Rising customer expectations regarding the design complexity and weight reduction of sheet metal components, together with further reduced time to market, imply an increased demand for process validation using numerical forming simulation. Formability prediction, however, is often still based on the forming limit diagram first presented in the 1960s. Despite many drawbacks in the case of nonlinear strain paths and major research advances in recent years, the forming limit curve (FLC) is still one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur that cannot be predicted using the conventional FLC concept. In this paper, a novel approach for calculating FLCs for nonlinear strain paths is presented. By combining an approach for predicting the FLC from tensile test data with the IFU-FLC-Criterion, a model for predicting localized necking under nonlinear strain paths can be derived. The presented model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep drawing specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC-Criterion based on data from pre-stretched Nakajima specimens.
Constructing petal modes from the coherent superposition of Laguerre-Gaussian modes
NASA Astrophysics Data System (ADS)
Naidoo, Darryl; Forbes, Andrew; Ait-Ameur, Kamel; Brunel, Marc
2011-03-01
An experimental approach to generating petal-like transverse modes, similar to those seen in porro-prism resonators, has been successfully demonstrated. We hypothesize that the petal-like structures are generated by a coherent superposition of Laguerre-Gaussian modes of zero radial order and opposite azimuthal order. To verify this hypothesis, visually based comparisons, such as the petal peak-to-peak diameter and the angle between adjacent petals, are drawn between experimental and simulated data. The beam quality factor of the petal-like transverse modes and an inner-product interaction are also experimentally compared with numerical results.
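A minimal sketch of the stated hypothesis: superposing LG(0, +l) and LG(0, -l) modes yields a cos(l·phi) azimuthal dependence and hence 2l intensity petals; the waist, index, and grid are illustrative choices.

```python
# Hedged sketch of the hypothesis: superpose two Laguerre-Gaussian modes LG(0,+l)
# and LG(0,-l); the azimuthal terms combine to cos(l*phi), giving 2l intensity petals.
# Beam waist, l, and grid size are illustrative choices.
import numpy as np

w0, l = 1.0, 4                                     # waist and azimuthal index
x = np.linspace(-3, 3, 400)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

radial = (np.sqrt(2) * r / w0) ** abs(l) * np.exp(-(r / w0) ** 2)   # LG p=0 radial profile
lg_plus = radial * np.exp(1j * l * phi)
lg_minus = radial * np.exp(-1j * l * phi)

petals = np.abs(lg_plus + lg_minus) ** 2           # intensity shows 2l petals
print("number of azimuthal maxima:", 2 * l)
```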
Yang, Seung-Cheol; Qian, Xiaoping
2013-09-17
A systematic approach to manipulating flexible carbon nanotubes (CNTs) has been developed on the basis of atomic force microscope (AFM) based pushing. Pushing CNTs enables efficient transport and precise positioning of individual CNTs. A key issue in pushing CNTs is preventing defect-inducing distortion during repetitive bending and unbending deformation. The approach presented here controls the lateral movement of an AFM tip to bend CNTs without permanent distortion. The approach considers possible defects caused by tensile strain of the outer tube under uniform bending and by radial distortion due to kinking. Using a continuum beam model and experimental bending tests, the dependence of the maximum bending strain on the length of the bent CNT, and of the radial distortion on the bending angle at a bent point, has been demonstrated. Individual CNTs are manipulated by limiting the length of the bent CNT and the bending angle. In our approach, multiwalled CNTs of 5-15 nm diameter subjected to bending deformation show no outer-tube breakage under uniform bending and only reversible radial deformation at bending angles of less than 110°. The lateral tip movement is determined by a simple geometric model that relies on the shape of the multiwalled CNT. The model effectively controls the deformed CNT length and bending angle for a given CNT shape. Experimental results demonstrate successful manipulation of randomly dispersed CNTs without visible defects. This pushing approach can be extended to develop a wide range of CNT-based nanodevice applications.
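As a rough illustration of the kind of geometric limit check described above (not the authors' tip-path model), the sketch below estimates the maximum outer-wall strain of a uniformly bent multiwalled CNT from elementary beam bending, ε_max ≈ d/(2R), and flags bends that exceed an assumed strain limit or the 110° bending-angle limit; the diameter, bend radius, and strain threshold are placeholder values.

```python
# Illustrative geometric check only; strain limit and geometry are assumed values.
def cnt_bend_ok(diameter_nm, bend_radius_nm, bend_angle_deg,
                strain_limit=0.05, angle_limit_deg=110.0):
    """Return (max_strain, acceptable) for a uniform circular bend of a CNT,
    using the beam-bending estimate eps_max ~ d / (2 * R)."""
    max_strain = diameter_nm / (2.0 * bend_radius_nm)
    acceptable = (max_strain <= strain_limit) and (bend_angle_deg <= angle_limit_deg)
    return max_strain, acceptable

print(cnt_bend_ok(diameter_nm=10.0, bend_radius_nm=150.0, bend_angle_deg=90.0))
print(cnt_bend_ok(diameter_nm=10.0, bend_radius_nm=60.0, bend_angle_deg=120.0))
```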
A review of covariate selection for non-experimental comparative effectiveness research.
Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler
2013-11-01
This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal and structural relations of variables, we recommend addressing variable selection in a practical way that combines background knowledge with empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select, from a set of a priori variables chosen on the basis of the researcher's knowledge, those to be included in the final analysis, or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.
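As a toy illustration of the empirical end of that continuum (not the high-dimensional proxy-adjustment algorithm itself), the sketch below screens simulated pretreatment covariates by their marginal association with the outcome and keeps the top-ranked ones for adjustment; in practice the review recommends combining such an empirical screen with background knowledge rather than relying on it alone. The data and variable counts are invented.

```python
# Toy empirical covariate screen on simulated data; names and sizes are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 20, 5
X = rng.normal(size=(n, p))                                # pretreatment covariates
y = 0.8 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n)     # simulated outcome

# Marginal absolute correlation with the outcome as a crude relevance score.
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)])
selected = np.argsort(scores)[::-1][:k]
print("covariates selected for adjustment:", sorted(selected.tolist()))
```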
An efficient randomized algorithm for contact-based NMR backbone resonance assignment.
Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal
2006-01-15
Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within the graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to uncover large consistent sets of interactions. Our algorithm has been implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
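To make the 'contact-based' representation concrete, the sketch below builds the kind of graph the abstract describes, with vertices for putative residues and edges for hypothesized NOESY contacts, and runs a toy greedy walk that looks for a chain of sequential contacts. It is only an illustration of the data structure under an invented peak list; it is not the authors' randomized polytope-aggregation algorithm, and the residue labels and edges are hypothetical.

```python
# Toy NOESY contact graph with invented edges; not the authors' algorithm.
from collections import defaultdict

noesy_edges = [("r1", "r2"), ("r2", "r3"), ("r3", "r4"),   # plausible sequential contacts
               ("r2", "r9"), ("r7", "r3")]                 # ambiguous / noise edges

graph = defaultdict(set)
for u, v in noesy_edges:
    graph[u].add(v)
    graph[v].add(u)

def longest_chain(graph, start, max_len=10):
    """Greedy walk through contacts, never revisiting a vertex -- a toy stand-in
    for identifying a consistent sequential pattern in the NOESY graph."""
    chain, current = [start], start
    while len(chain) < max_len:
        nxt = next((v for v in sorted(graph[current]) if v not in chain), None)
        if nxt is None:
            break
        chain.append(nxt)
        current = nxt
    return chain

print(longest_chain(graph, "r1"))   # -> ['r1', 'r2', 'r3', 'r4']
```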