NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
Controllers that use PID parameters require a good tuning method to improve control system performance. PID tuning methods fall into two groups: classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods, and researchers have previously integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, both implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared with the Ziegler-Nichols method. The physical experiments likewise show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in a hydraulic positioning system.
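The tuning loop described above can be sketched as a basic PSO searching PID gains against a simulated step response. Everything below is illustrative: the first-order plant, the ITAE cost, and the gain bounds are assumptions, not the paper's hydraulic model, and the Grey-Taguchi DOE itself is not reproduced. The inertia weight `w` and velocity limit `v_max` are exposed as arguments because they are the two PSO parameters the paper tunes; here they are simply fixed.

```python
import numpy as np

def step_cost(gains, tau=0.5, dt=0.01, t_end=3.0):
    """ITAE cost of a unit-step response: first-order plant under PID control.
    (A stand-in plant -- the paper's hydraulic positioning model is not public.)"""
    kp, ki, kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u) / tau           # Euler step of the plant dynamics
        if not np.isfinite(y):
            return 1e9                     # unstable gains: large penalty
        cost += (k * dt) * abs(e) * dt     # integral of time-weighted |error|
    return cost

def pso(cost_fn, lo, hi, n_particles=20, iters=40, w=0.6, c1=1.5, c2=1.5,
        v_max=0.5, seed=0):
    """Basic PSO. The inertia weight `w` and velocity limit `v_max` are the two
    parameters the paper tunes with its Grey-Taguchi DOE; here they are fixed."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost_fn(p) for p in x])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    -v_max, v_max)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost_fn(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()]
    return gbest, pbest_cost.min()

gains, best = pso(step_cost, lo=np.array([0.0, 0.0, 0.0]),
                  hi=np.array([10.0, 10.0, 1.0]))
```

Tuning `w` and `v_max` with a designed experiment, as the paper does, amounts to running this loop at each DOE setting and comparing the resulting response metrics.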
NASA Astrophysics Data System (ADS)
Khavekar, Rajendra; Vasudevan, Hari; Modi, Bhavik
2017-08-01
Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and Shainin Systems (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a company manufacturing perfume bottle caps (made of acrylic material) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% during trial runs to 8.57%, which is quite low, representing successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but ranked their significance differently. Taguchi methods require more experiments and consume more time than the Shainin System. The Shainin System is less complicated and easier to implement, whereas Taguchi methods are statistically more reliable for optimization of process parameters. Finally, the experiments implied that DoE methods are robust and reliable in implementation, as organizations attempt to improve quality through optimization.
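A minimal sketch of the Taguchi side of such a comparison: a standard L4 orthogonal array, a smaller-the-better signal-to-noise ratio, and main-effect ranking to pick the preferred level of each factor. The array is the usual L4(2^3); the rejection-rate data are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Standard L4 (2^3) orthogonal array: 4 runs, 3 two-level factors coded 0/1.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def main_effects(array, sn):
    """Mean S/N at each level of each factor; a higher mean S/N is preferred."""
    return np.array([[sn[col == lvl].mean() for lvl in (0, 1)] for col in array.T])

# Invented rejection rates (%), two replicates per run -- not the study's data.
y = np.array([[40.0, 38.0], [22.0, 25.0], [30.0, 28.0], [9.0, 8.0]])
sn = np.array([sn_smaller_is_better(run) for run in y])
best_levels = main_effects(L4, sn).argmax(axis=1)   # preferred level per factor
```

Ranking the factors by the spread of their level means (not shown) is what produces the significance order that the two methodologies disagreed on.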
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
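A full-factorial baseline like the regression model RODEo is compared against can be sketched in a few lines: fit an intercept, main effects, and a two-factor interaction to coded factorial runs by least squares. The factors, responses, and numbers below are hypothetical placeholders, not the paper's O2 plasma data.

```python
import numpy as np

# Hypothetical 2^2 full factorial with a center point, factors coded to [-1, +1]:
# x1 = chamber pressure, x2 = RF power; response = measured etch rate (nm/min).
X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0], [0.0, 0.0]])
rate = np.array([12.0, 30.0, 20.0, 44.0, 26.0])

# Design matrix: intercept, two main effects, and the two-factor interaction.
A = np.column_stack([np.ones(len(X)), X, X[:, 0] * X[:, 1]])
(b0, b1, b2, b12), *_ = np.linalg.lstsq(A, rate, rcond=None)

def predict(x1, x2):
    """Etch rate predicted by the fitted factorial regression model."""
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
```

Because the coded factorial columns are mutually orthogonal, each coefficient is just a contrast of the run averages; a physics-informed Bayesian tool like RODEo aims to beat this kind of model with fewer runs.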
Gooding, Owen W
2004-06-01
The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.
Resolution of an Orbital Issue: A Designed Experiment
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.
2011-01-01
Design of Experiments (DOE) is a systematic approach to the investigation of a system or process. A series of structured tests is designed in which planned changes are made to the input variables of the process or system, and the effects of these changes on a pre-defined output are then assessed. DOE is a formal method of maximizing the information gained while minimizing the resources required.
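The "planned changes to input variables" can be made concrete with the simplest design, a full factorial over every combination of factor levels. The factor names and levels below are arbitrary placeholders, not values from any particular study.

```python
from itertools import product

def full_factorial(levels):
    """Every combination of factor levels -- the simplest planned-change design.
    `levels` maps a factor name to its list of settings (names are placeholders)."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

design = full_factorial({"temperature": [150, 180],
                         "pressure": [1.0, 2.0],
                         "time_min": [30, 60]})
# 2 x 2 x 2 = 8 structured runs; each run is one planned setting of all inputs.
```

Fractional, screening, and response-surface designs all trade away some of these runs for economy while keeping the effects of interest estimable.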
Dialectical Inquiry--Does It Deliver? A User Based Research Experience
ERIC Educational Resources Information Center
Seligman, James
2013-01-01
Dialectical Enquiry (DI) as a research method was used in the study of customer/student experience and its management (CEM) in not-for-profit higher education. The DI method is applied to senders and receivers of the customer experience across six English universities to gather real-world data using an imposed dialectical structure and analysis.…
NASA Technical Reports Server (NTRS)
Schulte, Peter Z.; Moore, James W.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
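One common way to systematically vary simulation inputs is a space-filling sample, sketched here as a Latin hypercube over two inputs followed by a toy requirement check. The input ranges, the steady-descent model, and the 11 m/s limit are all assumptions for illustration; the CPAS simulation tools and requirement values are not reproduced.

```python
import numpy as np

def latin_hypercube(n_runs, bounds, seed=0):
    """Space-filling sample: each input gets exactly one point per 1/n slice."""
    rng = np.random.default_rng(seed)
    dims = len(bounds)
    perms = np.stack([rng.permutation(n_runs) for _ in range(dims)], axis=1)
    u = (perms + rng.random((n_runs, dims))) / n_runs   # stratified in [0, 1)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Assumed inputs: drag-coefficient scale factor and suspended mass (kg).
runs = latin_hypercube(10, [(0.85, 1.15), (3800.0, 4200.0)])

def terminal_rate(cd_scale, mass, rho=1.225, cd0=0.8, area=700.0):
    """Toy steady-descent model (illustration only, not the CPAS simulation)."""
    return np.sqrt(2.0 * mass * 9.81 / (rho * cd0 * cd_scale * area))

rates = np.array([terminal_rate(c, m) for c, m in runs])
violations = rates > 11.0    # hypothetical terminal-rate requirement limit, m/s
```

Flagging which sampled input combinations violate the limit is the kind of output a DoE study uses to locate requirement-violating corners of the parameter space.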
Does Active Learning Improve Students' Knowledge of and Attitudes toward Research Methods?
ERIC Educational Resources Information Center
Campisi, Jay; Finn, Kevin E.
2011-01-01
We incorporated an active, collaborative-based research project in our undergraduate Research Methods course for first-year sports medicine majors. Working in small groups, students identified a research question, generated a hypothesis to be tested, designed an experiment, implemented the experiment, analyzed the data, and presented their…
Nuclear Data Activities in Support of the DOE Nuclear Criticality Safety Program
NASA Astrophysics Data System (ADS)
Westfall, R. M.; McKnight, R. D.
2005-05-01
The DOE Nuclear Criticality Safety Program (NCSP) provides the technical infrastructure maintenance for those technologies applied in the evaluation and performance of safe fissionable-material operations in the DOE complex. These technologies include an Analytical Methods element for neutron transport as well as the development of sensitivity/uncertainty methods, the performance of Critical Experiments, evaluation and qualification of experiments as Benchmarks, and a comprehensive Nuclear Data program coordinated by the NCSP Nuclear Data Advisory Group (NDAG). The NDAG gathers and evaluates differential and integral nuclear data, identifies deficiencies, and recommends priorities on meeting DOE criticality safety needs to the NCSP Criticality Safety Support Group (CSSG). Then the NDAG identifies the required resources and unique capabilities for meeting these needs, not only for performing measurements but also for data evaluation with nuclear model codes as well as for data processing for criticality safety applications. The NDAG coordinates effort with the leadership of the National Nuclear Data Center, the Cross Section Evaluation Working Group (CSEWG), and the Working Party on International Evaluation Cooperation (WPEC) of the OECD/NEA Nuclear Science Committee. The overall objective is to expedite the issuance of new data and methods to the DOE criticality safety user. This paper describes these activities in detail, with examples based upon special studies being performed in support of criticality safety for a variety of DOE operations.
Optimizing Mass Spectrometry Analyses: A Tailored Review on the Utility of Design of Experiments.
Hecht, Elizabeth S; Oberg, Ann L; Muddiman, David C
2016-05-01
Mass spectrometry (MS) has emerged as a tool that can analyze nearly all classes of molecules, with its scope rapidly expanding in the areas of post-translational modifications, MS instrumentation, and many others. Yet integration of novel analyte preparatory and purification methods with existing or novel mass spectrometers can introduce new challenges for MS sensitivity. The mechanisms that govern detection by MS are particularly complex and interdependent, including ionization efficiency, ion suppression, and transmission. Performance of both off-line and MS methods can be optimized separately or, when appropriate, simultaneously through statistical designs, broadly referred to as "design of experiments" (DOE). The following review provides a tutorial-like guide into the selection of DOE for MS experiments, the practices for modeling and optimization of response variables, and the available software tools that support DOE implementation in any laboratory. This review comes 3 years after the latest DOE review (Hibbert DB, 2012), which provided a comprehensive overview on the types of designs available and their statistical construction. Since that time, new classes of DOE, such as the definitive screening design, have emerged and new calls have been made for mass spectrometrists to adopt the practice. Rather than exhaustively cover all possible designs, we have highlighted the three most practical DOE classes available to mass spectrometrists. This review further differentiates itself by providing expert recommendations for experimental setup and defining DOE entirely in the context of three case-studies that highlight the utility of different designs to achieve different goals. A step-by-step tutorial is also provided.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
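The Bayesian correction idea can be illustrated with the simplest conjugate-normal update: treat the small-scale response-surface prediction as a prior and pull it toward a few large-scale observations, weighting each by its precision. This is a deliberate simplification; the paper's spline response surfaces, bootstrap reliability analysis, and clustering are not shown, and all numbers are invented.

```python
import numpy as np

def bayes_correct(prior_mean, prior_var, observations, obs_var):
    """Conjugate normal update with known variances: the small-scale prediction
    acts as the prior and is pulled toward the large-scale observations."""
    n = len(observations)
    post_prec = 1.0 / prior_var + n / obs_var
    post_mean = (prior_mean / prior_var + np.sum(observations) / obs_var) / post_prec
    return post_mean, 1.0 / post_prec

# Invented numbers: the small-scale surface predicts 75% dissolution at the
# optimum; three large-scale runs come in a little lower.
mean, var = bayes_correct(75.0, 4.0, [71.5, 72.0, 72.5], 1.0)
# posterior mean lands between prior and data, weighted by their precisions
```

Repeating such an update at each point of the response surface is one way to shift a small-scale design space toward sparse commercial-scale evidence without a full large-scale DoE.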
Ray, Chad A; Patel, Vimal; Shih, Judy; Macaraeg, Chris; Wu, Yuling; Thway, Theingi; Ma, Mark; Lee, Jean W; Desilva, Binodh
2009-02-20
Developing a process that generates robust immunoassays that can be used to support studies with tight timelines is a common challenge for bioanalytical laboratories. Design of experiments (DOE) is a tool that has been used by many industries to optimize processes. The approach is capable of identifying critical factors and their interactions with a minimal number of experiments. The challenge in implementing this tool in the bioanalytical laboratory is to develop a user-friendly approach that scientists can understand and apply. We have successfully addressed these challenges by eliminating the screening design, introducing automation, and applying a simple mathematical approach for the output parameter. A modified central composite design (CCD) was applied to three ligand binding assays. The intra-plate factors selected were coating concentration, detection antibody concentration, and streptavidin-HRP concentration. The inter-plate factors included incubation times for each step. The objective was to maximize the log signal-to-blank ratio (logS/B) of the low standard to the blank. The maximum desirable conditions were determined using JMP 7.0. To verify the validity of the predictions, the predicted logS/B was compared against the observed logS/B during pre-study validation experiments. The three assays were optimized using the multi-factorial DOE. The total error for all three methods was less than 20%, which indicated method robustness. DOE identified interactions in one of the methods. The model predictions for logS/B were within 25% of the observed pre-study validation values for all methods tested. The comparison between the CCD and a hybrid screening design yielded comparable parameter estimates. The user-friendly design enables effective application of multi-factorial DOE to optimize ligand binding assays for therapeutic proteins. The approach allows for identification of interactions between factors, consistency in optimal parameter determination, and reduced method development time.
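The run layout of a central composite design like the one described can be generated directly: 2^k factorial corners, 2k axial points, and center replicates, all in coded units. The mapping of coded levels to actual reagent concentrations is assay-specific and not shown; the factor count of three matches the intra-plate factors in the abstract.

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """CCD in coded units: 2^k factorial corners + 2k axial points + center runs."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25        # rotatable axial distance
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

# Three intra-plate factors as in the abstract (coating, detection antibody,
# streptavidin-HRP), in coded units; concentration ranges are assay-specific.
design = central_composite(3)
```

The axial points are what let a CCD estimate quadratic curvature, which a plain two-level factorial cannot.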
Determination of Copper and Zinc in Brass: Two Basic Methods
ERIC Educational Resources Information Center
Fabre, Paul-Louis; Reynes, Olivier
2010-01-01
In this experiment, the concentrations of copper and zinc in brass are obtained by two methods. This experiment does not require advanced instrumentation, uses inexpensive chemicals, and can be easily carried out during a 3-h upper-level undergraduate laboratory. Pedagogically, the basic concepts of analytical chemistry in solutions, such as pH,…
PEOR--Engaging Students in Demonstrations
ERIC Educational Resources Information Center
Bonello, Charles; Scaife, Jon
2009-01-01
Demonstrations are a core part of science teaching. In 1980 a three-part assessment method using demonstrating was proposed. Known as DOE this consisted of demonstration, observation and explanation. DOE quickly evolved into POE: predict, observe, explain. In the light of experiences with POE and insights from constructivist theory we set out in…
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
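A heavily simplified illustration of one ingredient above: grouping hit/miss observations into flaw-size classes of a candidate width and computing the empirical POD per class. The directed-DOE selection of the optimal class width, which is the core of the patented method, is not reproduced, and the data are invented.

```python
import numpy as np

def pod_by_class(flaw_sizes, hits, class_width):
    """Empirical probability of detection per flaw-size class: group the
    hit/miss observations into classes of the given width, average the hits."""
    flaw_sizes = np.asarray(flaw_sizes, dtype=float)
    hits = np.asarray(hits, dtype=float)
    edges = np.arange(flaw_sizes.min(), flaw_sizes.max() + class_width,
                      class_width)
    idx = np.digitize(flaw_sizes, edges) - 1
    return np.array([hits[idx == i].mean()
                     for i in range(len(edges) - 1) if np.any(idx == i)])

# Invented hit/miss data: flaw size in arbitrary integer units, 1 = detected.
sizes = [2, 3, 5, 6, 8, 9, 11, 12]
found = [0, 0, 0, 1, 1, 1, 1, 1]
pod = pod_by_class(sizes, found, class_width=4)
```

Choosing the class width well is exactly the sensitivity the patented method addresses: too narrow and classes are starved of observations, too wide and the POD transition is smeared out.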
Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng
2018-03-01
Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both feature space and the image plane, and 2) they are distributed sparsely in the spatial space. These inspire us to propose a low-rank solution which effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal and external learning methods are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee performance, thereby simplifying the design of the two learning methods for the solution. Intensive experiments show the proposed solution improves on either single learning method in both qualitative and quantitative assessments. Surprisingly, it shows superior capability on noisy images and outperforms state-of-the-art methods.
Does Enjoyment Accompany Learning? A Student Perceptions Inquiry.
ERIC Educational Resources Information Center
Blai, Boris, Jr.
1979-01-01
Discusses a study conducted at Harcum Junior College, a private, two-year, women's college, to elicit students' perceptions of a variety of learning experiences/teaching methods and of their relative enjoyment levels with regard to these experiences. Includes the questionnaire. (AYC)
NASA Astrophysics Data System (ADS)
Furbish, Dean Russel
The purpose of this study is to examine pragmatist constructivism as a science education referent for adult learners. Specifically, this study seeks to determine whether George Herbert Mead's doctrine, which conflates pragmatist learning theory and philosophy of natural science, might facilitate (a) scientific concept acquisition, (b) learning scientific methods, and (c) preparation of learners for careers in science and science-related areas. A philosophical examination of Mead's doctrine in light of these three criteria has determined that pragmatist constructivism is not a viable science education referent for adult learners. Mead's pragmatist constructivism does not portray scientific knowledge or scientific methods as they are understood by practicing scientists themselves, that is, according to scientific realism. Thus, employment of pragmatist constructivism does not adequately prepare future practitioners for careers in science-related areas. Mead's metaphysics does not allow him to commit to the existence of the unobservable objects of science such as molecular cellulose or mosquito-borne malarial parasites. Mead's anti-realist metaphysics also affects his conception of scientific methods. Because Mead does not commit existentially to the unobservable objects of realist science, Mead's science does not seek to determine what causal role if any the hypothetical objects that scientists routinely posit while theorizing might play in observable phenomena. Instead, constructivist pragmatism promotes subjective epistemology and instrumental methods. The implication for learning science is that students are encouraged to derive scientific concepts based on a combination of personal experience and personal meaningfulness. Contrary to pragmatist constructivism, however, scientific concepts do not arise inductively from subjective experience driven by personal interests. 
The broader implication of this study for adult education is that the philosophically laden claims of constructivist learning theories need to be identified and assessed independently of any empirical support that these learning theories might enjoy. This in turn calls for educational experiences for graduate students of education that incorporate philosophical understanding such that future educators might be able to recognize and weigh the philosophically laden claims of adult learning theories.
Enabling Advanced Wind-Tunnel Research Methods Using the NASA Langley 12-Foot Low Speed Tunnel
NASA Technical Reports Server (NTRS)
Busan, Ronald C.; Rothhaar, Paul M.; Croom, Mark A.; Murphy, Patrick C.; Grafton, Sue B.; O'Neal, Anthony W.
2014-01-01
Design of Experiment (DOE) testing methods were used to gather wind tunnel data characterizing the aerodynamic and propulsion forces and moments acting on a complex vehicle configuration with 10 motor-driven propellers, 9 control surfaces, a tilt wing, and a tilt tail. This paper describes the potential benefits and practical implications of using DOE methods for wind tunnel testing, with an emphasis on describing how it can affect model hardware, facility hardware, and software for control and data acquisition. With up to 23 independent variables (19 model and 2 tunnel) for some vehicle configurations, this recent test also provides an excellent example of using DOE methods to assess critical coupling effects in a reasonable timeframe for complex vehicle configurations. Results for an exploratory test using conventional angle-of-attack sweeps to assess aerodynamic hysteresis are summarized, and DOE results are presented for an exploratory test used to set the data sampling time for the overall test. DOE results are also shown for one production test characterizing normal force in the Cruise mode for the vehicle.
Coscollà, Clara; Navarro-Olivares, Santiago; Martí, Pedro; Yusà, Vicent
2014-02-01
When attempting to discover the important factors and then optimize a response by tuning those factors, experimental design (design of experiments, DoE) provides a powerful suite of statistical methodology: it identifies the significant factors and then optimizes the response with respect to them during method development. In this work, a headspace solid-phase micro-extraction (HS-SPME) method combined with gas chromatography tandem mass spectrometry (GC-MS/MS) for the simultaneous determination of six important organotin compounds, namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT), and triphenyltin (TPhT), has been optimized using a statistical design of experiments (DOE). The analytical method is based on ethylation with NaBEt4 and simultaneous headspace solid-phase micro-extraction of the derivative compounds, followed by GC-MS/MS analysis. The main experimental parameters influencing the extraction efficiency selected for optimization were pre-incubation time, incubation temperature, agitator speed, extraction time, desorption temperature, buffer (pH, concentration, and volume), headspace volume, sample salinity, preparation of standards, ultrasonic time, and desorption time in the injector. The main factors affecting the GC-IT-MS/MS response (excitation voltage, excitation time, ion source temperature, isolation time, and electron energy) were also optimized using the same statistical design of experiments. The proposed method showed good linearity (coefficient of determination R(2) > 0.99) and repeatability (1-25%) for all the compounds under study. The accuracy of the method, measured as the average percentage recovery of the compounds in spiked surface and marine waters, was higher than 70% for all compounds studied. Finally, the optimized methodology was applied to real aqueous samples, enabling the simultaneous determination of all compounds under study in surface and marine water samples obtained from the Valencia region (Spain).
A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems
NASA Astrophysics Data System (ADS)
Chan, Tony; Szeto, Tedd
1994-03-01
We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which is itself a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, a situation that causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS, or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
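For reference, the base method being stabilized can be sketched as follows. This is plain CGS (Sonneveld's transpose-free squaring of BCG), not the composite-step variant: the breakdown checks here simply stop the iteration when a breakdown quantity gets too small, whereas CSCGS would skip over such steps.

```python
import numpy as np

def cgs(A, b, tol=1e-10, max_iter=200):
    """Plain CGS: a transpose-free Krylov solver for nonsymmetric A x = b.
    The composite-step logic that skips near-breakdown steps is NOT implemented;
    these checks just abort when a breakdown quantity becomes too small."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    rtilde = r.copy()                      # fixed shadow residual
    p = np.zeros(n)
    q = np.zeros(n)
    rho_old = 1.0
    for k in range(max_iter):
        rho = rtilde @ r
        if abs(rho) < 1e-30:
            break                          # Lanczos breakdown: rho ~ 0
        if k == 0:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_old
            u = r + beta * q
            p = u + beta * (q + beta * p)
        v = A @ p
        sigma = rtilde @ v
        if abs(sigma) < 1e-30:
            break                          # pivot breakdown: sigma ~ 0
        alpha = rho / sigma
        q = u - alpha * v
        x = x + alpha * (u + q)
        r = r - alpha * (A @ (u + q))
        rho_old = rho
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Small nonsymmetric example system
A = np.array([[4.0, 1.0, 0.0],
              [-1.0, 3.0, 1.0],
              [0.0, -1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = cgs(A, b)
```

Note that only products with A appear, never with its transpose, which is the property CGS inherits from squaring BCG.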
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
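The Expected Improvement criterion at the heart of EGO has a closed form in the surrogate's predictive mean and standard deviation. The sketch below assumes minimization; the surrogate fit itself and the multi-start search over EI that the modification adds are not shown, and the candidate values are invented.

```python
import numpy as np
from math import erf, sqrt

def expected_improvement(mu, sigma, y_best):
    """Closed-form EI for minimization, given the surrogate's predictive mean
    and standard deviation at candidate points and the best observed value."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    ei = np.zeros_like(mu)
    ok = sigma > 0
    z = (y_best - mu[ok]) / sigma[ok]
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))  # standard normal cdf
    ei[ok] = (y_best - mu[ok]) * cdf + sigma[ok] * pdf
    return ei

# Assumed surrogate predictions at three candidate orbits; the next run goes
# where expected improvement over the current best response is largest.
mu = np.array([2.0, 1.2, 0.8])
sd = np.array([0.1, 0.4, 0.05])
next_run = int(np.argmax(expected_improvement(mu, sd, y_best=1.0)))
```

EI balances exploitation (low predicted mean) against exploration (high predictive uncertainty), which is why EGO can find worst-case orbits with relatively few thermal-model runs.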
A practical approach to automate randomized design of experiments for ligand-binding assays.
Tsoi, Jennifer; Patel, Vimal; Shih, Judy
2014-03-01
Design of experiments (DOE) is utilized in optimizing ligand-binding assays by modeling factor effects. To reduce the analyst's workload and the error inherent in DOE, we propose the integration of automated liquid handlers to perform the randomized designs. A randomized design created in statistical software was imported into a custom macro that converts the design into a liquid-handler worklist to automate reagent delivery. An optimized assay was transferred to a contract research organization, resulting in a successful validation. We developed a practical solution for assay optimization by integrating DOE and automation to increase assay robustness and enable successful method transfer. The flexibility of this process allows it to be applied to a variety of assay designs.
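The design-to-worklist conversion can be sketched as a small transformation from design rows to generic (source, destination, volume) transfer records. The column names, volumes, and output schema below are assumptions for illustration; real liquid-handler worklist formats are vendor-specific.

```python
import csv
import io

# Hypothetical randomized design exported from statistical software: each row
# is one optimization run (factor names and volumes are assumptions).
design = [
    {"run": 1, "well": "A1", "coating_ug_ml": 2.0, "detection_dilution": 4000},
    {"run": 2, "well": "A2", "coating_ug_ml": 0.5, "detection_dilution": 1000},
    {"run": 3, "well": "A3", "coating_ug_ml": 1.0, "detection_dilution": 2000},
]

def to_worklist(design, volume_ul=100):
    """Flatten design rows into generic (source, destination, volume) transfer
    records; real worklist formats are vendor-specific."""
    return [{"source": f"coating_{row['coating_ug_ml']}",
             "destination": row["well"],
             "volume_ul": volume_ul} for row in design]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["source", "destination", "volume_ul"])
writer.writeheader()
writer.writerows(to_worklist(design))
worklist_csv = buf.getvalue()
```

Generating the worklist programmatically is what removes the transcription errors inherent in pipetting a randomized (rather than row-ordered) design by hand.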
Does Mixed Methods Research Matter to Understanding Childhood Well-Being?
ERIC Educational Resources Information Center
Jones, Nicola; Sumner, Andy
2009-01-01
There has been a rich debate in development studies on combining research methods in recent years. We explore the particular challenges and opportunities surrounding mixed methods approaches to childhood well-being. We argue that there are additional layers of complexity due to the distinctiveness of children's experiences of deprivation or…
Verification of Space Station Secondary Power System Stability Using Design of Experiment
NASA Technical Reports Server (NTRS)
Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce
1998-01-01
This paper describes analytical methods used in the verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative-resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to the size and complexity of the system, so computer modeling has been extensively used to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, and about various operating scenarios, identifying the ones with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given, and examples applying DoE to the analysis and verification of the ISS power system are provided.
Modeling experimental plasma diagnostics in the FLASH code: Thomson scattering
NASA Astrophysics Data System (ADS)
Weide, Klaus; Flocke, Norbert; Feister, Scott; Tzeferacos, Petros; Lamb, Donald
2017-10-01
Spectral analysis of the Thomson scattering of laser light sent into a plasma provides an experimental method to quantify plasma properties in laser-driven plasma experiments. We have implemented such a synthetic Thomson scattering diagnostic unit in the FLASH code, to emulate the probe-laser propagation, scattering and spectral detection. User-defined laser rays propagate into the FLASH simulation region and experience scattering (change in direction and frequency) based on plasma parameters. After scattering, the rays propagate out of the interaction region and are spectrally characterized. The diagnostic unit can be used either during a physics simulation or in post-processing of simulation results. FLASH is publicly available at flash.uchicago.edu. U.S. DOE NNSA, U.S. DOE NNSA ASC, U.S. DOE Office of Science and NSF.
A Versatile and Inexpensive Enzyme Purification Experiment for Undergraduate Biochemistry Labs.
ERIC Educational Resources Information Center
Farrell, Shawn O.; Choo, Darryl
1989-01-01
Develops an experiment that could be done in two- to three-hour blocks and does not rely on cold room procedures for most of the purification. Describes the materials, methods, and results of the purification of bovine heart lactate dehydrogenase using ammonium sulfate fractionation, dialysis, and separation using affinity chromatography and…
Székely, Gy; Henriques, B; Gil, M; Ramos, A; Alvarez, C
2012-11-01
The present study reports a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method development strategy supported by design of experiments (DoE) for the trace analysis of 4-dimethylaminopyridine (DMAP). Conventional approaches to the development of LC-MS/MS methods usually proceed by trial and error, varying the experimental factors one at a time, which is time-consuming and neglects interactions between factors. The LC factors chosen for the DoE study were flow (F), gradient (G) and injection volume (V(inj)), while cone voltage (E(con)) and collision energy (E(col)) were chosen as MS parameters. All five factors were studied simultaneously. The method was optimized with respect to four responses: separation of peaks (Sep), peak area (A(peak)), length of the analysis (T) and the signal-to-noise ratio (S/N). A quadratic model, namely a central composite face-centred (CCF) design featuring 29 runs, was used instead of a less powerful linear model, since the increase in the number of injections was insignificant. The robustness of the method around the optimal conditions was then evaluated with a new set of DoE experiments, applying a fractional factorial design of resolution III with 11 runs in which additional factors, such as column temperature and quadrupole resolution, were considered. The method utilizes a Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10 min runtime. The drawbacks of derivatization, namely incomplete reaction and time-consuming sample preparation, are avoided, and the change from SIM to MRM mode resulted in increased sensitivity and a lower LOQ.
The DoE method development strategy led to a method allowing the trace analysis of DMAP at 0.5 ng/ml absolute concentration, which corresponds to a 0.1 ppm limit of quantification in 5 mg/ml mometasone furoate glucocorticoid. The method was validated over a linear range of 0.1-10 ppm and presented a %RSD of 0.02% for system precision. Regarding DMAP recovery in mometasone furoate, spiked samples produced recoveries between 83% and 113% in the range of 0.1-2 ppm. Copyright © 2012 Elsevier B.V. All rights reserved.
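The central composite face-centred (CCF) design named above can be enumerated directly in coded units. A minimal sketch; note the paper's 29-run design for 5 factors uses a half-fraction factorial core (16 + 10 axial + 3 centre), whereas this sketch uses the full 2^k core for clarity:

```python
from itertools import product

def ccf_design(k, n_center=3):
    """Face-centred central composite design in coded units (-1, 0, +1):
    2^k factorial corners + 2k face-centre axial points + centre replicates.
    Supports fitting a full quadratic (second-order) response model."""
    corners = [list(p) for p in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for v in (-1, 1):
            pt = [0] * k
            pt[i] = v               # axial point on the face of the cube
            axial.append(pt)
    centers = [[0] * k for _ in range(n_center)]
    return corners + axial + centers

# 5 factors with a full factorial core: 32 + 10 + 3 = 45 runs
runs = ccf_design(5)
```

Because the axial points sit on the cube faces rather than outside it, all runs stay within the -1..+1 factor ranges, which is convenient when factor limits are hard instrument constraints.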
Evaluating an Intelligent Tutoring System for Design Patterns: The DEPTHS Experience
ERIC Educational Resources Information Center
Jeremic, Zoran; Jovanovic, Jelena; Gasevic, Dragan
2009-01-01
The evaluation of intelligent tutoring systems (ITSs) is an important though often neglected stage of ITS development. There are many evaluation methods available but literature does not provide clear guidelines for the selection of evaluation method(s) to be used in a particular context. This paper describes the evaluation study of DEPTHS, an…
The U. S. Department of Energy SARP review training program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauck, C.J.
1988-01-01
In support of its radioactive material packaging certification program, the U.S. Department of Energy (DOE) has established a special training workshop. The purpose of the two-week workshop is to develop skills in reviewing Safety Analysis Reports for Packagings (SARPs) and performing confirmatory analyses. The workshop, conducted by the Lawrence Livermore National Laboratory (LLNL) for DOE, is divided into two parts: methods of review and methods of analysis. The sessions covering methods of review are based on the DOE document, ''Packaging Review Guide for Reviewing Safety Analysis Reports for Packagings'' (PRG). The sessions cover relevant DOE Orders and all areas of review in the applicable Nuclear Regulatory Commission (NRC) Regulatory Guides. The technical areas addressed include structural and thermal behavior, materials, shielding, criticality, and containment. The course sessions on methods of analysis provide hands-on experience in the use of calculational methods and codes for reviewing SARPs. Analytical techniques and computer codes are discussed and sample problems are worked. Homework is assigned each night and over the included weekend; at the conclusion, a comprehensive take-home examination is given requiring six to ten hours to complete.
78 FR 5421 - Proposed Information Collection; Comment Request; NOAA's Teacher at Sea Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... solicits information from interested educators: basic personal information, teaching experience and ideas... does not collect information from this universe of respondents for any other purpose. II. Method of...
ERIC Educational Resources Information Center
Frederiksen, Heidi
2010-01-01
The purpose of this study was to determine whether there was a significant difference between the perceived dispositions in pre-service teachers in urban settings versus non-urban settings. It was also the intent of this study to describe the change in perceived dispositions throughout pre-service teachers' internship experiences. Graduate…
Running DNA Mini-Gels in 20 Minutes or Less Using Sodium Boric Acid Buffer
ERIC Educational Resources Information Center
Jenkins, Kristin P.; Bielec, Barbara
2006-01-01
Providing a biotechnology experience for students can be challenging on several levels, and time is a real constraint for many experiments. Many DNA based methods require a gel electrophoresis step, and although some biotechnology procedures have convenient break points, gel electrophoresis does not. In addition to the time required for loading…
Analysis on Flexural Strength of A36 Mild Steel by Design of Experiment (DOE)
NASA Astrophysics Data System (ADS)
Nurulhuda, A.; Hafizzal, Y.; Izzuddin, MZM; Sulawati, MRN; Rafidah, A.; Suhaila, Y.; Fauziah, AR
2017-08-01
Demand for high-quality, reliable components and materials is increasing, so flexural tests have become a vital test method in both research and manufacturing process development, characterizing in detail a material's ability to withstand deformation under load. There have been few recent studies of the effect of thickness, welding type and joint design on flexural behavior using the DOE approach; this research therefore establishes the flexural strength of mild steel, which is not well documented. Using Design of Experiments (DOE), a full factorial design with two replications was used to study the effects of the important parameters: welding type, thickness and joint design. The output response is the flexural strength value. Randomized experiments were conducted based on a table generated via Minitab software. A normality test was carried out using the Anderson-Darling test and showed a P-value < 0.005; thus the data are not normal, since there is a significant difference between the actual and ideal data. Referring to the ANOVA, only the joint design factor is significant, since its P-value is less than 0.05. From the main effects and interaction plots, the recommended settings were the high level for welding type, the high level for thickness and the low level for joint design. A prediction model was developed through regression to measure the effect on the output response of any change in the parameter settings. In the future, the experiments can be enhanced using Taguchi methods in order to verify the results.
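A full factorial run sheet with two replications and a randomized run order, of the kind the abstract describes generating via Minitab, can be sketched as follows; the level labels are illustrative placeholders, not values from the paper:

```python
import random
from itertools import product

def run_sheet(factors, replicates=2, seed=1):
    """Full factorial run sheet with replication and randomized run order.
    `factors` maps factor name -> list of levels (names illustrative)."""
    levels = [list(v) for v in factors.values()]
    runs = [dict(zip(factors, combo)) for combo in product(*levels)]
    runs = runs * replicates                    # replicate every treatment
    random.Random(seed).shuffle(runs)           # randomize against drift
    return runs

sheet = run_sheet({"welding_type": ["MIG", "TIG"],
                   "thickness":    ["low", "high"],
                   "joint_design": ["butt", "lap"]})
# 2^3 factorial x 2 replicates = 16 randomized runs
```

Randomizing the run order is what lets the subsequent ANOVA separate factor effects from time-ordered nuisance variation (tool wear, ambient temperature, and so on).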
NASA Astrophysics Data System (ADS)
Babanova, Sofia; Artyushkova, Kateryna; Ulyanova, Yevgenia; Singhal, Sameer; Atanassov, Plamen
2014-01-01
Two statistical methods, design of experiments (DOE) and principal component analysis (PCA) are employed to investigate and improve performance of air-breathing gas-diffusional enzymatic electrodes. DOE is utilized as a tool for systematic organization and evaluation of various factors affecting the performance of the composite system. Based on the results from the DOE, an improved cathode is constructed. The current density generated utilizing the improved cathode (755 ± 39 μA cm-2 at 0.3 V vs. Ag/AgCl) is 2-5 times higher than the highest current density previously achieved. Three major factors contributing to the cathode performance are identified: the amount of enzyme, the volume of phosphate buffer used to immobilize the enzyme, and the thickness of the gas-diffusion layer (GDL). PCA is applied as an independent confirmation tool to support conclusions made by DOE and to visualize the contribution of factors in individual cathode configurations.
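The PCA used above as an independent confirmation tool reduces to an SVD of the mean-centred data matrix. A generic sketch, assuming numpy; the data layout (rows as cathode configurations, columns as factors/responses) is an assumption, not the paper's actual data:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the mean-centred data matrix.
    Returns scores (per-sample coordinates), loadings (per-variable
    weights) and the fraction of variance explained per component."""
    Xc = X - X.mean(axis=0)                       # centre each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    loadings = Vt[:n_components]
    explained = (S ** 2) / (S ** 2).sum()
    return scores, loadings, explained
```

Plotting the loadings shows how strongly each factor (enzyme amount, buffer volume, GDL thickness) contributes to each component, which is how PCA can visualize factor contributions per cathode configuration.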
Speckle noise suppression method in holographic display using time multiplexing
NASA Astrophysics Data System (ADS)
Liu, Su-Juan; Wang, Di; Li, Song-Jie; Wang, Qiong-Hua
2017-06-01
We propose a method to suppress the speckle noise in holographic display using time multiplexing. The diffractive optical elements (DOEs) and the subcomputer-generated holograms (sub-CGHs) are generated, respectively. The final image is reconstructed using time multiplexing of the subimages and the final subimages. Meanwhile, the speckle noise of the final image is suppressed by reducing the coherence of the reconstructed light and separating the adjacent image points in space. Compared with the pixel separation method, the experiments demonstrate that the proposed method suppresses the speckle noise effectively with less calculation burden and a lower demand on the frame rate of the spatial light modulator. In addition, as the number of DOEs and sub-CGHs increases, the speckle noise is further suppressed.
ERIC Educational Resources Information Center
Bladt, Don; Murray, Steve; Gitch, Brittany; Trout, Haylee; Liberko, Charles
2011-01-01
This undergraduate organic laboratory exercise involves the sulfuric acid-catalyzed conversion of waste vegetable oil into biodiesel. The acid-catalyzed method, although inherently slower than the base-catalyzed methods, does not suffer from the loss of product or the creation of emulsion producing soap that plagues the base-catalyzed methods when…
Yamashita, Hitoyoshi; Morita, Masamune; Sugiura, Haruka; Fujiwara, Kei; Onoe, Hiroaki; Takinoue, Masahiro
2015-04-01
We report an easy-to-use generation method of biologically compatible monodisperse water-in-oil microdroplets using a glass-capillary-based microfluidic device in a tabletop mini-centrifuge. This device does not require complicated microfabrication; furthermore, only a small sample volume is required in experiments. Therefore, we believe that this method will assist biochemical and cell-biological experiments. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Beyond the Rainbow: Retrieval Practice Leads to Better Spelling than Does Rainbow Writing
ERIC Educational Resources Information Center
Jones, Angela C.; Wardlow, Liane; Pan, Steven C.; Zepeda, Cristina; Heyman, Gail D.; Dunlosky, John; Rickard, Timothy C.
2016-01-01
In three experiments, we compared the effectiveness of rainbow writing and retrieval practice, two common methods of spelling instruction. In experiment 1 (n = 14), second graders completed 2 days of spelling practice, followed by spelling tests 1 day and 5 weeks later. A repeated measures analysis of variance demonstrated that spelling accuracy…
Does the Nature of the Experience Influence Suggestibility? A Study of Children's Event Memory.
ERIC Educational Resources Information Center
Gobbo, Camilla; Mega, Carolina; Pipe, Margaret-Ellen
2002-01-01
Two experiments examined effects of event modality on young children's memory and suggestibility. Findings indicated that 5-year-olds were more accurate than 3-year-olds and those participating in the event were more accurate than those either observing or listening to a narrative. Assessment method, level of event learning, delay to testing, and…
Case Studies for the Statistical Design of Experiments Applied to Powered Rotor Wind Tunnel Tests
NASA Technical Reports Server (NTRS)
Overmeyer, Austin D.; Tanner, Philip E.; Martin, Preston B.; Commo, Sean A.
2015-01-01
The application of statistical Design of Experiments (DOE) to helicopter wind tunnel testing was explored during two powered rotor wind tunnel entries during the summers of 2012 and 2013. These tests were performed jointly by the U.S. Army Aviation Development Directorate Joint Research Program Office and NASA Rotary Wing Project Office, currently the Revolutionary Vertical Lift Project, at NASA Langley Research Center located in Hampton, Virginia. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small portion of the overall tests devoted to developing case studies of the DOE approach as it applies to powered rotor testing. A 16-47 times reduction in the number of data points required was estimated by comparing the DOE approach to conventional testing methods. The average error for the DOE surface response model for the OH-58F test was 0.95 percent and 4.06 percent for drag and download, respectively. The DOE surface response model of the Active Flow Control test captured the drag within 4.1 percent of measured data. The operational differences between the two testing approaches are identified, but did not prevent the safe operation of the powered rotor model throughout the DOE test matrices.
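A DOE surface response model of the kind whose error is quoted above is typically a least-squares fit of a full quadratic polynomial in the coded factors. A two-factor sketch, assuming numpy; the data and factor count are illustrative, not the rotor-test values:

```python
import numpy as np

def fit_response_surface(X, y):
    """Least-squares fit of a full quadratic response surface in two
    factors: y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # model coefficients
    pred = A @ coef                               # fitted responses
    return coef, pred
```

Once fitted, the polynomial predicts the response (e.g. drag or download) anywhere inside the tested envelope, which is why far fewer data points are needed than with one-factor-at-a-time sweeps.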
Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD)
Rhoades, Seth D.
2017-01-01
Introduction: Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses; however, HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and the array of tunable analytical parameters. Objective: Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). Methods: We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. Results: LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. Conclusions: The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method. PMID:28348510
Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm
2014-11-01
experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed form solution, a linearized ADM was proposed for the... a closed form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The
Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD).
Rhoades, Seth D; Weljie, Aalim M
2016-12-01
Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and array of tunable analytical parameters. Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method.
Mark E. Kubiske; Anita R. Foss; Andrew J. Burton; Wendy S. Jones; Keith F. Lewin; John Nagy; Kurt S. Pregitzer; Donald R. Zak; David F. Karnosky
2015-01-01
This publication is an additional source of metadata for data stored and publicly available in the U.S. Department of Agriculture, Forest Service Research Data Archive. Here, we document the development, design, management, and operation of the experiment. In 1998, a team of scientists from the U.S. Forest Service, Department of Energy (DOE), Michigan Technological...
Patel, Rashmin B; Patel, Nilay M; Patel, Mrunali R; Solanki, Ajay B
2017-03-01
The aim of this work was to develop and optimize a robust HPLC method for the separation and quantitation of ambroxol hydrochloride and roxithromycin utilizing Design of Experiment (DoE) approach. The Plackett-Burman design was used to assess the impact of independent variables (concentration of organic phase, mobile phase pH, flow rate and column temperature) on peak resolution, USP tailing and number of plates. A central composite design was utilized to evaluate the main, interaction, and quadratic effects of independent variables on the selected dependent variables. The optimized HPLC method was validated based on ICH Q2R1 guideline and was used to separate and quantify ambroxol hydrochloride and roxithromycin in tablet formulations. The findings showed that DoE approach could be effectively applied to optimize a robust HPLC method for quantification of ambroxol hydrochloride and roxithromycin in tablet formulations. Statistical comparison between results of proposed and reported HPLC method revealed no significant difference; indicating the ability of proposed HPLC method for analysis of ambroxol hydrochloride and roxithromycin in pharmaceutical formulations. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Design optimization of hydraulic turbine draft tube based on CFD and DOE method
NASA Astrophysics Data System (ADS)
Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin
2018-03-01
In order to improve the performance of the hydraulic turbine draft tube during its design process, the draft tube is optimized on a multi-disciplinary collaborative design optimization platform by combining computational fluid dynamics (CFD) and design of experiments (DOE) in this paper. The geometrical design variables are taken from the median section of the draft tube and the cross section of its exit diffuser, and the objective function is to maximize the pressure recovery factor (Cp). The sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performance is evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are carried out. Then the optimal geometrical design variables are determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.
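A plain Latin hypercube sample, the starting point for the optimal Latin hypercube (OLH) sampling mentioned above, can be sketched in pure Python; an OLH would additionally optimize a space-filling criterion (e.g. maximin distance) over many such samples, which is omitted here:

```python
import random

def latin_hypercube(n, k, seed=0):
    """Plain Latin hypercube sample of n points in k dimensions on [0, 1):
    each dimension's range is split into n equal strata and every stratum
    is used exactly once, giving good one-dimensional coverage."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)                       # random stratum order
        cols.append([(p + rng.random()) / n for p in perm])
    return [list(row) for row in zip(*cols)]

samples = latin_hypercube(20, 3)   # e.g. 20 design points in 3 shape variables
```

Each point would then be scaled to the physical ranges of the geometric variables and handed to the CFD solver as one simulation case.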
Helicopter rotor loads using a matched asymptotic expansion technique
NASA Technical Reports Server (NTRS)
Pierce, G. A.; Vaidyanathan, A. R.
1981-01-01
The theoretical basis and computational feasibility of the Van Holten method, and its performance and range of validity by comparison with experiment and other approximate methods, were examined. It is found that within the restrictions of incompressible, potential flow and the assumption of small disturbances, the method does lead to a valid description of the flow. However, the method begins to break down under conditions favoring nonlinear effects such as wake distortion and blade/rotor interaction.
ERIC Educational Resources Information Center
Anderson, Karen; May, Frances A.
2010-01-01
The researchers, a librarian and a faculty member, collaborated to investigate the effectiveness of delivery methods in information literacy instruction. The authors conducted a field experiment to explore how face-to-face, online, and blended learning instructional formats influenced students' retention of information literacy skills. Results are…
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
The affinity propagation (AP) algorithm, a novel clustering method, does not require users to specify the initial cluster centers in advance; it regards all data points equally as potential exemplars (cluster centers) and groups them into clusters purely by the degree of similarity among the data points. But in many cases several areas of different density exist within the same data set, meaning the data set is not distributed homogeneously; in such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest-neighbor distance of each data point; then the AP clustering method is used to group the data points into clusters within each density type. Two experiments are carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The experimental results show that our algorithm obtains groups more accurately than OPTICS and the AP clustering algorithm itself.
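The first step of the extended method, partitioning by nearest-neighbor distance before running AP within each group, can be sketched as below; the two-way split, the threshold value, and the toy points are all illustrative simplifications of the paper's density typing:

```python
def split_by_density(points, threshold):
    """Partition points into a dense and a sparse group by each point's
    nearest-neighbour distance; an exemplar-based clusterer such as AP
    would then be run separately inside each group."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nn = [min(dist(p, q) for j, q in enumerate(points) if j != i)
          for i, p in enumerate(points)]        # nearest-neighbour distances
    dense  = [p for p, d in zip(points, nn) if d <= threshold]
    sparse = [p for p, d in zip(points, nn) if d > threshold]
    return dense, sparse

# a tight cluster near the origin vs. widely spaced points
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (9, 0)]
dense, sparse = split_by_density(pts, 1.0)
```

Running AP separately per density type avoids the failure mode where one global preference value over-fragments dense regions or merges sparse ones.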
Zhang, Fang; Zhu, Jing; Song, Qiang; Yue, Weirui; Liu, Jingdan; Wang, Jian; Situ, Guohai; Huang, Huijie
2015-10-20
In general, Fourier transform lenses are considered as ideal in the design algorithms of diffractive optical elements (DOEs). However, the inherent aberrations of a real Fourier transform lens disturb the far field pattern. The difference between the generated pattern and the expected design will impact the system performance. Therefore, a method for modifying the Fourier spectrum of DOEs without introducing other optical elements to reduce the aberration effect of the Fourier transform lens is proposed. By applying this method, beam shaping performance is improved markedly for the optical system with a real Fourier transform lens. The experiments carried out with a commercial Fourier transform lens give evidence for this method. The method is capable of reducing the system complexity as well as improving its performance.
Segura-Totten, Miriam; Dalman, Nancy E.
2013-01-01
Analysis of the primary literature in the undergraduate curriculum is associated with gains in student learning. In particular, the CREATE (Consider, Read, Elucidate hypotheses, Analyze and interpret the data, and Think of the next Experiment) method is associated with an increase in student critical thinking skills. We adapted the CREATE method within a required cell biology class and compared the learning gains of students using CREATE to those of students involved in less structured literature discussions. We found that while both sets of students had gains in critical thinking, students who used the CREATE method did not show significant improvement over students engaged in a more traditional method for dissecting the literature. Students also reported similar learning gains for both literature discussion methods. Our study suggests that, at least in our educational context, the CREATE method does not lead to higher learning gains than a less structured way of reading primary literature. PMID:24358379
The RATIO method for time-resolved Laue crystallography
Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya
2009-01-01
A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Hasan, Iftekhar; Sozer, Yilmaz
This paper presents the design considerations in cogging torque minimization in two types of transverse flux machines. The machines have a double stator-single rotor configuration with flux concentrating ferrite magnets. One of the machines has pole windings across each leg of an E-Core stator. Another machine has quasi-U-shaped stator cores and a ring winding. The flux in the stator back iron is transverse in both machines. Different methods of cogging torque minimization are investigated. Key methods of cogging torque minimization are identified and used as design variables for optimization using a design of experiments (DOE) based on the Taguchi method. A three-level DOE is performed to reach an optimum solution with minimum simulations. Finite element analysis is used to study the different effects. Two prototypes are being fabricated for experimental verification.
Can Linear Superiorization Be Useful for Linear Optimization Problems?
Censor, Yair
2017-01-01
Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
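The superiorization loop that both records describe, a feasibility-seeking algorithm perturbed by summable steps along the negative target gradient, can be sketched for half-space constraints A x <= b. The step schedule, projection order, and test problem here are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def linsup(A, b, c, x0, sweeps=200, alpha=0.5):
    """Sketch of linear superiorization: sequential orthogonal projections
    onto the half-spaces a_i . x <= b_i (the feasibility-seeking core),
    interleaved with shrinking perturbation steps along -c that steer the
    iterates toward reduced values of the linear target c . x."""
    x = x0.astype(float).copy()
    step = 1.0
    d = -c / np.linalg.norm(c)        # descent direction for c . x
    for _ in range(sweeps):
        x = x + step * d              # superiorization perturbation
        step *= alpha                 # summable step sizes
        for ai, bi in zip(A, b):      # one sweep of half-space projections
            viol = ai @ x - bi
            if viol > 0:
                x = x - viol * ai / (ai @ ai)
    return x
```

Because the perturbation steps are summable, they cannot destroy the convergence of the underlying feasibility-seeking sweeps; they only bias the limit point toward lower target values.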
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Thomas C.; Davies, James F.; Wilson, Kevin R.
A new method for measuring diffusion in the condensed phase of single aerosol particles is proposed and demonstrated. The technique is based on the frequency-dependent response of a binary particle to oscillations in the vapour phase of one of its chemical components. Here, we discuss how this physical situation allows for what would typically be a non-linear boundary value problem to be approximately reduced to a linear boundary value problem. For the case of aqueous aerosol particles, we investigate the accuracy of the closed-form analytical solution to this linear problem through a comparison with the numerical solution of the full problem. Then, using experimentally measured whispering gallery modes to track the frequency-dependent response of aqueous particles to relative humidity oscillations, we determine diffusion coefficients as a function of water activity. The measured diffusion coefficients are compared to previously reported values found using the two common experiments: (i) the analysis of the sorption/desorption of water from a particle after a step-wise change to the surrounding relative humidity and (ii) the isotopic exchange of water between a particle and the vapour phase. The technique presented here has two main strengths: first, when compared to the sorption/desorption experiment, it does not require the numerical evaluation of a boundary value problem during the fitting process as a closed-form expression is available. Second, when compared to the isotope exchange experiment, it does not require the use of labeled molecules. Therefore, the frequency-dependent experiment retains the advantages of these two commonly used methods but does not suffer from their drawbacks.
Preston, Thomas C.; Davies, James F.; Wilson, Kevin R.
2017-01-13
A new method for measuring diffusion in the condensed phase of single aerosol particles is proposed and demonstrated. The technique is based on the frequency-dependent response of a binary particle to oscillations in the vapour phase of one of its chemical components. Here, we discuss how this physical situation allows for what would typically be a non-linear boundary value problem to be approximately reduced to a linear boundary value problem. For the case of aqueous aerosol particles, we investigate the accuracy of the closed-form analytical solution to this linear problem through a comparison with the numerical solution of the full problem. Then, using experimentally measured whispering gallery modes to track the frequency-dependent response of aqueous particles to relative humidity oscillations, we determine diffusion coefficients as a function of water activity. The measured diffusion coefficients are compared to previously reported values found using the two common experiments: (i) the analysis of the sorption/desorption of water from a particle after a step-wise change to the surrounding relative humidity and (ii) the isotopic exchange of water between a particle and the vapour phase. The technique presented here has two main strengths: first, when compared to the sorption/desorption experiment, it does not require the numerical evaluation of a boundary value problem during the fitting process, as a closed-form expression is available. Second, when compared to the isotope exchange experiment, it does not require the use of labeled molecules. Therefore, the frequency-dependent experiment retains the advantages of these two commonly used methods but does not suffer from their drawbacks.
NASA Astrophysics Data System (ADS)
Cho, G. S.
2017-09-01
For performance optimization of Refrigerated Warehouses, design parameters are selected from physical parameters, such as the number of equipment units and aisles and the speeds of forklifts, for ease of modification. This paper provides a comprehensive framework for the system design of Refrigerated Warehouses. We propose a modeling approach aimed at simulation optimization, so as to meet required design specifications, using the Design of Experiments (DOE), and we analyze a simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of Refrigerated Warehouse operations.
The DoE method as an efficient tool for modeling the behavior of monocrystalline Si-PV module
NASA Astrophysics Data System (ADS)
Kessaissia, Fatma Zohra; Zegaoui, Abdallah; Boutoubat, Mohamed; Allouache, Hadj; Aillerie, Michel; Charles, Jean-Pierre
2018-05-01
The objective of this paper is to apply the Design of Experiments (DoE) method to study and obtain a predictive model of any marketed monocrystalline photovoltaic (mc-PV) module. This technique yields a mathematical model that represents the predicted responses as a function of the input factors and experimental data. A DoE model for characterizing and modeling mc-PV module behavior can therefore be obtained by performing only a small set of experimental trials. The DoE model of the mc-PV panel evaluates the predicted maximum power as a function of irradiation and temperature within a bounded domain of study for the inputs. For the mc-PV panel, predictive models for both one-level and two-level designs were developed, taking into account both the main effects and the interaction effects of the considered factors. The DoE method is then implemented in a code developed under Matlab. The code allows us to simulate, characterize, and validate the predictive model of the mc-PV panel. The calculated results were compared to the experimental data, errors were estimated, and the predictive models were validated against the response surfaces. Finally, we conclude that the predictive models reproduce the experimental trials with good accuracy.
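The kind of two-factor predictive model this abstract describes (main effects plus an interaction, fitted from a small set of trials) can be illustrated with a 2^2 factorial sketch. The coded factors and power readings below are invented numbers, not the paper's data:

```python
# Hypothetical 2^2 factorial fit for a PV maximum-power model:
# coded factors x1 = irradiance, x2 = temperature (-1 = low level, +1 = high level).

# corner runs (x1, x2) -> measured Pmax in watts (invented values)
runs = {(-1, -1): 20.0, (+1, -1): 95.0, (-1, +1): 18.0, (+1, +1): 85.0}

# coefficient estimates via the standard factorial contrasts
b0  = sum(runs.values()) / 4                                        # grand mean
b1  = (runs[(1, -1)] + runs[(1, 1)] - runs[(-1, -1)] - runs[(-1, 1)]) / 4
b2  = (runs[(-1, 1)] + runs[(1, 1)] - runs[(-1, -1)] - runs[(1, -1)]) / 4
b12 = (runs[(-1, -1)] + runs[(1, 1)] - runs[(1, -1)] - runs[(-1, 1)]) / 4

def predict(x1, x2):
    """Predicted Pmax at coded factor levels (main effects + interaction)."""
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
```

With four runs and four coefficients the model is saturated, so it reproduces the corner measurements exactly; predictions inside the bounded domain come from the same expression evaluated at fractional coded levels.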
Fringe-projection profilometry based on two-dimensional empirical mode decomposition.
Zheng, Suzhen; Cao, Yiping
2013-11-01
In 3D shape measurement, because deformed fringes often contain low-frequency information degraded by random noise and background intensity information, a new fringe-projection profilometry is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation and retrieve the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.
Pregnancy rates after ewes were treated with estradiol-17beta and oxytocin.
USDA-ARS?s Scientific Manuscript database
Cervical dilation may improve transcervical sheep embryo-transfer procedures, if the cervical dilation method does not reduce pregnancy rates. This experiment was conducted to determine whether estradiol-17beta-oxytocin treatment, which dilates the cervix in luteal-phase ewes, affects pregnancy rat...
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models, including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter settings, the preliminary results for optimal solutions of multiple instances were found efficiently.
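A minimal sketch of the 2-level full factorial branch of this comparison, with three hypothetical GA parameters and an invented stand-in response in place of actually running the GA at each design point:

```python
# 2-level full factorial tuning sketch for three hypothetical GA parameters.
from itertools import product

levels = {"pop": (50, 200), "pc": (0.6, 0.9), "pm": (0.01, 0.1)}

def response(pop, pc, pm):
    # invented stand-in objective (lower is better, e.g. mean total weighted
    # tardiness); a real study would run the GA here and average several replicates
    return 1000 / pop + 50 * (0.9 - pc) + 400 * abs(pm - 0.05)

design = list(product((-1, +1), repeat=len(levels)))   # 2^3 = 8 coded runs
names = list(levels)

def decode(run):
    """Map coded levels (-1/+1) to actual parameter values."""
    return {n: levels[n][0] if c < 0 else levels[n][1] for n, c in zip(names, run)}

results = {run: response(**decode(run)) for run in design}

# main effect of each factor: mean response at +1 minus mean response at -1
effects = {}
for i, n in enumerate(names):
    hi = [y for run, y in results.items() if run[i] > 0]
    lo = [y for run, y in results.items() if run[i] < 0]
    effects[n] = sum(hi) / len(hi) - sum(lo) / len(lo)

# pick, for each factor, the level that lowers the response
best = {n: levels[n][1] if effects[n] < 0 else levels[n][0] for n in names}
```

Unlike OFAT, the full factorial also supports interaction estimates from the same eight runs; here only main effects are used to select a setting.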
Zwissler, Bastian; Koessler, Susanne; Engler, Harald; Schedlowski, Manfred; Kissler, Johanna
2011-03-01
It has been shown that stress affects episodic memory in general, but knowledge about stress effects on memory control processes such as directed forgetting is sparse. Whereas in previous studies item-method directed forgetting was found to be altered in post-traumatic stress disorder patients and abolished for highly arousing negative pictorial stimuli in students, no study so far has investigated the effects of experimentally induced psycho-social stress on this task or examined the role of positive picture stimuli. In the present study, 41 participants performed an item-method directed forgetting experiment while being exposed either to a psychosocial laboratory stressor, the Trier Social Stress Test (TSST), or a cognitively challenging but non-stressful control condition. Neutral and positive pictures were presented as stimuli. As predicted, salivary cortisol level as a biological marker of the human stress response increased only in the TSST group. Still, both groups showed directed forgetting. However, emotional content of the employed stimuli affected memory control: Directed forgetting was intact for neutral pictures whereas it was attenuated for positive ones. This attenuation was primarily due to selective rehearsal improving discrimination accuracy for neutral, but not positive, to-be-remembered items. Results suggest that acute experimentally induced stress does not alter item-method directed forgetting while emotional stimulus content does. Copyright © 2011 Elsevier Inc. All rights reserved.
Cogging Torque Minimization in Transverse Flux Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Hasan, Iftekhar; Sozer, Yilmaz
2017-02-16
This paper presents the design considerations in cogging torque minimization in two types of transverse flux machines. The machines have a double stator-single rotor configuration with flux-concentrating ferrite magnets. One of the machines has pole windings across each leg of an E-core stator. The other machine has quasi-U-shaped stator cores and a ring winding. The flux in the stator back iron is transverse in both machines. Different methods of cogging torque minimization are investigated. Key methods of cogging torque minimization are identified and used as design variables for optimization using a design of experiments (DOE) based on the Taguchi method. A three-level DOE is performed to reach an optimum solution with a minimum number of simulations. Finite element analysis is used to study the different effects. Two prototypes are being fabricated for experimental verification.
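The Taguchi-style three-level DOE mentioned here can be sketched with a standard L9(3^4) orthogonal array, which covers up to four three-level factors in nine runs instead of the 81 of a full factorial. The design variables and the response surface below are invented stand-ins, not the paper's finite element results:

```python
# Standard L9(3^4) orthogonal array (levels coded 0/1/2) and a Taguchi-style
# level-mean analysis for three hypothetical cogging-torque design variables.

L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def response(a, b, c):
    # invented cogging-torque surrogate (lower is better); a real study would
    # evaluate each run with finite element analysis
    return (a - 1) ** 2 + 2 * (b - 2) ** 2 + 0.5 * c ** 2 + 1

results = [response(r[0], r[1], r[2]) for r in L9]

# mean response at each level of each factor; pick the level with the lowest mean
best = []
for col in range(3):
    means = [sum(y for r, y in zip(L9, results) if r[col] == lvl) / 3
             for lvl in range(3)]
    best.append(min(range(3), key=lambda lvl: means[lvl]))
```

Orthogonality (every pair of columns contains each of the nine level pairs exactly once) is what lets per-factor level means stand in for a full sweep when interactions are mild.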
Vadose Zone Transport Field Study: Detailed Test Plan for Simulated Leak Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Anderson L.; Gee, Glendon W.
2000-06-23
This report describes controlled transport experiments at well-instrumented field tests to be conducted during FY 2000 in support of DOE's Vadose Zone Transport Field Study (VZTFS). The VZTFS supports the Groundwater/Vadose Zone Integration Project Science and Technology Initiative. The field tests will improve understanding of field-scale transport and lead to the development or identification of efficient and cost-effective characterization methods. These methods will capture the extent of contaminant plumes using existing steel-cased boreholes. Specific objectives are to 1) identify mechanisms controlling transport processes in soils typical of the hydrogeologic conditions of Hanford's waste disposal sites; 2) reduce uncertainty in conceptual models; 3) develop a detailed and accurate data base of hydraulic and transport parameters for validation of three-dimensional numerical models; and 4) identify and evaluate advanced, cost-effective characterization methods with the potential to assess changing conditions in the vadose zone, particularly as surrogates of currently undetectable high-risk contaminants. Pacific Northwest National Laboratory (PNNL) manages the VZTFS for DOE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This document contains a listing, description, and selected references for documented human radiation experiments sponsored, supported, or performed by the US Department of Energy (DOE) or its predecessors, including the US Energy Research and Development Administration (ERDA), the US Atomic Energy Commission (AEC), the Manhattan Engineer District (MED), and the Office of Scientific Research and Development (OSRD). The list represents work completed by DOE's Office of Human Radiation Experiments (OHRE) through June 1995. The experiment list is available on the Internet via a Home Page on the World Wide Web (http://www.ohre.doe.gov). The Home Page also includes the full text of Human Radiation Experiments: The Department of Energy Roadmap to the Story and the Records (DOE/EH-0445), published in February 1995, to which this publication is a supplement. This list includes experiments released at Secretary O'Leary's June 1994 press conference, as well as additional studies identified during the 12 months that followed. Cross-references are provided for experiments originally released at the press conference; for experiments released as part of The DOE Roadmap; and for experiments published in the 1986 congressional report entitled American Nuclear Guinea Pigs: Three Decades of Radiation Experiments on US Citizens. An appendix of radiation terms is also provided.
User Experience and Heritage Preservation
ERIC Educational Resources Information Center
Orfield, Steven J.; Chapman, J. Wesley; Davis, Nathan
2011-01-01
In considering the heritage preservation of higher education campus buildings, much of the attention gravitates toward issues of selection, cost, accuracy, and value, but the model for most preservation projects does not have a clear method of achieving the best solutions for meeting these targets. Instead, it simply relies on the design team and…
Fundamental Studies on Crashworthiness Design with Uncertainties in the System
2005-01-01
studied; examples include using the Response Surface Methods (RSM) and Design of Experiment (DOE) [2-4]. Space Mapping (SM) is another practical...Exposed to Impact Load Using a Space Mapping Technique,” Struct. Multidisc. Optim., Vol. 27, pp. 411-420 (2004). 6. Mayer, R. R., Kikuchi, N. and Scott
Students' Presentations: Does the Experience Change Their Views?
ERIC Educational Resources Information Center
Sander, Paul; Sanders, Lalage
2005-01-01
Introduction: Research has shown that students do not like student presentations, yet a case can be made for them. This study seeks to understand the effects that presentations have on students. Method: Within an action research framework, two repeated-measures studies were completed, one with students undertaking assessed presentations the other…
Database Selection: One Size Does Not Fit All.
ERIC Educational Resources Information Center
Allison, DeeAnn; McNeil, Beth; Swanson, Signe
2000-01-01
Describes a strategy for selecting a delivery method for electronic resources based on experiences at the University of Nebraska-Lincoln. Considers local conditions, pricing, feature options, hardware costs, and network availability and presents a model for evaluating the decision based on dollar requirements and local issues. (Author/LRW)
ERIC Educational Resources Information Center
Moehle, Matthew R.
2011-01-01
As teacher education programs further emphasize clinical experiences, the role of university student teaching supervisor becomes increasingly important, as does research on supervision practices. Practitioners and researchers in the fields of positive psychology, management, and teacher education have argued that mentors who employ characteristics…
A practical approach for the scale-up of roller compaction process.
Shi, Weixian; Sprockel, Omar L
2016-09-01
An alternative approach to the scale-up of ribbon formation during roller compaction was investigated, which required only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method, which was sufficient to describe the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with a DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it had no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run, which served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale, calibrated with one experimental point at the commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.
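The one-batch transfer idea, keeping the pilot-scale model and re-anchoring it with a single commercial-scale run, can be sketched as an intercept recalibration. The model form, coefficients, and measurements below are invented for illustration, not the paper's regression:

```python
# Hedged sketch: keep the pilot-scale slopes, re-anchor the intercept with one
# run on the commercial machine (all numbers invented).

pilot = {"b0": 0.90, "b_pressure": 0.012, "b_gap": -0.050}  # ribbon density model

def predict(model, pressure, gap):
    """Predicted ribbon density for given roll pressure and gap."""
    return model["b0"] + model["b_pressure"] * pressure + model["b_gap"] * gap

# single calibration run on the commercial compactor (invented measurement)
cal_pressure, cal_gap, cal_density = 60.0, 3.0, 1.52

# shift the intercept so the model passes through the calibration point,
# leaving the pilot-scale parameter sensitivities unchanged
commercial = dict(pilot)
commercial["b0"] = pilot["b0"] + (cal_density - predict(pilot, cal_pressure, cal_gap))
```

The adapted model reproduces the calibration point exactly; whether a single offset is enough, rather than re-fitting slopes, is exactly what the paper's mapping study on the WP200 is checking.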
Edge enhancement of color images using a digital micromirror device.
Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A
2012-06-01
A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the input image. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment. The proposed method could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
Gaggioli, Andrea
2012-01-01
What does one feel when one uses virtual reality? How does this experience differ from the experience associated with "real life" activities and situations? To answer these questions, we used the Experience Sampling Method (ESM), a procedure that allows researchers to investigate the daily fluctuations in the quality of experience through on-line self-reports that participants fill out during daily life. The investigation consisted of a one-week ESM observation (N = 42). During this week, participants underwent two virtual reality sessions: immediately after the exposure to virtual environments, they were asked to complete an ESM report. For data analysis, experiential variables were aggregated into four dimensions: Mood, Engagement, Confidence, and Intrinsic Motivation. Findings showed that virtual experience is characterized by a specific configuration, which comprises significantly positive values for affective and cognitive components. In particular, positive scores for Mood suggest that participants perceived VR as an intrinsically pleasurable activity, while positive values of Engagement indicate that the use of VR and the experimental task provided valid opportunities for action and high skill investment. Furthermore, results showed that virtual experience is associated with Flow, a state of consciousness characterized by a narrowed focus of attention, deep concentration, positive affect, and intrinsic reward. Implications for VR research and practice are discussed.
Optimal design of gene knockout experiments for gene regulatory network inference
Ud-Dean, S. M. Minhaz; Gunawan, Rudiyanto
2016-01-01
Motivation: We addressed the problem of inferring a gene regulatory network (GRN) from gene expression data of knockout (KO) experiments. This inference is known to be underdetermined and the GRN is not identifiable from data. Past studies have shown that suboptimal design of experiments (DOE) contributes significantly to the identifiability issue of biological networks, including GRNs. However, optimizing DOE has received much less attention than developing methods for GRN inference. Results: We developed the REDuction of UnCertain Edges (REDUCE) algorithm for finding the optimal gene KO experiment for inferring directed graphs (digraphs) of GRNs. REDUCE employed ensemble inference to define uncertain gene interactions that could not be verified by prior data. The optimal experiment corresponds to the maximum number of uncertain interactions that could be verified by the resulting data. For this purpose, we introduced the concept of the edge separatoid, which gives a list of nodes (genes) that upon their removal would allow the verification of a particular gene interaction. Finally, we proposed a procedure that iterates over performing KO experiments, ensemble update, and optimal DOE. The case studies, including the inference of the Escherichia coli GRN and the DREAM 4 100-gene GRNs, demonstrated the efficacy of the iterative GRN inference. In comparison to systematic KOs, REDUCE could provide a much higher information return per gene KO experiment and consequently more accurate GRN estimates. Conclusions: REDUCE represents an enabling tool for tackling the underdetermined GRN inference. Along with advances in gene deletion and automation technology, the iterative procedure brings an efficient and fully automated GRN inference closer to reality. Availability and implementation: MATLAB and Python scripts of REDUCE are available on www.cabsel.ethz.ch/tools/REDUCE. Contact: rudi.gunawan@chem.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online.
PMID:26568633
Attention and emotion: does rating emotion alter neural responses to amusing and sad films?
Hutcherson, C A; Goldin, P R; Ochsner, K N; Gabrieli, J D; Barrett, L Feldman; Gross, J J
2005-09-01
Functional neuroimaging of affective systems often includes subjective self-report of the affective response. Although self-report provides valuable information regarding participants' affective responses, prior studies have raised the concern that the attentional demands of reporting on affective experience may obscure neural activations reflecting more natural affective responses. In the present study, we used potent emotion-eliciting amusing and sad films, employed a novel method of continuous self-reported rating of emotion experience, and compared the impact of rating with passive viewing of amusing and sad films. Subjective rating of ongoing emotional responses did not decrease either self-reported experience of emotion or neural activations relative to passive viewing in any brain regions. Rating, relative to passive viewing, produced increased activity in anterior cingulate, insula, and several other areas associated with introspection of emotion. These results support the use of continuous emotion measures and emotionally engaging films to study the dynamics of emotional responding and suggest that there may be some contexts in which the attention to emotion induced by reporting emotion experience does not disrupt emotional responding either behaviorally or neurally.
AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING
Sharif, Behzad; Bresler, Yoram
2013-01-01
We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159
Development of a High-Throughput Ion-Exchange Resin Characterization Workflow.
Liu, Chun; Dermody, Daniel; Harris, Keith; Boomgaard, Thomas; Sweeney, Jeff; Gisch, Daryl; Goltz, Bob
2017-06-12
A novel high-throughput (HTR) ion-exchange (IEX) resin workflow has been developed for characterizing the ion-exchange equilibrium of commercial and experimental IEX resins across a range of applications in which the water environment differs from site to site. Because of its much higher throughput, design of experiments (DOE) methodology can be easily applied for studying the effects of multiple factors on resin performance. Two case studies are presented to illustrate the efficacy of the combined HTR workflow and DOE method. In case study one, a series of anion-exchange resins were screened for selective removal of NO3- and NO2- in water environments consisting of multiple other anions, varied pH, and ionic strength. A response surface model (RSM) is developed to statistically correlate the resin performance with the water composition and predict the best resin candidate. In case study two, the same HTR workflow and DOE method were applied to screening different cation-exchange resins for the selective removal of Mg2+, Ca2+, and Ba2+ from high total dissolved salt (TDS) water. A master DOE model including all of the cation-exchange resins is created to predict divalent cation removal by different IEX resins under specific conditions, from which the best resin candidates can be identified. The successful adoption of the HTR workflow and DOE method for studying the ion exchange of IEX resins can significantly reduce the resources and time needed to address industry and application needs.
Low-cost Active Structural Control Space Experiment (LASC)
NASA Technical Reports Server (NTRS)
Robinett, Rush; Bukley, Angelia P.
1992-01-01
The DOE Lab Director's Conference identified the need for the DOE National Laboratories to actively and aggressively pursue ways to apply DOE technology to problems of national need. Space structures are key elements of DOD and NASA space systems and a space technology area in which DOE can have a significant impact. LASC is a joint agency space technology experiment (DOD Phillips, NASA Marshall, and DOE Sandia). The topics are presented in viewgraph form and include the following: the phase 4 investigator testbed; control of large flexible structures in orbit; INFLEX; controls, astrophysics, and structures experiments in space; SARSAT; and LASC mission objectives.
Measurement of rolling friction by a damped oscillator
NASA Technical Reports Server (NTRS)
Dayan, M.; Buckley, D. H.
1983-01-01
An experimental method for measuring rolling friction is proposed. The method is mechanically simple. It is based on an oscillator in a uniform magnetic field and does not involve any mechanical forces except for the measured friction. The measured pickup voltage is Fourier analyzed and yields the friction spectral response. The proposed experiment is not tailored for a particular case. Instead, various modes of operation, suitable to different experimental conditions, are discussed.
Estimation of gene induction enables a relevance-based ranking of gene sets.
Bartholomé, Kilian; Kreutz, Clemens; Timmer, Jens
2009-07-01
In order to handle and interpret the vast amounts of data produced by microarray experiments, the analysis of sets of genes with a common biological functionality has been shown to be advantageous compared to single-gene analyses. Some statistical methods have been proposed to analyse the differential gene expression of gene sets in microarray experiments. However, most of these methods either require threshold values to be chosen for the analysis, or they need some reference set for the determination of significance. We present a method that estimates the number of differentially expressed genes in a gene set without requiring a threshold value for the significance of genes. The method is self-contained (i.e., it does not require a reference set for comparison). In contrast to other methods, which are focused on significance, our approach emphasizes the relevance of the regulation of gene sets. The presented method measures the degree of regulation of a gene set and is a useful tool to compare the induction of different gene sets and place the results of microarray experiments into the biological context. An R-package is available.
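One way to estimate the number of regulated genes in a set without a significance threshold is to exploit the fact that p-values of unregulated genes are approximately Uniform(0, 1); the excess of small p-values then estimates the count. This is a generic uniform-excess (Storey-type) sketch with invented numbers, not necessarily the estimator used in the paper:

```python
# Threshold-free estimate of the number of regulated genes in a gene set:
# the fraction of p-values above lam estimates the null proportion pi0,
# since null p-values are ~Uniform(0,1); the rest is the regulated excess.

def estimate_num_regulated(pvals, lam=0.5):
    m = len(pvals)
    pi0 = min(1.0, sum(p > lam for p in pvals) / (m * (1.0 - lam)))
    return m * (1.0 - pi0)

# toy gene set: 30 clearly regulated genes (tiny p-values) plus 70 null genes
# spread evenly over (0, 1)
toy = [0.001] * 30 + [(i + 0.5) / 70 for i in range(70)]
est = estimate_num_regulated(toy)
```

No gene is individually declared significant; the estimate is a property of the whole set, which matches the abstract's emphasis on the degree of regulation rather than per-gene significance calls.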
Does Unit Analysis Help Students Construct Equations?
ERIC Educational Resources Information Center
Reed, Stephen K.
2006-01-01
Previous research has shown that students construct equations for word problems in which many of the terms have no referents. Experiment 1 attempted to eliminate some of these errors by providing instruction on canceling units. The failure of this method was attributed to the cognitive overload (Sweller, 2003) imposed by adding units to the…
ENGLISH LANGUAGE PROGRAMS FOR THE SEVENTIES.
ERIC Educational Resources Information Center
HOOK, J.N.
It is now the year 1976, and change in our English language teaching has been affected by a modern American revolution. As English becomes more universal, so does the oral-aural method of teaching it. In United States classrooms, children practice orally those patterns they need, experiment with word order, and gain a knowledge of sentence…
Whose Standards? (B) Reaching the Assessment Puzzle
ERIC Educational Resources Information Center
Polimeni, John M.; Iorgulescu, Raluca I.
2009-01-01
Love it or hate it, assessment has become the new reality on college and university campuses. Although measuring student achievement of course outcomes is not an easy task, assessment does not need to be a complex or painful experience. This paper describes the methods used to assess student achievement of the stated course outcomes in…
Grewe, Oliver; Nagel, Frederik; Kopiez, Reinhard; Altenmüller, Eckart
2005-12-01
Music can arouse ecstatic "chill" experiences defined as "goose pimples" and as "shivers down the spine." We recorded chills both via subjects' self-reports and physiological reactions, finding that they do not occur in a reflex-like manner, but as a result of attentive, experienced, and conscious musical enjoyment.
Visual Literacy: Does It Enhance Leadership Abilities Required for the Twenty-First Century?
ERIC Educational Resources Information Center
Bintz, Carol
2016-01-01
The twenty-first century hosts a well-established global economy, where leaders are required to have increasingly complex skills that include creativity, innovation, vision, relatability, critical thinking and well-honed communications methods. The experience gained by learning to be visually literate includes the ability to see, observe, analyze,…
Inference of missing data and chemical model parameters using experimental statistics
NASA Astrophysics Data System (ADS)
Casey, Tiernan; Najm, Habib
2017-11-01
A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, for the US DOE NNSA under contract DE-NA-0003525.
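The accept-if-consistent-with-reported-statistics step can be illustrated with a bare-bones ABC rejection sampler. The priors, the single "reported" rate and uncertainty, and the acceptance rule below are invented placeholders, not the paper's H2-O2 data:

```python
# ABC-rejection sketch for Arrhenius parameters (logA, Ea): propose from priors,
# keep samples whose implied rate agrees with a reported value +/- uncertainty.
import math
import random

random.seed(0)
R = 8.314  # gas constant, J/(mol K)

# "reported" nominal rate and uncertainty at one temperature (invented numbers)
T_obs, k_obs, k_sd = 1000.0, 2.0e3, 2.0e2

def rate(logA, Ea, T):
    """Arrhenius rate k = A * exp(-Ea / (R T)) with A parameterized as logA."""
    return math.exp(logA) * math.exp(-Ea / (R * T))

accepted = []
for _ in range(20000):
    logA = random.uniform(10.0, 20.0)   # prior on log pre-exponential factor
    Ea = random.uniform(5.0e4, 1.5e5)   # prior on activation energy, J/mol
    if abs(rate(logA, Ea, T_obs) - k_obs) < k_sd:  # consistent with the statistics?
        accepted.append((logA, Ea))
```

The accepted (logA, Ea) pairs are strongly correlated, since many combinations reproduce the same rate at one temperature; that correlated joint density, rather than independent error bars, is what the abstract's procedure is after.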
Nucleic acid hybridization with RNA immobilized on filter paper.
NASA Technical Reports Server (NTRS)
Saxinger, W. C.; Ponnamperuma, C.; Gillespie, D.
1972-01-01
RNA has been immobilized in a manner suitable for use in molecular hybridization experiments with dissolved RNA or DNA by a nonaqueous solid-phase reaction with carbonyldiimidazole and RNA 'dry coated' on cellulose or, preferably, on previously activated phosphocellulose filters. Immobilization of RNA does not appear to alter its chemical character or cause it to acquire affinity for unspecific RNA or DNA. The versatility and efficiency of this method make it potentially attractive for use in routine analytical or preparative hybridization experiments, among other applications.
Knöspel, Fanny; Schindler, Rudolf K; Lübberstedt, Marc; Petzolt, Stephanie; Gerlach, Jörg C; Zeilinger, Katrin
2010-12-01
The in vitro culture behaviour of embryonic stem cells (ESC) is strongly influenced by the culture conditions. Current culture media for expansion of ESC contain some undefined substances. Considering potential clinical translation work with such cells, the use of defined media is desirable. We have used Design of Experiments (DoE) methods to investigate the composition of a serum-free chemically defined culture medium for expansion of mouse embryonic stem cells (mESC). Factor screening analysis according to Plackett-Burman revealed that insulin and leukaemia inhibitory factor (LIF) had a significant positive influence on the proliferation activity of the cells, while zinc and L-cysteine reduced cell growth. Further analysis using a minimum run resolution IV (MinRes IV) design indicates that, following factor adjustment, LIF becomes the main factor for the survival and proliferation of mESC. In conclusion, DoE screening assays are applicable for developing and refining culture media for stem cells and could also be employed to optimize culture media for human embryonic stem cells (hESC).
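Plackett-Burman screening, as used above, estimates main effects from a small two-level orthogonal design. The sketch below uses the 8-run Hadamard (Sylvester) construction with a hypothetical growth response; the study's actual design size, factors, and responses differ:

```python
def sylvester_hadamard(order):
    """Hadamard matrix of size 2^k via the Sylvester construction."""
    H = [[1]]
    while len(H) < order:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

# Dropping the all-ones first column leaves a two-level, 8-run
# orthogonal screening design for up to 7 factors.
design = [row[1:] for row in sylvester_hadamard(8)]

def main_effects(design, response):
    """Main effect of each factor: mean response at +1 minus at -1."""
    n = len(design)
    return [sum(design[i][j] * response[i] for i in range(n)) * 2.0 / n
            for j in range(len(design[0]))]

# Hypothetical proliferation response dominated by factor 0 (standing
# in for, e.g., LIF level), with a smaller negative factor 2.
y = [10 + 3 * row[0] - 1 * row[2] for row in design]
effects = main_effects(design, y)
```

Because the columns are orthogonal, each factor's effect is recovered independently of the others.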
Relaxing decision criteria does not improve recognition memory in amnesic patients.
Reber, P J; Squire, L R
1999-05-01
An important question about the organization of memory is whether information available in non-declarative memory can contribute to performance on tasks of declarative memory. Dorfman, Kihlstrom, Cork, and Misiaszek (1995) described a circumstance in which the phenomenon of priming might benefit recognition memory performance. They reported that patients receiving electroconvulsive therapy improved their recognition performance when they were encouraged to relax their criteria for endorsing test items as familiar. It was suggested that priming improved recognition by making information available about the familiarity of test items. In three experiments, we sought unsuccessfully to reproduce this phenomenon in amnesic patients. In Experiment 3, we reproduced the methods and procedure used by Dorfman et al. but still found no evidence for improved recognition memory following the manipulation of decision criteria. Although negative findings have their own limitations, our findings suggest that the phenomenon reported by Dorfman et al. does not generalize well. Our results agree with several recent findings that suggest that priming is independent of recognition memory and does not contribute to recognition memory scores.
A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)
NASA Technical Reports Server (NTRS)
Rhew, Ray D.; Parker, Peter A.
2007-01-01
Design of Experiments (DOE) techniques were applied to the Launch Abort System (LAS) of the NASA Crew Exploration Vehicle (CEV) parametric geometry Computational Fluid Dynamics (CFD) study to efficiently identify and rank the primary contributors to the integrated drag over the vehicle's ascent trajectory. Typical approaches to these types of activities involve developing all possible combinations of geometries changing one variable at a time, analyzing them with CFD, and predicting the main effects on an aerodynamic parameter, which in this application is integrated drag. The original plan for the LAS study team was to generate and analyze more than 1000 geometry configurations to study 7 geometric parameters. By utilizing DOE techniques the number of geometries was strategically reduced to 84. In addition, critical information on interaction effects among the geometric factors was identified that would not have been possible with the traditional technique. Therefore, the study was performed in less time and provided more information on the geometric main effects and interactions impacting drag generated by the LAS. This paper discusses the methods utilized to develop the experimental design, its execution, and the data analysis.
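The abstract does not state which DOE construction reduced 1000+ candidate geometries to 84 runs, but fractional factorial designs are the standard tool for this kind of reduction while preserving interaction information. A hedged sketch of a 2^(7-3) resolution-IV design for seven two-level factors (16 runs instead of 128):

```python
from itertools import product

def fractional_factorial_2_7_3():
    """2^(7-3) resolution-IV design: four base factors A..D in a full
    factorial, three generated factors E=ABC, F=ABD, G=ACD.
    16 runs instead of the 128 a full 2^7 factorial would need."""
    return [(a, b, c, d, a * b * c, a * b * d, a * c * d)
            for a, b, c, d in product((-1, 1), repeat=4)]

design = fractional_factorial_2_7_3()
```

Resolution IV keeps main effects free of two-factor-interaction aliasing, which is the kind of interaction information the abstract says the traditional one-variable-at-a-time approach would have missed.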
NSRD-15: Computational Capability to Substantiate DOE-HDBK-3010 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Bignell, John; Dingreville, Remi Philippe Michel
Safety basis analysts throughout the U.S. Department of Energy (DOE) complex rely heavily on the information provided in the DOE Handbook, DOE-HDBK-3010, Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities, to determine radionuclide source terms from postulated accident scenarios. In calculating source terms, analysts tend to use the DOE Handbook’s bounding values on airborne release fractions (ARFs) and respirable fractions (RFs) for various categories of insults (representing potential accident release categories). This is typically due to both time constraints and the avoidance of regulatory critique. Unfortunately, these bounding ARFs/RFs represent extremely conservative values. Moreover, they were derived from very limited small-scale bench/laboratory experiments and/or from engineering judgment. Thus, the basis for the data may not be representative of the actual unique accident conditions and configurations being evaluated. The goal of this research is to develop a more accurate and defensible method to determine bounding values for the DOE Handbook using state-of-the-art multi-physics-based computer codes.
Temporal and spatial temperature measurement in insulator-based dielectrophoretic devices.
Nakano, Asuka; Luo, Jinghui; Ros, Alexandra
2014-07-01
Insulator-based dielectrophoresis is a relatively new analytical technique with a large potential for a number of applications, such as sorting, separation, purification, fractionation, and preconcentration. The application of insulator-based dielectrophoresis (iDEP) to biological samples, however, requires precise control of the microenvironment with temporal and spatial resolution. Temperature variations during an iDEP experiment are a critical aspect, since Joule heating could lead to various detrimental effects hampering reproducibility. Additionally, Joule heating can potentially induce thermal flow and, more importantly, can degrade biomolecules and other biological species. Here, we investigate temperature variations in iDEP devices experimentally, employing the thermosensitive dye Rhodamine B (RhB), and compare the measured results with numerical simulations. We performed the temperature measurement experiments over a buffer conductivity range commonly used for iDEP applications under applied electric potentials. To this aim, we employed an in-channel measurement method and an alternative method employing a thin film located slightly below the iDEP channel. We found that the temperature does not deviate significantly from room temperature at 100 μS/cm with up to 3000 V applied, such as in protein iDEP experiments. At a conductivity of 300 μS/cm, as previously used for mitochondria iDEP experiments at 3000 V, the temperature never exceeds 34 °C. This observation suggests that temperature effects for iDEP of proteins and mitochondria under these conditions are marginal. However, at larger conductivities (1 mS/cm), and only at 3000 V applied, temperature increases were significant, reaching a regime in which degradation is likely to occur. Moreover, the thin film method yielded lower temperature increases, which was also confirmed by numerical simulations.
We thus conclude that the thin film method is preferable, since it provides closer agreement with numerical simulations and does not depend on the iDEP channel material. Overall, our study provides a thorough comparison of two experimental techniques for direct temperature measurement, which can be adapted to a variety of iDEP applications in the future. The good agreement between simulation and experiment will also allow one to assess temperature variations in iDEP devices prior to experiments.
2015-01-01
Procedure. The simulated annealing (SA) algorithm is a well-known local search metaheuristic used to address discrete, continuous, and multiobjective... design of experiments (DOE) to tune the parameters of the optimization algorithm. Section 5 shows the results of the case study. Finally, concluding... metaheuristic. The proposed method is broken down into two phases. Phase I consists of a Monte Carlo simulation to obtain the simulated percentage of failure
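The snippet above pairs simulated annealing with DOE-based tuning of its parameters. A minimal SA sketch in which the initial temperature and cooling rate are exactly the knobs such a DOE would tune; the objective function is an illustrative stand-in:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=20000, seed=1):
    """Minimal SA sketch: Gaussian random-walk proposals, Metropolis
    acceptance, geometric cooling. t0, cooling (and the proposal
    width) are the parameters a DOE-based tuning would set."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = f(cand)
        # Accept improvements always; accept uphill moves with
        # probability exp(-(fc - fx) / t).
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Multimodal 1-D test objective; global minimum near x ~ -0.3.
best_x, best_f = simulated_annealing(
    lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0, x0=4.0)
```

Early high temperatures let the walk escape local minima; the geometric schedule makes the search increasingly greedy.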
Complete solution of electronic excitation and ionization in electron-hydrogen molecule scattering
Zammit, Mark C.; Savage, Jeremy S.; Fursa, Dmitry V.; ...
2016-06-08
The convergent close-coupling method has been used to solve the electron-hydrogen molecule scattering problem in the fixed-nuclei approximation. Excellent agreement with experiment is found for the grand total, elastic, electronic-excitation, and total ionization cross sections from very low to very high energies. This shows that, for the electronic degrees of freedom, the method provides a complete treatment of electron scattering on molecules as it does for atoms.
Stones, H H
1934-04-01
(1) The reaction of cementum and its adjoining tissues to induced pathological conditions associated with the gingival sulcus is described. (2) After subjecting the sulcus to interference, its histological appearance is compared with that of definite parodontal disease. (3) Various methods were adopted for these experiments, which were performed on monkeys. (4) Artificial pockets were produced by detaching the subgingival epithelium and underlying connective tissue from the cementum. (a) Cementum is easily removed accidentally when scraping monkeys' teeth. (b) Reattachment of connective tissues to cementum is effected, but is usually incomplete. (c) Epithelium always firmly reunites with cementum. (d) The artificial sulcus, which is usually deeper than normal, does not show, microscopically, the same pathological changes as in parodontal disease. (5) In other experiments, in addition to deepening the sulcus, the cementum lining the pockets was also removed, leaving denuded dentine in contact with the connective tissue. A similar condition was achieved by another method in which a dental bur was inserted between two teeth below the gum margin. (a) The gingival epithelium is capable of forming a weak attachment to the dentine, though this does not usually occur. It always proliferates down and unites with the nearest layer of cementum. It seems to have a peculiar affinity for this tissue. (b) Underlying connective tissue does not usually unite with the dentine. When this happens it is effected by the regeneration of cementum, this new tissue being lined by new cementoblasts. (c) The width of the periodontal membrane, which was increased by the experiment, is reduced to a more normal level by deposition of new alveolar bone, and to a lesser extent by regeneration of cementum. (d) In this series of experiments the artificial pocket is permanent and somewhat resembles that of parodontal disease.
This is probably due not so much to the injury as to its effects, which create a space that forms an area of chronic stagnation.
Carvalho, Rimenys J; Cruz, Thayana A
2018-01-01
High-throughput screening (HTS) systems have emerged as important tools for fast and low-cost evaluation of several conditions at once, since they require small quantities of material and sample volumes. These characteristics are extremely valuable for experiments with large numbers of variables, enabling the application of design of experiments (DoE) strategies or simple experimental planning approaches. Once the capacity of HTS systems to mimic chromatographic purification steps was established, several studies were performed successfully, including scale-down purification. Here, we propose a method for studying different purification conditions that can be used for any recombinant protein, including complex and glycosylated proteins, using low-binding filter microplates.
School-Age Children Talk about Chess: Does Knowledge Drive Syntactic Complexity?
ERIC Educational Resources Information Center
Nippold, Marilyn A.
2009-01-01
Purpose: This study examined language productivity and syntactic complexity in school-age children in relation to their knowledge of the topic of discussion--the game of chess. Method: Children (N = 32; mean age = 10;11 [years;months]) who played chess volunteered to be interviewed by an adult examiner who had little or no experience playing…
USDA-ARS?s Scientific Manuscript database
BACKGROUND: Blacks in the U.S. experience among the highest reported prevalence of hypertension (44%) worldwide. However, this does not consider the heterogeneity of Blacks within the U.S., particularly comparing US-born to long-standing or recent immigrants. METHODS: We assessed the prevalence of h...
ERIC Educational Resources Information Center
Feinberg, Melanie; Bullard, Julia; Carter, Daniel
2013-01-01
Introduction: Star and Bowker describe the residual as what does not fit into a category system and as an inevitable byproduct of classification. In this project, we explore what happens when we attempt to give prominence to the residual instead of minimizing it. Methods: The three authors created three "transformations" of a small…
The Educators and the Curriculum: Stories of Progressive Education in the 21st Century
ERIC Educational Resources Information Center
Read, Sally J. W.
2013-01-01
This study, inspired by phenomenological and narrative methods, explored the question, "What does it mean to be a progressive educator in the 21st century?" Rather than a prescriptive piece about what progressive educators should or should not do, this study uses the experiences of three self-identified progressive educators to build a…
Are 20th-Century Methods of Teaching Applicable in the 21st Century?
ERIC Educational Resources Information Center
Bassendowski, Sandra Leigh; Petrucka, Pammla
2013-01-01
The image of students passively absorbing information from an educator who is lecturing from behind a podium does not reflect the current scope and dimension of higher education. There are now tools of technology that can be used to create learning experiences to actively and meaningfully "pull" students into course content. The author…
ERIC Educational Resources Information Center
Little-Wiles, Julie M.
2012-01-01
Using the embedded case study method, this investigation described the experiences, relationships, and perspectives of administrative leaders within the higher education environment during the most recent economic crisis, specifically attempting to answer the question of, "How does an economic crisis, like the most current recession, impact a…
Optical methods for measuring DNA folding
NASA Astrophysics Data System (ADS)
Smith, Adam D.; Ukogu, Obinna A.; Devenica, Luka M.; White, Elizabeth D.; Carter, Ashley R.
2017-03-01
One of the most important biological processes is the dynamic folding and unfolding of deoxyribonucleic acid (DNA). The folding process is crucial for DNA to fit within the boundaries of the cell, while the unfolding process is essential for DNA replication and transcription. To accommodate both processes, the cell employs a highly active folding mechanism that has been the subject of intense study over the last few decades. Still, many open questions remain. What are the pathways for folding or unfolding? How does the folding equilibrium shift? And, what is the energy landscape for a particular process? Here, we review these emerging questions and the in vitro, optical methods that have provided answers, introducing the topic for those physicists seeking to step into biology. Specifically, we discuss two iconic experiments for DNA folding, the tethered particle motion (TPM) experiment and the optical tweezers experiment.
Xu, Zhiliang; Chen, Xu-Yan; Liu, Yingjie
2014-01-01
We present a new formulation of the Runge-Kutta discontinuous Galerkin (RKDG) method [9, 8, 7, 6] for solving conservation laws with increased CFL numbers. The new formulation requires the computed RKDG solution in a cell to satisfy an additional conservation constraint in adjacent cells and does not increase the complexity or change the compactness of the RKDG method. Numerical computations for solving one-dimensional and two-dimensional scalar and systems of nonlinear hyperbolic conservation laws are performed with approximate solutions represented by piecewise quadratic and cubic polynomials, respectively. The hierarchical reconstruction [17, 33] is applied as a limiter to eliminate spurious oscillations in discontinuous solutions. From both numerical experiments and the analytic estimate of the CFL number of the newly formulated method, we find that: 1) this new formulation improves the CFL number over the original RKDG formulation by at least a factor of three and thus reduces the overall computational cost; and 2) the new formulation essentially does not compromise the resolution of the numerical solutions of shock wave problems compared with ones computed by the RKDG method. PMID:25414520
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
34 CFR 644.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2010 CFR
2010-07-01
How does the Secretary evaluate prior experience? 644.22 Section 644.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION EDUCATIONAL OPPORTUNITY CENTERS How Does the...
Design of experiments (DOE) - history, concepts, and relevance to in vitro culture
USDA-ARS?s Scientific Manuscript database
Design of experiments (DOE) is a large and well-developed field for understanding and improving the performance of complex systems. Because in vitro culture systems are complex, but easily manipulated in controlled conditions, they are particularly well-suited for the application of DOE principle...
Székely, György; Henriques, Bruno; Gil, Marco; Alvarez, Carlos
2014-09-01
This paper discusses a design of experiments (DoE) assisted optimization and robustness testing of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method development for the trace analysis of the potentially genotoxic 1,3-diisopropylurea (IPU) impurity in mometasone furoate glucocorticosteroid. Compared to conventional trial-and-error method development, DoE is a cost-effective and systematic approach to system optimization by which the effects of multiple parameters and parameter interactions on a given response are considered. The LC and MS factors were studied simultaneously: flow (F), gradient (G), injection volume (Vinj), cone voltage (E(con)), and collision energy (E(col)). The optimization was carried out with respect to four responses: separation of peaks (Sep), peak area (A(p)), length of the analysis (T), and the signal-to-noise ratio (S/N). An optimization central composite face (CCF) DoE was conducted, leading to the early discovery of a carry-over effect, which was further investigated in order to establish the maximum injectable sample load. A second DoE was conducted in order to obtain the optimal LC-MS/MS method. As part of the validation of the obtained method, its robustness was determined by conducting a fractional factorial resolution III DoE, wherein column temperature and quadrupole resolution were considered as additional factors. The method utilizes a common Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10-min runtime. The high sensitivity and low limit of quantification (LOQ) were achieved by (1) MRM mode (instead of single ion monitoring) and (2) avoiding the drawbacks of derivatization (incomplete reaction and time-consuming sample preparation).
Quantitatively, the DoE method development strategy resulted in the robust trace analysis of IPU at 1.25 ng/mL absolute concentration, corresponding to a 0.25 ppm LOQ in 5 g/L mometasone furoate glucocorticosteroid. Validation was carried out in a linear range of 0.25-10 ppm and presented a relative standard deviation (RSD) of 1.08% for system precision. Regarding IPU recovery in mometasone furoate, spiked samples produced recoveries between 96% and 109% in the range of 0.25 to 2 ppm. Copyright © 2013 John Wiley & Sons, Ltd.
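The central composite face-centred (CCF) design used in the optimization above can be generated mechanically: factorial corners, face-centred axial points (alpha = 1), and replicated centre points. A sketch in coded units for five factors, echoing the abstract's F, G, Vinj, E(con), E(col); the factor identification and centre-point count are illustrative:

```python
from itertools import product

def ccf_design(k, n_center=3):
    """Face-centred central composite design: 2^k factorial corners,
    2k axial points on the cube faces (alpha = 1), and replicated
    centre points, all in coded units -1..+1."""
    corners = [list(p) for p in product((-1, 1), repeat=k)]
    axial = []
    for j in range(k):
        for level in (-1, 1):
            point = [0] * k
            point[j] = level
            axial.append(point)
    centre = [[0] * k for _ in range(n_center)]
    return corners + axial + centre

# Five coded factors, as in the abstract's simultaneous LC/MS study.
design = ccf_design(5)
```

With k = 5 this gives 32 + 10 + 3 = 45 runs, each factor at three levels, enough to fit a full quadratic response surface.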
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
This report presents the results of instrumentation measurements and observations made during construction of the North Ramp Starter Tunnel (NRST) of the Exploratory Studies Facility (ESF). The information in this report was developed as part of the Design Verification Study, Section 8.3.1.15.1.8 of the Yucca Mountain Site Characterization Plan (DOE 1988). The ESF is being constructed by the US Department of Energy (DOE) to evaluate the feasibility of locating a potential high-level nuclear waste repository on lands within and adjacent to the Nevada Test Site (NTS), Nye County, Nevada. The Design Verification Studies are performed to collect information during construction of the ESF that will be useful for design and construction of the potential repository. Four experiments make up the Design Verification Study: Evaluation of Mining Methods, Monitoring Drift Stability, Monitoring of Ground Support Systems, and The Air Quality and Ventilation Experiment. This report describes Sandia National Laboratories' (SNL) efforts in the first three of these experiments in the NRST.
Parametric analysis of plastic strain and force distribution in single pass metal spinning
NASA Astrophysics Data System (ADS)
Choudhary, Shashank; Tejesh, Chiruvolu Mohan; Regalla, Srinivasa Prakash; Suresh, Kurra
2013-12-01
Metal spinning, also known as spin forming, is one of the sheet metal working processes by which an axis-symmetric part can be formed from a flat sheet metal blank. Parts are produced by pressing a blunt-edged tool or roller onto the blank, which in turn is mounted on a rotating mandrel. This paper discusses the setup of a 3-D finite element simulation of single pass metal spinning in LS-Dyna. Four parameters were considered, namely blank thickness, roller nose radius, feed ratio and mandrel speed, and the variations in forces and plastic strain were analysed using the full-factorial design of experiments (DOE) method of simulation experiments. For some of these DOE runs, physical experiments on extra deep drawing (EDD) sheet metal were carried out using an En31 tool on a lathe machine. Simulation results are able to predict the zone of unsafe thinning in the sheet and high forming forces, which point to the need for less expensive and semi-automated machine tools to help the household and small-scale spinning workers widely prevalent in India.
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-02-20
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
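The conventional acoustic-intensity idea that the paper builds on can be sketched directly: time-averaged products of pressure with the two velocity channels give intensity components whose ratio yields azimuth. This is a sketch of the passive baseline only, with synthetic signals, not the paper's matched-filtering refinement:

```python
import math

def azimuth_from_avs(p, vx, vy):
    """Azimuth from a single acoustic vector sensor: time-averaged
    acoustic intensity components Ix = <p*vx>, Iy = <p*vy> give
    theta = atan2(Iy, Ix)."""
    ix = sum(pi * vi for pi, vi in zip(p, vx))
    iy = sum(pi * vi for pi, vi in zip(p, vy))
    return math.atan2(iy, ix)

# Synthetic plane wave arriving from 40 degrees: particle velocity is
# in phase with pressure, scaled by the direction cosines.
theta_true = math.radians(40.0)
p = [math.sin(2.0 * math.pi * 50.0 * i / 1000.0) for i in range(1000)]
vx = [math.cos(theta_true) * s for s in p]
vy = [math.sin(theta_true) * s for s in p]
est_deg = math.degrees(azimuth_from_avs(p, vx, vy))
```

In the noise-free case the estimate recovers the arrival angle exactly; the paper's contribution is improving this estimate under active-sonar conditions.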
The 400 microsphere per piece "rule" does not apply to all blood flow studies.
Polissar, N L; Stanford, D C; Glenny, R W
2000-01-01
Microsphere experiments are useful in measuring regional organ perfusion as well as heterogeneity of blood flow within organs and correlation of perfusion between organ pieces at different time points. A 400 microspheres/piece "rule" is often used in planning experiments or to determine whether experiments are valid. This rule is based on the statement that 400 microspheres must lodge in a region for 95% confidence that the observed flow in the region is within 10% of the true flow. The 400 microspheres precision rule, however, only applies to measurements of perfusion to a single region or organ piece. Examples, simulations, and an animal experiment were carried out to show that good precision for measurements of heterogeneity and correlation can be obtained from many experiments with <400 microspheres/piece. Furthermore, methods were developed and tested for correcting the observed heterogeneity and correlation to remove the Poisson "noise" due to discrete microsphere measurements. The animal experiment shows adjusted values of heterogeneity and correlation that are in close agreement for measurements made with many or few microspheres/piece. Simulations demonstrate that the adjusted values are accurate for a variety of experiments with far fewer than 400 microspheres/piece. Thus the 400 microspheres rule does not apply to many experiments. A "rule of thumb" is that experiments with a total of at least 15,000 microspheres, for all pieces combined, are very likely to yield accurate estimates of heterogeneity. Experiments with a total of at least 25,000 microspheres are very likely to yield accurate estimates of correlation coefficients.
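The Poisson-noise correction argued for above can be illustrated by simulation: with flow-proportional Poisson counts, the observed squared CV exceeds the true one by roughly 1/(mean count), and subtracting that term recovers the true heterogeneity even far below 400 microspheres/piece. A sketch using the standard Poisson variance decomposition, not necessarily the paper's exact estimator:

```python
import math
import random

def observed_and_corrected_cv(true_flows, mean_count, seed=0):
    """Simulate per-piece microsphere counts as Poisson draws around
    flow-proportional means, then strip the Poisson 'noise' term from
    the observed squared coefficient of variation:
    CV_true^2 ~= CV_obs^2 - 1/mean_count."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for modest counts.
        limit, k, prod = math.exp(-lam), 0, rng.random()
        while prod > limit:
            prod *= rng.random()
            k += 1
        return k

    counts = [poisson(f * mean_count) for f in true_flows]
    m = sum(counts) / len(counts)
    var = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    cv_obs_sq = var / m ** 2
    cv_corr_sq = cv_obs_sq - 1.0 / m
    return cv_obs_sq, cv_corr_sq

# 1000 pieces with true flow heterogeneity CV = 0.3 and only ~50
# microspheres per piece -- far below 400/piece, yet 50,000 in total.
flow_rng = random.Random(42)
true_flows = [max(0.1, flow_rng.gauss(1.0, 0.3)) for _ in range(1000)]
cv_obs_sq, cv_corr_sq = observed_and_corrected_cv(true_flows, 50.0)
```

The corrected value lands near the true CV² of 0.09, consistent with the abstract's "rule of thumb" that total microsphere count, not count per piece, governs heterogeneity estimates.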
ERIC Educational Resources Information Center
Barber, Brian K.
2013-01-01
Aims and Method: Drawing on empirical studies and literature reviews, this paper aims to clarify and qualify the relevance of resilience to youth experiencing political conflict. It focuses on the discordance between expectations of widespread dysfunction among conflict-affected youth and a body of empirical evidence that does not confirm these…
The Impact of Graduate First Project on Students with Disabilities: Perceptions of Key Personnel
ERIC Educational Resources Information Center
Foley, Tamera Garrett
2012-01-01
The Graduate First initiative was implemented to address the dropout crisis among students with disabilities in the state of Georgia, who continue to demonstrate a rate of attrition twice that of their non-disabled peers (Georgia Department of Education [GA DOE], 2010). This mixed method case study explored the perceptions and experiences of a…
Lessons from an Experiential Learning Process: The Case of Cowpea Farmer Field Schools in Ghana
ERIC Educational Resources Information Center
Nederlof, E. Suzanne; Odonkor, Ezekiehl N.
2006-01-01
The Farmer Field School (FFS) is a form of adult education using experiential learning methods, aimed at building farmers' decision-making capacity and expertise. The National Research Institute in West Africa conducted FFS in cowpea cultivation and we use this experience to analyse the implementation of the FFS approach. How does it work in…
What is freedom--and does wealth cause it?
Iyer, Ravi; Motyl, Matt; Graham, Jesse
2013-10-01
The target article's climato-economic theory will benefit by allowing for bidirectional effects and the heterogeneity of types of freedom, in order to more fully capture the coevolution of societal wealth and freedom. We also suggest alternative methods of testing climato-economic theory, such as longitudinal analyses of these countries' histories and micro-level experiments of each of the theory's hypotheses.
ERIC Educational Resources Information Center
Hurst, Judith; Quinsee, Susannah
2005-01-01
The inclusion of online learning technologies into the higher education (HE) curriculum is frequently associated with the design and development of new models of learning. One could argue that e-learning even demands a reconfiguration of traditional methods of learning and teaching. However, this transformation in pedagogic methodology does not…
Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Haller, Harold S.
2009-01-01
It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four or five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
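Combining a dense analysis sweep with a sparse experimental subset comes down to pooling the points and fitting one response surface. Below is a one-factor quadratic stand-in for the study's multi-factor DOE surfaces; all data values are hypothetical:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal
    equations, solved by Gaussian elimination with partial pivoting."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]
        for i in range(3):
            b[i] += row[i] * y
            for j in range(3):
                A[i][j] += row[i] * row[j]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * 3
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, 3))) / A[i][i]
    return coef

# Dense 'CFD' sweep of a known quadratic plus a few noisy
# 'experimental' points, pooled into one compound (CFD/EXP) data set.
cfd = [(k / 2.0, 1.0 + 0.5 * (k / 2.0) - 0.2 * (k / 2.0) ** 2)
       for k in range(15)]
exp_pts = [(1.0, 1.31), (3.0, 0.72), (5.0, -1.52), (6.0, -3.16)]
pooled = cfd + exp_pts
b0, b1, b2 = fit_quadratic([x for x, _ in pooled], [y for _, y in pooled])
```

The fitted surface then predicts responses at conditions where no experimental point was taken, which is the role the compound (CFD/EXP) data sets play in the study.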
Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A
2013-01-01
When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again for different trials or assays, despite the efforts for a near-equal design, scientists might often obtain quite different measurements. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can definitely be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
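One simple realization of the stated conditions (absolute values irrelevant, high inter-assay correlation) is to remove each assay's own offset and scale before averaging. This z-scoring sketch illustrates the idea, not necessarily the authors' exact correction:

```python
import statistics

def per_assay_standardize(assays):
    """Remove assay-to-assay offset and scale -- which carry no
    information when only inter-system differences matter -- by
    z-scoring each assay."""
    corrected = []
    for assay in assays:
        mu = statistics.mean(assay)
        sd = statistics.stdev(assay)
        corrected.append([(v - mu) / sd for v in assay])
    return corrected

# Three hypothetical assays of the same five systems: identical
# ranking, but each assay has its own offset and gain (a common
# source of the large standard deviations described above).
assays = [
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [10.5, 12.4, 14.6, 16.4, 18.5],
    [0.2, 0.45, 0.69, 0.95, 1.2],
]
corrected = per_assay_standardize(assays)
raw_sd = [statistics.stdev([a[i] for a in assays]) for i in range(5)]
cor_sd = [statistics.stdev([a[i] for a in corrected]) for i in range(5)]
```

After standardization the per-system standard deviations collapse by orders of magnitude, so inter-system differences become statistically resolvable.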
Controlled experiments for dense gas diffusion: Experimental design and execution, model comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egami, R.; Bowen, J.; Coulombe, W.
1995-07-01
An experimental baseline CO2 release experiment at the DOE Spill Test Facility on the Nevada Test Site in Southern Nevada is described. This experiment was unique in its use of CO2 as a surrogate gas representative of a variety of specific chemicals. Introductory discussion places the experiment in historical perspective. CO2 was selected as a surrogate gas to provide a data base suitable for evaluation of model scenarios involving a variety of specific dense gases. The experiment design and setup are described, including the design rationale and the quality assurance methods employed. The resulting experimental data are summarized. Data usefulness is examined through a preliminary comparison of experimental results with simulations performed using the SLAB and DEGADIS dense gas models.
NASA Astrophysics Data System (ADS)
Lingadurai, K.; Nagasivamuni, B.; Muthu Kamatchi, M.; Palavesam, J.
2012-06-01
Wire electrical discharge machining (WEDM) is a specialized thermal machining process capable of accurately machining parts of hard materials with complex shapes. Parts having sharp edges that pose difficulties to mainstream machining processes can be easily machined by WEDM. In this work, a Design of Experiments (DOE) approach is reported for stainless steel AISI grade 304, which is used in cryogenic vessels, evaporators, hospital surgical equipment, marine equipment, fasteners, nuclear vessels, feed water tubing, valves, refrigeration equipment, etc., machined by WEDM with a brass wire electrode. The DOE method is used to formulate the experimental layout, to analyze the effect of each parameter on the machining characteristics, and to predict the optimal choice of each WEDM parameter: voltage, pulse ON time, pulse OFF time and wire feed. It is found that these parameters have a significant influence on machining characteristics such as metal removal rate (MRR), kerf width and surface roughness (SR). The analysis of the DOE reveals that, in general, the pulse ON time significantly affects the kerf width and the wire feed rate affects the SR, while the input voltage mainly affects the MRR.
Improved atmospheric effect elimination method for the roughness estimation of painted surfaces.
Zhang, Ying; Xuan, Jiabin; Zhao, Huijie; Song, Ping; Zhang, Yi; Xu, Wujian
2018-03-01
We propose a method for eliminating the atmospheric effect in polarimetric imaging remote sensing by using polarimetric imagers to simultaneously detect ground targets and skylight; the method does not need calibrated targets. In addition, calculation efficiency is improved by the skylight division method without losing estimation accuracy. Outdoor experiments are performed to obtain the polarimetric bidirectional reflectance distribution functions of painted surfaces and skylight under different weather conditions. Finally, the roughness of the painted surfaces is estimated. We find that the estimation accuracy with the proposed method is 6% in cloudy weather, while it is 30.72% without atmospheric effect elimination.
Model-based multi-fringe interferometry using Zernike polynomials
NASA Astrophysics Data System (ADS)
Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan
2018-06-01
In this paper, a general phase retrieval method is proposed, which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured, and the phase distribution is reconstructed by a non-linear least-squares method. Experiments show that the proposed method obtains satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost), and it is easy to implement without the process of phase unwrapping.
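The fitting idea can be illustrated with a stripped-down one-dimensional analogue. This is a sketch under assumed values, not the authors' algorithm: a coarse grid search stands in for their non-linear least-squares solver, the two phase terms play the role of the Zernike tilt and power modes, and the background and modulation amplitudes (A = 1.0, B = 0.8) are taken as known.

```python
import math

# Few-fringe interferogram model: I(x) = A + B*cos(phi(x)),
# phi(x) = tilt*x + power*x^2 (1-D stand-ins for Zernike tilt/defocus).
xs = [i / 49 for i in range(50)]
true_tilt, true_power = 4.0, 1.5
data = [1.0 + 0.8 * math.cos(true_tilt * x + true_power * x * x) for x in xs]

def sse(tilt, power):
    """Sum of squared residuals between model and measured intensity."""
    return sum((1.0 + 0.8 * math.cos(tilt * x + power * x * x) - d) ** 2
               for x, d in zip(xs, data))

# Coarse grid search stands in for a proper non-linear least-squares solver.
best = min(((sse(t / 10, p / 10), t / 10, p / 10)
            for t in range(0, 81) for p in range(0, 41)), key=lambda r: r[0])
_, fit_tilt, fit_power = best
```

With noiseless data the grid search recovers the generating coefficients, illustrating why so few fringes suffice when the phase is parameterized by a handful of polynomial coefficients.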
An Improved Heuristic Method for Subgraph Isomorphism Problem
NASA Astrophysics Data System (ADS)
Xiang, Yingzhuo; Han, Jiesi; Xu, Haijiang; Guo, Xin
2017-09-01
This paper focuses on the subgraph isomorphism (SI) problem. We present an improved genetic algorithm, a heuristic method to search for the optimal solution. The contribution of this paper is a dedicated crossover algorithm and a new fitness function to measure the evolution process. Experiments show that our improved genetic algorithm performs better than other heuristic methods. For a large graph, such as a subgraph of 40 nodes, our algorithm outperforms traditional tree-search algorithms. We find that the performance of our improved genetic algorithm does not decrease as the number of nodes in the prototype graphs grows.
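A minimal version of the fitness idea can be sketched as follows. This is a toy instance with a mutation-only evolutionary loop (the paper's dedicated crossover operator is not reproduced here): the fitness of a candidate node mapping is simply the number of pattern edges it preserves in the target graph.

```python
import random

random.seed(0)

# Toy instance: map a triangle pattern into a 5-node target graph so
# that every pattern edge is preserved (fitness = preserved edges).
pattern_edges = [(0, 1), (1, 2), (2, 0)]
target_edges = {(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)}
target_nodes = 5

def fitness(mapping):
    """Count pattern edges preserved by an injective node mapping."""
    return sum((mapping[a], mapping[b]) in target_edges or
               (mapping[b], mapping[a]) in target_edges
               for a, b in pattern_edges)

def mutate(mapping):
    """Replace one mapped node with a currently unused target node."""
    child = mapping[:]
    i = random.randrange(len(child))
    unused = [n for n in range(target_nodes) if n not in child]
    child[i] = random.choice(unused)
    return child

def random_individual():
    return random.sample(range(target_nodes), 3)

pop = [random_individual() for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    # Elitism, mutants of elites, and fresh random immigrants.
    pop = (pop[:10] +
           [mutate(random.choice(pop[:10])) for _ in range(5)] +
           [random_individual() for _ in range(5)])

best = max(pop, key=fitness)
```

For this tiny graph the loop reliably finds a mapping preserving all three edges (an embedding of the triangle); a real SI solver adds crossover and handles far larger graphs.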
NASA Astrophysics Data System (ADS)
Egorov, A. V.; Kozlov, K. E.; Belogusev, V. N.
2018-01-01
In this paper, we propose a new method and instruments to identify the torque, power, and efficiency of internal combustion engines in transient conditions. In contrast to the commonly used non-demounting methods based on inertia and strain gauge dynamometers, this method allows the main performance parameters of internal combustion engines to be controlled in transient conditions without the inaccuracy caused by torque losses in the transfer to the driving wheels, where the torque is measured by existing methods. In addition, the proposed setup is easy to build, and it does not use strain measurement instruments, which cannot identify rapidly varying values of the measured parameters with a high measurement rate and therefore cannot account for the actual parameters when engineering wheeled vehicles. The use of this method can thus greatly improve measurement accuracy and reduce cost and labour during the testing of internal combustion engines. The results of experiments showed the applicability of the proposed method for identifying the performance parameters of internal combustion engines. The most preferable transmission ratio for use with the proposed method was also determined.
Epidemiologic methods in clinical trials.
Rothman, K J
1977-04-01
Epidemiologic methods developed to control confounding in non-experimental studies are equally applicable for experiments. In experiments, most confounding is usually controlled by random allocation of subjects to treatment groups, but randomization does not preclude confounding except for extremely large studies, the degree of confounding expected being inversely related to the size of the treatment groups. In experiments, as in non-experimental studies, the extent of confounding for each risk indicator should be assessed, and if sufficiently large, controlled. Confounding is properly assessed by comparing the unconfounded effect estimate to the crude effect estimate; a common error is to assess confounding by statistical tests of significance. Assessment of confounding involves its control as a prerequisite. Control is most readily and cogently achieved by stratification of the data, though with many factors to control simultaneously, multivariate analysis or a combination of multivariate analysis and stratification might be necessary.
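The recommended comparison of crude and unconfounded estimates can be illustrated numerically. The counts below are hypothetical, chosen so that the stratum-specific risk ratios are exactly 1.0 while the crude ratio is not; the stratified summary used here is the Mantel-Haenszel risk ratio, and, as the abstract stresses, no significance test is involved.

```python
# Each stratum: (events_exposed, n_exposed, events_unexposed, n_unexposed).
# Within each confounder stratum the risk ratio is 1.0, yet exposure is
# concentrated in the high-risk stratum, so the crude ratio is inflated.
strata = [(40, 80, 10, 20), (2, 20, 8, 80)]

def crude_rr(strata):
    """Risk ratio ignoring the confounder (collapsing the strata)."""
    a = sum(s[0] for s in strata); n1 = sum(s[1] for s in strata)
    b = sum(s[2] for s in strata); n0 = sum(s[3] for s in strata)
    return (a / n1) / (b / n0)

def mantel_haenszel_rr(strata):
    """Stratified (unconfounded) summary risk ratio."""
    num = sum(a * n0 / (n1 + n0) for a, n1, b, n0 in strata)
    den = sum(b * n1 / (n1 + n0) for a, n1, b, n0 in strata)
    return num / den

rr_crude = crude_rr(strata)     # about 2.33: apparent effect
rr_adj = mantel_haenszel_rr(strata)   # 1.0: no effect after stratification
```

The discrepancy between the two estimates, not any p-value, is the measure of confounding.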
The Impact of Preceptor and Student Learning Styles on Experiential Performance Measures
Cox, Craig D.; Seifert, Charles F.
2012-01-01
Objectives. To identify preceptors’ and students’ learning styles to determine how these impact students’ performance on pharmacy practice experience assessments. Methods. Students and preceptors were asked to complete a validated Pharmacist’s Inventory of Learning Styles (PILS) questionnaire to identify dominant and secondary learning styles. The significance of “matched” and “unmatched” learning styles between students and preceptors was evaluated based on performance on both subjective and objective practice experience assessments. Results. Sixty-one percent of 67 preceptors and 57% of 72 students who participated reported “assimilator” as their dominant learning style. No differences were found between student and preceptor performance on evaluations, regardless of learning style match. Conclusion. Determination of learning styles may encourage preceptors to use teaching methods to challenge students during pharmacy practice experiences; however, this does not appear to impact student or preceptor performance. PMID:23049100
WIPP waste characterization program sampling and analysis guidance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
The Waste Isolation Pilot Plant (WIPP) Waste Characterization Program Sampling and Analysis Guidance Manual (Guidance Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Quality Assurance Program Plan (QAPP) for the WIPP Experimental-Waste Characterization Program (the Program). This Guidance Manual includes all of the sampling and testing methodologies accepted by the WIPP Project Office (DOE/WPO) for use in implementing the Program requirements specified in the QAPP. This includes methods for characterizing representative samples of transuranic (TRU) wastes at DOE generator sites with respect to the gas generation controlling variables defined in the WIPP bin-scale and alcove test plans, as well as waste container headspace gas sampling and analytical procedures to support waste characterization requirements under the WIPP test program and the Resource Conservation and Recovery Act (RCRA). The procedures in this Guidance Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site specific procedures. The use of these procedures is intended to provide the necessary sensitivity, specificity, precision, and comparability of analyses and test results. The solutions to achieving specific program objectives will depend upon facility constraints, compliance with DOE Orders and DOE facilities' operating contractor requirements, and the knowledge and experience of the TRU waste handlers and analysts. With some analytical methods, such as gas chromatography/mass spectrometry, the Guidance Manual procedures may be used directly. With other methods, such as nondestructive/destructive characterization, the Guidance Manual provides guidance rather than a step-by-step procedure.
Yazdi, Ashkan K; Smyth, Hugh D C
2017-03-01
To optimize air-jet milling conditions of ibuprofen (IBU) using the design of experiments (DoE) method, and to test the generalizability of the optimized conditions for the processing of another non-steroidal anti-inflammatory drug (NSAID). Bulk IBU was micronized using an Aljet mill according to a circumscribed central composite (CCC) design, with grinding and pushing nozzle pressures (GrindP, PushP) varying from 20 to 110 psi. Output variables included yield and the particle diameters at the 50th and 90th percentiles (D50, D90). Following data analysis, the optimized conditions were identified and tested to produce IBU particles with a minimum size and an acceptable yield. Finally, indomethacin (IND) was milled using the optimized conditions as well as the control. The CCC design included eight successful runs for milling IBU out of the ten total runs, two having failed due to powder "blowback" from the feed hopper. DoE analysis allowed optimization of the GrindP and PushP at 75 and 65 psi. In subsequent validation experiments using the optimized conditions, the experimental D50 and D90 values (1.9 and 3.6 μm) corresponded closely with the values predicted by DoE modeling. Additionally, the optimized conditions were superior to the control conditions for the micronization of IND, where smaller D50 and D90 values (1.2 and 2.7 μm vs. 1.8 and 4.4 μm) were produced. The single-step air-jet milling optimization of IBU using the DoE approach elucidated the optimal milling conditions, which were then also used to micronize IND.
Water vapour tomography using GPS phase observations: Results from the ESCOMPTE experiment
NASA Astrophysics Data System (ADS)
Nilsson, T.; Gradinarsky, L.; Elgered, G.
2007-10-01
Global Positioning System (GPS) tomography is a technique for estimating the 3-D structure of the atmospheric water vapour using data from a dense local network of GPS receivers. Several current methods utilize estimates of slant wet delays between the GPS satellites and the receivers on the ground, which are difficult to obtain with millimetre accuracy from the GPS observations. We present results of applying a new tomographic method to GPS data from the Expérience sur site pour contraindre les modèles de pollution atmosphérique et de transport d'émissions (ESCOMPTE) experiment in southern France. This method does not rely on any slant wet delay estimates; instead, it uses the GPS phase observations directly. We show that the wet refractivity profiles estimated by this method are at the same accuracy level as, or better than, those of other tomographic methods. The results are in agreement with earlier simulations; for example, the profile information is limited above 4 km.
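The underlying tomographic inversion can be sketched in miniature. This is an illustration of the classic delay-based formulation (which the paper improves upon by using phase observations directly), with an invented ray-geometry matrix: each slant wet delay is a line integral of wet refractivity over horizontal layers, and the layer values follow from least squares.

```python
# Toy tomography sketch (illustrative, not the authors' phase-based
# estimator): 3 refractivity layers, 6 rays with assumed path lengths.
true_n = [60.0, 30.0, 10.0]              # wet refractivity per layer
paths = [[0.5, 1.2, 2.0], [1.0, 1.5, 2.2], [0.0, 0.8, 2.5],
         [1.4, 0.2, 1.9], [0.3, 2.0, 0.6], [1.1, 1.1, 1.1]]  # km per layer
delays = [sum(p * n for p, n in zip(row, true_n)) for row in paths]

# Normal equations (A^T A) x = A^T d, solved by Gauss-Jordan elimination.
m = 3
A = [[sum(row[i] * row[j] for row in paths) for j in range(m)] for i in range(m)]
b = [sum(row[i] * d for row, d in zip(paths, delays)) for i in range(m)]
aug = [Ai[:] + [bi] for Ai, bi in zip(A, b)]
for col in range(m):
    piv = max(range(col, m), key=lambda r: abs(aug[r][col]))
    aug[col], aug[piv] = aug[piv], aug[col]
    aug[col] = [v / aug[col][col] for v in aug[col]]
    for r in range(m):
        if r != col:
            aug[r] = [v - aug[r][col] * w for v, w in zip(aug[r], aug[col])]
est_n = [row[-1] for row in aug]
```

With noiseless, well-conditioned geometry the layer refractivities are recovered exactly; the practical difficulty the paper addresses is that accurate slant delays are hard to extract from real GPS data in the first place.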
Numerical study of the vortex tube reconnection using vortex particle method on many graphics cards
NASA Astrophysics Data System (ADS)
Kudela, Henryk; Kosior, Andrzej
2014-08-01
Vortex Particle Methods are one of the most convenient ways of tracking the vorticity evolution. In this article we present a numerical recreation of a real-life experiment concerning the head-on collision of two vortex rings. In the experiment, the evolution and reconnection of the vortex structures is tracked with passive markers (paint particles), which in a viscous fluid do not follow the evolution of the vorticity field. In the numerical computations we show the difference between the vorticity evolution and the movement of the passive markers. The agreement with the experiment was very good. Because of the very long computation times on a single processor, the Vortex-in-Cell (VIC) method was implemented on the multicore architecture of graphics cards (GPUs). Vortex Particle Methods are very well suited for parallel computations: as there are myriads of particles in the flow, and the same equations of motion have to be solved for each of them, the SIMD architecture used in GPUs is a natural fit. The main disadvantage in this case is the small amount of RAM memory; to overcome this, we created a multi-GPU implementation of the VIC method. Some remarks on parallel computing are given in the article.
Arumugam, Abiramasundari; Joshi, Amita; Vasu, Kamala K
2017-11-01
The present work focused on the application of design of experiment (DoE) principles to the development and optimization of a stability-indicating method (SIM) for the drug imidapril hydrochloride and its degradation products (DPs). The resolution of the peaks for the DPs and the parent drug in a SIM can be influenced by many factors. The factors studied here were pH, gradient time, organic modifier, flow rate, molar concentration of the buffer, and wavelength, with the aid of a Plackett-Burman design. Results from the Plackett-Burman study clearly showed the influence of two factors, pH and gradient time, on the analyzed responses, particularly the resolution of the closely eluting DPs (DP-5 and DP-6) and the retention time of the last peak. Optimization of the multiresponse processes was achieved through Derringer's desirability function with the assistance of a full factorial design. Separation was achieved using a C18 Phenomenex Luna column (250 × 4.6 mm id, 5 µm particle size) at a flow rate of 0.8 mL/min at 210 nm. The optimized mobile phase composition was ammonium-acetate buffer (pH 5) in pump A and acetonitrile-methanol (in equal ratio) in pump B, with a run time of 40 min using a gradient method.
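The screening logic of a Plackett-Burman design can be sketched generically. This illustrates the design type itself, not the authors' chromatographic data: the 12-run design for up to 11 two-level factors is built from cyclic shifts of a standard generating row plus an all-minus row, and a factor's main effect is the difference between the mean response at its high and low levels. The responses below are simulated with a single active factor.

```python
# Standard PB12 generating row (Plackett-Burman, two-level screening).
gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
design = [gen[-i:] + gen[:-i] for i in range(11)] + [[-1] * 11]

def main_effect(design, y, j):
    """Mean response at the high level minus mean at the low level."""
    hi = [yi for row, yi in zip(design, y) if row[j] == 1]
    lo = [yi for row, yi in zip(design, y) if row[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Hypothetical responses: only factor 0 is active (true effect = 4).
y = [10.0 + 2.0 * row[0] for row in design]
effects = [main_effect(design, y, j) for j in range(11)]
```

Because the columns are mutually orthogonal, the analysis attributes the whole effect to factor 0 and exactly zero to the ten inert factors, which is what makes such designs efficient screens.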
Social Networking Sites' Influence on Travelers' Authentic Experience a Case Study of Couch Surfing
ERIC Educational Resources Information Center
Liu, Xiao
2013-01-01
This study explored travelers' experiences in the era of network hospitality 2.0 using CouchSurfing.org as a case study. The following research questions guided this study: 1) what experience does CouchSurfing create for travelers before, during and after their travel? 2) how does couch surfers' experience relate to authenticity in context of…
Rydzy, M; Deslauriers, R; Smith, I C; Saunders, J K
1990-08-01
A systematic study was performed to optimize the accuracy of kinetic parameters derived from magnetization transfer measurements. Three techniques were investigated: time-dependent saturation transfer (TDST), saturation recovery (SRS), and inversion recovery (IRS). In the last two methods, one of the resonances undergoing exchange is saturated throughout the experiment. The three techniques were compared with respect to the accuracy of the kinetic parameters derived from experiments performed in a given, fixed, amount of time. Stochastic simulation of magnetization transfer experiments was performed to optimize experimental design. General formulas for the relative accuracies of the unidirectional rate constant (k) were derived for each of the three experimental methods. It was calculated that for k values between 0.1 and 1.0 s-1, T1 values between 1 and 10 s, and relaxation delays appropriate for the creatine kinase reaction, the SRS method yields more accurate values of k than does the IRS method. The TDST method is more accurate than the SRS method for reactions where T1 is long and k is large, within the range of k and T1 values examined. Experimental verification of the method was carried out on a solution in which the forward (PCr----ATP) rate constant (kf) of the creatine kinase reaction was measured.
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
Summary This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density points (isopoints) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be better in quality than that of the MC algorithm. According to experiments, it proved to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361
Does the continuum theory of dynamic fracture work?
NASA Astrophysics Data System (ADS)
Kessler, David A.; Levine, Herbert
2003-09-01
We investigate the validity of the linear elastic fracture mechanics approach to dynamic fracture. We first test the predictions in a lattice simulation, using a formula of Eshelby for the time-dependent stress intensity factor. Excellent agreement with the theory is found. We then use the same method to analyze the experiment of Sharon and Fineberg. The data here are not consistent with the theoretical expectation.
ERIC Educational Resources Information Center
Rhee, Jeong-eun
2013-01-01
This project began as a content analysis of five South Korean high school Social Studies textbooks. Yet, it has evolved into an epistemological experiment to pursue the question of "what does it mean to leave America for Asia, at least methodologically, for the researcher who left Asia for America?" Using the textbooks as a mediating…
Next-Generation NATO Reference Mobility Model (NG-NRMM)
2016-05-11
facilitate comparisons between vehicle design candidates and to assess the mobility of existing vehicles under specific scenarios. Although NRMM has… of different deployed platforms in different areas of operation and routes; improved flexibility as a design and procurement support tool through… Acronyms: …Element Method; DEM, Digital Elevation Model; DIL, Driver in the Loop; DP, Drawbar Pull Force; DOE, Design of Experiments; DTED, Digital Terrain Elevation Data.
ERIC Educational Resources Information Center
Evans, Rosalind
2013-01-01
This article responds to Wright and Nelson's (1995) call for a "creative synthesis" of participant observation and participatory research, which may allow the limitations of both methods to be addressed. It does so by reflecting on the experience of doing long-term research both with and on young Bhutanese refugees in Nepal. Although…
Healthcare rationing: issues and implications.
Cypher, D P
1997-01-01
What methods, if any, should be used to practice healthcare rationing? This article looks at healthcare rationing in the United States, identifies ethical issues associated with implementing healthcare rationing, and addresses legal implications. The author utilizes sources from published literature and her own experience. Society must recognize that it does not have the resources available to fulfill all healthcare needs of all its members. Resolution will bring conflict and compromise.
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-01-01
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity. PMID:28230763
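The core of the intensity-based bearing estimate can be sketched as follows. This is a simplified passive-style simulation with assumed numbers; the paper's matched-filtering stage for active sonar is omitted. For a plane wave, the time-averaged products of pressure with the two particle-velocity components point along the arrival azimuth.

```python
import math, random

random.seed(1)

# Simulated single-AVS data: a 50 Hz plane wave from 40 degrees azimuth,
# sampled at 1 kHz for 2 s, with additive Gaussian noise on each channel.
true_az = math.radians(40.0)
t = [i / 1000 for i in range(2000)]
s = [math.cos(2 * math.pi * 50 * ti) for ti in t]
noise = lambda: random.gauss(0.0, 0.2)
p  = [si + noise() for si in s]                        # pressure
vx = [math.cos(true_az) * si + noise() for si in s]    # x particle velocity
vy = [math.sin(true_az) * si + noise() for si in s]    # y particle velocity

# Time-averaged acoustic intensity components give the bearing.
ix = sum(pi * vxi for pi, vxi in zip(p, vx)) / len(p)
iy = sum(pi * vyi for pi, vyi in zip(p, vy)) / len(p)
est_az = math.degrees(math.atan2(iy, ix))
```

Averaging over many samples suppresses the noise cross-terms, so a single vector sensor yields the azimuth without any array processing; matched filtering before this step (as in the paper) further improves the effective signal-to-noise ratio.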
NASA Technical Reports Server (NTRS)
Buchner, S.; LaBel, K.; Barth, J.; Campbell, A.
2005-01-01
Space experiments are occasionally launched to study the effects of radiation on electronic and photonic devices. This begs the following questions: Are space experiments necessary? Do the costs justify the benefits? How does one judge the success of a space experiment? What have we learned from past space experiments? How does one design a space experiment? This viewgraph presentation provides information on the usefulness of space and ground tests for simulating radiation damage to spacecraft components.
In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images.
Christiansen, Eric M; Yang, Samuel J; Ando, D Michael; Javaherian, Ashkan; Skibinski, Gaia; Lipnick, Scott; Mount, Elliot; O'Neil, Alison; Shah, Kevan; Lee, Alicia K; Goyal, Piyush; Fedus, William; Poplin, Ryan; Esteva, Andre; Berndl, Marc; Rubin, Lee L; Nelson, Philip; Finkbeiner, Steven
2018-04-19
Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call "in silico labeling" (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire.
NASA Technical Reports Server (NTRS)
Li, Z. K.
1985-01-01
A specialized program was developed for flow cytometric list-mode data using a hierarchical tree method for identifying and enumerating individual subpopulations, the method of principal components for a two-dimensional display of the 6-parameter data array, and a standard sorting algorithm for characterizing subpopulations. The program was tested against a published data set subjected to cluster analysis and against experimental data sets from controlled flow cytometry experiments using a Coulter Electronics EPICS V Cell Sorter. A version of the program in compiled BASIC is usable on a 16-bit microcomputer with the MS-DOS operating system. It is specialized for 6 parameters and up to 20,000 cells. Its two-dimensional display of Euclidean distances reveals clusters clearly, as does its one-dimensional display. The identified subpopulations can, in suitable experiments, be related to functional subpopulations of cells.
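The principal-components display can be sketched with synthetic events. The 6-parameter data below are hypothetical (not the EPICS list-mode format), and power iteration with deflation stands in for a library eigensolver: events are projected onto their first two principal components so clusters become visible in a 2-D plot.

```python
import random

random.seed(2)

# Two synthetic 6-parameter populations, separated mainly in parameter 0.
def make_event(center):
    return [c + random.gauss(0, 0.3) for c in center]

events = ([make_event([0, 0, 0, 0, 0, 0]) for _ in range(50)] +
          [make_event([4, 1, 0, 0, 0, 0]) for _ in range(50)])

d = 6
means = [sum(e[j] for e in events) / len(events) for j in range(d)]
X = [[e[j] - means[j] for j in range(d)] for e in events]
cov = [[sum(x[i] * x[j] for x in X) / len(X) for j in range(d)]
       for i in range(d)]

def power_iter(mat, n_iter=200):
    """Leading eigenvector/eigenvalue of a symmetric matrix."""
    v = [1.0] * d
    for _ in range(n_iter):
        w = [sum(mat[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    lam = sum(v[i] * sum(mat[i][j] * v[j] for j in range(d)) for i in range(d))
    return v, lam

pc1, lam1 = power_iter(cov)
# Deflate the leading component and repeat for the second one.
cov2 = [[cov[i][j] - lam1 * pc1[i] * pc1[j] for j in range(d)] for i in range(d)]
pc2, _ = power_iter(cov2)

scores = [(sum(x[j] * pc1[j] for j in range(d)),
           sum(x[j] * pc2[j] for j in range(d))) for x in X]
```

Plotting the score pairs separates the two populations along the first component, which is the essence of using PCA as a 2-D display of multi-parameter cytometry data.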
Negotiating Parenthood: Experiences of Economic Hardship among Parents with Cognitive Difficulties
ERIC Educational Resources Information Center
Fernqvist, Stina
2015-01-01
People with cognitive difficulties often have scarce economic resources, and parents with cognitive difficulties are no exception. In this article, parents' experiences are put forth and discussed, for example, how does economic hardship affect family life? How do the parents experience support, what kind of strain does the scarce economy put on…
1981-12-01
…C-0076, the Department of Energy (DOE Grant DE-AC02-77ET53053), the National Science Foundation (Graduate Fellowship), and Yale University. With the …element method, the choice of discretization is left to the user, who must base his decision on experience with similar equations. In recent years, …
Combined Use of Integral Experiments and Covariance Data
NASA Astrophysics Data System (ADS)
Palmiotti, G.; Salvatores, M.; Aliberti, G.; Herman, M.; Hoblit, S. D.; McKnight, R. D.; Obložinský, P.; Talou, P.; Hale, G. M.; Hiruta, H.; Kawano, T.; Mattoon, C. M.; Nobre, G. P. A.; Palumbo, A.; Pigni, M.; Rising, M. E.; Yang, W.-S.; Kahler, A. C.
2014-04-01
In the frame of a US-DOE sponsored project, ANL, BNL, INL and LANL have performed a joint multidisciplinary research activity in order to explore the combined use of integral experiments and covariance data, with the objective to both give quantitative indications on possible improvements of the ENDF evaluated data files and to reduce at the same time crucial reactor design parameter uncertainties. Methods that have been developed in the last four decades for the purposes indicated above have been improved by some new developments that also benefited from continuous exchanges with international groups working in similar areas. The major new developments that allowed significant progress are to be found in several specific domains: a) new science-based covariance data; b) integral experiment covariance data assessment and improved experiment analysis, e.g., of sample irradiation experiments; c) sensitivity analysis, where several improvements were necessary despite the generally good understanding of these techniques, e.g., to account for fission spectrum sensitivity; d) a critical approach to the analysis of statistical adjustments performance, both a priori and a posteriori; e) generalization of the assimilation method, now applied for the first time not only to multigroup cross section data but also to nuclear model parameters (the "consistent" method). This article describes the major results obtained in each of these areas; a large-scale nuclear data adjustment, based on the use of approximately one hundred high-accuracy integral experiments, will be reported along with a significant example of the application of the new "consistent" method of data assimilation.
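The adjustment step at the heart of such work can be sketched with a two-parameter, two-experiment toy problem. All numbers below are invented; the update is the standard generalized-least-squares formula p' = p + M S^T (S M S^T + V)^(-1) (E - C), with posterior covariance M' = M - M S^T (S M S^T + V)^(-1) S M.

```python
# Toy GLS data adjustment: prior parameters p with covariance M,
# sensitivities S of two integral responses, experiments E with covariance V.
p = [1.00, 1.00]
M = [[0.04, 0.0], [0.0, 0.09]]
S = [[1.0, 0.5], [0.2, 1.0]]
E = [1.10, 0.95]
V = [[0.01, 0.0], [0.0, 0.01]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    return [sum(Ai[j] * x[j] for j in range(len(x))) for Ai in A]

C = matvec(S, p)                              # calculated responses
G = [[x + y for x, y in zip(r1, r2)]          # S M S^T + V
     for r1, r2 in zip(matmul(matmul(S, M), transpose(S)), V)]
K = matmul(matmul(M, transpose(S)), inv2(G))  # gain matrix
p_new = [pi + di for pi, di in
         zip(p, matvec(K, [e - c for e, c in zip(E, C)]))]
M_new = [[m - ksm for m, ksm in zip(r1, r2)]
         for r1, r2 in zip(M, matmul(K, matmul(S, M)))]
C_new = matvec(S, p_new)
```

The adjusted parameters reproduce the integral experiments far better than the prior, and the posterior variances shrink, which is exactly the dual objective (improved evaluated files, reduced design-parameter uncertainties) described above.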
Rani, K; Jahnen, A; Noel, A; Wolf, D
2015-07-01
In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on the radiation dose and the image quality, a statistical model has been developed using the design of experiments (DOE) method, which has been used successfully in various fields (industry, biology and finance), applied here to CT procedures for the abdomen of paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model.
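The layout of a three-factor Box-Behnken design is easy to reproduce. This is a generic sketch in coded -1/0/+1 levels; the factor names follow the abstract, but the actual clinical settings are not reproduced here. Every pair of factors takes the four high/low combinations while the remaining factor is held at its mid level, plus centre replicates.

```python
from itertools import combinations, product

# Factor names from the abstract; runs use coded levels -1/0/+1.
factors = ["tube_current", "tube_voltage", "iterative_level"]

runs = []
for i, j in combinations(range(3), 2):        # each pair of factors
    for a, b in product((-1, 1), repeat=2):   # four corner combinations
        run = [0, 0, 0]
        run[i], run[j] = a, b
        runs.append(run)
runs += [[0, 0, 0]] * 3                       # centre replicates
```

The 12 edge-midpoint runs plus centre points avoid the extreme corners of the design space (no run sets all three factors high), which is one reason Box-Behnken designs suit dose studies where extreme settings are undesirable.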
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.
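The expected-value idea can be sketched on a one-dimensional test function. This is an illustrative stand-in, not the hang-glider problem: candidates are drawn at random over the search domain, each is weighted by exp(-f/T), and the estimate is their weighted (stochastic) average, so no initial guess enters; shrinking the temperature and window concentrates the weight near the optimum.

```python
import math, random

random.seed(3)

def f(x):
    """Wiggly test objective with its global minimum near x = 2."""
    return (x - 2.0) ** 2 + 0.3 * math.sin(8.0 * x)

lo, hi, T = -10.0, 10.0, 1.0
x_est = 0.0
for _ in range(6):
    xs = [random.uniform(lo, hi) for _ in range(4000)]
    ws = [math.exp(-f(x) / T) for x in xs]
    # Boltzmann-weighted (stochastic) average over the samples.
    x_est = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    # Tighten the window around the estimate and cool the temperature.
    lo, hi, T = x_est - (hi - lo) / 4, x_est + (hi - lo) / 4, T / 2
```

Because the estimate is an average over random samples rather than a trajectory from a starting point, rerunning with different seeds converges to the same neighbourhood, mirroring the initial-condition independence claimed above.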
NASA Technical Reports Server (NTRS)
Allan, Brian G.; Owens, Lewis R.; Lin, John C.
2006-01-01
This research will investigate the use of Design-of-Experiments (DOE) in the development of an optimal passive flow control vane design for a boundary-layer-ingesting (BLI) offset inlet in transonic flow. This inlet flow control is designed to minimize the engine fan-face distortion levels and first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. Numerical simulations of the BLI inlet are computed using the Reynolds-averaged Navier-Stokes (RANS) flow solver, OVERFLOW, developed at NASA. These simulations are used to generate the numerical experiments for the DOE response surface model. In this investigation, two DOE optimizations were performed using a D-Optimal Response Surface model. The first DOE optimization was performed using four design factors, which were vane height and angles-of-attack for two groups of vanes. One group of vanes was placed at the bottom of the inlet and a second group symmetrically on the sides. The DOE design was performed for a BLI inlet with a free-stream Mach number of 0.85 and a Reynolds number of 2 million, based on the length of the fan-face diameter, matching an experimental wind tunnel BLI inlet test. The first DOE optimization required a fifth-order model having 173 numerical simulation experiments and was able to reduce the DC60 baseline distortion from 64% down to 4.4%, while holding the pressure recovery constant. A second DOE optimization was performed holding the vane heights at a constant value from the first DOE optimization, with the two vane angles-of-attack as design factors. This DOE only required a second-order model fit with 15 numerical simulation experiments and reduced DC60 to 3.5% with small decreases in the fourth and fifth harmonic amplitudes. The second optimal vane design was tested at the NASA Langley 0.3-Meter Transonic Cryogenic Tunnel in a BLI inlet experiment.
The experimental results showed an 80% reduction of DPCP(sub avg), the circumferential distortion level at the engine fan-face.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
Scaling of Two-Phase Flows to Partial-Earth Gravity
NASA Technical Reports Server (NTRS)
Hurlbert, Kathryn M.; Witte, Larry C.
2003-01-01
A report presents a method of scaling, to partial-Earth gravity, of parameters that describe pressure drops and other characteristics of two-phase (liquid/vapor) flows. The development of the method was prompted by the need for a means of designing two-phase flow systems to operate on the Moon and on Mars, using fluid properties and flow data from terrestrial two-phase-flow experiments, thus eliminating the need for partial-gravity testing. The report presents an explicit procedure for designing an Earth-based test bed that can provide hydrodynamic similarity with two-phase fluids flowing in partial-gravity systems. The procedure does not require prior knowledge of the flow regime (i.e., the spatial orientation of the phases). The method also provides for determination of pressure drops in two-phase partial-gravity flows by use of a generalization of the classical Moody chart (previously applicable to single-phase flow only). The report presents experimental data from Mars- and Moon-activity experiments that appear to demonstrate the validity of this method.
Daly, Tamara; Banerjee, Albert; Armstrong, Pat; Armstrong, Hugh; Szebehely, Marta
2011-06-01
We conducted a mixed-methods study (the focus of this article) to understand how workers in long-term care facilities experienced working conditions. We surveyed unionized care workers in Ontario (n = 917); we also surveyed workers in three Canadian provinces (n = 948) and four Scandinavian countries (n = 1,625). In post-survey focus groups, we presented respondents with survey questions and descriptive statistical findings, and asked them: "Does this reflect your experience?" Workers reported time pressures and the frequency of experiences of physical violence and unwanted sexual attention, as we explain. We discuss how iteratively mixing qualitative and quantitative methods to triangulate survey and focus group results led to expected data convergence and to unexpected data divergence that revealed a normalized culture of structural violence in long-term care facilities. We discuss how the finding of structural violence emerged and also the deeper meaning, context, and insights resulting from our combined methods.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
Experimental Design of a UCAV-Based High-Energy Laser Weapon
2016-12-01
propagation. The Design of Experiments (DOE) methodology is then applied to determine the significance of the UCAV-HEL design parameters and their effect on the… [Table-of-contents fragment: A. Design of Experiments Methodology; B. Operational Concept]
ERIC Educational Resources Information Center
Hamid, Jamaliah Abdul; Krauss, Steven E.
2013-01-01
Do students' experiences on university campuses cultivate motivation to lead or a sense of readiness to lead that does not necessarily translate to active leadership? To address this question, a study was conducted with 369 undergraduates from Malaysia. Campus experience was more predictive of leadership readiness than motivation. Student…
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant-metric iterations, which do not involve updating the preconditioner, and variable-metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
Recording 2-D Nutation NQR Spectra by Random Sampling Method
Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw
2010-01-01
The method of random sampling was introduced for the first time in the nutation nuclear quadrupole resonance (NQR) spectroscopy where the nutation spectra show characteristic singularities in the form of shoulders. The analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained and the condition for resolving the spectral singularities for small values of an asymmetry parameter η was determined. Our results show that the method of random sampling of a nutation interferogram allows significant reduction of time required to perform a 2-D nutation experiment and does not worsen the spectral resolution. PMID:20949121
NASA Astrophysics Data System (ADS)
Zarkevich, Nikolai A.; Johnson, Duane D.
2015-03-01
Materials under pressure may exhibit critical electronic and structural transitions that affect equations of state, as known for superconductors and the magneto-structural transformations of iron, with both geophysical and planetary implications. While experiments often use constant-pressure (diamond-anvil cell, DAC) measurements, many theoretical results address constant-volume transitions, which avoid issues with magnetic collapse but cannot be directly compared to experiment. We establish a modified solid-state nudged elastic band (MSS-NEB) method to handle magnetic systems that may exhibit moment (and volume) collapse during transformation. We apply it to the pressure-induced transformation in iron between the low-pressure body-centered cubic (bcc) and the high-pressure hexagonal close-packed (hcp) phases, find the bcc-hcp equilibrium coexistence pressure and a transitional pathway, and compare to shock and DAC experiments. We use methods developed with support by the U.S. Department of Energy (DE-FG02-03ER46026 and DE-AC02-07CH11358). Ames Laboratory is operated for the DOE by Iowa State University under contract DE-AC02-07CH11358.
Surface Passivation in Empirical Tight Binding
NASA Astrophysics Data System (ADS)
He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann
2016-03-01
Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only, and 2) methods that implicitly incorporate passivation, which do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameters. This method is applied to a Si quantum well and a Si ultra-thin body transistor oxidized with SiO2 in several oxidation configurations. Comparison with ab-initio results and experiments verifies the presented method. Oxidation configurations that severely hamper the transistor performance are identified. It is also shown that the commonly used implicit H atom passivation overestimates the transistor performance.
ERIC Educational Resources Information Center
Roth, Wolff-Michael
2015-01-01
For many students, the experience with science tends to be alienating and uprooting. In this study, I take up Simone Weil's concepts of "enracinement" (rooting) and "déracinement" (uprooting) to theorize the root of this alienation, the confrontation between children's familiarity with the world and unfamiliar/strange…
NASA Astrophysics Data System (ADS)
Anis Atikah, Nurul; Yeng Weng, Leong; Anuar, Adzly; Chien Fat, Chau; Sahari, Khairul Salleh Mohamed; Zainal Abidin, Izham
2017-10-01
Currently, the methods of actuating robotic-based prosthetic limbs are moving away from bulky actuators toward more fluid materials such as artificial muscles. The main disadvantages of these artificial muscles are their high manufacturing cost, low force generation, and cumbersome, complex controls. A recently discovered alternative, the super coiled polymer (SCP), proved to have low manufacturing cost, high force generation, and compact, simple controls. Nevertheless, non-linear control behaviour persists because heat-based actuation exhibits hysteresis, which makes position control difficult. Electrically conductive heating allows very quick heating, but not quick cooling. This research tries to solve the problem by using Peltier devices, which can effectively both heat and cool the SCP, enabling more precise control. Unlike coiled resistive heating, the Peltier device does not actively introduce more energy into a volume of space; instead, it acts as a heat pump. Experiments were conducted to test the feasibility of using a Peltier device as an actuating method on nylon fishing strings of different diameters. Based on these experiments, the performance characteristics of the strings were plotted, which could be used to control the actuation of the string efficiently in the future.
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, Computer Aided Engineering was used for injection moulding simulation. The Design of Experiments (DOE) method was utilized according to a Latin square orthogonal array. The relationship between the injection moulding parameters and warpage was identified based on the experimental data. Response Surface Methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding was thereby largely improved, and the results show increased accuracy and reliability. The proposed method of combining RSM and GA also contributes to minimising warpage.
NASA Technical Reports Server (NTRS)
1995-01-01
This guidebook, the second of a two-volume series, is intended to facilitate the transfer of formal methods to the avionics and aerospace community. The first volume concentrates on administrative and planning issues [NASA-95a], and the second volume focuses on the technical issues involved in applying formal methods to avionics and aerospace software systems. Hereafter, the term "guidebook" refers exclusively to the second volume of the series. The title of this second volume, A Practitioner's Companion, conveys its intent. The guidebook is written primarily for the nonexpert and requires little or no prior experience with formal methods techniques and tools. However, it does attempt to distill some of the more subtle ingredients in the productive application of formal methods. To the extent that it succeeds, those conversant with formal methods will also find the guidebook useful. The discussion is illustrated through the development of a realistic example, relevant fragments of which appear in each chapter. The guidebook focuses primarily on the use of formal methods for analysis of requirements and high-level design, the stages at which formal methods have been most productively applied. Although much of the discussion applies to low-level design and implementation, the guidebook does not discuss issues involved in the later life cycle application of formal methods.
Emilio Segrè, the Antiproton, Technetium, and Astatine
of U238, DOE Technical Report, 1942; Spontaneous Fission, DOE Technical Report, November 1950; Observation of Antiprotons, DOE Technical Report, October 1955; Antiprotons, DOE Technical Report, November 1955; The Antiproton-Nucleon Annihilation Process (Antiproton Collaboration Experiment), DOE Technical…
Does Choice of Multicriteria Method Matter? An Experiment in Water Resources Planning
NASA Astrophysics Data System (ADS)
Hobbs, Benjamin F.; Chankong, Vira; Hamadeh, Wael; Stakhiv, Eugene Z.
1992-07-01
Many multiple criteria decision making methods have been proposed and applied to water planning. Their purpose is to provide information on tradeoffs among objectives and to help users articulate value judgments in a systematic, coherent, and documentable manner. The wide variety of available techniques confuses potential users, causing inappropriate matching of methods with problems. Experiments in which water planners apply more than one multicriteria procedure to realistic problems can help dispel this confusion by testing method appropriateness, ease of use, and validity. We summarize one such experiment where U.S. Army Corps of Engineers personnel used several methods to screen urban water supply plans. The methods evaluated include goal programming, ELECTRE I, additive value functions, multiplicative utility functions, and three techniques for choosing weights (direct rating, indifference tradeoff, and the analytical hierarchy process). Among the conclusions we reach are the following. First, experienced planners generally prefer simpler, more transparent methods. Additive value functions are favored. Yet none of the methods are endorsed by a majority of the participants; many preferred to use no formal method at all. Second, there is strong evidence that rating, the most commonly applied weight selection method, is likely to lead to weights that fail to represent the trade-offs that users are willing to make among criteria. Finally, we show that decisions can be as sensitive, or more so, to the method used as to the person applying it. Therefore, if who chooses is important, then so too is how a choice is made.
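The additive value function approach that the planners favored can be sketched in a few lines (the criteria, weights, and plan scores below are hypothetical, for illustration only): each alternative's single-criterion values, scaled to [0, 1], are combined as a weighted sum, and alternatives are ranked by the result.

```python
def additive_value(values, weights):
    """Weighted-sum score of one alternative.

    values:  dict criterion -> single-criterion value scaled to [0, 1]
    weights: dict criterion -> importance weight (normalized internally)
    """
    total = sum(weights.values())
    return sum(weights[c] * values[c] for c in values) / total

def rank_alternatives(alternatives, weights):
    """Return alternative names ordered from best score to worst."""
    return sorted(alternatives,
                  key=lambda a: additive_value(alternatives[a], weights),
                  reverse=True)
```

Note that the ranking is only as defensible as the weights, which is exactly why the experiment's finding about weight-selection methods (rating versus indifference tradeoff) matters in practice.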
Hayashi, Yoshihiro; Kosugi, Atsushi; Miura, Takahiro; Takayama, Kozo; Onuki, Yoshinori
2018-01-01
The influence of granule size on simulation parameters and residual shear stress in tablets was determined by combining the finite element method (FEM) with a design of experiments (DoE). Lactose granules were prepared using a wet granulation method with a high-shear mixer and sorted into small and large granules using sieves. To simulate the tableting process using the FEM, parameters simulating each granule were optimized using a DoE and a response surface method (RSM). The compaction behavior of each granule simulated by FEM was in reasonable agreement with the experimental findings. Small granules produced higher coefficients of friction between powder and die/punch (μ) and lower internal friction angles (α_y). RSM revealed that the die wall force was affected by α_y. On the other hand, the pressure transmissibility rate of the punches was affected not only by the α_y value, but also by μ. The FEM revealed that the residual shear stress was greater for small granules than for large granules. These results suggest that the inner structure of a tablet comprising small granules was less homogeneous than that comprising large granules. To evaluate the contribution of the simulation parameters to residual stress, these parameters were assigned to a fractional factorial design and an ANOVA was applied. The result indicated that μ was the critical factor influencing residual shear stress. This study demonstrates the importance of combining simulation and statistical analysis to gain a deeper understanding of the tableting process.
Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method
NASA Astrophysics Data System (ADS)
Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing
2017-05-01
Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.
An in vitro study of the effects of low-level laser radiation on human blood
NASA Astrophysics Data System (ADS)
Siposan, Dan G.; Lukacs, Adalbert
2003-12-01
In recent years, the study of the effects of LLLR on blood has been considered a subject of great importance in elucidating the mechanisms of interaction between LLLR and biological tissues. Different methods of therapy by blood irradiation have been developed and used for clinical purposes with beneficial effects. This study investigates some in vitro effects of LLLR on selected rheologic indices of human blood. After establishing whether or not damaging effects could appear due to laser irradiation of the blood, we tried to find a new method for rejuvenating blood preserved in MacoPharma-type bags. Blood samples were obtained from adult regular donors (volunteers). A HeNe laser and laser diodes were used as radiation sources, over a wide range of wavelengths, power densities, doses, and other parameters of the irradiation protocol. In the first series of experiments we established that LLLR does not alter fresh blood from healthy donors, for doses between 0 and 10 J/cm3 and power densities between 30 and 180 mW/cm3. In the second series of experiments we established that LLLR does have, under some specific conditions, a revitalizing effect on the erythrocytes in preserved blood. We concluded that laser irradiation of preserved blood, following a selected irradiation protocol, could be used as a new method to improve the performance of preservation: prolonging the storage period and rejuvenating blood before transfusion.
Stripe nonuniformity correction for infrared imaging system based on single image optimization
NASA Astrophysics Data System (ADS)
Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai
2018-06-01
Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. First, the gray-level distribution of the stripe nonuniformity is analyzed, and a penalty function is constructed from the difference between the horizontal and vertical gradients. With a weight function, the penalty function is optimized to obtain the corrected image. Compared with other single-frame approaches, experiments show that the proposed method performs better in both subjective and objective analysis and does less damage to edges and details. Meanwhile, the proposed method runs faster. We have also discussed the differences between the proposed idea and multi-frame methods. Our method is finally well applied in a hardware system.
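The paper's gradient-penalty optimisation is beyond a short sketch, but the basic shape of single-frame destriping can be illustrated with a much simpler column-offset correction (a crude stand-in, not the proposed method; the function name and kernel width are assumptions): each column's mean is pulled toward a smoothed profile of column means, so high-frequency vertical stripes are removed while slowly varying horizontal structure survives.

```python
import numpy as np

def destripe_column_offsets(img, kernel_width=9):
    """Crude single-frame stripe correction for vertical stripes.

    Subtracts each column's deviation of its mean from a smoothed
    column-mean profile; smoothing keeps genuine scene variation.
    """
    col_means = img.mean(axis=0)
    kernel = np.ones(kernel_width) / kernel_width
    smoothed = np.convolve(col_means, kernel, mode="same")
    return img - (col_means - smoothed)[None, :]
```

On a synthetic flat scene with alternating per-column offsets, the interior columns come back close to the true level; a real correction must of course handle scene gradients, which is what the differential constraint above is for.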
Determination of antenna factors using a three-antenna method at open-field test site
NASA Astrophysics Data System (ADS)
Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao
1992-09-01
Recently, NIST has used the three-antenna method for calibration of the antenna factor of an antenna used for EMI measurements. This method does not require the specially designed standard antennas that are necessary in the standard field method or the standard antenna method, and it can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of this method and evaluates the precision of the antenna-factor calibration. It is found that the main source of error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
NASA Glenn Research Center Experience Using DOE Midwest Region Super ESPC
NASA Technical Reports Server (NTRS)
Zala, Laszlo F.
2000-01-01
The energy crisis of 1973 prompted the Federal Government and private industry to look into alternative methods to save energy. At the same time the constant reduction of operations and maintenance funds during the last 5 years forced Glenn Research Center (GRC) to look for alternative funding sources to meet the mandate to reduce energy consumption. The Super Energy Savings Performance Contract (ESPC) was chosen as a viable source of facility improvement funding that can create larger project scope and help replace aging, inefficient equipment. This paper describes Glenn's participation in the Department of Energy (DOE) Super ESPC program. This program provided Glenn cost savings in the performance of energy audits, preparation of documents, evaluation of proposals, and selection of energy service company (ESCO).
Sensorless battery temperature measurements based on electrochemical impedance spectroscopy
NASA Astrophysics Data System (ADS)
Raijmakers, L. H. J.; Danilov, D. L.; van Lammeren, J. P. M.; Lammers, M. J. G.; Notten, P. H. L.
2014-02-01
A new method is proposed to measure the internal temperature of (Li-ion) batteries. Based on electrochemical impedance spectroscopy measurements, an intercept frequency (f0) can be determined which is exclusively related to the internal battery temperature. The intercept frequency is defined as the frequency at which the imaginary part of the impedance is zero (Zim = 0), i.e. where the phase shift between the battery current and voltage is absent. The advantage of the proposed method is twofold: (i) no hardware temperature sensors are required anymore to monitor the battery temperature and (ii) the method does not suffer from heat transfer delays. Mathematical analysis of the equivalent electrical-circuit, representing the battery performance, confirms that the intercept frequency decreases with rising temperatures. Impedance measurements on rechargeable Li-ion cells of various chemistries were conducted to verify the proposed method. These experiments reveal that the intercept frequency is clearly dependent on the temperature and does not depend on State-of-Charge (SoC) and aging. These impedance-based sensorless temperature measurements are therefore simple and convenient for application in a wide range of stationary, mobile and high-power devices, such as hybrid- and full electric vehicles.
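Locating the intercept frequency f0 from a measured spectrum reduces to finding where Im(Z) changes sign. A minimal sketch (the function name and the sample data are illustrative assumptions, not from the paper): scan the imaginary part for a sign change and linearly interpolate between the bracketing samples.

```python
import numpy as np

def intercept_frequency(freqs, z_im):
    """Estimate f0, the frequency where Im(Z) = 0 (zero phase shift).

    freqs: measurement frequencies in ascending order
    z_im:  imaginary part of the impedance at each frequency
    """
    freqs = np.asarray(freqs, dtype=float)
    z_im = np.asarray(z_im, dtype=float)
    crossings = np.where(np.diff(np.sign(z_im)) != 0)[0]
    if crossings.size == 0:
        raise ValueError("Im(Z) does not cross zero in the measured band")
    i = crossings[0]
    # Linear interpolation between the two samples bracketing the zero.
    f1, f2 = freqs[i], freqs[i + 1]
    y1, y2 = z_im[i], z_im[i + 1]
    return f1 - y1 * (f2 - f1) / (y2 - y1)
```

Mapping the resulting f0 to a temperature would then use a calibration curve measured per cell chemistry, consistent with the abstract's observation that f0 depends on temperature but not on SoC or aging.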
ERIC Educational Resources Information Center
Bradford, Gyndolyn
2017-01-01
The idea of mentoring in higher education is considered a good thing for students and faculty. What is missing in the research is how mentoring influences and shapes the student experience, whether mentoring helps retention, and how it contributes to student development (Crisp, Baker, Griffin, Lunsford, & Pifer, 2017). The mentoring relationship…
Guo, Lili; Qi, Junwei; Xue, Wei
2018-01-01
This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal target or an insulator target in the underwater environment by using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by use of a matrix equation. In this method, an electric dipole source, as part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data, and the method does not need to obtain the field component in each direction, unlike the conventional fields-based localization method, so it can be easily implemented in practical engineering applications. A simulation model and a physical experiment were constructed. The simulation and experiment results provide accurate positioning performance, verifying the effectiveness of the proposed localization method in underwater target locating. PMID:29439495
Makrakis, Vassilios; Kostoulas-Makrakis, Nelly
2016-02-01
Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development. Copyright © 2015 Elsevier Ltd. All rights reserved.
Testing the Rotation Stage in the ARIADNE Axion Experiment
NASA Astrophysics Data System (ADS)
Dargert, Jordan; Lohmeyer, Chloe; Harkness, Mindy; Cunningham, Mark; Fosbinder-Elkins, Harry; Geraci, Andrew; Ariadne Collaboration
2017-04-01
The Axion Resonant InterAction Detection Experiment (ARIADNE) will search for the Peccei-Quinn (PQ) axion, a hypothetical particle that is a dark matter candidate. Using a new technique based on Nuclear Magnetic Resonance, this new method can probe well into the allowed PQ axion mass range. Additionally, it does not rely on cosmological assumptions, meaning that the PQ Axion would be sourced locally. Our project relies on the stability of a rotating segmented source mass and superconducting magnetic shielding. Superconducting shielding is essential for limiting magnetic noise, thus allowing a feasible level of sensitivity required for PQ Axion detection. Progress on testing the stability of the rotary mechanism will be reported, and the design for the superconducting shielding in the experiment will be discussed, along with plans for moving the experiment forward. NSF Grant PHY-1509176.
Level Lifetime Measurements in ^150Sm
NASA Astrophysics Data System (ADS)
Barton, C. J.; Krücken, R.; Beausang, C. W.; Caprio, M. A.; Casten, R. F.; Cooper, J. R.; Hecht, A. A.; Newman, H.; Novak, J. R.; Pietralla, N.; Wolf, A.; Zyromski, K. E.; Zamfir, N. V.; Börner, H. G.
2000-10-01
Shape/phase coexistence and the evolution of structure in the region around ^152Sm have recently been of great interest. Experiments performed at WNSL, Yale University, measured the lifetimes of low-spin states in a target of ^150Sm with the recoil distance method (RDM) and the Doppler-shift attenuation method (DSAM). The low-spin states, both yrast and non-yrast, were populated via Coulomb excitation with a beam of ^16O. The experiments were performed with the NYPD plunger in conjunction with the SPEEDY γ-ray array. The SCARY array of solar cells was used to detect backward-scattered projectiles, selecting forward-flying Coulomb-excited target nuclei. The measured lifetimes yield, for example, B(E2) values for transitions such as the 2^+_2 → 2^+_1 and the 2^+_3 → 0^+_1. Data from the RDM measurement and the DSAM experiment will be presented. This work was supported by the US DOE under grants DE-FG02-91ER-40609 and DE-FG02-88ER-40417.
[Reinvestment in health: fundamentals, clarifications, experiences and perspectives].
Campillo-Artero, Carlos; Bernal-Delgado, Enrique
2013-01-01
During the economic crisis, the pressure to reduce health services expenditure through isolated cuts is greater than the pressure to increase the efficiency of these services. Information, methods and experiences to improve health outcomes with limited resources are available, and a number of countries have been applying measures to achieve this goal. One of these measures is disinvestment. Given that this tactic is necessary but also intricate, contentious and confusing, this article tries to clarify its meaning, place it in its correct context, and describe the methods and criteria used to identify and prioritize candidate medical technologies for disinvestment. The experiences of Spain, New Zealand, Australia, Canada, the United Kingdom and Italy in this endeavor are reviewed, as well as the obstacles faced by these countries when disinvesting and their mid-term perspectives. Crisis or no crisis, ignorance is no excuse for failing to apply it. Efforts to improve social efficiency are a permanent obligation of the national health system. Copyright © 2011 SESPAS. Published by Elsevier Espana. All rights reserved.
Psychiatric education in the correctional setting: challenges and opportunities.
Holoyda, Brian J; Scott, Charles L
2017-02-01
As the need for mental healthcare services within correctional settings in the US increases, so does the need for a mental health workforce that is motivated to work within such systems. One potentially effective method by which to increase the number of psychiatrists working in jails, prisons, and parole clinics is to provide exposure to these environments during their training. Correctional settings can serve as unique training sites for medical students and psychiatric residents and fellows. Such training experiences can provide a host of benefits to both trainees and staff within the correctional mental health system. Alongside many potential benefits exist substantial potential barriers to coordinating correctional training experiences, including both programme directors' and residents' concerns regarding safety and enjoyment and negative perceptions of inmate and prisoner patients. The establishment of academic affiliations with correctional institutions and didactic instruction on commonly encountered clinical issues with inmate populations may be methods of diffusing these concerns. Improving residents' and fellows' training experiences offers a hope for increasing the attractiveness of a career in correctional psychiatry.
How does delivery method influence factors that contribute to women's childbirth experiences?
Carquillat, Pierre; Boulvain, Michel; Guittier, Marie-Julia
2016-12-01
Whether delivery method influences factors contributing to women's childbirth experience remains debated. We compared subjective childbirth experience across delivery methods. This study used a cross-sectional design. The setting comprised two university hospitals: one in Geneva, Switzerland and one in Clermont-Ferrand, France. A total of 291 primiparous women were recruited from July 2014 to January 2015 during their stay in the maternity wards. The mean age of the participants was 30.8 (SD=4.7) years, and most were Swiss or European (86%). The 'Questionnaire for Assessing Childbirth Experience' was sent between four and six weeks after delivery. Clinimetric and psychometric approaches were used to assess childbirth experience according to delivery method. The mean scores of the four questionnaire dimensions varied significantly by delivery method. 'First moments with the newborn' was experienced more negatively by women in the caesarean section group than by those who delivered vaginally (p<0.001). Similar results were observed for the 'emotional status' dimension, as women who delivered by caesarean section felt more worried, less secure, and less confident (p=0.001). 'Relationship with staff' differed significantly between groups (p=0.047), with more negative results in the unexpected-medical-intervention groups (i.e., emergency caesarean section and instrumental vaginal delivery). Women's 'feelings at one month post partum' in the emergency caesarean section group were less satisfactory than in the other groups. Delivery method and other obstetric variables explained only a small proportion of the total variance in the global scores (adjusted R^2 = 0.18), which emphasizes the importance of subjective factors in women's childbirth experience. A comparison of the best expected positive responses to each item (clinimetric approach) yielded results useful for clinicians.
This research indicated that delivery method influenced key factors (psychometric approach) of the childbirth experience. Delivery method should not be considered alone, and health professionals should focus on what is important to women in order to foster a more positive experience. In addition, women who have had an emergency caesarean section require special attention during the post partum period. Copyright © 2016 Elsevier Ltd. All rights reserved.
Experiences of immigrant women who self-petition under the Violence Against Women Act.
Ingram, Maia; McClelland, Deborah Jean; Martin, Jessica; Caballero, Montserrat F; Mayorga, Maria Theresa; Gillespie, Katie
2010-08-01
Undocumented immigrant women who are abused and living in the United States are isolated in a foreign country, in constant fear of deportation, and feel at the mercy of their spouse to gain legal status. To ensure that immigration law does not trap women in abusive relationships, the Violence Against Women Act (VAWA, 1994) enabled immigrant women to self-petition for legal status. Qualitative research methods were used in this participatory action research to investigate the experiences of Mexican immigrant women filing VAWA self-petitions. Emotional, financial, and logistic barriers to applying are identified, and recommendations for practice, research, and policy are provided.
The Patient Experience With Shared Decision Making: A Qualitative Descriptive Study.
Truglio-Londrigan, Marie
2015-01-01
Shared decision making is a process characterized by a partnership between a nurse and a patient. The existence of a relationship does not ensure shared decision making. Little is known about what nurses need to know and do for this experience to take place. A qualitative descriptive study was implemented using Colaizzi's method. Semistructured interviews were held with patients, and 3 themes were uncovered. The findings suggest that a nurse's conduct aimed at drawing patients in and inviting them to participate in a conversation leads toward shared decisions. Infusion nurses may find this information useful as they engage their patients in shared decisions.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-01-01
Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with a single set of design of experiments (DoE) performed on only the highest-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg was used as a model manufacturing process. The DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) for the theophylline 100-mg tablet. The response surfaces for the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) of the 100-mg tablet, together with the design space and its reliability, were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed using the 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions for the lower-strength tablets showed that the corrected design space made it possible to predict the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces for tablets with multiple strengths.
The method for homography estimation between two planes based on lines and points
NASA Astrophysics Data System (ADS)
Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry
2018-04-01
The paper considers the problem of estimating the transform connecting two images of one planar object. A RANSAC-based method is proposed for calculating the parameters of the projective transform using point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The presented results show that the algorithm's convergence rate is significantly higher when actual lines are used instead of the points of line intersections. When both lines and feature points are used, the convergence rate is shown not to depend on the ratio of lines to feature points in the input dataset.
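As context for the point-based baseline the abstract compares against, here is a minimal sketch of homography estimation from point correspondences via the Direct Linear Transform (DLT). It is not the paper's algorithm (which adds line correspondences inside a RANSAC loop); the data below are synthetic.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography from >= 4 point correspondences by
    stacking the DLT constraints and taking the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # right singular vector of smallest value
    return H / H[2, 2]            # fix the projective scale

# Synthetic check: map points through a known H and recover it.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [37, 62]], float)
h = (H_true @ np.c_[src, np.ones(len(src))].T).T
dst = h[:, :2] / h[:, 2:3]
H_est = homography_dlt(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))
```

With noisy correspondences, wrapping this estimator in a RANSAC loop (sample four correspondences, score inliers, refit) is the standard approach the abstract builds on.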
Time multiplexing based extended depth of focus imaging.
Ilovitsh, Asaf; Zalevsky, Zeev
2016-01-01
We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectrum domain through the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequencies, and thereby the duplication positions, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations and validated by a laboratory experiment.
ERIC Educational Resources Information Center
Crook, Nicola; Adams, Malcolm; Shorten, Nicola; Langdon, Peter E.
2016-01-01
Background: This study investigated whether a personalized life story book and rummage box enhanced well-being and led to changes in behaviour for people with Down syndrome (DS) who have dementia. Materials and Methods: A randomized single case series design was used with five participants who had DS and a diagnosis of dementia. Participants were…
Flight Screening Program Effects on Attrition in Undergraduate Pilot Training
1987-08-01
the final five lesson grades (8-12), suggesting that a UPT screening decision could be made at an earlier stage of FSP than is the current practice...Does FSP Provide An Opportunity For SIE? ....... .... 6 Training/Experience Effects of FSP: Does the FSP Give a Training/ Experience Benefit in UPT...effect. FSP Screening: Does FSP Provide an Opportunity for SIE? Some individuals who have had no previous flying experience (other than as passengers) may
Power dissipated measurement of an ultrasonic generator in a viscous medium by flowmetric method.
Mancier, Valérie; Leclercq, Didier
2008-09-01
A new flowmetric method for measuring the power dissipated by an ultrasound generator in an aqueous medium was developed in previous works and described in a preceding paper [V. Mancier, D. Leclercq, Ultrasonics Sonochemistry 14 (2007) 99-106]. The work presented here extends this method to a high-viscosity liquid (glycerol), for which classical calorimetric measurements are rather difficult. As expected, it is shown that the dissipated power increases with the medium viscosity. It was also found that this flowmetric method gives good results for various quantities of liquid and positions of the sonotrode in the tank. Moreover, the large variation of viscosity due to the heating of the liquid during experiments does not disturb the flow measurements.
DoE optimization of a mercury isotope ratio determination method for environmental studies.
Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico
2016-05-15
By using the experimental design (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by means of cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotopic ratio measurements with suitable internal precision. By running 32 experiments, the influence of the mercury and thallium internal standard concentrations, total measuring time and sample flow rate was evaluated. The method was optimized varying the Hg concentration between 2 and 20 ng g(-1). The model identifies correlations among the parameters that affect measurement precision and predicts suitable sample measurement precision for Hg concentrations of 5 ng g(-1) and upwards. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) coming from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass-dependent fractionation (MDF) and mass-independent fractionation (MIF) phenomena in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing the use of Hg isotopic ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
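Generating the run list for a screening DoE like the one above is mechanical. The sketch below builds a two-level full factorial over the four factors the abstract names; the factor levels are illustrative assumptions, not the paper's actual settings (and the paper's 32 runs likely include replicates or extra levels beyond the 2^4 = 16 shown here).

```python
from itertools import product

# Hypothetical two-level settings for the four factors studied
# (levels are illustrative, not taken from the paper).
factors = {
    "Hg_conc_ng_g": (2, 20),
    "Tl_conc_ng_g": (5, 50),
    "measure_time_s": (300, 600),
    "flow_uL_min": (50, 100),
}

# Cartesian product of the levels -> one dict per experimental run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 2^4 = 16 runs for a two-level full factorial
```

Fitting a linear model with interaction terms to the measured precision at these 16 runs is what lets the DoE "find correlations among the parameters".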
A novel method for state of charge estimation of lithium-ion batteries using a nonlinear observer
NASA Astrophysics Data System (ADS)
Xia, Bizhong; Chen, Chaoren; Tian, Yong; Sun, Wei; Xu, Zhihui; Zheng, Weiwei
2014-12-01
The state of charge (SOC) is important for the safety and reliability of battery operation since it indicates the remaining capacity of a battery. However, as the internal state of each cell cannot be directly measured, the value of the SOC has to be estimated. In this paper, a novel method for SOC estimation in electric vehicles (EVs) using a nonlinear observer (NLO) is presented. One advantage of this method is that it does not need complicated matrix operations, so the computation cost can be reduced. As a key step in design of the nonlinear observer, the state-space equations based on the equivalent circuit model are derived. The Lyapunov stability theory is employed to prove the convergence of the nonlinear observer. Four experiments are carried out to evaluate the performance of the presented method. The results show that the SOC estimation error converges to 3% within 130 s while the initial SOC error reaches 20%, and does not exceed 4.5% while the measurement suffers both 2.5% voltage noise and 5% current noise. Besides, the presented method has advantages over the extended Kalman filter (EKF) and sliding mode observer (SMO) algorithms in terms of computation cost, estimation accuracy and convergence rate.
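The observer structure described in the abstract, which integrates the current and corrects the SOC estimate with the terminal-voltage error, can be illustrated with a toy battery model. Everything below (the linear OCV curve, capacity, resistance, and gain) is an illustrative assumption, not the paper's model or parameters.

```python
# Toy first-order battery model: SOC integrates the load current,
# terminal voltage = OCV(SOC) - R*i. The nonlinear observer adds a
# correction proportional to the voltage prediction error.
Q = 3600.0                       # capacity in ampere-seconds (assumed)
R = 0.05                         # internal resistance, ohm (assumed)
ocv = lambda s: 3.0 + 1.2 * s    # toy linearised OCV map (assumed)
L = 0.5                          # observer gain (assumed)

dt, T = 1.0, 600.0
i_load = 1.0                     # constant 1 A discharge
soc_true, soc_hat = 0.8, 0.6     # start with 20% initial SOC error
for _ in range(int(T / dt)):
    v_meas = ocv(soc_true) - R * i_load   # "measured" terminal voltage
    v_hat = ocv(soc_hat) - R * i_load     # observer's predicted voltage
    soc_true += dt * (-i_load / Q)                          # plant
    soc_hat += dt * (-i_load / Q + L * (v_meas - v_hat))    # observer

print(abs(soc_hat - soc_true))   # residual estimation error
```

With the linear OCV map the error dynamics contract by a factor (1 - L*1.2*dt) per step, so the initial 20% error decays quickly; the paper's observer handles a genuinely nonlinear OCV and proves convergence via Lyapunov theory.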
Reliability Analysis of Retaining Walls Subjected to Blast Loading by Finite Element Approach
NASA Astrophysics Data System (ADS)
GuhaRay, Anasua; Mondal, Stuti; Mohiuddin, Hisham Hasan
2018-02-01
Conventional design methods adopt a factor of safety based on practice and experience, which is deterministic in nature. The limit state method, though not completely deterministic, does not take into account the effect of design parameters that are inherently variable, such as cohesion, angle of internal friction, etc. for soil. Reliability analysis provides a means of incorporating these variations into the analysis and hence results in a more realistic design. Several studies have been carried out on the reliability of reinforced concrete walls and masonry walls under explosions. Reliability analysis of retaining structures against various kinds of failure has also been done. However, very few research works are available on the reliability analysis of retaining walls subjected to blast loading. Thus, the present paper considers the effect of variation of geotechnical parameters when a retaining wall is subjected to blast loading. It is found, however, that the variation of geotechnical random variables does not have a significant effect on the stability of retaining walls subjected to blast loading.
Woods, Amanda M; Bouton, Mark E
2008-12-01
Five experiments with rat subjects compared the effects of immediate and delayed extinction on the durability of extinction learning. Three experiments examined extinction of fear conditioning (using the conditioned emotional response method), and two experiments examined extinction of appetitive conditioning (using the food-cup entry method). In all experiments, conditioning and extinction were accomplished in single sessions, and retention testing took place 24 h after extinction. In both fear and appetitive conditioning, immediate extinction (beginning 10 min after conditioning) caused a faster loss of responding than delayed extinction (beginning 24 h after conditioning). However, immediate extinction was less durable than delayed extinction: There was stronger spontaneous recovery during the final retention test. There was also substantial renewal of responding when the physical context was changed between immediate extinction and testing (Experiment 1). The results suggest that, in these two widely used conditioning preparations, immediate extinction does not erase or depotentiate the original learning, and instead creates a less permanent reduction in conditioned responding. Results did not support the possibility that the strong recovery after immediate extinction was due to a mismatch in the recent "context" provided by the presence or absence of a recent conditioning experience. Several other accounts are considered.
A robust two-way semi-linear model for normalization of cDNA microarray data
Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento
2005-01-01
Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
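The intensity-dependent (LOWESS-style) normalization that the TW-SLM is benchmarked against can be sketched as follows: compute the log-ratio M and mean log-intensity A for each spot, fit a smooth trend of M over A, and subtract it. A moving-median smoother stands in for LOWESS here, and the two-channel data are simulated with a constant dye bias; this is a generic illustration, not the authors' TW-SLM.

```python
import numpy as np

def ma_normalize(red, green, window=101):
    """Intensity-dependent normalization of two-channel intensities:
    subtract a smoothed trend of M (log-ratio) over A (log-intensity).
    A running median stands in for the usual LOWESS smoother."""
    M = np.log2(red) - np.log2(green)
    A = 0.5 * (np.log2(red) + np.log2(green))
    order = np.argsort(A)                 # process spots by intensity
    Ms = M[order]
    trend = np.empty_like(M)
    half = window // 2
    for k in range(len(Ms)):
        lo, hi = max(0, k - half), min(len(Ms), k + half + 1)
        trend[order[k]] = np.median(Ms[lo:hi])
    return M - trend

rng = np.random.default_rng(0)
g = rng.lognormal(8, 1, 2000)                     # green channel
bias = 0.3                                        # simulated dye bias
r = g * 2 ** (bias + rng.normal(0, 0.1, 2000))    # red = biased green
M_norm = ma_normalize(r, g)
print(np.median(np.log2(r) - np.log2(g)), np.median(M_norm))
```

Note how this encodes the assumptions the abstract criticizes: subtracting the local trend forces the typical log-ratio to zero, which is only valid if most genes are not differentially expressed.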
NASA Astrophysics Data System (ADS)
Chartosias, Marios
Acceptance of Carbon Fiber Reinforced Polymer (CFRP) structures requires a robust surface preparation method with improved process controls capable of ensuring high bond quality. Surface preparation in a production clean room environment prior to applying adhesive for bonding would minimize the risk of contamination and reduce cost. Plasma treatment is a robust surface preparation process capable of being applied in a production clean room environment, with process parameters that are easily controlled and documented. Repeatable and consistent processing is enabled through the development of a process parameter window using techniques such as Design of Experiments (DOE) tailored to specific adhesive and substrate bonding applications. Insight from the respective plasma treatment Original Equipment Manufacturers (OEMs), together with screening tests, distinguished critical process factors from non-factors and set the associated factor levels prior to execution of the DOE. Results from mode I Double Cantilever Beam (DCB) testing per the ASTM D 5528 [1] standard and DOE statistical analysis software are used to produce a regression model and determine appropriate optimum settings for each factor.
34 CFR 647.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false How does the Secretary evaluate prior experience? 647.22 Section 647.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION RONALD E. MCNAIR POSTBACCALAUREATE ACHIEVEMENT...
Practical aspects of running DOE for improving growth media for in vitro plants
USDA-ARS?s Scientific Manuscript database
Experiments using DOE software to improve plant tissue culture growth medium are complicated and require complex setups. Once the experimental design is set and the treatment points calculated, media sheets and mixing charts must be developed. Since these experiments require three passages on the sa...
Power-Stepped HF Cross-Modulation Experiments: Simulations and Experimental Observations
NASA Astrophysics Data System (ADS)
Greene, S.; Moore, R. C.
2014-12-01
High frequency (HF) cross-modulation experiments are a well-established means for probing the HF-modified characteristics of the D-region ionosphere. The interaction between the heating wave and the probing pulse depends on the ambient and modified conditions of the D-region ionosphere. Cross-modulation observations are employed as a measure of the HF-modified refractive index. We employ an optimized version of Fejer's method that we developed during previous experiments. Experiments were performed in March 2013 at the High Frequency Active Auroral Research Program (HAARP) observatory in Gakona, Alaska. During these experiments, the power of the HF heating signal was incrementally increased in order to determine the dependence of cross-modulation on HF power. We found that a simple power law relationship does not hold at high power levels, similar to previous ELF/VLF wave generation experiments. In this paper, we critically compare these experimental observations with the predictions of a numerical ionospheric HF heating model and demonstrate close agreement.
Choi, Eunjung; Kwon, Sunghyuk; Lee, Donghun; Lee, Hogin; Chung, Min K
2014-07-01
Various studies that derived gesture commands from users have used the frequency ratio to select popular gestures among the users. However, the users select only one gesture from a limited number of gestures that they could imagine during an experiment, and thus, the selected gesture may not always be the best gesture. Therefore, two experiments including the same participants were conducted to identify whether the participants maintain their own gestures after observing other gestures. As a result, 66% of the top gestures were different between the two experiments. Thus, to verify the changed gestures between the two experiments, a third experiment including another set of participants was conducted, which showed that the selected gestures were similar to those from the second experiment. This finding implies that the method of using the frequency in the first step does not necessarily guarantee the popularity of the gestures. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Debrus, B; Lebrun, P; Kindenge, J Mbinze; Lecomte, F; Ceccato, A; Caliaro, G; Mbay, J Mavar Tayey; Boulanger, B; Marini, R D; Rozet, E; Hubert, Ph
2011-08-05
An innovative methodology based on design of experiments (DoE), independent component analysis (ICA) and design space (DS) was developed in previous works and tested on a mixture of 19 antimalarial drugs. This global LC method development methodology (i.e. DoE-ICA-DS) was used to optimize the separation of the 19 antimalarial drugs to obtain a screening method, and it is fully compliant with the current trend of quality by design. DoE was used to define the set of experiments to model the retention times at the beginning, the apex and the end of each peak. ICA was used to numerically separate coeluting peaks and estimate their unbiased retention times. Gradient time, temperature and pH were selected as the factors of a full factorial design. The retention times were modelled by stepwise multiple linear regressions. A recently introduced critical quality attribute, the separation criterion (S), was used to assess the quality of separations rather than the resolution. The resulting mathematical models were also studied from a chromatographic point of view to understand and investigate the chromatographic behaviour of each compound. Good agreement was found between the mathematical models and the behaviours predicted by chromatographic theory. Finally, focusing on quality risk management, the DS was computed as the multidimensional subspace where the probability for the separation criterion to lie within the acceptance limits is higher than a defined quality level. The DS was computed by propagating the prediction error from the modelled responses to the quality criterion using Monte Carlo simulations. DoE-ICA-DS allowed identifying optimal operating conditions for a robust screening method for the 19 considered antimalarial drugs in the framework of the fight against counterfeit medicines.
Moreover, on the basis of the same data set alone, a dedicated method for the determination of three antimalarial compounds in a pharmaceutical formulation was optimized, demonstrating both the efficiency and flexibility of the proposed methodology. Copyright © 2011 Elsevier B.V. All rights reserved.
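The Monte Carlo design-space construction described above — propagate the model's prediction error to the quality criterion, then keep the operating conditions where the probability of meeting the acceptance limit exceeds a quality level — can be sketched with a toy one-factor response model. The response surface, noise level, limits and grid below are all illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_success(x, n=5000, limit=1.5):
    """P(separation criterion S >= limit) at operating condition x,
    estimated by Monte Carlo propagation of prediction error."""
    s_pred = 2.0 - 3.0 * (x - 0.6) ** 2        # toy fitted response surface
    s_draws = s_pred + rng.normal(0, 0.2, n)   # draw from prediction error
    return np.mean(s_draws >= limit)

# Design space = region of the (normalized) factor where the probability
# of meeting the criterion exceeds the chosen quality level (90%).
grid = np.linspace(0.0, 1.0, 21)
design_space = [x for x in grid if p_success(x) >= 0.9]
print(min(design_space), max(design_space))
```

The key point, as in the abstract, is that the design space is defined by a *probability* of success under model uncertainty, not by the nominal optimum alone.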
Does sadness impair color perception? Flawed evidence and faulty methods.
Holcombe, Alex O; Brown, Nicholas J L; Goodbourn, Patrick T; Etz, Alexander; Geukes, Sebastian
2016-01-01
In their 2015 paper, Thorstenson, Pazda, and Elliot offered evidence from two experiments that perception of colors on the blue-yellow axis was impaired if the participants had watched a sad movie clip, compared to participants who watched clips designed to induce a happy or neutral mood. Subsequently, these authors retracted their article, citing a mistake in their statistical analyses and a problem with the data in one of their experiments. Here, we discuss a number of other methodological problems with Thorstenson et al.'s experimental design, and also demonstrate that the problems with the data go beyond what these authors reported. We conclude that repeating one of the two experiments, with the minor revisions proposed by Thorstenson et al., will not be sufficient to address the problems with this work.
Pilpel, Avital
2007-09-01
This paper is concerned with the role of rational belief change theory in the philosophical understanding of experimental error. Today, philosophers seek insight about error in the investigation of specific experiments, rather than in general theories. Nevertheless, rational belief change theory adds to our understanding of just such cases: R. A. Fisher's criticism of Mendel's experiments being a case in point. After an historical introduction, the main part of this paper investigates Fisher's paper from the point of view of rational belief change theory: what changes of belief about Mendel's experiments Fisher goes through, and with what justification. This leads to surprising insights about what Fisher got right and wrong and, more generally, about the limits of statistical methods in detecting error.
Research-based recommendations for implementing international service-learning.
Amerson, Roxanne
2014-01-01
An increasing number of schools of nursing are incorporating international service-learning and/or immersion experiences into their curriculum to promote cultural competence. The purpose of this paper is to identify research-based recommendations for implementing an international service-learning program. A review of literature was conducted in the Cumulative Index of Nursing and Allied Health Literature database using the keywords international, immersion, cultural competence, nursing, and international service-learning. Additional references were located from the reference lists of related articles. Planning of international or immersion experiences requires consideration of the type of country, the length of time, and design of the program; the use of a service-learning framework; opportunities that require the student to live and work in the community, provide hands-on care, participate in unstructured activities, and make home visits; and a method of reflection. Increasing cultural competence does not require foreign travel, but it does necessitate that students are challenged to move outside their comfort zone and work directly with diverse populations. These research-based recommendations may be used either internationally or locally to promote the most effective service-learning opportunities for nursing students. © 2014.
Project 57 Air Monitoring Report: January 1 through December 31, 2015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizell, Steve A; Nikolich, George; Shadel, Craig
2017-01-01
On April 24, 1957, the Atomic Energy Commission (AEC, now the Department of Energy [DOE]) conducted the Project 57 safety experiment in western Emigrant Valley northeast of the Nevada National Security Site (NNSS, formerly the Nevada Test Site) on lands withdrawn by the Department of Defense (DOD) for the Nevada Test and Training Range (NTTR). The test was undertaken to develop (1) a means of estimating plutonium distribution resulting from a non-nuclear detonation; (2) biomedical evaluation techniques for use in plutonium-laden environments; (3) methods of surface decontamination; and (4) instruments and field procedures for prompt estimation of alpha contamination (Shreve, 1958). Although the test did not result in the fission of nuclear materials, it did disseminate plutonium across the land surface. Following the experiment, the AEC fenced the contaminated area and returned control of the surrounding land to the DOD. Various radiological surveys were performed in the area and in 2007, the DOE expanded the demarked Contamination Area by posting signs 200 to 400 feet (60 to 120 meters) outside of the original fence.
Martinello, Tiago; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Taqueda, Maria Elena Santos; Consiglieri, Vladi O
2006-09-28
The poor flowability and bad compressibility characteristics of paracetamol are well known. As a result, the production of paracetamol tablets is almost exclusively by wet granulation, a disadvantageous method when compared to direct compression. The development of a new tablet formulation is still based on a large number of experiments and often relies merely on the experience of the analyst. The purpose of this study was to apply experimental design methodology (DOE) to the development and optimization of tablet formulations containing high amounts of paracetamol (more than 70%) and manufactured by direct compression. Nineteen formulations, screened by DOE methodology, were produced with different proportions of Microcel 102, Kollydon VA 64, Flowlac, Kollydon CL 30, PEG 4000, Aerosil, and magnesium stearate. Tablet properties, except friability, were in accordance with the USP 28th ed. requirements. These results were used to generate plots for optimization, mainly for friability. The physical-chemical data from the optimized formulation were very close to those predicted by the regression analysis, demonstrating that mixture design is a valuable tool for the research and development of new formulations.
Project 57 Air Monitoring Report: January 1 through December 31, 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizell, Steve A.; Nikolich, George; Shadel, Craig
On April 24, 1957, the Atomic Energy Commission (AEC, now the Department of Energy [DOE]) conducted the Project 57 safety experiment in western Emigrant Valley northeast of the Nevada National Security Site (NNSS, formerly the Nevada Test Site) on lands withdrawn by the Department of Defense (DOD) for the Nevada Test and Training Range (NTTR). The test was undertaken to develop (1) a means of estimating plutonium distribution resulting from a non-nuclear detonation; (2) biomedical evaluation techniques for use in plutonium-laden environments; (3) methods of surface decontamination; and (4) instruments and field procedures for prompt estimation of alpha contamination (Shreve, 1958). Although the test did not result in the fission of nuclear materials, it did disseminate plutonium across the land surface. Following the experiment, the AEC fenced the contaminated area and returned control of the surrounding land to the DOD. Various radiological surveys were performed in the area, and in 2007 the DOE expanded the demarked Contamination Area (CA) by posting signs 200 to 400 ft (60 to 120 m) outside of the original fence.
Salmani, M H; Mokhtari, M; Raeisi, Z; Ehrampoush, M H; Sadeghian, H A
2017-09-01
Wastewater containing pharmaceutical residues must be treated before being discharged to the environment. This study investigated the efficiency of a tungsten-carbon nanocomposite for diclofenac removal using design of experiments (DOE). Twenty-seven batch adsorption experiments were performed by varying three effective parameters (pH, adsorbent dose, and initial concentration) over three levels each. The nanocomposite was prepared from tungsten oxide and activated carbon powder in a 1:4 mass ratio. The remaining concentration of diclofenac was measured spectrometrically after adding 2,2'-bipyridine and ferric chloride reagents. Analysis of variance (ANOVA) was applied to determine the main and interaction effects. The equilibrium time for the removal process was determined to be 30 min. The pH had the lowest influence on the removal efficiency of diclofenac, whereas the ANOVA results showed that adsorbent mass was among the most effective variables. The nanocomposite gave high removal at a low initial concentration: for 5.0 mg/L, the maximum removal was 88.0% at a contact time of 30 min. Using DOE as an efficient method revealed that the tungsten-carbon nanocomposite has high efficiency in the removal of residual diclofenac from aqueous solution.
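The 27-run design described above corresponds to a three-factor, three-level full factorial, which is straightforward to generate and analyze for main effects in plain Python. In this sketch the factor names and level values are illustrative assumptions, not the study's actual settings:

```python
from itertools import product

# Illustrative factor levels; the study's actual values are not given here.
factors = {
    "pH": [3, 7, 11],
    "adsorbent_dose_g_per_L": [0.5, 1.0, 1.5],
    "initial_conc_mg_per_L": [5.0, 25.0, 50.0],
}

# Full factorial design: every combination of every level of each factor.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
assert len(runs) == 27  # 3^3 runs, as in the study

def main_effect(runs, responses, factor):
    """Mean response at each level of one factor (the ANOVA main-effect estimate)."""
    effect = {}
    for level in factors[factor]:
        ys = [y for run, y in zip(runs, responses) if run[factor] == level]
        effect[level] = sum(ys) / len(ys)
    return effect
```

Computing `main_effect` for each factor over the measured removal efficiencies and comparing the spread of the level means is the usual way to rank factor influence, mirroring the ANOVA finding that adsorbent mass mattered most.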
Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru
2018-04-01
Scale-up approaches for film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses for several decades. The objective of the present study was to establish a versatile scale-up approach for film coating process applicable to commercial production that is based on critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiment (DoE) approach were performed to find a suitable CQA from surface roughness, contact angle, color difference, and coating film properties by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for film coating process.
A comparison of linear and non-linear data assimilation methods using the NEMO ocean model
NASA Astrophysics Data System (ADS)
Kirchgessner, Paul; Tödter, Julian; Nerger, Lars
2015-04-01
The assimilation behavior of the widely used LETKF is compared with the Equivalent Weights Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when they are applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, such as in a model solving a barotropic vorticity equation, but it is still unknown how its assimilation performance compares to ensemble Kalman filters in realistic situations. Twin assimilation experiments are performed with a square-basin configuration of the NEMO model. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed using statistical metrics suited to non-Gaussian ensembles, such as the CRPS and rank histograms.
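For a scalar observed quantity and a finite ensemble, the CRPS mentioned above has a simple closed form based on the empirical ensemble CDF. The following is a generic pure-Python sketch, not the metric implementation used with PDAF:

```python
def crps_ensemble(members, obs):
    """Continuous Ranked Probability Score of an ensemble forecast against a
    scalar observation (lower is better). For the empirical ensemble CDF,
    CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' are independent draws
    from the ensemble and y is the observation."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(x1 - x2) for x1 in members for x2 in members) / (2 * m * m)
    return term1 - term2
```

For a one-member ensemble the score reduces to the absolute error; ensembles that are both sharp and well centered on the observation score lowest, which is why the CRPS is preferred over RMSE-type metrics when comparing a particle filter against an ensemble Kalman filter.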
Fast frequency domain method to detect skew in a document image
NASA Astrophysics Data System (ADS)
Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee
2015-12-01
In this paper, a new fast frequency domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and the skew angle is then computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was tested on a large number of documents with skew between -90° and +90°, and the results were compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The proposed method works more efficiently than the existing methods. It also works with typed and picture documents having different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
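The core of such an approach — shrink the page with a wavelet approximation band, then read the text-line orientation off the dominant Fourier peak — can be sketched as follows. This is a simplified illustration (one Haar level, a single spectral peak, skew resolved only modulo 180°), not the authors' exact algorithm:

```python
import numpy as np

def haar_approx(img):
    """One level of the 2-D Haar DWT approximation band: 2x2 block averages,
    which quarters the pixel count before the FFT."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    a = img[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def estimate_skew_deg(img):
    """Estimate document skew from the dominant spatial frequency.
    Text lines repeat perpendicular to their baselines, so the strongest
    non-DC Fourier peak lies 90 degrees from the skew direction."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(haar_approx(img))))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    spec[cy - 2:cy + 3, cx - 2:cx + 3] = 0.0        # suppress the DC neighbourhood
    v, u = np.unravel_index(np.argmax(spec), spec.shape)
    angle = np.degrees(np.arctan2(v - cy, u - cx))  # direction of the peak
    skew = (angle - 90.0) % 180.0                   # lines are 90 deg from the peak
    return skew - 180.0 if skew > 90.0 else skew
```

On a synthetic page of periodic stripes the estimator recovers the stripe orientation to within the angular quantization of the FFT grid; a real implementation would refine the peak location and handle the ±90° ambiguity.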
Papaemmanouil, Christina; Tsiafoulis, Constantinos G; Alivertis, Dimitrios; Tzamaloukas, Ouranios; Miltiadou, Despoina; Tzakos, Andreas G; Gerothanassis, Ioannis P
2015-06-10
We report a rapid, direct, and unequivocal spin-chromatographic separation and identification of minor components in the lipid fraction of milk and common dairy products with the use of selective one-dimensional (1D) total correlation spectroscopy (TOCSY) nuclear magnetic resonance (NMR) experiments. The method allows the complete backbone spin-coupling network to be elucidated even in strongly overlapped regions and in the presence of major components with 4 × 10² to 3 × 10³ times stronger NMR signal intensities. The proposed spin-chromatography method does not require any derivatization steps for the lipid fraction, is selective with excellent resolution, is sensitive with quantitation capability, and compares favorably to two-dimensional (2D) TOCSY and gas chromatography-mass spectrometry (GC-MS) methods of analysis. The results of the present study demonstrate that the 1D TOCSY NMR spin-chromatography method can become a procedure of primary interest in food analysis and, more generally, in complex mixture analysis.
Does Economics Education Make Bad Citizens? The Effect of Economics Education in Japan
ERIC Educational Resources Information Center
Iida, Yoshio; Oda, Sobei H.
2011-01-01
Does studying economics discourage students' cooperative mind? Several surveys conducted in the United States have concluded that the answer is yes. The authors conducted a series of economic experiments and questionnaires to consider the question in Japan. The results of the prisoner's dilemma experiment and public goods questionnaires showed no…
Evidence for Two Attentional Components in Visual Working Memory
ERIC Educational Resources Information Center
Allen, Richard J.; Baddeley, Alan D.; Hitch, Graham J.
2014-01-01
How does executive attentional control contribute to memory for sequences of visual objects, and what does this reveal about storage and processing in working memory? Three experiments examined the impact of a concurrent executive load (backward counting) on memory for sequences of individually presented visual objects. Experiments 1 and 2 found…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farbin, Amir
2015-07-15
This is the final report for the DoE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".
NASA Astrophysics Data System (ADS)
Roth, Wolff-Michael
2015-06-01
For many students, the experience with science tends to be alienating and uprooting. In this study, I take up Simone Weil's concepts of enracinement (rooting) and déracinement (uprooting) to theorize the root of this alienation, the confrontation between children's familiarity with the world and unfamiliar/strange scientific conceptions. I build on the works of the phenomenological philosopher Edmund Husserl and the German physics educator Martin Wagenschein (who directly refers to Weil's concepts) to make a case for the rooting function of original/originary experiences and the genetic method to science teaching. The genetic approach allows students to retain their foundational familiarity with the world and their descriptions thereof all the while evolving other (more scientific) ways of explaining natural phenomena.
Principles of disaster management. Lesson 7: Management leadership styles and methods.
Cuny, F C
2000-01-01
This lesson explores the use of different management leadership styles and methods as applied to disaster management situations. Leadership and command are differentiated. Mechanisms that can be used to influence others include: 1) coercion; 2) reward; 3) position; 4) knowledge; and 5) admiration. Factors that affect leadership include: 1) individual characteristics; 2) competence; 3) experience; 4) self-confidence; 5) judgment; 6) decision-making; and 7) style. Experience and understanding of the task are important factors for leadership. Four styles of leadership are developed: 1) directive; 2) supportive; 3) participative; and 4) achievement-oriented. The application of each of these styles is discussed, as is how the styles relate to the various stages of a disaster. The effects of interpersonal relationships and of the environment are stressed. Lastly, leadership does not just happen because a person is appointed as a manager; it must be earned.
Spectrally resolved visualization of fluorescent dyes permeating into skin
NASA Astrophysics Data System (ADS)
Maeder, Ulf; Bergmann, Thorsten; Beer, Sebastian; Burg, Jan Michael; Schmidts, Thomas; Runkel, Frank; Fiebich, Martin
2012-03-01
We present a spectrally resolved confocal imaging approach to qualitatively assess the overall uptake and the penetration depth of fluorescent dyes into biological tissue. We use a confocal microscope with a spectral resolution of 5 nm to measure porcine skin tissue after performing a Franz diffusion experiment with a submicron emulsion enriched with the fluorescent dye Nile Red. The evaluation uses linear unmixing of the dye and tissue autofluorescence spectra. The results are combined with a manual segmentation of the skin's epidermis and dermis layers to assess the penetration behavior in addition to the overall uptake. The diffusion experiments, performed for 3 h and 24 h, show a 3-fold increased dye uptake in the epidermis and dermis for the 24 h samples. As the method is based on spectral information, it does not face the problem of superimposed dye and tissue spectra and is therefore more precise than intensity-based evaluation methods.
NASA Astrophysics Data System (ADS)
Hawkins, Cameron; Tschuaner, Oliver; Fussell, Zachary; Smith, Jesse
2017-06-01
A novel approach that spatially identifies inhomogeneities from the microscale (defects, conformational disorder) to the mesoscale (voids, inclusions) is developed using synchrotron x-ray methods: tomography, Lang topography, and micro-diffraction mapping. These techniques provide a non-destructive method for characterizing mm-sized samples prior to shock experiments. The resulting characterization maps can be used to correlate continuum-level measurements in shock compression experiments to the mesoscale and microscale structure. Specifically examined is a sample of C4. We show extensive conformational disorder in gamma-RDX, which is the main component. Further, we observe that the minor HMX component in C4 contains at least two different phases: alpha- and beta-HMX. This work supported by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and by the Site-Directed Research and Development Program. DOE/NV/25946-3071.
When Does Air Resistance Become Significant in Projectile Motion?
NASA Astrophysics Data System (ADS)
Mohazzabi, Pirooz
2018-03-01
In an article in this journal, it was shown that air resistance could never be a significant source of error in typical free-fall experiments in introductory physics laboratories. Since projectile motion is the two-dimensional version of the free-fall experiment and usually follows the former experiment in such laboratories, it seemed natural to extend the same analysis to this type of motion. We shall find that again air resistance does not play a significant role in the parameters of interest in a traditional projectile motion experiment.
Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method
Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing
2017-01-01
Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.
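The essence of moving the anatomical prior into the projection model, rather than a Laplacian regularizer, is to represent the fluorescence image as x = Kα, where the kernel matrix K is built from anatomical-image similarity, so the system matrix becomes AK. Below is a minimal sketch using a row-normalized Gaussian kernel over per-voxel anatomical intensities; the kernel construction details here are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def kernel_matrix(anat_features, k_neighbors=4, sigma=1.0):
    """Row-normalized Gaussian kernel over anatomical feature vectors
    (shape: n_voxels x n_features). Each voxel is expressed as a weighted
    average of its k most anatomically similar voxels, so no segmentation
    of targets in the anatomical image is needed."""
    n = anat_features.shape[0]
    d2 = ((anat_features[:, None, :] - anat_features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Keep only the k most similar voxels per row, then normalize rows to sum to 1.
    for i in range(n):
        drop = np.argsort(K[i])[: n - k_neighbors]
        K[i, drop] = 0.0
    return K / K.sum(axis=1, keepdims=True)

# A guided reconstruction would then solve min_a ||A @ K @ a - b||^2
# (plus regularization) and recover the image as x = K @ a.
```

Because K couples only anatomically similar voxels, the reconstruction is smoothed within homogeneous anatomical regions while edges in the anatomy are preserved, which is the mechanism behind the improved target separation reported above.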
What does an innovative teaching assignment strategy mean to nursing students?
Neuman, Lois H; Pardue, Karen T; Grady, Janet L; Gray, Mary Tod; Hobbins, Bonnie; Edelstein, Jan; Herrman, Judith W
2009-01-01
The concept of innovation in nursing education has been addressed in published literature on faculty-defined and faculty-created teaching strategies and instructional methods. In this project, innovation is defined as "using knowledge to create ways and services that are new (or perceived as new) in order to transform systems" (Pardue, Tagliareni, Valiga, Davison-Price, & Orchowsky, 2005). Studies on nursing student perceptions of innovation are limited, and it is unclear how undergraduate and graduate students conceptualize innovative learning experiences. This project explored students' perceptions of their experiences with instructor-defined, innovative teaching/learning strategies in four types of nursing education programs. Issues nurse educators should consider as they apply new techniques to their teaching are discussed.
Progress toward a new measurement of the neutron lifetime
NASA Astrophysics Data System (ADS)
Grammer, Kyle
2015-10-01
Free neutron decay is the simplest nuclear beta decay. A precise value for the neutron lifetime is valuable for standard model consistency tests and Big Bang Nucleosynthesis models. There is a disagreement between the measured neutron lifetime from cold neutron beam experiments and ultracold neutron storage experiments. A new measurement of the neutron lifetime using the beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. Experimental improvements should result in a measurement of the neutron lifetime with an uncertainty of 1 s. The technical improvements, recent apparatus tests, and the path towards the new measurement will be discussed. This work is supported by the DOE Office of Science, NIST, and NSF.
Progress toward a new measurement of the neutron lifetime
NASA Astrophysics Data System (ADS)
Grammer, Kyle
2015-04-01
Free neutron decay is the simplest nuclear beta decay. A precise value for the neutron lifetime is valuable for standard model consistency tests and Big Bang Nucleosynthesis models. There is a disagreement between the measured neutron lifetime from cold neutron beam experiments and ultracold neutron storage experiments. A new measurement of the neutron lifetime using the beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. Experimental improvements should result in a measurement of the neutron lifetime with an uncertainty of 1 s. The technical improvements and the path towards the new measurement will be discussed. This work is supported by the DOE Office of Science, NIST, and NSF.
Detection of Orbital Debris Collision Risks for the Automated Transfer Vehicle
NASA Technical Reports Server (NTRS)
Peret, L.; Legendre, P.; Delavault, S.; Martin, T.
2007-01-01
In this paper, we present a general collision risk assessment method, which has been applied through numerical simulations to the Automated Transfer Vehicle (ATV) case. During ATV ascent towards the International Space Station, close approaches between the ATV and objects of the USSTRATCOM catalog will be monitored through collision risk assessment. Usually, collision risk assessment relies on an exclusion-volume or a probability-threshold method. Probability methods are more effective than exclusion volumes but require accurate covariance data. In this work, we propose a criterion defined by an adaptive exclusion area. This criterion does not require any probability calculation but is more effective than exclusion-volume methods, as demonstrated by our numerical experiments. The results of these studies, when confirmed and finalized, will be used for ATV operations.
NASA Astrophysics Data System (ADS)
Wang, Huijun; Qu, Zheng; Tang, Shaofei; Pang, Mingqi; Zhang, Mingju
2017-08-01
In this paper, the electromagnetic design and permanent magnet shape optimization of a permanent magnet synchronous generator with hybrid excitation are investigated. Based on the generator structure and principle, a design outline is presented for obtaining high efficiency and low voltage fluctuation. In order to realize rapid design, equivalent magnetic circuits for the permanent magnet and iron poles are developed, and finite element analysis is employed alongside them. Furthermore, by means of the design of experiments (DOE) method, the permanent magnet shape is optimized to reduce voltage waveform distortion. Finally, the validity of the proposed design methods is confirmed by the analytical and experimental results.
Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2010-01-01
How to estimate the diffusion Ensemble Average Propagator (EAP) from the DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber direction. However, the ODF is just one feature of the EAP. Compared with the ODF, the EAP carries the full information about the diffusion process that reflects the complex tissue microstructure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods for estimating the EAP from the signal. However, DOT rests on a mono-exponential assumption, and DSI requires many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, robust, analytical EAP reconstruction method that makes almost no assumptions about the data and does not require dense sampling. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data, and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture" (CUDA). A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card.
Hartl, Barbara; Hofmann, Eva; Gangl, Katharina; Hartner-Tiefenthaler, Martina; Kirchler, Erich
2015-01-01
Following the classic economic model of tax evasion, taxpayers base their tax decisions on economic determinants, like fine rate and audit probability. Empirical findings on the relationship between economic key determinants and tax evasion are inconsistent and suggest that taxpayers may rather rely on their beliefs about tax authority’s power. Descriptions of the tax authority’s power may affect taxpayers’ beliefs and as such tax evasion. Experiment 1 investigates the impact of fines and beliefs regarding tax authority’s power on tax evasion. Experiments 2-4 are conducted to examine the effect of varying descriptions about a tax authority’s power on participants’ beliefs and respective tax evasion. It is investigated whether tax evasion is influenced by the description of an authority wielding coercive power (Experiment 2), legitimate power (Experiment 3), and coercive and legitimate power combined (Experiment 4). Further, it is examined whether a contrast of the description of power (low to high power; high to low power) impacts tax evasion (Experiments 2-4). Results show that the amount of fine does not impact tax payments, whereas participants’ beliefs regarding tax authority’s power significantly shape compliance decisions. Descriptions of high coercive power as well as high legitimate power affect beliefs about tax authority’s power and positively impact tax honesty. This effect still holds if both qualities of power are applied simultaneously. The contrast of descriptions has little impact on tax evasion. The current study indicates that descriptions of the tax authority, e.g., in information brochures and media reports, have more influence on beliefs and tax payments than information on fine rates. Methodically, these considerations become particularly important when descriptions or vignettes are used besides objective information.
Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe
2013-08-01
Calcium imaging has become a routine technique in neuroscience for subcellular- to network-level investigations. Rapid progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transient estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies.
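The central idea — fit a parametric fluorescence model with an explicit background term directly to the single-trial time course, instead of subtracting an independently measured background — can be illustrated with a Gauss-Newton least-squares fit of F(t) = B + A·exp(−t/τ). This is a generic sketch under a mono-exponential transient assumption; the published model and its noise treatment are more elaborate:

```python
import numpy as np

def fit_transient(t, f, b0, a0, tau0, n_iter=50):
    """Least-squares fit of F(t) = B + A*exp(-t/tau) by Gauss-Newton.
    Returns (B, A, tau); B is the estimated background fluorescence,
    so A*exp(-t/tau) is the background-subtracted transient."""
    p = np.array([b0, a0, tau0], float)
    for _ in range(n_iter):
        B, A, tau = p
        e = np.exp(-t / tau)
        model = B + A * e
        # Jacobian of the model w.r.t. (B, A, tau)
        J = np.stack([np.ones_like(t), e, A * t * e / tau**2], axis=1)
        p += np.linalg.lstsq(J, f - model, rcond=None)[0]
        if p[2] <= 0:          # guard against an overshoot in tau
            p[2] = 1e-3
    return p
```

In the same spirit as the paper, the uncertainty of B and of the normalized transient could then be read off the fit (e.g., from the Jacobian at convergence), rather than from a separate background recording.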
What Difference Does a More In-Depth Programme Make to Learning?
ERIC Educational Resources Information Center
Reiss, Athene
2015-01-01
It is virtually axiomatic that a more extended learning experience will have more impact than a one-off experience. But how much difference does it make and is the extended time commitment justified by the results? The Berkshire, Buckinghamshire and Oxfordshire Wildlife Trust (BBOWT) conducted some research to explore this question with regard to…
ERIC Educational Resources Information Center
González-Víllora, Sixto; Serra-Olivares, Jaime; González-Martí, Irene; Hernández-Martínez, Andrea
2012-01-01
People construct knowledge through a set of highly diverse experiences. Despite being personal, this knowledge is strongly influenced by the specific context where it occurs. Such experience-based knowledge is referred to as "implicit theories" because it does not fit in with a systematic and theoretical knowledge context like that of…
Kuten, A; Stein, M; Mandelzweig, Y; Tatcher, M; Yaacov, G; Epelbaum, R; Rosenblatt, E
1991-07-01
Total-skin electron irradiation (TSEI) is effective and frequently used in the treatment of cutaneous T-cell lymphoma. A treatment technique has been developed at our center using the Philips SL 75/10 linear accelerator. In our method, the patient is irradiated in a recumbent position by five pairs of uncollimated electron beams at a source-to-skin distance of 150 cm. This method provides a practical solution to clinical requirements with respect to uniformity of electron dose and low X-ray contamination. Its implementation does not require special equipment or modification of the linear accelerator. Nineteen of 23 patients (83%) with mycosis fungoides treated by this method achieved complete regression of their cutaneous lesions.
A speciation solver for cement paste modeling and the semismooth Newton method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georget, Fabien, E-mail: fabieng@princeton.edu; Prévost, Jean H., E-mail: prevost@princeton.edu; Vanderbei, Robert J., E-mail: rvdb@princeton.edu
2015-02-15
The mineral assemblage of a cement paste may vary considerably with its environment. In addition, the water content of a cement paste is relatively low and the ionic strength of the interstitial solution is often high. These are extreme conditions with respect to the common assumptions made in speciation problems. Furthermore, the common trial-and-error algorithm to find the phase assemblage does not provide any guarantee of convergence. We propose a speciation solver based on a semismooth Newton method adapted to the thermodynamic modeling of cement paste. The strong theoretical properties associated with these methods offer practical advantages. Results of numerical experiments indicate that the algorithm is reliable, robust, and efficient.
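The semismooth Newton idea can be sketched on a toy complementarity problem. A standard device (assumed here for illustration; the paper's solver is far more elaborate) is the Fischer-Burmeister reformulation, which turns the conditions x >= 0, f(x) >= 0, x·f(x) = 0 into a root-finding problem that Newton's method attacks directly, with no trial-and-error phase loop:

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0
    return a + b - np.sqrt(a * a + b * b)

def semismooth_newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve x >= 0, f(x) >= 0, x . f(x) = 0 componentwise by applying
    Newton's method to Phi(x) = phi(x, f(x))."""
    x = x0.astype(float)
    for _ in range(max_iter):
        fx = f(x)
        phi = fischer_burmeister(x, fx)
        if np.linalg.norm(phi) < tol:
            break
        # generalized Jacobian of Phi (valid away from the kink at (0, 0))
        r = np.sqrt(x * x + fx * fx)
        r = np.where(r == 0, 1.0, r)
        da = 1.0 - x / r
        db = 1.0 - fx / r
        J = np.diag(da) + np.diag(db) @ jac(x)
        x = x - np.linalg.solve(J, phi)
    return x

# Toy problem with f(x) = x - c, c = [1, -2]: the solution is x = [1, 0]
# (one constraint active in f, the other in x), found without enumerating cases.
c = np.array([1.0, -2.0])
x = semismooth_newton(lambda x: x - c, lambda x: np.eye(2), np.array([0.5, 0.5]))
```

The "strong theoretical properties" the abstract mentions refer to this class of methods: local superlinear convergence without ever branching on which phases are present.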
Literacy and retention of information after a multimedia diabetes education program and teach-back.
Kandula, Namratha R; Malli, Tiffany; Zei, Charles P; Larsen, Emily; Baker, David W
2011-01-01
Few studies have examined the effectiveness of teaching strategies to improve patients' recall and retention of information. As a next step in implementing a literacy-appropriate, multimedia diabetes education program (MDEP), the present study reports the results of two experiments designed to answer (a) how much knowledge is retained 2 weeks after viewing the MDEP, (b) does knowledge retention differ across literacy levels, and (c) does adding a teach-back protocol after the MDEP improve knowledge retention at 2-weeks' follow-up? In Experiment 1, adult primary care patients (n = 113) watched the MDEP and answered knowledge-based questions about diabetes before and after viewing the MDEP. Two weeks later, participants completed the knowledge assessment a third time. Methods and procedures for Experiment 2 (n = 58) were exactly the same, except that if participants answered a question incorrectly after watching the MDEP, they received teach-back, wherein the information was reviewed and the question was asked again, up to two times. Two weeks later, Experiment 2 participants completed the knowledge assessment again. Literacy was measured using the S-TOFHLA. After 2 weeks, all participants, regardless of their literacy levels, forgot approximately half the new information they had learned from the MDEP. In regression models, adding a teach-back protocol did not improve knowledge retention among participants and literacy was not associated with knowledge retention at 2 weeks. Health education interventions must incorporate strategies that can improve retention of health information and actively engage patients in long-term learning.
Salloch, Sabine; Otte, Ina; Reinacher-Schick, Anke; Vollmann, Jochen
2018-02-01
Physicians' clinical expertise forms an exclusive body of competences, which helps them to find the appropriate diagnostics and treatment for each individual patient. Empirical evidence, however, suggests that there is an inverse relationship between the number of years in practice and the quality of care provided by a physician. Knowledge of and adherence to professional standards (such as clinical guidelines) are often used as indicators in previous research. Semistructured interviews and the Q method were used for an explorative study on oncologists' views on the interplay between their own clinical expertise, intuition, and the external evidence incorporated in clinical guidelines. The interviews were audio recorded, transcribed verbatim, and analysed using qualitative content analysis. Data analysis shows the complex character of clinical expertise with respect to experience, professional development, and intuition. An irreplaceable role is attributed to personal and bodily experience during the provision of care for a patient. Professional experience becomes important particularly in those situations that lie outside the focus of "guideline medicine." Intuition is regarded as having a strong emotional component and helps in deciding which therapeutic option the patient can deal with. Using measurable knowledge and adherence to standards as indicators does not account for the complexity of clinical expertise. Other factors, such as the importance of bodily experience and physicians' intuitive knowledge, must be considered, also with respect to the occurrence of treatment biases. © 2017 John Wiley & Sons, Ltd.
[Bases and methods of suturing].
Vogt, P M; Altintas, M A; Radtke, C; Meyer-Marcotty, M
2009-05-01
If pharmaceutical modulation of scar formation does not improve the quality of the healing process over conventional healing, the surgeon must rely on personal skill and experience. Therefore a profound knowledge of wound healing based on experimental and clinical studies, supplemented by postsurgical means of scar management, and basic techniques of planning incisions, careful tissue handling, and thorough knowledge of suturing remain the most important ways to avoid abnormal scarring. This review summarizes the current experimental and clinical bases of surgical scar management.
2013-01-01
Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two (a hypothesis-driven and a data-driven) two-piece exponential models to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
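A two-piece exponential model has a closed-form maximum likelihood estimate: within each piece, the hazard is the number of events divided by the person-time at risk. The sketch below fits such a model to simulated data; the change point, hazard rates, cohort size, and censoring time are invented for illustration and do not reproduce the study's data:

```python
import numpy as np

def two_piece_exponential_mle(time, event, tau):
    """MLE hazards for a two-piece exponential model with change point tau.

    time  : observed time (event or censoring); event : 1 = injury occurred.
    Hazard is lam1 on [0, tau) and lam2 on [tau, inf); the MLE in each
    piece is (number of events) / (person-time at risk in that piece).
    """
    t = np.asarray(time, float)
    d = np.asarray(event, int)
    pt1 = np.minimum(t, tau).sum()          # person-time accrued before tau
    pt2 = np.maximum(t - tau, 0.0).sum()    # person-time accrued after tau
    d1 = d[t < tau].sum()
    d2 = d[t >= tau].sum()
    return d1 / pt1, d2 / pt2

# Simulated cohort: hazard 0.13/yr in the first year of a job, 0.10/yr after,
# generated by inverting the cumulative hazard (values are hypothetical).
rng = np.random.default_rng(1)
n = 200_000
u = rng.exponential(1.0, n)                 # unit-rate exponential draws
lam1, lam2, tau = 0.13, 0.10, 1.0
t = np.where(u < lam1 * tau, u / lam1, tau + (u - lam1 * tau) / lam2)
event = t < 10.0                            # administrative censoring at 10 yr
t = np.minimum(t, 10.0)

h1, h2 = two_piece_exponential_mle(t, event, tau)
```

Comparing h1 and h2 against a single pooled rate is exactly the kind of formal test of "experience does not matter" the abstract describes.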
Baevskiĭ, R M; Bogomolov, V V; Funtova, I I; Slepchenkova, I N; Chernikova, A G
2009-01-01
Methods of investigating the physiological functions of space crews on extended missions during night sleep are of fundamental and practical importance. The "Sonocard" experiment utilizes the method of seismocardiography. The purpose of the experiment is to validate procedures for noncontact in-sleep physiological data recording that could enhance the space crew medical operations system. The experiment was performed systematically by ISS Russian crew members starting from mission 16. The experimental procedure is easy and does not cause discomfort to human subjects. Results of the initial experimental sessions demonstrated that, as on Earth, sleep in microgravity is crucial for the recovery of the body's functional reserves, and that the innovative technology is instrumental in studying the recovery processes as well as individual patterns of adaptation to extended space missions. It also allows conclusions about sleep quality, mechanisms of recovery, and body functionality. These data may substantially enrich the information used by medical operators at space mission control centers.
NASA DOE POD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate that inspection systems, personnel, and protocols demonstrate 0.90 POD with 95% confidence at critical flaw sizes, a90/95. The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
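Wald's sequential test, on which the DOEPOD methodology is based, can be sketched as follows. This is a generic sequential probability ratio test (SPRT) for a detection probability, not the DOEPOD procedure itself; the alternative hypothesis value and error rates are illustrative assumptions:

```python
import math

def sprt(observations, p0=0.90, p1=0.70, alpha=0.05, beta=0.05):
    """Wald SPRT on a stream of hit/miss inspection outcomes.

    Tests POD = p0 (acceptable) against POD = p1 (unacceptable) with
    error rates alpha/beta. Returns the decision and the number of
    observations consumed - not fixed in advance, as the abstract notes.
    """
    upper = math.log((1 - beta) / alpha)    # accept p0 above this
    lower = math.log(beta / (1 - alpha))    # accept p1 below this
    llr = 0.0                               # log likelihood ratio L(p0)/L(p1)
    for n, hit in enumerate(observations, 1):
        if hit:
            llr += math.log(p0 / p1)
        else:
            llr += math.log((1 - p0) / (1 - p1))
        if llr >= upper:
            return "accept_p0", n
        if llr <= lower:
            return "accept_p1", n
    return "continue", len(observations)
```

With these (assumed) parameters, a run of consecutive detections terminates the test after only a dozen observations, illustrating the sample-size saving over a fixed-size demonstration.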
How does bias correction of RCM precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.
2014-09-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of RCMs in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
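Of the bias correction methods assessed, empirical quantile mapping is the easiest to sketch: each raw model value is replaced by the observed value at the same empirical quantile, so the corrected series matches the observed distribution. The gamma-distributed "observations" and the imposed dry bias below are simulated assumptions, not WRF output:

```python
import numpy as np

def quantile_map(model, obs, model_future=None):
    """Empirical quantile mapping: transform model values so their
    distribution matches the observed distribution of the calibration
    period. If model_future is given, apply the calibrated mapping to it."""
    target = model if model_future is None else model_future
    # empirical quantile of each target value within the raw model distribution
    q = np.searchsorted(np.sort(model), target, side="right") / len(model)
    q = np.clip(q, 1e-6, 1 - 1e-6)
    return np.quantile(obs, q)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, 5000)           # "observed" daily precipitation
model = rng.gamma(2.0, 3.0, 5000) * 0.7   # simulated RCM output with a dry bias
corrected = quantile_map(model, obs)
```

Note what the sketch also demonstrates implicitly: the mapping corrects the distribution, not the day-to-day sequencing, which is exactly the limitation of bias correction that the abstract flags for runoff generation.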
Robust and Imperceptible Watermarking of Video Streams for Low Power Devices
NASA Astrophysics Data System (ADS)
Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.
With the advent of the internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. With the greater business benefits, increased availability and other online business advantages, there is a major challenge of security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. The existing watermarking methods lack robustness and imperceptibility, and their computational complexity does not suit low power devices. In this paper, we have proposed a new method to address the problem of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility and is computationally more efficient than previous approaches in practice. Hence our method can easily be applied on low power devices.
Deblurring in digital tomosynthesis by iterative self-layer subtraction
NASA Astrophysics Data System (ADS)
Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung
2010-04-01
Recent developments in large-area flat-panel detectors have revived interest in tomosynthesis technology for multiplanar X-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method suffers from a lack of sharpness in the reconstructed images because of blur artifacts caused by the superposition of out-of-plane objects. In this study, we have devised an intuitive, simple method to reduce the blur artifact based on an iterative approach. This method repeats a forward- and backward-projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. Comparative analysis with conventional methods, such as the SAA and filtered backprojection methods, is addressed.
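The iterate-and-subtract idea can be sketched in one dimension with a toy shift-and-add geometry: out-of-plane content is forward-projected from the current estimate, backprojected onto the plane-of-interest, and subtracted there. The shift model, impulse phantom, and iteration count are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def forward_project(planes, shifts):
    """Toy projector: each view sums the planes, shifted by depth x view-shift."""
    proj = np.zeros((len(shifts), planes.shape[1]))
    for v, s in enumerate(shifts):
        for z in range(planes.shape[0]):
            proj[v] += np.roll(planes[z], z * s)
    return proj

def backproject(proj, shifts, n_planes):
    """Shift-and-add: re-align and average the projections for each plane."""
    recon = np.zeros((n_planes, proj.shape[1]))
    for z in range(n_planes):
        for v, s in enumerate(shifts):
            recon[z] += np.roll(proj[v], -z * s)
    return recon / len(shifts)

def self_layer_subtract(proj, shifts, n_planes, n_iter=20):
    saa = backproject(proj, shifts, n_planes)   # blurred initial estimate
    est = saa.copy()
    for _ in range(n_iter):
        for z in range(n_planes):
            others = est.copy()
            others[z] = 0.0
            # blur cast onto plane z by the current estimate of the other planes
            blur = backproject(forward_project(others, shifts), shifts, n_planes)[z]
            est[z] = np.clip(saa[z] - blur, 0.0, None)
    return est

# Two-plane impulse phantom: the out-of-plane impulse blurs the POI in SAA,
# and the iteration progressively removes that blur.
planes = np.zeros((2, 32))
planes[0, 10] = 1.0
planes[1, 20] = 1.0
proj = forward_project(planes, [-2, -1, 0, 1, 2])
est = self_layer_subtract(proj, [-2, -1, 0, 1, 2], 2)
```

Note that every step is a spatial-domain shift, add, or subtract, consistent with the abstract's point that no Fourier-domain operations are involved.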
Organic chemical degradation by remote study of the redox conditions
NASA Astrophysics Data System (ADS)
Fernandez, P. M.; Revil, A.; Binley, A. M.; Bloem, E.; French, H. K.
2014-12-01
Monitoring the natural (and enhanced) degradation of organic contaminants is essential for managing groundwater quality in many parts of the world. Contaminated sites often have limited access, hence non-intrusive methods for studying redox processes, which drive the degradation of organic compounds, are required. One example is the degradation of de-icing chemicals (glycols and organic salts) released to the soil near airport runways during winter. This issue has been broadly studied at Oslo airport, Gardermoen, Norway using intrusive and non-intrusive methods. Here, we report on laboratory experiments that aim to study the potential of using self-potential, DC resistivity, and time-domain induced polarization methods for geochemical characterization of the degradation of Propylene Glycol (PG). PG is completely miscible in water, does not adsorb to soil particles and does not contribute to the electrical conductivity of the soil water. When the contaminant is in the unsaturated zone near the water table, the oxygen is quickly consumed and the gas exchange with the surface is insufficient to ensure aerobic degradation, which is faster than anaerobic degradation. Since biodegradation of PG is highly oxygen demanding, anaerobic pockets can exist causing iron and manganese reduction. It is hypothesised that nitrate would boost the degradation rate under such conditions. In our experiment, we study PG degradation in a sand tank. We provide the system with an electron highway to bridge zones with different redox potential. This geo-battery system is characterized by self-potential, resistivity and induced polarization anomalies. An example of preliminary results with self-potential at two different times of the experiment can be seen in the illustration.
In parallel, a series of batch experiments have been performed to study anoxic microbial degradation using gas and resistivity measurements.
Development of a sonar-based object recognition system
NASA Astrophysics Data System (ADS)
Ecemis, Mustafa Ihsan
2001-02-01
Sonars are used extensively in mobile robotics for obstacle detection, ranging and avoidance. However, these range-finding applications do not exploit the full range of information carried in sonar echoes. In addition, mobile robots need robust object recognition systems. Therefore, a simple and robust object recognition system using ultrasonic sensors may have a wide range of applications in robotics. This dissertation develops and analyzes an object recognition system that uses ultrasonic sensors of the type commonly found on mobile robots. Three principal experiments are used to test the sonar recognition system: object recognition at various distances, object recognition during unconstrained motion, and softness discrimination. The hardware setup, consisting of an inexpensive Polaroid sonar and a data acquisition board, is described first. The software for ultrasound signal generation, echo detection, data collection, and data processing is then presented. Next, the dissertation describes two methods to extract information from the echoes, one in the frequency domain and the other in the time domain. The system uses the fuzzy ARTMAP neural network to recognize objects on the basis of the information content of their echoes. In order to demonstrate that the performance of the system does not depend on the specific classification method being used, the K-Nearest Neighbors (KNN) Algorithm is also implemented. KNN yields a test accuracy similar to fuzzy ARTMAP in all experiments. Finally, the dissertation describes a method for extracting features from the envelope function in order to reduce the dimension of the input vector used by the classifiers. Decreasing the size of the input vectors reduces the memory requirements of the system and makes it run faster. It is shown that this method does not affect the performance of the system dramatically and is more appropriate for some tasks.
The results of these experiments demonstrate that sonar can be used to develop a low-cost, low-computation system for real-time object recognition tasks on mobile robots. This system differs from all previous approaches in that it is relatively simple, robust, fast, and inexpensive.
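As a rough sketch of the time-domain pipeline described above (envelope features plus a KNN classifier), using hypothetical exponentially decaying echoes in place of real Polaroid sonar data; the feature choice, decay constants, and noise level are all assumptions:

```python
import numpy as np

def envelope_features(echo, n_segments=8):
    """Compress an echo envelope into segment means - a simple
    time-domain feature vector that shrinks the classifier input."""
    env = np.abs(echo)
    return np.array([s.mean() for s in np.array_split(env, n_segments)])

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training echoes."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Two hypothetical "objects" distinguished by echo decay rate.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)

def echo(decay):
    return np.exp(-t / decay) * (1 + 0.05 * rng.normal(size=t.size))

train_X = np.array([envelope_features(echo(d)) for d in [0.2] * 10 + [0.8] * 10])
train_y = np.array([0] * 10 + [1] * 10)
pred = knn_predict(train_X, train_y, envelope_features(echo(0.8)))
```

Swapping `knn_predict` for any other classifier leaves the rest of the pipeline untouched, which mirrors the dissertation's point that the system's performance does not hinge on the specific classification method.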
Baldelli, Sara; Marrubini, Giorgio; Cattaneo, Dario; Clementi, Emilio; Cerea, Matteo
2017-10-01
The application of Quality by Design (QbD) principles in clinical laboratories can help to develop an analytical method through a systematic approach, providing a significant advance over the traditional heuristic and empirical methodology. In this work, we applied for the first time the QbD concept in the development of a method for drug quantification in human plasma using elvitegravir as the test molecule. The goal of the study was to develop a fast and inexpensive quantification method, with precision and accuracy as requested by the European Medicines Agency guidelines on bioanalytical method validation. The method was divided into operative units, and for each unit critical variables affecting the results were identified. A risk analysis was performed to select critical process parameters that should be introduced in the design of experiments (DoEs). Different DoEs were used depending on the phase of advancement of the study. Protein precipitation and high-performance liquid chromatography-tandem mass spectrometry were selected as the techniques to be investigated. For every operative unit (sample preparation, chromatographic conditions, and detector settings), a model based on factors affecting the responses was developed and optimized. The obtained method was validated and clinically applied with success. To the best of our knowledge, this is the first investigation thoroughly addressing the application of QbD to the analysis of a drug in a biological matrix applied in a clinical laboratory. The extensive optimization process generated a robust method compliant with its intended use. The performance of the method is continuously monitored using control charts.
Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph
2015-05-22
When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved.
Copyright © 2015 Elsevier B.V. All rights reserved.
New Developments in the Technology Readiness Assessment Process in US DOE-EM - 13247
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krahn, Steven; Sutter, Herbert; Johnson, Hoyt
2013-07-01
A Technology Readiness Assessment (TRA) is a systematic, metric-based process and accompanying report that evaluates the maturity of the technologies used in systems; it is designed to measure technology maturity using the Technology Readiness Level (TRL) scale pioneered by the National Aeronautics and Space Administration (NASA) in the 1980s. More recently, DoD has adopted and provided systematic guidance for performing TRAs and determining TRLs. In 2007 the GAO recommended that the DOE adopt the NASA/DoD methodology for evaluating technology maturity. Earlier, in 2006-2007, DOE-EM had conducted pilot TRAs on a number of projects at Hanford and Savannah River. In March 2008, DOE-EM issued a process guide, which established TRAs as an integral part of DOE-EM's Project Management Critical Decision Process. Since the development of its detailed TRA guidance in 2008, DOE-EM has continued to accumulate experience in the conduct of TRAs and the process for evaluating technology maturity. DOE has developed guidance on TRAs applicable department-wide. DOE-EM's experience with the TRA process, the evaluations that led to recently developed proposed revisions to the DOE-EM TRA/TMP Guide, and the content of the proposed changes that incorporate the above lessons learned and insights are described. (authors)
Terminologies for text-mining; an experiment in the lipoprotein metabolism domain
Alexopoulou, Dimitra; Wächter, Thomas; Pickersgill, Laura; Eyre, Cecilia; Schroeder, Michael
2008-01-01
Background The engineering of ontologies, especially with a view to a text-mining use, is still a new research field. There does not yet exist a well-defined theory and technology for ontology construction. Many of the ontology design steps remain manual and are based on personal experience and intuition. However, there exist a few efforts on automatic construction of ontologies in the form of extracted lists of terms and relations between them. Results We share experience acquired during the manual development of a lipoprotein metabolism ontology (LMO) to be used for text-mining. We compare the manually created ontology terms with the automatically derived terminology from four different automatic term recognition (ATR) methods. The top 50 predicted terms contain up to 89% relevant terms. For the top 1000 terms the best method still generates 51% relevant terms. In a corpus of 3066 documents 53% of LMO terms are contained and 38% can be generated with one of the methods. Conclusions Given high precision, automatic methods can help decrease development time and provide significant support for the identification of domain-specific vocabulary. The coverage of the domain vocabulary depends strongly on the underlying documents. Ontology development for text mining should be performed in a semi-automatic way, taking ATR results as input and following the guidelines we described. Availability The TFIDF term recognition is available as a Web Service, described at PMID:18460175
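TF-IDF term recognition, the method mentioned under Availability, can be sketched in a few lines: candidate terms are ranked by their frequency in the corpus, down-weighted when they occur in every document. The toy corpus below is an invented stand-in for the 3066 lipoprotein metabolism documents, and real ATR would of course handle multi-word terms and normalization:

```python
import math
from collections import Counter

def tfidf_terms(corpus):
    """Rank candidate terms by TF-IDF summed over a document corpus."""
    tf = Counter()   # total term frequency across the corpus
    df = Counter()   # number of documents containing each term
    for doc in corpus:
        terms = doc.lower().split()
        tf.update(terms)
        df.update(set(terms))
    n = len(corpus)
    return sorted(tf, key=lambda w: tf[w] * math.log(n / df[w]), reverse=True)

# Hypothetical mini-corpus: the domain term "ldl" should surface on top.
corpus = [
    "ldl receptor mediates ldl uptake",
    "cholesterol transport and ldl receptor",
    "hdl reverse cholesterol transport",
    "protein structure analysis",
]
ranked = tfidf_terms(corpus)
```

Terms appearing in every document get an IDF of zero and sink to the bottom, which is the mechanism that filters generic vocabulary out of the candidate list.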
Four-Dimensional Data Assimilation Using the Adjoint Method
NASA Astrophysics Data System (ADS)
Bao, Jian-Wen
The calculus of variations is used to confirm that variational four-dimensional data assimilation (FDDA) using the adjoint method can be implemented when the numerical model equations have a finite number of first-order discontinuous points. These points represent the on/off switches associated with physical processes, for which the Jacobian matrix of the model equation does not exist. Numerical evidence suggests that, in some situations when the adjoint method is used for FDDA, the temperature field retrieved using horizontal wind data is numerically not unique. A physical interpretation of this type of non-uniqueness of the retrieval is proposed in terms of energetics. The adjoint equations of a numerical model can also be used for model-parameter estimation. A general computational procedure is developed to determine the size and distribution of any internal model parameter. The procedure is then applied to a one-dimensional shallow -fluid model in the context of analysis-nudging FDDA: the weighting coefficients used by the Newtonian nudging technique are determined. The sensitivity of these nudging coefficients to the optimal objectives and constraints is investigated. Experiments of FDDA using the adjoint method are conducted using the dry version of the hydrostatic Penn State/NCAR mesoscale model (MM4) and its adjoint. The minimization procedure converges and the initialization experiment is successful. Temperature-retrieval experiments involving an assimilation of the horizontal wind are also carried out using the adjoint of MM4.
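The adjoint machinery can be sketched with a linear toy model in a twin experiment: a forward sweep computes the misfit cost, a backward (adjoint) sweep accumulates its gradient with respect to the initial state, and gradient descent retrieves that state from the "observations". The model matrix, window length, and step size are illustrative assumptions, not MM4:

```python
import numpy as np

def cost_and_gradient(x0, A, obs):
    """4D-Var cost J and its gradient dJ/dx0 for a linear model x_{t+1} = A x_t."""
    T = len(obs)
    traj = [x0]
    for _ in range(T - 1):                 # forward sweep: run the model
        traj.append(A @ traj[-1])
    res = [traj[t] - obs[t] for t in range(T)]
    J = 0.5 * sum(r @ r for r in res)
    lam = res[-1]
    for t in range(T - 2, -1, -1):         # adjoint sweep, backwards in time
        lam = res[t] + A.T @ lam
    return J, lam                          # lam = dJ/dx0

# Twin experiment: generate "observations" from a known initial state,
# then recover it by gradient descent on the variational cost.
A = np.array([[0.9, 0.1], [-0.1, 0.9]])
x_true = np.array([1.0, 2.0])
obs = [x_true]
for _ in range(9):
    obs.append(A @ obs[-1])

x = np.zeros(2)
for _ in range(100):
    J, g = cost_and_gradient(x, A, obs)
    x -= 0.2 * g
```

In this smooth linear setting the retrieval is unique; the dissertation's point is that on/off physics switches break the differentiability this sweep relies on, and that retrievals can then lose uniqueness.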
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehrmann, Henning; Perdue, Robert
2012-07-01
Cementation of radioactive waste is a common technology. The waste is mixed with cement and water and forms a stable, solid block. Physical properties such as compressive strength or low leachability depend strongly on the cement recipe. Because this waste-cement mixture has to fulfill special requirements, recipe development is necessary. The Six Sigma™ DMAIC methodology, together with the Design of Experiments (DoE) approach, was employed to optimize the process of recipe development for cementation at the Ling Ao nuclear power plant (NPP) in China. DMAIC offers a structured, systematic, and traceable process to derive test parameters. The DoE test plans and statistical analysis are efficient regarding the number of test runs and the benefit gained by obtaining a transfer function. A transfer function enables simulation, which is useful for optimizing the later process and being responsive to changes. The DoE method was successfully applied to developing a cementation recipe for both evaporator concentrate and resin waste in the plant. The key input parameters were determined and evaluated, and the control of these parameters was included in the design. The applied Six Sigma™ tools can help to organize the thinking during the engineering process. Data are organized and clearly presented. The number of variables can be limited to the most important ones. The Six Sigma™ tools help to make the thinking and decision process traceable and can help to make data-driven decisions (e.g. with the Cause and Effect Matrix). But the tools are not the only golden way: results from scoring tools like the C and E Matrix need close review before using them. The DoE is an effective tool for generating test plans. DoE can be used with a small number of test runs, but gives a valuable result from an engineering perspective in terms of a transfer function. The DoE prediction results, however, are only valid in the tested area. So a careful selection of input parameters and their limits when setting up a DoE is very important. Extrapolation of results is not recommended because the results are not reliable outside the tested area. (authors)
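The transfer-function idea can be sketched with a two-level full-factorial DoE and a least-squares main-effects fit. The factors and response below are hypothetical stand-ins for the actual cementation parameters, and a real study would include replication, noise, and interaction terms:

```python
import itertools
import numpy as np

def full_factorial(n_factors):
    """Two-level full-factorial design matrix in coded units (-1 / +1)."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))

def fit_transfer_function(X, y):
    """Least-squares main-effects model: y = b0 + sum(bi * xi)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Hypothetical responses: compressive strength driven by water/cement
# ratio (x1) and waste loading (x2), both in coded -1/+1 units.
X = full_factorial(2)
y = 30.0 - 4.0 * X[:, 0] - 2.0 * X[:, 1]   # noiseless illustration
b = fit_transfer_function(X, y)            # recovers [30, -4, -2] exactly here
```

The fitted coefficients are the transfer function: they let the response be simulated for any coded setting inside the tested region, which is exactly the interpolation-only caveat stated above.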
Park, Jinhee; Yun, Chul; Kang, Seungcheol
2016-01-01
Background Consensus on whether physical condition affects the risk of gravity-induced loss of consciousness (G-LOC) has not been reached, and most previous studies about the issue did not include well-experienced aviators. We compared the physical conditions of well-experienced young aviators according to the occurrence of G-LOC during human centrifuge training. Methods Among 361 young male aviators on active flight duty with experience in high-performance aircraft for at least 2 years, 350 had full data available and were reviewed in this study. We divided the aviators into the G-LOC group and the non-G-LOC group according to their human centrifuge training results. We then compared their basic characteristics, body composition, physical fitness level, and pulmonary function. Results Twenty-nine aviators (8.3%) who experienced G-LOC during human centrifuge training in their first trials were classified into the G-LOC group. There was no difference in physical condition of aviators between the two groups. Conclusions Young aviators with experience in G-LOC showed no difference in physical condition such as muscle mass, strength, and general endurance from the aviators with no such experience. Although more studies are needed, physical condition does not seem to be a significant determinant of G-LOC among experienced aviators. PMID:26812597
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopper, Calvin Mitchell
In May 1973 the University of New Mexico conducted the first nationwide criticality safety training and education week-long short course for nuclear criticality safety engineers. Subsequent to that course, the Los Alamos Critical Experiments Facility (LACEF) developed very successful 'hands-on' subcritical and critical training programs for operators, supervisors, and engineering staff. Since the inception of the US Department of Energy (DOE) Nuclear Criticality Technology and Safety Project (NCT&SP) in 1983, the DOE has stimulated contractor facilities and laboratories to collaborate in the furthering of nuclear criticality as a discipline. That effort included the education and training of nuclear criticality safetymore » engineers (NCSEs). In 1985 a textbook was written that established a path toward formalizing education and training for NCSEs. Though the NCT&SP went through a brief hiatus from 1990 to 1992, other DOE-supported programs were evolving to the benefit of NCSE training and education. In 1993 the DOE established a Nuclear Criticality Safety Program (NCSP) and undertook a comprehensive development effort to expand the extant LACEF 'hands-on' course specifically for the education and training of NCSEs. That successful education and training was interrupted in 2006 for the closing of the LACEF and the accompanying movement of materials and critical experiment machines to the Nevada Test Site. Prior to that closing, the Lawrence Livermore National Laboratory (LLNL) was commissioned by the US DOE NCSP to establish an independent hands-on NCSE subcritical education and training course. The course provided an interim transition for the establishment of a reinvigorated and expanded two-week NCSE education and training program in 2011. 
The 2011 piloted two-week course was coordinated by the Oak Ridge National Laboratory (ORNL) and jointly conducted by the Los Alamos National Laboratory (LANL), which provided classroom education and facility training; the Sandia National Laboratory (SNL), which provided hands-on criticality experiments training; and the US DOE National Criticality Experiment Research Center (NCERC), which provided hands-on criticality experiments training, is jointly supported by LLNL and LANL, and is located at the Nevada National Security Site (NNSS). This paper describes the bases, content, and conduct of the piloted and future US DOE NCSP Criticality Safety Engineer Training and Education Project.
Modern methods of cost saving of the production activity in construction
NASA Astrophysics Data System (ADS)
Silka, Dmitriy
2017-10-01
Every time the economy faces a recession, cost-saving questions acquire increased urgency. This article shows how companies in the construction industry have switched to a new kind of economic relations over recent years. It is noted that the dominant type of economic relations does not allow companies to quickly reorient toward the necessary tools in accordance with new requirements of economic activity, so successful experience in the new environment is in demand. Cost-saving methods proven in other industries are offered for achieving efficiency and competitiveness of the companies. The analysis is performed on the example of the retail sphere, which, according to authoritative analytical reviews, is extremely innovative at both the local and global economic levels. Among the offered methods, a special place is taken by those based on today's unprecedentedly broad opportunities for communication and informational exchange.
Aikawa, T; Horino, S; Ichihara, Y
2015-08-01
Severe damage to natural vegetation, agriculture, and forestry caused by the overpopulation of sika deer (Cervus nippon) has markedly increased in Japan in recent years. To devise a population management plan for sika deer, information on the distribution and population size of the animal in each region is indispensable. An easy and effective method to obtain this information is to count fecal pellets in the field. However, the habitat of sika deer in Japan overlaps that of the Japanese serow (Capricornis crispus), and it is difficult to discriminate between the feces of the two animals. Here, we present a rapid and precise diagnostic method for discriminating between the feces of sika deer and Japanese serow using loop-mediated isothermal amplification (LAMP) targeting the cytochrome b gene in the mitochondrial DNA. Our results showed that LAMP can discriminate between the feces of sika deer and Japanese serow, and the method is simpler and more sensitive than the conventional molecular diagnostic method. Since the LAMP method does not require special molecular biology skills, even field researchers who have never done a molecular experiment can easily carry out the protocol. In addition, the entire protocol, from DNA extraction from a fecal pellet to identification of the species, takes only about 75 min and does not require expensive equipment. Hence, this diagnostic method is simple, fast, and accessible to anyone. As such, it can be a useful tool for estimating the distribution and population size of sika deer.
Observation of the development of secondary features in a Richtmyer–Meshkov instability driven flow
Bernard, Tennille; Truman, C. Randall; Vorobieff, Peter; ...
2014-09-10
Richtmyer–Meshkov instability (RMI) has long been the subject of interest for analytical, numerical, and experimental studies. In comparing results of experiment with numerics, it is important to understand the limitations of the experimental techniques inherent in the chosen method(s) of data acquisition. We discuss results of an experiment where a laminar, gravity-driven column of heavy gas is injected into surrounding light gas and accelerated by a planar shock. A popular and well-studied method of flow visualization (using glycol droplet tracers) does not produce a flow pattern that matches the numerical model of the same conditions, while revealing the primary feature of the flow developing after shock acceleration: the pair of counter-rotating vortex columns. However, visualization using a fluorescent gaseous tracer confirms the presence of features suggested by the numerics; in particular, a central spike formed due to shock focusing in the heavy-gas column. Furthermore, the streamwise growth rate of the spike appears to exhibit the same scaling with Mach number as that of the counter-rotating vortex pair (CRVP).
Taking Halo-Independent Dark Matter Methods Out of the Bin
Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew
2014-10-30
We develop a new halo-independent strategy for analyzing emerging DM hints, utilizing the method of extended maximum likelihood. This approach does not require the binning of events, making it uniquely suited to the analysis of emerging DM direct detection hints. It determines a preferred envelope, at a given confidence level, for the DM velocity integral which best fits the data using all available information and can be used even in the case of a single anomalous scattering event. All of the halo-independent information from a direct detection result may then be presented in a single plot, allowing simple comparisons between multiple experiments. This results in the halo-independent analogue of the usual mass and cross-section plots found in typical direct detection analyses, where limit curves may be compared with best-fit regions in halo-space. The method is straightforward to implement, using already-established techniques, and its utility is demonstrated through the first unbinned halo-independent comparison of the three anomalous events observed in the CDMS-Si detector with recent limits from the LUX experiment.
Noncontact evaluation for interface states by photocarrier counting
NASA Astrophysics Data System (ADS)
Furuta, Masaaki; Shimizu, Kojiro; Maeta, Takahiro; Miyashita, Moriya; Izunome, Koji; Kubota, Hiroshi
2018-03-01
We have developed a noncontact measurement method that enables in-line measurement and does not require any test element group (TEG) formation. In this method, the number of photocarriers excited from the interface states is counted, which is called “photocarrier counting”, and the energy distribution of the interface state density (D_it) is then evaluated by spectral light excitation. In our previous experiment, the method used was a preliminary contact measurement at the oxide on top of the Si wafer. Here, we have developed a D_it measurement method as a noncontact measurement with a gap between the probes and the wafer. The shallow trench isolation (STI) sidewall has more localized interface states than the region under the gate electrode. We demonstrate the noncontact measurement of trapped carriers from interface states using wafers of three different crystal plane orientations. This demonstration will pave the way for evaluating STI sidewall interface states in future studies.
An Approach Toward Synthesis of Bridgmanite in Dynamic Compression Experiments
NASA Astrophysics Data System (ADS)
Reppart, J. J.
2015-12-01
Bridgmanite occurs in heavily shocked meteorites and provides a useful constraint on pressure-temperature conditions during shock metamorphism. Its occurrence also provides constraints on the shock-release path. Shock release and shock duration are important parameters in estimating the size of impactors that generate the observed shock-metamorphic record. Thus, it is timely to examine whether bridgmanite can be synthesized in dynamic compression experiments, with the goal of establishing a correlation between shock duration and grain size. Up to now, only one high-pressure polymorph of an Mg-silicate has been both synthesized and recovered in a shock experiment (wadsleyite). Therefore, it is not a given that shock synthesis of bridgmanite is possible. This project started recently, so we present an outline of shock experiment designs and, potentially, results from the first experiments. FUNDING ACKNOWLEDGMENT UNLV HiPSEC: This research was sponsored in part by the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through DOE Cooperative Agreement #DE-NA0001982. HPCAT: Portions of this work were performed at HPCAT (Sector 16), Advanced Photon Source (APS), Argonne National Laboratory. HPCAT operations are supported by DOE-NNSA under Award No. DE-NA0001974 and DOE-BES under Award No. DE-FG02-99ER45775, with partial instrumentation funding by NSF. APS is supported by DOE-BES under Contract No. DE-AC02-06CH11357.
Optimization of a chondrogenic medium through the use of factorial design of experiments.
Enochson, Lars; Brittberg, Mats; Lindahl, Anders
2012-12-01
The standard culture system for in vitro cartilage research is based on cells in a three-dimensional micromass culture and a defined medium containing the chondrogenic key growth factor, transforming growth factor (TGF)-β1. The aim of this study was to optimize the medium for chondrocyte micromass culture. Human chondrocytes were cultured in different media formulations, designed with a factorial design of experiments (DoE) approach and based on the standard medium for redifferentiation. The significant factors for the redifferentiation of the chondrocytes were determined and optimized in a two-step process through the use of response surface methodology. TGF-β1, dexamethasone, and glucose were significant factors for differentiating the chondrocytes. Compared to the standard medium, TGF-β1 was increased by 30%, dexamethasone reduced by 50%, and glucose increased by 22%. The potency of the optimized medium was validated in a comparative study against the standard medium. The optimized medium resulted in micromass cultures with increased expression of genes important for the articular chondrocyte phenotype and in cultures with increased glycosaminoglycan/DNA content. By optimizing the standard medium with the efficient DoE method, a new medium that gave better redifferentiation of articular chondrocytes was determined.
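The factorial screening step behind such a DoE medium optimization (ranking factors by their effect on the response) can be sketched as a two-level full factorial analysis. The factor names echo the abstract, but the design matrix and response values below are invented for illustration, not the study's data.

```python
import itertools
import numpy as np

# Hypothetical 2^3 full factorial: coded levels -1/+1 for three medium factors.
factors = ["TGF-b1", "dexamethasone", "glucose"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs

# Illustrative responses (e.g., glycosaminoglycan/DNA content per run).
response = np.array([2.1, 3.0, 1.8, 2.6, 2.4, 3.5, 2.1, 3.2])

# Main effect of each factor: mean response at +1 minus mean response at -1.
effects = {f: response[design[:, i] == 1].mean() - response[design[:, i] == -1].mean()
           for i, f in enumerate(factors)}

# Rank factors by absolute effect size to pick the significant ones.
ranked = sorted(effects, key=lambda f: abs(effects[f]), reverse=True)
print(ranked)
```

In a real DoE workflow, the ranked effects would then feed a response surface optimization over the significant factors, as the authors did.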
Design-of-experiments to Reduce Life-cycle Costs in Combat Aircraft Inlets
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Baust, Henry D.; Agrell, Johan
2003-01-01
It is the purpose of this study to demonstrate the viability and economy of Design-of-Experiments (DOE) methods to arrive at micro-secondary flow control installation designs that achieve optimal inlet performance for different mission strategies. These statistical design concepts were used to investigate the properties of "low unit strength" micro-effector installations. "Low unit strength" micro-effectors are micro-vanes, set at a very low angle of incidence, with very long chord lengths. They are designed to influence the near-wall inlet flow over an extended streamwise distance. In this study, however, the long chord lengths were replicated by a series of short-chord-length effectors arranged in series over multiple bands of effectors. In order to properly evaluate the performance differences between the single-band extended-chord-length installation designs and the segmented multiband short-chord-length designs, both sets of installations must be optimal. Critical to achieving optimal micro-secondary flow control installation designs is an understanding of the factor interactions that occur between the multiple bands of micro-scale vane effectors. These factor interactions are best understood and brought together in an optimal manner through a structured DOE process, or more specifically Response Surface Methods (RSM).
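Response Surface Methods, named above as the structured DOE process, fit a low-order polynomial to the measured runs and locate its stationary point. A minimal one-factor sketch with invented data (the coded settings and responses are placeholders, not the study's measurements):

```python
import numpy as np

# RSM sketch: fit a quadratic response surface y = b0 + b1*x + b2*x^2
# to DOE runs for one coded factor and locate its optimum.
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])     # coded effector setting (hypothetical)
y = np.array([0.82, 0.88, 0.91, 0.89, 0.84])  # e.g. inlet pressure recovery (invented)

b2, b1, b0 = np.polyfit(x, y, 2)              # coefficients, highest degree first
x_opt = -b1 / (2 * b2)                        # stationary point of the fitted parabola
```

With several factors, the same idea extends to a quadratic surface in all coded factors, and the stationary point comes from solving the gradient system.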
Monitoring of the secondary drying in freeze-drying of pharmaceuticals.
Fissore, Davide; Pisano, Roberto; Barresi, Antonello A
2011-02-01
This paper is focused on the in-line monitoring of the secondary drying phase of a lyophilization process. An innovative software sensor is presented to reliably estimate the residual moisture in the product and the time required to complete secondary drying, that is, to reach the target value of the residual moisture or of the desorption rate. Such results are obtained by coupling a mathematical model of the process with the in-line measurement of the solvent desorption rate, obtained by means of the pressure rise test or other sensors (e.g., windmills, laser sensors) that can measure the vapor flux in the drying chamber. The proposed method does not require extracting any vial during the operation or using expensive sensors to measure the residual moisture off-line. Moreover, it does not require any preliminary experiment to determine the relationship between the desorption rate and the residual moisture in the product. The effectiveness of the proposed approach is demonstrated by means of experiments carried out in a pilot-scale apparatus: in this case, some vials were extracted from the drying chamber and the moisture content was measured to validate the estimations provided by the soft-sensor. Copyright © 2010 Wiley-Liss, Inc.
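The coupling of a process model with an in-line desorption-rate measurement can be illustrated with a deliberately simplified first-order desorption model; the readings, units, and target value below are hypothetical and stand in for the paper's more detailed model.

```python
import math

# Hypothetical in-line desorption-rate readings (kg water / kg dry solid / h).
t1, r1 = 1.0, 0.020
t2, r2 = 3.0, 0.012

# First-order desorption: r(t) = k * m(t), so m(t) = m0 * exp(-k t)
# and the measured rate decays as r(t) = k * m0 * exp(-k t).
k = math.log(r1 / r2) / (t2 - t1)          # 1/h, from two rate readings

m2 = r2 / k                                # residual moisture inferred at time t2
m_target = 0.01                            # target residual moisture (hypothetical)
t_remaining = math.log(m2 / m_target) / k  # hours until the target is reached
```

The point mirrors the abstract: the moisture estimate comes entirely from the rate measurement and the model, with no vial extraction and no prior rate-vs-moisture calibration experiment.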
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiments (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated as crucial or noncrucial; in a second step, the crucial parameters are optimized. The aim of this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (from 12.3 to 1.1% and from 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (from 68.2 to 86.4% and from 68.8 to 88.1%, respectively). This proof-of-concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
Development of Optimal Stressor Scenarios for New Operational Energy Systems
2017-12-01
Analyzing the previous model using a design of experiments (DOE) and regression analysis provides critical information about the associated operational...from experimentation. The resulting system requirements can be used to revisit the design requirements and develop a more robust system. This process...stressor scenarios for acceptance testing.
2017-09-14
averaging the gage measurements many specimens were not meeting the ASTM D3039 standard tolerance limitations when compared to the designed 3 mm and 15 mm...MarkOne) 3D printer. A design of experiments (DOE) was performed to develop a mathematical model describing the functional relationship between the...
Does Ice Dissolve or Does Halite Melt? A Low-Temperature Liquidus Experiment for Petrology Classes.
ERIC Educational Resources Information Center
Brady, John B.
1992-01-01
Measurement of the compositions and temperatures of H2O-NaCl brines in equilibrium with ice can be used as an easy in-class experimental determination of a liquidus. This experiment emphasizes the symmetry of the behavior of brines with regard to the minerals ice and halite and helps to free students from the conceptual tethers of one-component…
Direct magnetic field estimation based on echo planar raw data.
Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim
2010-07-01
Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities, which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming the field inhomogeneity distribution B(0) does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and the image-phase-based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction, and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B(0) maps in phantoms and in vivo. The k-space filtering analysis presented in this work was demonstrated to be the most sensitive method for detecting field inhomogeneities.
NASA Technical Reports Server (NTRS)
Parsons, David S.; Ordway, David; Johnson, Kenneth
2013-01-01
This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
A new method of real-time detection of changes in periodic data stream
NASA Astrophysics Data System (ADS)
Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei
2017-07-01
The detection of change points in periodic time series is highly desirable in many practical applications. We present a novel algorithm for this task, which includes two phases: 1) anomaly measurement: on the basis of a typical regression model, we propose a new computation method to measure anomalies in a time series that does not require any reference data from other measurement(s); 2) change detection: we introduce a new martingale test for detection that can be operated in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results suggest that our algorithm is directly applicable in many real-world change-point-detection applications.
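A martingale change test of this kind can be sketched with a randomized power martingale; here the strangeness score is a simple distance to the running mean rather than the paper's regression-based measure, and the stream is synthetic.

```python
import random

def martingale_path(stream, eps=0.92):
    """Randomized power martingale over a data stream: the martingale value
    grows when incoming points look 'strange' relative to the history,
    signalling a change point (no reference data, no training)."""
    rng = random.Random(0)
    strangeness_hist = []              # past strangeness scores
    mean, m, path = 0.0, 1.0, []
    for t, x in enumerate(stream):
        s = abs(x - mean)              # strangeness: distance to the running mean
        theta = rng.random()
        bigger = sum(1 for h in strangeness_hist if h > s)
        equal = sum(1 for h in strangeness_hist if h == s)
        # Randomized p-value: uniform under exchangeability of the history.
        p = (bigger + theta * (equal + 1)) / (len(strangeness_hist) + 1)
        m *= eps * p ** (eps - 1.0)    # power-martingale update
        path.append(m)
        strangeness_hist.append(s)
        mean += (x - mean) / (t + 1)   # update the running mean
    return path

data = [0.0] * 60 + [5.0] * 40         # synthetic stream with a jump at t = 60
path = martingale_path(data)
# In practice, a change is declared once the martingale exceeds a threshold.
```

The martingale stays near 1 while the stream is exchangeable and grows multiplicatively once the jump makes consecutive points strange, which is what the threshold test exploits.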
Elmer-Dixon, Margaret M; Bowler, Bruce E
2018-05-19
A novel approach to quantifying mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material, and are destructive. We extend our recently described method for the quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input to a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.
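In the simplest linear picture, quantifying a two-lipid mixture from multi-wavelength measurements reduces to least-squares unmixing; the per-lipid coefficients below are invented placeholders for the Mie-derived values the authors compute.

```python
import numpy as np

# Hypothetical per-concentration scattering contributions of two pure lipids
# at four wavelengths (one column per lipid), and a simulated measured spectrum.
E = np.array([[0.9, 0.2],
              [0.7, 0.4],
              [0.5, 0.6],
              [0.3, 0.8]])
c_true = np.array([2.0, 1.5])          # invented concentrations of lipid 1 and 2

measured = E @ c_true                  # forward model: spectrum = E @ concentrations

# Least-squares unmixing recovers the per-lipid concentrations.
c_hat, *_ = np.linalg.lstsq(E, measured, rcond=None)
```

The overdetermined system (more wavelengths than lipids) is what makes the inversion robust to measurement noise.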
Wang, Rong
2015-01-01
In real-world applications, images of faces vary with illumination, facial expression, and pose. More training samples are able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We obtained the mirror faces generated from the original training samples and put both kinds of samples into a new training set. Face recognition experiments show that our method does achieve high classification accuracy.
Quantitative phase microscopy for cellular dynamics based on transport of intensity equation.
Li, Ying; Di, Jianglei; Ma, Chaojie; Zhang, Jiwei; Zhong, Jinzhan; Wang, Kaiqiang; Xi, Teli; Zhao, Jianlin
2018-01-08
We demonstrate a simple method for quantitative phase imaging of tiny transparent objects, such as living cells, based on the transport of intensity equation. The experiments are performed using an inverted bright field microscope upgraded with a flipping imaging module, which enables the simultaneous creation of two laterally separated images with unequal defocus distances. This add-on module does not include any lenses or gratings and is cost-effective and easy to align. The validity of this method is confirmed by measurements of a microlens array and human osteoblastic cells in culture, indicating its potential for dynamically measuring living cells and other transparent specimens in a quantitative, non-invasive and label-free manner.
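The transport of intensity equation (TIE) links the through-focus intensity derivative to the phase; for a uniform background intensity it reduces to a Poisson equation, ∇²φ = -(k/I0) ∂I/∂z, that can be inverted with FFTs. The sketch below simulates two defocused images of a synthetic phase object and recovers the phase; all optical parameters are illustrative, not the paper's setup.

```python
import numpy as np

n, I0 = 128, 1.0
wavelength, pixel, dz = 633e-9, 1e-6, 2e-6   # hypothetical optics (m)
k = 2 * np.pi / wavelength

yy, xx = np.mgrid[-n // 2: n // 2, -n // 2: n // 2]
phi_true = np.exp(-(xx ** 2 + yy ** 2) / (2 * 15.0 ** 2))  # synthetic cell-like phase

# Spatial-frequency grid for FFT-based Laplacian / inverse Laplacian.
fx = np.fft.fftfreq(n, d=pixel)
kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
lap_kernel = -(kx ** 2 + ky ** 2)

# Forward model to first order in defocus: dI/dz = -(I0/k) * laplacian(phi).
lap_phi = np.real(np.fft.ifft2(lap_kernel * np.fft.fft2(phi_true)))
I_plus = I0 - (I0 * dz / (2 * k)) * lap_phi   # image at +dz/2
I_minus = I0 + (I0 * dz / (2 * k)) * lap_phi  # image at -dz/2
dIdz = (I_plus - I_minus) / dz                # central finite difference

# TIE inversion: divide by the Laplacian kernel, zeroing the DC term.
rhs = np.fft.fft2(-(k / I0) * dIdz)
with np.errstate(divide="ignore", invalid="ignore"):
    phi_hat = rhs / lap_kernel
phi_hat[0, 0] = 0.0                           # the mean phase is unrecoverable
phi_rec = np.real(np.fft.ifft2(phi_hat))      # recovered phase, up to a constant
```

The recovered phase matches the synthetic object up to its mean, the usual TIE ambiguity.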
McNamee, R L; Eddy, W F
2001-12-01
Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.
New force replica exchange method and protein folding pathways probed by force-clamp technique.
Kouza, Maksim; Hu, Chin-Kun; Li, Mai Suan
2008-01-28
We have developed a new extended replica exchange method to study the thermodynamics of a system in the presence of an external force. Our idea is based on the exchange between different force replicas to accelerate the equilibration process. This new approach was applied to obtain the force-temperature phase diagram and other thermodynamic quantities of the three-domain ubiquitin. Using the C(alpha)-Go model and Langevin dynamics, we have shown that the refolding pathways of single ubiquitin depend on which terminus is fixed. If the N end is fixed then the folding pathways are different compared to the case when both termini are free, but fixing the C terminus does not change them. Surprisingly, we have found that the anchoring terminus does not affect the pathways of individual secondary structures of three-domain ubiquitin, indicating the important role of the multidomain construction. Therefore, force-clamp experiments, in which one end of a protein is kept fixed, can probe the refolding pathways of a single free-end ubiquitin if one uses either the polyubiquitin or a single domain with the C terminus anchored. However, it is shown that anchoring one end does not affect the refolding pathways of the titin domain I27, and force-clamp spectroscopy is always capable of predicting the folding sequence of this protein. We have obtained a reasonable estimate for the unfolding barrier of ubiquitin, using the microscopic theory for the dependence of the unfolding time on the external force. The linkage between residue Lys48 and the C terminus of ubiquitin is found to have a dramatic effect on the location of the transition state along the end-to-end distance reaction coordinate, but the multidomain construction leaves the transition state almost unchanged. We have found that the maximum force in the force-extension profile from constant-velocity force-pulling simulations depends nonlinearly on temperature. However, over some narrow temperature interval this dependence becomes linear, as has been observed in recent experiments.
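The force-exchange move at the heart of such a method can be sketched with the standard Metropolis argument: for a Hamiltonian H = H0 - f·x, detailed balance for swapping the forces of two replicas with extensions x_i and x_j gives an acceptance probability min(1, exp(-β(f_i - f_j)(x_i - x_j))). This is a generic sketch of the technique, not code from the paper, and the numbers are hypothetical.

```python
import math
import random

def try_force_exchange(x_i, x_j, f_i, f_j, beta, rng=random):
    """Metropolis acceptance for swapping the pulling forces of two replicas.
    x is the end-to-end extension along the force direction."""
    delta = -beta * (f_i - f_j) * (x_i - x_j)
    return delta >= 0 or rng.random() < math.exp(delta)

# Hypothetical numbers: extensions in nm, forces in pN, beta in 1/(pN*nm).
beta = 1.0 / 4.1   # ~ 1/kBT at room temperature in pN*nm units
accept = try_force_exchange(x_i=12.0, x_j=10.0, f_i=20.0, f_j=21.0, beta=beta)
```

An exchange that moves the larger force onto the more extended replica lowers the total energy and is always accepted; the reverse swap is accepted only with the Boltzmann probability, which is what lets replicas diffuse across the force ladder.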
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
Observation-Driven Configuration of Complex Software Systems
NASA Astrophysics Data System (ADS)
Sage, Aled
2010-06-01
The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
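A Taguchi-style analysis of configuration experiments can be sketched as follows: assign two-level configuration options to the columns of an orthogonal design, compute a smaller-the-better signal-to-noise ratio per run, and pick, for each factor, the level with the higher mean S/N. The option names and latencies below are invented, not DC-Directory data.

```python
import math

# Eight runs of a two-level, three-factor design (here a full 2^3, which the
# L8 orthogonal array contains) with hypothetical mean request latencies (ms).
runs = [
    ((1, 1, 1), 120.0), ((1, 1, 2), 150.0), ((1, 2, 1), 110.0), ((1, 2, 2), 140.0),
    ((2, 1, 1), 100.0), ((2, 1, 2), 130.0), ((2, 2, 1),  90.0), ((2, 2, 2), 125.0),
]
factors = ["cache", "threads", "sync"]   # hypothetical configuration options

def sn_smaller_better(y):
    # Taguchi "smaller the better" signal-to-noise ratio, in dB.
    return -10.0 * math.log10(y * y)

best = {}
for i, name in enumerate(factors):
    sn_by_level = {}
    for levels, latency in runs:
        sn_by_level.setdefault(levels[i], []).append(sn_smaller_better(latency))
    # Keep the level whose runs have the highest average S/N (lowest latency).
    best[name] = max(sn_by_level,
                     key=lambda lv: sum(sn_by_level[lv]) / len(sn_by_level[lv]))
```

With a true L8 array, up to seven factors share the eight runs, which is the economy that makes Taguchi screening attractive for expensive configuration experiments.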
Measuring the Density of Liquid Targets in the SeaQuest Experiment
NASA Astrophysics Data System (ADS)
Xi, Zhaojia; SeaQuest/E906 Collaboration
2015-10-01
The SeaQuest (E906) experiment, using the 120 GeV proton beam from the Main Injector at the Fermi National Accelerator Lab (FNAL), is studying the quark and antiquark structure of the nucleon using the Drell-Yan process. Based on the cross section ratios, σ(p+d)/σ(p+p), SeaQuest will extract the Bjorken-x dependence of the d/u ratio. The measurement will cover the large-x region (x > 0.25) with improved accuracy compared to the previous E866/NuSea experiment. Liquid D2 (LD2) and liquid H2 (LH2) are the targets used in the SeaQuest experiment. The densities of the LD2 and LH2 targets are two important quantities for the determination of the d/u ratio. We measure the pressure and temperature inside the flasks, from which the densities are calculated. The method, measurements and results of this study will be presented. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.
Verbal overshadowing of visual memories: some things are better left unsaid.
Schooler, J W; Engstler-Schooler, T Y
1990-01-01
It is widely believed that verbal processing generally improves memory performance. However, in a series of six experiments, verbalizing the appearance of previously seen visual stimuli impaired subsequent recognition performance. In Experiment 1, subjects viewed a videotape including a salient individual. Later, some subjects described the individual's face. Subjects who verbalized the face performed less well on a subsequent recognition test than control subjects who did not engage in memory verbalization. The results of Experiment 2 replicated those of Experiment 1 and further clarified the effect of memory verbalization by demonstrating that visualization does not impair face recognition. In Experiments 3 and 4 we explored the hypothesis that memory verbalization impairs memory for stimuli that are difficult to put into words. In Experiment 3 memory impairment followed the verbalization of a different visual stimulus: color. In Experiment 4 marginal memory improvement followed the verbalization of a verbal stimulus: a brief spoken statement. In Experiments 5 and 6 the source of verbally induced memory impairment was explored. The results of Experiment 5 suggested that the impairment does not reflect a temporary verbal set, but rather indicates relatively long-lasting memory interference. Finally, Experiment 6 demonstrated that limiting subjects' time to make recognition decisions alleviates the impairment, suggesting that memory verbalization overshadows but does not eradicate the original visual memory. This collection of results is consistent with a recording interference hypothesis: verbalizing a visual memory may produce a verbally biased memory representation that can interfere with the application of the original visual memory.
Efficient Variable Selection Method for Exposure Variables on Binary Data
NASA Astrophysics Data System (ADS)
Ohno, Manabu; Tarumi, Tomoyuki
In this paper, we propose a new variable-selection method for "robust" exposure variables, where "robust" means that the same variable is selected from both the original data and perturbed data. Few effective selection methods exist for this problem, which is essentially that of extracting correlation rules without a robustness requirement. Brin et al. [Brin 97] suggested that correlation rules can be extracted efficiently on binary data using the chi-squared statistic of a contingency table, which has a monotone property. However, the chi-squared value itself is not monotone, so as the dimension increases the method easily judges a variable set to be dependent even when it is completely independent; it is therefore unsuitable for selecting robust exposure variables. To select robust independent variables, we assume an anti-monotone property for independence and apply the apriori algorithm, one of the standard algorithms for finding association rules in market-basket data, which prunes candidates using the anti-monotone property of the support measure. Independence does not strictly satisfy the anti-monotone property on the AIC of the independence probability model, but the tendency to do so is strong, so variables selected under this assumption are robust. Our method judges whether a given variable is an exposure variable for an independent variable by comparing AIC values. Numerical experiments show that our method selects robust exposure variables efficiently and precisely.
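The level-wise, anti-monotone pruning that the apriori algorithm contributes here can be sketched in a few lines. This is a minimal illustration over the classical support measure on toy transaction data, not the authors' AIC-based criterion: a candidate k-set is generated only if all of its (k-1)-subsets survived the previous level.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent-itemset search exploiting anti-monotonicity:
    every subset of a frequent itemset is frequent, so candidates with
    any infrequent subset are pruned before their support is counted."""
    n = len(transactions)

    def support(itemset):
        # Fraction of transactions containing the itemset.
        return sum(1 for t in transactions if itemset <= t) / n

    # Level 1: frequent single items.
    items = {i for t in transactions for i in t}
    level = {frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support}
    frequent = {s: support(s) for s in level}

    k = 2
    while level:
        # Candidate k-sets are unions of frequent (k-1)-sets ...
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # ... pruned if any (k-1)-subset is infrequent (anti-monotone step).
        candidates = {c for c in candidates
                      if all(frozenset(s) in level
                             for s in combinations(c, k - 1))}
        level = {c for c in candidates if support(c) >= min_support}
        frequent.update({s: support(s) for s in level})
        k += 1
    return frequent
```

The abstract's proposal amounts to replacing the support test with an AIC-based independence test that behaves anti-monotonically in practice, while keeping this pruning skeleton.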
Majewski, M.; Desjardins, R.; Rochette, P.; Pattey, E.; Seiber, J.; Glotfelty, D.
1993-01-01
The field experiment reported here applied the relaxed eddy accumulation (REA) technique to the measurement of triallate (TA) and trifluralin (TF) volatilization from fallow soil. A critical analysis of the REA system used in this experiment is presented, and the fluxes are compared to those obtained by the aerodynamic-gradient (AG) technique. The measured cumulative volatilization losses, corrected for the effective upwind source area (footprint), for the AG system were higher than with the REA system. The differences between the methods over the first 5 days of the experiment were 27 and 13% for TA and TF, respectively. A mass balance based on the amount of parent compounds volatilized from soil during the first 5 days of the experiment showed a 110 and 70% and a 79 and 61% accountability for triallate and trifluralin by the AG and REA methods, respectively. These results also show that the non-footprint-corrected AG flux values underestimated the volatilization flux by approximately 16%. The footprint correction model used in this experiment does not presently have the capability of accounting for changes in atmospheric stability. However, these values still provide an indication of the most likely upwind area affecting the evaporative flux estimations. The soil half-lives for triallate and trifluralin were 9.8 and 7.0 days, respectively. © 1992 American Chemical Society.
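The REA technique estimates a flux from conditionally sampled updraft and downdraft concentrations. A minimal sketch of the standard Businger-Oncley relation follows; the abstract does not state the coefficient or units the study used, so the empirical coefficient b ≈ 0.6 and the example values are illustrative, not taken from the paper:

```python
def rea_flux(sigma_w, c_up, c_down, b=0.6):
    """Relaxed eddy accumulation flux estimate (Businger-Oncley form):
    flux = b * sigma_w * (mean updraft conc. - mean downdraft conc.).
    sigma_w: standard deviation of vertical wind speed (m/s);
    concentrations in ng/m^3 then give a flux in ng m^-2 s^-1."""
    return b * sigma_w * (c_up - c_down)
```

For example, with sigma_w = 0.5 m/s and an updraft/downdraft concentration difference of 40 ng/m^3, the estimated flux is 12 ng m^-2 s^-1.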
Letizia, M C; Cornaglia, M; Tranchida, G; Trouillon, R; Gijs, M A M
2018-01-22
When studying the drug effectiveness towards a target model, one should distinguish the effects of the drug itself and of all the other factors that could influence the screening outcome. This comprehensive knowledge is crucial, especially when model organisms are used to study the drug effect at a systemic level, as a higher number of factors can influence the drug-testing outcome. Covering the entire experimental domain and studying the effect of the simultaneous change in several factors would require numerous experiments, which are costly and time-consuming. Therefore, a design of experiment (DoE) approach in drug-testing is emerging as a robust and efficient method to reduce the use of resources, while maximizing the knowledge of the process. Here, we used a 3-factor-Doehlert DoE to characterize the concentration-dependent effect of the drug doxycycline on the development duration of the nematode Caenorhabditis elegans. To cover the experimental space, 13 experiments were designed and performed, where different doxycycline concentrations were tested, while also varying the temperature and the food amount, which are known to influence the duration of C. elegans development. A microfluidic platform was designed to isolate and culture C. elegans larvae, while testing the doxycycline effect with full control of temperature and feeding over the entire development. Our approach allowed predicting the doxycycline effect on C. elegans development in the complete drug concentration/temperature/feeding experimental space, maximizing the understanding of the effect of this antibiotic on the C. elegans development and paving the way towards a standardized and optimized drug-testing process.
Pardo, O; Yusà, V; Coscollà, C; León, N; Pastor, A
2007-07-01
A selective and sensitive procedure has been developed and validated for the determination of acrylamide in difficult matrices, such as coffee and chocolate. The proposed method includes pressurised fluid extraction (PFE) with acetonitrile, florisil clean-up purification inside the PFE extraction cell and detection by liquid chromatography (LC) coupled to atmospheric pressure ionisation in positive mode tandem mass spectrometry (APCI-MS-MS). Comparison of ionisation sources (atmospheric pressure chemical ionisation (APCI), atmospheric pressure photoionization (APPI) and the combined APCI/APPI) and clean-up procedures were carried out to improve the analytical signal. The main parameters affecting the performance of the different ionisation sources were previously optimised using statistical design of experiments (DOE). PFE parameters were also optimised by DOE. For quantitation, an isotope dilution approach was used. The limit of quantification (LOQ) of the method was 1 microg kg(-1) for coffee and 0.6 microg kg(-1) for chocolate. Recoveries ranged between 81-105% in coffee and 87-102% in chocolate. The accuracy was evaluated using a coffee reference test material FAPAS T3008. Using the optimised method, 20 coffee and 15 chocolate samples collected from Valencian (Spain) supermarkets, were investigated for acrylamide, yielding median levels of 146 microg kg(-1) in coffee and 102 microg kg(-1) in chocolate.
System Synthesis in Preliminary Aircraft Design using Statistical Methods
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.
1996-01-01
This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than those generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it yields a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
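A response surface equation of the kind described, a full second-order regression polynomial fitted to DOE runs, can be sketched generically as follows. This is a plain least-squares fit, not the paper's aero-propulsion models; the function name and the toy data are illustrative:

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit a full second-order response surface by ordinary least squares:
    y ~ b0 + sum_i bi*xi + sum_i bii*xi^2 + sum_{i<j} bij*xi*xj.
    Returns the coefficient vector in that column order."""
    X = np.asarray(X, float)
    n, k = X.shape
    cols = [np.ones(n)]                                   # intercept
    cols += [X[:, i] for i in range(k)]                   # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]              # pure quadratics
    cols += [X[:, i] * X[:, j]                            # interactions
             for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coef
```

Fitted on a three-level factorial grid (a common DOE choice for second-order models), the regression recovers the generating polynomial exactly when the response is itself quadratic.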
Muscat Galea, Charlene; Didion, David; Clicq, David; Mangelings, Debby; Vander Heyden, Yvan
2017-12-01
A supercritical fluid chromatographic method for the separation of a drug and its impurities has been developed and optimized applying an experimental design approach and chromatogram simulations. Stationary phase screening was followed by optimization of the modifier and injection solvent composition. A design-of-experiment (DoE) approach was then used to optimize column temperature, back-pressure and the gradient slope simultaneously. Regression models for the retention times and peak widths of all mixture components were built. The factor levels for different grid points were then used to predict the retention times and peak widths of the mixture components using the regression models, and the best separation for the worst-separated peak pair in the experimental domain was identified. A plot of the minimal resolutions was used to help identify the factor levels leading to the highest resolution between consecutive peaks. The effects of the DoE factors were visualized in a way that is familiar to the analytical chemist, i.e. by simulating the resulting chromatogram. The mixture of an active ingredient and seven impurities was separated in less than eight minutes. The approach discussed in this paper demonstrates how SFC methods can be developed and optimized efficiently using simple concepts and tools. Copyright © 2017 Elsevier B.V. All rights reserved.
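The max-min search over the grid described above (pick the setting whose worst-separated peak pair is best separated) can be sketched as follows. Here `predict` is a hypothetical stand-in for the fitted retention-time/peak-width regression models, and the resolution formula is the usual chromatographic one, not necessarily the exact definition used in the paper:

```python
def resolution(t1, w1, t2, w2):
    # Resolution between two peaks: 2*|t2 - t1| / (w1 + w2).
    return 2.0 * abs(t2 - t1) / (w1 + w2)

def best_setting(settings, predict):
    """Pick the factor-level combination maximizing the minimal resolution.
    predict(setting) -> list of (retention_time, peak_width), one tuple
    per mixture component, as given by the regression models."""
    def min_res(setting):
        peaks = sorted(predict(setting))  # order peaks by retention time
        return min(resolution(*peaks[i], *peaks[i + 1])
                   for i in range(len(peaks) - 1))
    return max(settings, key=min_res)
```

In practice `settings` would enumerate the grid over temperature, back-pressure and gradient slope, and the winning setting's chromatogram would then be simulated for visual confirmation, as the abstract describes.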
Retrieval attempts enhance learning, but retrieval success (versus failure) does not matter.
Kornell, Nate; Klein, Patricia Jacobs; Rawson, Katherine A
2015-01-01
Retrieving information from memory enhances learning. We propose a 2-stage framework to explain the benefits of retrieval. Stage 1 takes place as one attempts to retrieve an answer, which activates knowledge related to the retrieval cue. Stage 2 begins when the answer becomes available, at which point appropriate connections are strengthened and inappropriate connections may be weakened. This framework raises a basic question: Does it matter whether Stage 2 is initiated via successful retrieval or via an external presentation of the answer? To test this question, we asked participants to attempt retrieval and then randomly assigned items (which were equivalent otherwise) to be retrieved successfully or to be copied (i.e., not retrieved). Experiments 1, 2, 4, and 5 tested assumptions necessary for interpreting Experiments 3a, 3b, and 6. Experiments 3a, 3b, and 6 did not support the hypothesis that retrieval success produces more learning than does retrieval failure followed by feedback. It appears that retrieval attempts promote learning but retrieval success per se does not. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Experimental Investigation of Normal Shock Boundary-Layer Interaction with Hybrid Flow Control
NASA Technical Reports Server (NTRS)
Vyas, Manan A.; Hirt, Stefanie M.; Anderson, Bernhard H.
2012-01-01
Hybrid flow control, a combination of micro-ramps and micro-jets, was experimentally investigated in the 15x15 cm Supersonic Wind Tunnel (SWT) at the NASA Glenn Research Center. Full factorial, a design of experiments (DOE) method, was used to develop a test matrix with variables such as inter-ramp spacing, ramp height and chord length, and micro-jet injection flow ratio. A total of 17 configurations were tested with various parameters to meet the DOE criteria. In addition to boundary-layer measurements, oil flow visualization was used to qualitatively understand shock induced flow separation characteristics. The flow visualization showed the normal shock location, size of the separation, path of the downstream moving counter-rotating vortices, and corner flow effects. The results show that hybrid flow control demonstrates promise in reducing the size of shock boundary-layer interactions and resulting flow separation by means of energizing the boundary layer.
Liravi, Farzad; Vlasea, Mihaela
2018-06-01
The data included in this article provide additional supporting information on our recent publication (Liravi et al., 2018 [1]) on a novel hybrid additive manufacturing (AM) method for fabrication of three-dimensional (3D) structures from silicone powder. A design of experiments (DoE) study was carried out to optimize the geometrical fidelity of the AM-made parts. This manuscript includes the details of a multi-level factorial DoE and the response optimization results. The variation in the temperature of the powder bed when exposed to heat is also plotted. Furthermore, the effect of the blending ratio of the two-part silicone binder on its curing speed was investigated by conducting DSC tests on a binder with a 100:2 precursor-to-curing-agent ratio. The hardness of parts fabricated under non-optimal printing conditions is also included and compared.
Artificial neural networks in evaluation and optimization of modified release solid dosage forms.
Ibrić, Svetlana; Djuriš, Jelena; Parojčić, Jelena; Djurić, Zorica
2012-10-18
Implementation of the Quality by Design (QbD) approach in pharmaceutical development has compelled researchers in the pharmaceutical industry to employ Design of Experiments (DoE) as a statistical tool, in product development. Among all DoE techniques, response surface methodology (RSM) is the one most frequently used. Progress of computer science has had an impact on pharmaceutical development as well. Simultaneous with the implementation of statistical methods, machine learning tools took an important place in drug formulation. Twenty years ago, the first papers describing application of artificial neural networks in optimization of modified release products appeared. Since then, a lot of work has been done towards implementation of new techniques, especially Artificial Neural Networks (ANN) in modeling of production, drug release and drug stability of modified release solid dosage forms. The aim of this paper is to review artificial neural networks in evaluation and optimization of modified release solid dosage forms.
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
The prediction of the thermal properties of solids and liquids is central to numerous problems in condensed matter physics and materials science, including the study of thermal management of opto-electronic and energy conversion devices. We present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics under nonequilibrium conditions. Our formulation is based on a generalization of the approach-to-equilibrium technique, using sinusoidal temperature gradients, and it only requires calculations of first-principles trajectories and atomic forces. We discuss results and computational requirements for a representative, simple oxide, MgO, and compare with experiments and data obtained with classical potentials. This work was supported by MICCoM as part of the Computational Materials Science Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division under Grant DOE/BES 5J-30.
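The approach-to-equilibrium idea with a sinusoidal profile rests on a textbook property of the heat equation: a sinusoidal temperature perturbation decays exponentially, and the fitted decay time yields the thermal diffusivity and hence the conductivity. A sketch of the underlying relations (the paper's exact formulation may differ):

```latex
T(x,t) = \bar{T} + \Delta T \, e^{-t/\tau} \sin\!\left(\frac{2\pi x}{L}\right),
\qquad
\tau = \frac{L^{2}}{4\pi^{2}\alpha},
\qquad
\kappa = \rho\, c_p\, \alpha = \frac{\rho\, c_p\, L^{2}}{4\pi^{2}\tau},
```

where L is the wavelength of the imposed temperature modulation, α the thermal diffusivity, ρ the density, c_p the specific heat, and κ the thermal conductivity extracted from the decay time τ observed in the molecular dynamics trajectory.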
Intelligent automated control of life support systems using proportional representations.
Wu, Annie S; Garibay, Ivan I
2004-06-01
Effective automatic control of Advanced Life Support Systems (ALSS) is a crucial component of space exploration. An ALSS is a coupled dynamical system which can be extremely sensitive and difficult to predict. As a result, such systems can be difficult to control using deliberative and deterministic methods. We investigate the performance of two machine learning algorithms, a genetic algorithm (GA) and a stochastic hill-climber (SH), on the problem of learning how to control an ALSS, and compare the impact of two different types of problem representations on the performance of both algorithms. We perform experiments on three ALSS optimization problems using five strategies with multiple variations of a proportional representation for a total of 120 experiments. Results indicate that although a proportional representation can effectively boost GA performance, it does not necessarily have the same effect on other algorithms such as SH. Results also support previous conclusions that multivector control strategies are an effective method for control of coupled dynamical systems.
Kerr, Kathleen F; Serikawa, Kyle A; Wei, Caimiao; Peters, Mette A; Bumgarner, Roger E
2007-01-01
The reference design is a practical and popular choice for microarray studies using two-color platforms. In the reference design, the reference RNA uses half of all array resources, leading investigators to ask: What is the best reference RNA? We propose a novel method for evaluating reference RNAs and present the results of an experiment that was specially designed to evaluate three common choices of reference RNA. We found no compelling evidence in favor of any particular reference. In particular, a commercial reference showed no advantage in our data. Our experimental design also enabled a new way to test the effectiveness of pre-processing methods for two-color arrays. Our results favor using intensity normalization and foregoing background subtraction. Finally, we evaluate the sensitivity and specificity of data quality filters, and we propose a new filter that can be applied to any experimental design and does not rely on replicate hybridizations.
Sunderland, N; Bristed, H; Gudes, O; Boddy, J; Da Silva, M
2012-09-01
This paper introduces sensory ethnography as a methodology for studying residents' daily lived experience of social determinants of health (SDOH) in place. Sensory ethnography is an expansive option for SDOH research because it encourages participating researchers and residents to "turn up" their senses to identify how previously ignored or "invisible" sensory experiences shape local health and wellbeing. Sensory ethnography creates a richer and deeper understanding of the relationships between place and health than existing research methods that focus on things that are more readily observable or quantifiable. To highlight the methodology in use we outline our research activities and learnings from the Sensory Ethnography of Logan-Beaudesert (SELB) pilot study. We discuss theory, data collection methods, preliminary outcomes, and methodological learnings that will be relevant to researchers who wish to use sensory ethnography or develop deeper understandings of place and health generally. Copyright © 2012 Elsevier Ltd. All rights reserved.
Smoke detection using GLCM, wavelet, and motion
NASA Astrophysics Data System (ADS)
Srisuwan, Teerasak; Ruchanurucks, Miti
2014-01-01
This paper presents a supervised smoke detection method that uses local and global features. The framework integrates and extends ideas from many previous works into a new comprehensive method. First, chrominance detection screens areas suspected to be smoke. For these areas, local features are then extracted, among them the homogeneity of the gray-level co-occurrence matrix (GLCM) and the wavelet energy. A global feature, the motion of the smoke-colored areas, is then extracted using a space-time analysis scheme. Finally, these features are used to train a classifier, here a neural network, and the experiments compare the importance of each feature, revealing which of the features used by previous works are actually useful. The proposed method outperforms many current methods in accuracy, does so in reasonable computation time, and has fewer limitations than conventional smoke sensors when used in open space. As expected, the best results are obtained when all of the mentioned features are used together, yielding a high true-positive rate and a low false-positive rate and showing that the algorithm is robust for smoke detection.
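The GLCM homogeneity feature named above can be computed directly from a quantized image patch. A minimal sketch for a single pixel offset follows; the number of gray levels and the offset are illustrative choices, not values from the paper:

```python
import numpy as np

def glcm_homogeneity(img, levels=8, dx=1, dy=0):
    """GLCM homogeneity for one non-negative pixel offset (dx, dy).
    img: 2-D array of ints in [0, levels).
    Homogeneity = sum_{i,j} P(i,j) / (1 + |i - j|),
    where P is the normalized gray-level co-occurrence matrix."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    # Count co-occurrences of gray levels at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()
    i, j = np.indices((levels, levels))
    return float((p / (1.0 + np.abs(i - j))).sum())
```

A perfectly uniform patch gives homogeneity 1 (all co-occurring pairs identical), while a horizontal checkerboard gives 0.5 (all pairs differ by one level); smooth smoke regions therefore tend toward high homogeneity.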
Warpage analysis on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
This work optimises the moulding parameters to reduce warpage defects, using Autodesk Moldflow Insight (AMI) 2012 software. The product is injection-moulded from acrylonitrile-butadiene-styrene (ABS) material. The analysis varies four processing parameters: melt temperature, mould temperature, packing pressure and packing time. Design of Experiments (DoE) is integrated with Response Surface Methodology (RSM) to obtain a polynomial model, and the Glowworm Swarm Optimisation (GSO) method is then used to predict the best combination of parameters to minimise warpage and produce high-quality parts.
MEGA X: Molecular Evolutionary Genetics Analysis across Computing Platforms.
Kumar, Sudhir; Stecher, Glen; Li, Michael; Knyaz, Christina; Tamura, Koichiro
2018-06-01
The Molecular Evolutionary Genetics Analysis (Mega) software implements many analytical methods and tools for phylogenomics and phylomedicine. Here, we report a transformation of Mega to enable cross-platform use on Microsoft Windows and Linux operating systems. Mega X does not require virtualization or emulation software and provides a uniform user experience across platforms. Mega X has additionally been upgraded to use multiple computing cores for many molecular evolutionary analyses. Mega X is available in two interfaces (graphical and command line) and can be downloaded from www.megasoftware.net free of charge.
An Effective 3D Ear Acquisition System
Liu, Yahui; Lu, Guangming; Zhang, David
2015-01-01
The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.
Proof of concept of a simple computer-assisted technique for correcting bone deformities.
Ma, Burton; Simpson, Amber L; Ellis, Randy E
2007-01-01
We propose a computer-assisted technique for correcting bone deformities using the Ilizarov method. Our technique is an improvement over prior art in that it does not require a tracking system, navigation hardware and software, or intraoperative registration. Instead, we rely on a postoperative CT scan to obtain all of the information necessary to plan the correction and compute a correction schedule for the patient. Our laboratory experiments using plastic phantoms produced deformity corrections accurate to within 3.0 degrees of rotation and 1 mm of lengthening.
Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner
2013-06-01
The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win and the DoE software MODDE® (Umetrics AB, Sweden) and therefore enables the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H was transformed for the expression of potential malaria vaccines. This approach allowed a doubling of intact-protein secretion productivity through the DoE optimization procedure compared to initial cultivation results. In a next step, robustness with regard to the sensitivity to process parameter variability was proven around the determined optimum. Thereby, a significantly improved pharmaceutical production process was established within seven 24-hour cultivation cycles. Specifically regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
How does bias correction of regional climate model precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.
2015-02-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
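Of the methods compared, empirical quantile mapping is the most widely used. A minimal sketch follows; it deliberately ignores the drizzle-day treatment, extrapolation beyond the calibration range, and the two-state gamma fitting that real precipitation implementations (and the paper's proposed method) need:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: each future model value is replaced by
    the observed value at the empirical quantile that the future value
    occupies in the historical model distribution."""
    model_hist = np.sort(np.asarray(model_hist, float))
    obs_hist = np.sort(np.asarray(obs_hist, float))
    # Empirical quantile of each future value within the model CDF.
    q = np.searchsorted(model_hist, model_fut, side="right") / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # Map those quantiles onto the observed distribution.
    return np.quantile(obs_hist, q)
```

For daily precipitation the mapping is typically built per month or season; as the abstract notes, no such correction can repair errors in the simulated wet/dry sequencing, which is what drives runoff generation.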
Moritz, Bernd; Locatelli, Valentina; Niess, Michele; Bathke, Andrea; Kiessig, Steffen; Entler, Barbara; Finkler, Christof; Wegele, Harald; Stracke, Jan
2017-12-01
CZE is a well-established technique for charge heterogeneity testing of biopharmaceuticals. It is based on the differences between the ratios of net charge and hydrodynamic radius. In an extensive intercompany study, it was recently shown that CZE is very robust and can be easily implemented in labs that did not perform it before. However, individual characteristics of some examined proteins resulted in suboptimal resolution. Therefore, enhanced method development principles were applied here to investigate possibilities for further method optimization. For this purpose, a high number of different method parameters was evaluated with the aim of improving CZE separation. For the relevant parameters, design of experiments (DoE) models were generated and optimized in several ways for different sets of responses such as resolution, peak width and number of peaks. In spite of product-specific DoE optimization, it was found that the resulting combination of optimized parameters resulted in a significant improvement of separation for 13 out of 16 different antibodies and other molecule formats. These results clearly demonstrate the generic applicability of the optimized CZE method. Adaptation to individual molecular properties may sometimes still be required in order to achieve optimal separation, but the adjustable parameters discussed in this study [mainly pH, the identity of the polymer additive (HPC versus HPMC) and the concentrations of additives like acetonitrile, butanolamine and TETA] are expected to significantly reduce the effort for specific optimization. © 2017 The Authors. Electrophoresis published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Parameterization-based tracking for the P2 experiment
NASA Astrophysics Data System (ADS)
Sorokin, Iurii
2017-08-01
The P2 experiment in Mainz aims to determine the weak mixing angle θW at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin²θW)/sin²θW = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10¹¹ detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, where the results of the computation-heavy steps, namely the propagation of a track to the next detector plane and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast and well-suited for an implementation on an FPGA. The method also implicitly takes into account the actual phase-space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows reducing the combinatorial background by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.
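The core idea, replacing runtime track propagation by a pre-fitted parametric function, can be illustrated with a toy sketch. Everything here is hypothetical: the quadratic "field kick", the state variables, and the function names stand in for the experiment's actual propagation model and fitted parameterizations.

```python
import numpy as np

# Offline step: fit a cheap parametric model that predicts a track's
# position at the next detector plane from its state at the current plane.
# In a real setup the training samples would come from full simulation;
# here a toy propagation law generates them.
rng = np.random.default_rng(0)
y0 = rng.uniform(-1.0, 1.0, 500)       # position at plane 0
slope = rng.uniform(-0.2, 0.2, 500)    # track slope at plane 0
y1_true = y0 + 0.5 * slope + 0.05 * slope**2   # toy field-bent propagation

# Least-squares fit of a polynomial basis in the track state.
A = np.column_stack([np.ones_like(y0), y0, slope, slope**2])
coef, *_ = np.linalg.lstsq(A, y1_true, rcond=None)

def predict_next_plane(y, s):
    """Online step: parameterized propagation is just one dot product,
    which is what makes the approach cheap enough for an FPGA."""
    return coef @ np.array([1.0, y, s, s * s])
```

The same trick applies to the fit step: pre-compute the least-squares solution as analytic functions of the hit coordinates so that no matrix inversion happens per candidate.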
Results from the VALUE perfect predictor experiment: process-based evaluation
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit
2016-04-01
Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long-term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias-corrected regional climate models. In general, a good performance of a model for these aspects in present climate does therefore not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reasons. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, and reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. 
Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.
WHC significant lessons learned 1993--1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickford, J.C.
1997-12-12
A lesson learned as defined in DOE-STD-7501-95, Development of DOE Lessons Learned Programs, is: A "good work practice" or innovative approach that is captured and shared to promote repeat applications, or an adverse work practice or experience that is captured and shared to avoid a recurrence. The key word in both parts of this definition is "shared". This document was published to share a wide variety of recent Hanford experiences with other DOE sites. It also provides a valuable tool to be used in new employee and continuing training programs at Hanford facilities and at other DOE locations. This manual is divided into sections to facilitate extracting appropriate subject material when developing training modules. Many of the bulletins could be categorized into more than one section, however, so examination of other related sections is encouraged.
Stretching the Traditional Notion of Experiment in Computing: Explorative Experiments.
Schiaffonati, Viola
2016-06-01
Experimentation represents today a 'hot' topic in computing. While experiments made with the support of computers, such as computer simulations, have received increasing attention from philosophers of science and technology, questions such as "what does it mean to do experiments in computer science and engineering, and what are their benefits?" emerged only recently as central in the debate over the status of the discipline. In this work we aim to show, also by means of paradigmatic examples, how the traditional notion of controlled experiment should be revised to take into account a part of the experimental practice in computing, along the lines of experimentation as exploration. Taking inspiration from the discussion of exploratory experimentation in the philosophy of science (experimentation that is not theory-driven), we advance the idea of explorative experiments that, although not new, can help enlarge the debate about the nature and role of experimental methods in computing. To further refine this concept, we recast explorative experiments as socio-technical experiments that test new technologies in their socio-technical contexts. We suggest that, when experiments are explorative, control should be understood in an a posteriori form, in opposition to the a priori form that usually obtains in traditional experimental contexts.
The Development of a Web-Based Urban Soundscape Evaluation System
NASA Astrophysics Data System (ADS)
Sudarsono, A. S.; Sarwono, J.
2018-05-01
Acoustic quality is one of the important aspects of urban design. It is usually evaluated based on how loud the urban environment is. However, this approach does not consider people’s perception of the urban acoustic environment. Therefore, a different method has been developed based on the perception of the acoustic environment using the concept of soundscape. Soundscape is defined as the acoustic environment perceived by people who are part of the environment. This approach considers the relationship between the sound source, the environment, and the people. The analysis of soundscape considers many aspects such as cultural aspects, people’s expectations, people’s experience of space, and social aspects. Soundscape affects many aspects of human life such as culture, health, and the quality of life. Urban soundscape management and planning must be integrated with the other aspects of urban design, both in the design and the improvement stages. The soundscape concept seeks to make the acoustic environment as pleasant as possible in a space with or without uncomfortable sound sources. Soundscape planning includes the design of physical features to achieve a positive perceptual outcome. It is vital to gather data regarding the relationship between humans and the components of a soundscape, e.g., sound sources, features of the physical environment, the functions of a space, and the expectation of the sound source. The data can be measured and gathered using several soundscape evaluation methods. Soundscape evaluation is usually conducted using in-situ surveys and laboratory experiments with a multi-speaker system. Although these methods have been validated and are widely used in soundscape analysis, there are some limitations in their application. The in-situ survey must be conducted with many people at the same time because the acoustic environment is hard to replicate. 
Conversely, the laboratory experiment has no problem with repetition, but it requires a room with a multi-speaker reproduction system. This project therefore developed a different method of soundscape analysis, delivered over the internet using headphones. The internet infrastructure for data gathering is well established: websites can reproduce high-quality audio and host online questionnaires. Furthermore, the development of virtual reality systems allows the reproduction of virtual audio-visual stimuli on a website. Although websites provide an established system to gather the required data, the challenge is validating the reproduction system for soundscape analysis, which must consider several factors: a suitable recording system, the effect of headphone variation, the calibration of the system, and the perceptual results of internet-based acoustic environment reproduction. This study aims to develop and validate a web-based urban soundscape evaluation method. With this method, the experiment can be repeated easily and data can be gathered from many respondents. Furthermore, the simplicity of the system allows application by stakeholders in urban design. The data gathered from this system are important for designing urban areas with consideration of the acoustic aspects.
Study of the techniques of coating stripping and FBG writing on polyimide fiber
NASA Astrophysics Data System (ADS)
Song, ZhiQiang; Qi, HaiFeng; Ni, JiaSheng; Wang, Chang
2017-10-01
Compared with ordinary optical fiber, polyimide fiber has the characteristics of high temperature resistance and high strength, which have important applications in the field of optical fiber sensing. The common methods of polyimide coating stripping are introduced in this paper, including high-temperature stripping, chemical stripping and arc ablation. In order to meet the requirements of FBG writing technology, a method using argon-ion laser ablation of the coating was proposed. The method can precisely control the stripping length of the coating and does not affect the tensile strength of the optical fiber at all. In the experiments, the fabrication process of polyimide FBGs was stripping, hydrogen loading, then writing. Under the same conditions, 10 FBG samples were fabricated with good uniformity of wavelength, bandwidth and reflectivity. UV laser ablation of the polyimide coating has been proved to be a safe, reliable and efficient method.
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
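The sparse-coding step at the heart of such dictionary-based restoration can be sketched with a minimal orthogonal matching pursuit (OMP) over a random dictionary. This is a generic illustration of sparse representation, not the SBSDI algorithm itself (which uses dictionaries learned from previously collected OCT datasets and performs joint denoising and interpolation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Overcomplete dictionary and a signal that is truly sparse in it. A random
# Gaussian dictionary stands in for the learned one; atoms are unit-norm.
dim, n_atoms, k = 40, 60, 3
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

true_support = [4, 17, 31]
x_true = np.zeros(n_atoms)
x_true[true_support] = [1.5, -2.0, 1.0]
y = D @ x_true                        # noise-free "measurement"

# Orthogonal Matching Pursuit: greedily pick the atom most correlated with
# the residual, then re-fit all selected atoms by least squares.
support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(D.T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    residual = y - D[:, support] @ coef

print(sorted(support))  # expected to recover the true support in this easy noise-free case
```

In practice the pursuit runs over small image patches with a noisy, subsampled measurement and a sparsity or error tolerance instead of a known k.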
Successfully Implementing Net-Zero Energy Policy through the Air Force Military Construction Program
2013-03-01
[Search-result excerpt; flattened table of energy options assessed against net-zero criteria: ... Meets / Does not meet / Does not meet / Meets; Renewable Farms: Meets / Meets / Meets / Meets; On-Site (Distributed Generation): Meets* / Meets* / Meets / Meets. Column headings not recoverable.] ...independence, nor does it allow for net-zero energy installations. Developing centralized renewable energy farms is another method for obtaining... combination of centralized renewable energy farms and distributed generation methods. The specific combination of methods an installation will utilize...
NASA Astrophysics Data System (ADS)
Mariana Nicoara, Floare
2016-04-01
My name is Nicoara Floarea and I am a teacher at the Secondary School in Calatele, where I teach students from the preparatory class to the second grade. They are six to eight years old. To introduce scientific concepts to my students, I use a variety of active and traditional methods, including experiments. Experiments stimulate students' curiosity and creativity and make the knowledge being taught accessible and understandable. I propose two such experiments. The life cycle of plants (a long-term experiment with careful observation over time): We use beans, wheat or other seeds, grown both in pots and on cotton soaked with water. The students observe and care for them (just soaking them regularly), and we wait for the plants to rise. At the end of the first week, we use the plants grown on the soaked cotton for discussion and comments on the development of the plant embryo. Last school year we had climbing beans in the pot, which produced pods in May. They were not too large, but our experiment was a success: the students could deduce that the big beans, after drying, can be planted again and will develop further. The influence of light on plants (a medium-duration experiment with the necessary observation time): We use two pots with plants of the same type (two geraniums); one is placed so as to get direct sunlight, and the other plant we put in a closed box. Although we water both plants, after a week we see that the plant that benefited from sunlight has turned its stem toward the direct sunlight and developed normally, while the plant from the box has yellowed leaves because photosynthesis has not occurred. Students thus understand the vital role of the Sun in plants' life, both in the classroom and in nature. The experiment is an extremely pleasant teaching method, through which students acquire remarkably more knowledge.
10 CFR 963.16 - Postclosure suitability evaluation method.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total system...
10 CFR 963.16 - Postclosure suitability evaluation method.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total system...
10 CFR 963.16 - Postclosure suitability evaluation method.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total system...
The Concept of Experience by John Dewey Revisited: Conceiving, Feeling and "Enliving"
ERIC Educational Resources Information Center
Hohr, Hansjorg
2013-01-01
"The concept of experience by John Dewey revisited: conceiving, feeling and 'enliving'." Dewey takes a few steps towards a differentiation of the concept of experience, such as the distinction between primary and secondary experience, or between ordinary (partial, raw, primitive) experience and complete, aesthetic experience. However, he does not…
NASA Astrophysics Data System (ADS)
Bonhommeau, David; Truhlar, Donald G.
2008-07-01
The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.
Bonhommeau, David; Truhlar, Donald G
2008-07-07
The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode nu(2) with n(2)=0,…,6 quanta of vibration) in the A electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH(2) internal energy distributions obtained for n(2)=0 and n(2)>1, as observed in experiments. Distributions obtained for n(2)=1 present an intermediate behavior between distributions obtained for smaller and larger n(2) values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH(2) internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n(2)=0 and n(2)=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.
Chattopadhyay, Sudip; Chaudhuri, Rajat K; Freed, Karl F
2011-04-28
The improved virtual orbital-complete active space configuration interaction (IVO-CASCI) method enables an economical and reasonably accurate treatment of static correlation in systems with significant multireference character, even when using a moderate basis set. This IVO-CASCI method supplants the computationally more demanding complete active space self-consistent field (CASSCF) method by producing comparable accuracy with diminished computational effort because the IVO-CASCI approach does not require additional iterations beyond an initial SCF calculation, nor does it encounter convergence difficulties or multiple solutions that may be found in CASSCF calculations. Our IVO-CASCI analytical gradient approach is applied to compute the equilibrium geometry for the ground and lowest excited state(s) of the theoretically very challenging 2,6-pyridyne, 1,2,3-tridehydrobenzene and 1,3,5-tridehydrobenzene anionic systems for which experiments are lacking, accurate quantum calculations are almost completely absent, and commonly used calculations based on single reference configurations fail to provide reasonable results. Hence, the computational complexity provides an excellent test for the efficacy of multireference methods. The present work clearly illustrates that the IVO-CASCI analytical gradient method provides a good description of the complicated electronic quasi-degeneracies during the geometry optimization process for the radicaloid anions. The IVO-CASCI treatment produces almost identical geometries as the CASSCF calculations (performed for this study) at a fraction of the computational labor. Adiabatic energy gaps to low lying excited states likewise emerge from the IVO-CASCI and CASSCF methods as very similar. We also provide harmonic vibrational frequencies to demonstrate the stability of the computed geometries.
A Safer, Discovery-Based Nucleophilic Substitution Experiment
ERIC Educational Resources Information Center
Horowitz, Gail
2009-01-01
A discovery-based nucleophilic substitution experiment is described in which students compare the reactivity of chloride and iodide ions in an S[subscript N]2 reaction. This experiment improves upon the well-known "Competing Nucleophiles" experiment in that it does not involve the generation of hydrogen halide gas. The experiment also introduces…
Image Quality Ranking Method for Microscopy
Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.
2016-01-01
Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to be analyzed, or to be eliminated from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample-preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as to five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images by extensive simulations and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
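As a rough illustration of metric-based image ranking, the sketch below sorts a synthetic image pair by the variance of the discrete Laplacian, a standard autofocus statistic. This is a stand-in quality metric for illustration only, not the ranking statistic of the cited paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def box_blur(img, reps=5):
    # Repeated 3x3 box blur to emulate an out-of-focus image.
    out = img.astype(float)
    for _ in range(reps):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

def focus_score(img):
    # Variance of the discrete 5-point Laplacian: high for sharp detail,
    # low for defocused images.
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

sharp = rng.random((64, 64))        # synthetic textured "in focus" image
blurred = box_blur(sharp)           # simulated defocus of the same image
ranking = sorted([("sharp", focus_score(sharp)),
                  ("blurred", focus_score(blurred))],
                 key=lambda t: -t[1])
print([name for name, _ in ranking])  # ['sharp', 'blurred']
```

Ranking a whole dataset is then just sorting by the chosen score and thresholding the tail, which is the usage pattern the abstract describes.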
On correct evaluation techniques of brightness enhancement effect measurement data
NASA Astrophysics Data System (ADS)
Kukačka, Leoš; Dupuis, Pascal; Motomura, Hideki; Rozkovec, Jiří; Kolář, Milan; Zissis, Georges; Jinno, Masafumi
2017-11-01
This paper aims to establish confidence intervals for the quantification of brightness enhancement effects resulting from the use of pulsing bright light. It is found that the methods used so far may yield significant bias in the published results, overestimating or underestimating the enhancement effect. The authors propose to use a linear algebra method called total least squares. On an example dataset, it is shown that this method does not yield biased results. The statistical significance of the results is also computed. It is concluded over an observation set that the currently used linear algebra methods exhibit many patterns of noise sensitivity: changing algorithm details leads to inconsistent results. It is thus recommended to use the method with the lowest noise sensitivity. Moreover, it is shown that this method also permits one to obtain an estimate of the confidence interval. This paper aims neither to publish results about a particular experiment nor to draw any particular conclusion about the existence or nonexistence of the brightness enhancement effect.
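The basic total least squares fit can be sketched for a single straight line via the SVD, as below; the study's actual measurement model and dataset are not reproduced here:

```python
import numpy as np

# Fit y = a*x + b when BOTH x and y may carry measurement error, using total
# least squares: the smallest right singular vector of the centered data
# matrix gives the normal direction of the best-fit line.
def tls_line(x, y):
    xm, ym = x.mean(), y.mean()
    M = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(M)
    nx, ny = Vt[-1]            # normal vector of the best-fit line
    a = -nx / ny               # slope
    return a, ym - a * xm      # slope, intercept

x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0              # noise-free data: TLS must recover a=2, b=1
a, b = tls_line(x, y)
print(round(a, 6), round(b, 6))  # 2.0 1.0
```

Unlike ordinary least squares, which minimizes only vertical residuals, this minimizes perpendicular distances, which is why it avoids the bias the paper attributes to regressor noise.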
An automatic multigrid method for the solution of sparse linear systems
NASA Technical Reports Server (NTRS)
Shapira, Yair; Israeli, Moshe; Sidi, Avram
1993-01-01
An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is preserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found to perform better than known strategies.
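For readers unfamiliar with the underlying cycle, a minimal geometric two-grid solver for the 1-D Poisson equation is sketched below. It only illustrates the smoothing / restriction / coarse-solve / prolongation structure; the paper's method is algebraic and builds this hierarchy automatically from the matrix:

```python
import numpy as np

# Two-grid cycle for -u'' = f on (0,1), u(0)=u(1)=0, 3-point stencil.
n = 63                       # interior fine-grid points (odd, so coarsening works)
h = 1.0 / (n + 1)
f = np.ones(n)

def apply_A(u, h):
    # Tridiagonal operator (2u_i - u_{i-1} - u_{i+1}) / h^2, zero boundaries.
    r = 2 * u.copy()
    r[:-1] -= u[1:]
    r[1:] -= u[:-1]
    return r / h**2

def jacobi(u, f, h, sweeps=3, w=2.0/3.0):
    # Weighted Jacobi smoother: damps the high-frequency error components.
    for _ in range(sweeps):
        u = u + w * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                                   # pre-smoothing
    r = f - apply_A(u, h)
    rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])     # full-weighting restriction
    nc = rc.size
    Ac = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
          - np.diag(np.ones(nc - 1), -1)) / (2 * h) ** 2
    ec = np.linalg.solve(Ac, rc)                          # direct coarse solve
    e = np.zeros_like(u)                                  # linear prolongation
    e[1::2] = ec
    epad = np.concatenate([[0.0], ec, [0.0]])
    e[0::2] = 0.5 * (epad[:-1] + epad[1:])
    return jacobi(u + e, f, h)                            # post-smoothing

u = np.zeros(n)
for _ in range(5):
    u = two_grid(u, f, h)
print(np.linalg.norm(f - apply_A(u, h)))  # tiny: each cycle cuts the residual by a large factor
```

An algebraic multigrid variant, like the one in the paper, would derive the coarse operator and transfer operators from the matrix entries alone instead of from the grid geometry.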
Methods Data Qualification Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sam Alessi; Tami Grimmett; Leng Vang
The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data, including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests, and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and of data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representations of VHTR components and systems. The methods data management approaches described herein will handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display on the NDMAS web portal. This interim report describes the current development of NDMAS for Methods data and discusses the data, and its qualification, that are currently part of NDMAS.
Data Assimilation in the Solar Wind: Challenges and First Results
NASA Astrophysics Data System (ADS)
Lang, Matthew; Browne, Phil; van Leeuwen, Peter Jan; Owens, Matt
2017-04-01
Data assimilation (DA), which uses observations to improve modelled variables, is currently underused in the solar wind field. Data assimilation has been used in Numerical Weather Prediction (NWP) models with great success, and the improvement of DA methods in NWP modelling has led to improvements in forecasting skill over the past 20-30 years. The state-of-the-art DA methods developed for NWP modelling have never been applied to space weather models, hence it is important to carry over the improvements that can be gained from these methods to improve our understanding of the solar wind and how to model it. The ENLIL solar wind model has been coupled to the EMPIRE data assimilation library in order to apply these advanced data assimilation methods to a space weather model. This coupling allows multiple data assimilation methods to be applied to ENLIL with relative ease. I shall discuss twin experiments that have been undertaken, applying the LETKF (Local Ensemble Transform Kalman Filter) to the ENLIL model when a CME occurs in the observations and when it does not. These experiments show that there is potential in the application of advanced data assimilation methods to the solar wind field; however, there is still a long way to go until they can be applied effectively. I shall discuss these issues and suggest potential avenues for future research in this area.
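The analysis step of an ensemble Kalman filter, the family of methods to which the LETKF belongs, can be sketched for a toy scalar state as follows. This is a generic perturbed-observation EnKF with invented numbers, not the LETKF implementation coupled to ENLIL:

```python
import numpy as np

rng = np.random.default_rng(3)

# Forecast (prior) ensemble for a scalar state observed directly (H = 1).
n_ens = 500
prior = rng.normal(10.0, 2.0, n_ens)    # ensemble mean ~10, spread ~2
obs, obs_sd = 14.0, 1.0                 # observation and its error std. dev.

# Kalman gain estimated from ensemble statistics.
P = prior.var(ddof=1)
K = P / (P + obs_sd**2)

# Perturbed-observation update: each member assimilates a noisy obs copy.
obs_pert = obs + rng.normal(0.0, obs_sd, n_ens)
analysis = prior + K * (obs_pert - prior)

# The analysis mean moves toward the observation and the spread shrinks.
print(round(prior.mean(), 2), round(analysis.mean(), 2))
```

The LETKF differs in detail (deterministic transform, spatial localization, multivariate states), but the pull-toward-observations-weighted-by-uncertainty structure is the same.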
Littel, Marianne; van Schie, Kevin; van den Hout, Marcel A.
2017-01-01
Background: Eye movement desensitization and reprocessing (EMDR) is an effective psychological treatment for posttraumatic stress disorder. Recalling a memory while simultaneously making eye movements (EM) decreases a memory’s vividness and/or emotionality. It has been argued that non-specific factors, such as treatment expectancy and experimental demand, may contribute to the EMDR’s effectiveness. Objective: The present study was designed to test whether expectations about the working mechanism of EMDR would alter the memory attenuating effects of EM. Two experiments were conducted. In Experiment 1, we examined the effects of pre-existing (non-manipulated) knowledge of EMDR in participants with and without prior knowledge. In Experiment 2, we experimentally manipulated prior knowledge by providing participants without prior knowledge with correct or incorrect information about EMDR’s working mechanism. Method: Participants in both experiments recalled two aversive, autobiographical memories during brief sets of EM (Recall+EM) or keeping eyes stationary (Recall Only). Before and after the intervention, participants scored their memories on vividness and emotionality. A Bayesian approach was used to compare two competing hypotheses on the effects of (existing/given) prior knowledge: (1) Prior (correct) knowledge increases the effects of Recall+EM vs. Recall Only, vs. (2) prior knowledge does not affect the effects of Recall+EM. Results: Recall+EM caused greater reductions in memory vividness and emotionality than Recall Only in all groups, including the incorrect information group. In Experiment 1, both hypotheses were supported by the data: prior knowledge boosted the effects of EM, but only modestly. In Experiment 2, the second hypothesis was clearly supported over the first: providing knowledge of the underlying mechanism of EMDR did not alter the effects of EM. Conclusions: Recall+EM appears to be quite robust against the effects of prior expectations. 
As Recall+EM is the core component of EMDR, expectancy effects probably contribute little to the effectiveness of EMDR treatment. PMID:29038685
Does perceived stress mediate the effect of cultural consonance on depression?
Balieiro, Mauro C; Dos Santos, Manoel Antônio; Dos Santos, José Ernesto; Dressler, William W
2011-11-01
The importance of appraisal in the stress process is unquestioned. Experiences in the social environment that impact outcomes such as depression are thought to have these effects because they are appraised as a threat to the individual and overwhelm the individual's capacity to cope. In terms of the nature of social experience that is associated with depression, several recent studies have examined the impact of cultural consonance. Cultural consonance is the degree to which individuals, in their own beliefs and behaviors, approximate the prototypes for belief and behavior encoded in shared cultural models. Low cultural consonance is associated with more depressive symptoms both cross-sectionally and longitudinally. In this paper we ask the question: does perceived stress mediate the effects of cultural consonance on depression? Data are drawn from a longitudinal study of depressive symptoms in the urban community of Ribeirão Preto, Brazil. A sample of 210 individuals was followed for 2 years. Cultural consonance was assessed in four cultural domains, using a mixed-methods research design that integrated techniques of cultural domain analysis with social survey research. Perceived stress was measured with Cohen's Perceived Stress Scale. When cultural consonance was examined separately for each domain, perceived stress partially mediated the impact of cultural consonance in family life and cultural consonance in lifestyle on depressive symptoms. When generalized cultural consonance (combining consonance in all four domains) was examined, there was no evidence of mediation. These results raise questions about how culturally salient experience rises to the level of conscious reflection.
Nishawala, Vinesh V.; Ostoja-Starzewski, Martin; Leamy, Michael J.; ...
2015-09-10
Peridynamics is a non-local continuum mechanics formulation that can handle spatial discontinuities, as the governing equations are integro-differential equations that do not involve gradients such as strains and deformation rates. This paper employs bond-based peridynamics. Cellular automata is a local computational method which, in its rectangular variant on interior domains, is mathematically equivalent to the central difference finite difference method. However, cellular automata does not require the derivation of the governing partial differential equations and provides for common boundary conditions based on physical reasoning. Both methodologies are used to solve a half-space subjected to a normal load, known as Lamb's Problem. The results are compared with the theoretical solution from classical elasticity and with experimental results. Furthermore, this paper is used to validate our implementation of these methods.
NASA Astrophysics Data System (ADS)
Parker, Jeffrey; Lodestro, Lynda; Told, Daniel; Merlo, Gabriele; Ricketson, Lee; Campos, Alejandro; Jenko, Frank; Hittinger, Jeffrey
2017-10-01
Predictive whole-device simulation models will play an increasingly important role in ensuring the success of fusion experiments and accelerating the development of fusion energy. In the core of tokamak plasmas, a separation of timescales between turbulence and transport makes a single direct simulation of both processes computationally expensive. We present the first demonstration of a multiple-timescale method coupling global gyrokinetic simulations with a transport solver to calculate the self-consistent, steady-state temperature profile. Initial results are highly encouraging, with the coupling method appearing robust to the difficult problem of turbulent fluctuations. The method holds potential for integrating first-principles turbulence simulations into whole-device models and advancing the understanding of global plasma behavior. Work supported by US DOE under Contract DE-AC52-07NA27344 and the Exascale Computing Project (17-SC-20-SC).
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. The thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
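As a rough illustration of the idea behind such a test, the sketch below compares two data windows via an L2 density-difference statistic and calibrates the detection threshold from a user-chosen false-positive rate. It is a simplified plug-in stand-in for the paper's least-squares density-difference estimator, and all parameter values (window sizes, kernel width, the 5% rate) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def dd_statistic(a, b, width=0.5):
    """L2 norm of the difference between two kernel density estimates,
    a simplified stand-in for the least-squares density-difference estimator."""
    lo = min(a.min(), b.min()) - 1.0
    hi = max(a.max(), b.max()) + 1.0
    grid = np.linspace(lo, hi, 200)

    def kde(x):
        z = (grid[:, None] - x[None, :]) / width
        return np.exp(-0.5 * z**2).mean(axis=1) / (width * np.sqrt(2 * np.pi))

    diff = kde(a) - kde(b)
    return float((diff**2).sum() * (grid[1] - grid[0]))

# calibrate the detection threshold on stationary data via permutation splits,
# so the false-positive rate is set by the designer (here 5%)
reference = rng.normal(0.0, 1.0, 200)
null_stats = [
    dd_statistic(p[:100], p[100:])
    for p in (rng.permutation(reference) for _ in range(200))
]
threshold = float(np.quantile(null_stats, 0.95))

# a mean-shifted window should exceed the threshold
shifted = rng.normal(1.0, 1.0, 100)
stat = dd_statistic(reference[:100], shifted)
print("change detected:", stat > threshold)
```

The permutation step mirrors the paper's point that the threshold follows automatically from the chosen false-positive rate rather than from a parametric model of the data.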
Oweis, Ghanem F; Dunmire, Barbrina L; Cunitz, Bryan W; Bailey, Michael R
2015-01-01
Transcutaneous focused ultrasound (US) is used to propel kidney stones using acoustic radiation force. It is important to estimate the level of heating generated at the stone/tissue interface for safety assessment. An in-vitro experiment is conducted to measure the temperature rise in a tissue-mimicking phantom with an embedded artificial stone and subjected to a focused beam from an imaging US array. A novel optical-imaging-based thermometry method is described using an optically clear tissue phantom. Measurements are compared to the output from a fine wire thermocouple placed on the stone surface. The optical method has good sensitivity, and it does not suffer from artificial viscous heating typically observed with invasive probes and thermocouples.
[Remote sensing of atmospheric trace gas by airborne passive FTIR].
Gao, Min-quang; Liu, Wen-qing; Zhang, Tian-shu; Liu, Jian-guo; Lu, Yi-huai; Wang, Ya-ping; Xu, Liang; Zhu, Jun; Chen, Jun
2006-12-01
The present article describes the details of airborne measurements for remote sensing of atmospheric trace gases over various surface backgrounds with airborne passive FTIR. The passive down-viewing remote sensing technique used in the experiment is discussed. The method of acquiring the infrared characteristic spectra of atmospheric trace gases against complicated backgrounds and the algorithm for concentration retrieval are discussed. The concentrations of CO and N2O in the boundary-layer atmosphere of the experimental region below 1000 m are analyzed quantitatively. This measurement technique and data analysis method, which do not require a previously measured background spectrum, allow fast and mobile remote detection and identification of atmospheric trace gases over large areas, and can also be used for emergency monitoring of accidental pollution releases.
Analyzing capture zone distributions (CZD) in growth: Theory and applications
NASA Astrophysics Data System (ADS)
Einstein, Theodore L.; Pimpinelli, Alberto; Luis González, Diego
2014-09-01
We have argued that the capture-zone distribution (CZD) in submonolayer growth can be well described by the generalized Wigner distribution (GWD) P(s) = a s^β exp(-b s^2), where s is the CZ area divided by its average value. This approach offers arguably the most robust (least sensitive to mass transport) method to find the critical nucleus size i, since β ≈ i + 2. Various analytical and numerical investigations, which we discuss, show that although the simple GWD expression is inadequate in the tails of the distribution, it does account well for the central regime 0.5 < s < 2, where the data are sufficiently abundant to be reliably accessible experimentally. We summarize and catalog the many experiments in which this method has been applied.
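The β ≈ i + 2 extraction described above can be sketched as a fit of the GWD, restricted to the central regime, to normalized capture-zone areas. The following is an illustrative example on synthetic data, not the authors' analysis code; the gamma-distributed stand-in for measured CZ areas and all fitting settings are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def gwd(s, beta):
    """Generalized Wigner distribution P(s) = a * s**beta * exp(-b * s**2),
    with a and b fixed by unit normalization and unit mean."""
    b = (gamma((beta + 2) / 2) / gamma((beta + 1) / 2)) ** 2
    a = 2 * b ** ((beta + 1) / 2) / gamma((beta + 1) / 2)
    return a * s ** beta * np.exp(-b * s ** 2)

# synthetic capture-zone areas (stand-in for measured CZ areas), scaled to unit mean
rng = np.random.default_rng(0)
areas = rng.gamma(shape=4.0, scale=1.0, size=5000)
s = areas / areas.mean()

# histogram the full distribution, then fit only the central regime 0.5 < s < 2
counts, edges = np.histogram(s, bins=60, range=(0.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = (centers > 0.5) & (centers < 2.0)

(beta_fit,), _ = curve_fit(gwd, centers[mask], counts[mask], p0=[2.0], bounds=(0, 10))
i_estimate = beta_fit - 2.0  # critical nucleus size via beta ≈ i + 2
print(f"beta = {beta_fit:.2f}, estimated i = {i_estimate:.2f}")
```

Restricting the fit window to 0.5 < s < 2 reflects the abstract's caveat that the simple GWD form is unreliable in the tails.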
Does the liquid method of electret forming influence the adhesion of blood platelets?
Lowkis, B; Szymanowicz, M
1995-01-01
This work presents results on the effect of electric charge on the adhesion of blood platelets. All experiments were carried out on polyethylene foil. The liquid method was used to form electrets. The influence of the electret effect on the adhesion of blood platelets was evaluated by observing the electret surface in a scanning electron microscope after contact with fresh citrated human group O Rh+ blood. Experimental results confirmed the essential influence of the electric charge on the process of adhesion of blood platelets. It was noticed that the preliminary aging of electrets decreases the density of the surface charge and improves the athrombogenic characteristics of polyethylene foil.
Laboratory Diagnosis of Parasites from the Gastrointestinal Tract.
Garcia, Lynne S; Arrowood, Michael; Kokoskin, Evelyne; Paltridge, Graeme P; Pillai, Dylan R; Procop, Gary W; Ryan, Norbert; Shimizu, Robyn Y; Visvesvara, Govinda
2018-01-01
This Practical Guidance for Clinical Microbiology document on the laboratory diagnosis of parasites from the gastrointestinal tract provides practical information for the recovery and identification of relevant human parasites. The document is based on a comprehensive literature review and expert consensus on relevant diagnostic methods. However, it does not include didactic information on human parasite life cycles, organism morphology, clinical disease, pathogenesis, treatment, or epidemiology and prevention. As greater emphasis is placed on neglected tropical diseases, it becomes highly probable that patients with gastrointestinal parasitic infections will become more widely recognized in areas where parasites are endemic and not endemic. Generally, these methods are nonautomated and require extensive bench experience for accurate performance and interpretation. Copyright © 2017 American Society for Microbiology.
Acoustics outreach program for the deaf
NASA Astrophysics Data System (ADS)
Vongsawad, Cameron T.; Berardi, Mark L.; Whiting, Jennifer K.; Lawler, M. Jeannette; Gee, Kent L.; Neilsen, Tracianne B.
2016-03-01
The Hear and See methodology has often been used as a means of enhancing pedagogy by focusing on the two strongest learning senses, but this naturally does not apply to deaf or hard of hearing students. Because deaf students' prior nonaural experiences with sound will vary significantly from those of students with typical hearing, different methods must be used to build understanding. However, the sensory-focused pedagogical principle can be applied in a different way for the Deaf by utilizing the senses of touch and sight, called here the "See and Feel" method. This presentation will provide several examples of how acoustics demonstrations have been adapted to create an outreach program for a group of junior high students from a school for the Deaf and discuss challenges encountered.
Ghost hunting—an assessment of ghost particle detection and removal methods for tomographic-PIV
NASA Astrophysics Data System (ADS)
Elsinga, G. E.; Tokgoz, S.
2014-08-01
This paper discusses and compares several methods, which aim to remove spurious peaks, i.e. ghost particles, from the volume intensity reconstruction in tomographic-PIV. The assessment is based on numerical simulations of time-resolved tomographic-PIV experiments in linear shear flows. Within the reconstructed volumes, intensity peaks are detected and tracked over time. These peaks are associated with particles (either ghosts or actual particles) and are characterized by their peak intensity, size and track length. Peak intensity and track length are found to be effective in discriminating between most ghosts and the actual particles, although not all ghosts can be detected using only a single threshold. The size of the reconstructed particles does not reveal an important difference between ghosts and actual particles. The joint distribution of peak intensity and track length however does, under certain conditions, allow a complete separation of ghosts and actual particles. The ghosts can have either a high intensity or a long track length, but not both combined, like all the actual particles. Removing the detected ghosts from the reconstructed volume and performing additional MART iterations can decrease the particle position error at low to moderate seeding densities, but increases the position error, velocity error and tracking errors at higher densities. The observed trends in the joint distribution of peak intensity and track length are confirmed by results from a real experiment in laminar Taylor-Couette flow. This diagnostic plot allows an estimate of the number of ghosts that are indistinguishable from the actual particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harikrishnan, R.; Hareland, G.; Warpinski, N.R.
This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
Subliminal access to abstract face representations does not rely on attention.
Harry, Bronson; Davis, Chris; Kim, Jeesun
2012-03-01
The present study used masked repetition priming to examine whether face representations can be accessed without attention. Two experiments using a face recognition task (fame judgement) presented masked repetition and control primes in spatially unattended locations prior to target onset. Experiment 1 (n=20) used the same images as primes and as targets and Experiment 2 (n=17) used different images of the same individual as primes and targets. Repetition priming was observed across both experiments regardless of whether spatial attention was cued to the location of the prime. Priming occurred for both famous and non-famous targets in Experiment 1 but was only reliable for famous targets in Experiment 2, suggesting that priming in Experiment 1 indexed access to view-specific representations whereas priming in Experiment 2 indexed access to view-invariant, abstract representations. Overall, the results indicate that subliminal access to abstract face representations does not rely on attention. Copyright © 2011 Elsevier Inc. All rights reserved.
Does power corrupt or enable? When and why power facilitates self-interested behavior.
DeCelles, Katherine A; DeRue, D Scott; Margolis, Joshua D; Ceranic, Tara L
2012-05-01
Does power corrupt a moral identity, or does it enable a moral identity to emerge? Drawing from the power literature, we propose that the psychological experience of power, although often associated with promoting self-interest, is associated with greater self-interest only in the presence of a weak moral identity. Furthermore, we propose that the psychological experience of power is associated with less self-interest in the presence of a strong moral identity. Across a field survey of working adults and in a lab experiment, individuals with a strong moral identity were less likely to act in self-interest, yet individuals with a weak moral identity were more likely to act in self-interest, when subjectively experiencing power. Finally, we predict and demonstrate an explanatory mechanism behind this effect: The psychological experience of power enhances moral awareness among those with a strong moral identity, yet decreases the moral awareness among those with a weak moral identity. In turn, individuals' moral awareness affects how they behave in relation to their self-interest. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
ERIC Educational Resources Information Center
Lourenço, Fernando; Sappleton, Natalie; Cheng, Ranis
2015-01-01
The authors examined the following questions: Does gender influence the ethicality of enterprise students to a greater extent than it does nascent entrepreneurs? If this is the case, then is it due to factors associated with adulthood such as age, work experience, marital status, and parental status? Sex-role socialization theory and moral…
Why carers use adult day respite: a mixed method case study
2014-01-01
Background We need to improve our understanding of the complex interactions between family carers' emotional relationships with care-recipients and carers' use of support services. This study assessed carers' expectations and perceptions of adult day respite services and their commitment to using services. Methods A mixed-method case study approach was used, with psychological contract providing a conceptual framework. Data collection was situated within an organisational case study, and the total population of carers from the organisation's day respite service were approached. Fifty respondents provided quantitative and qualitative data through an interview survey. The conceptual framework was expanded to include Maslow's hierarchy of needs during analysis. Results Carers prioritised benefits for and experiences of care-recipients when making day respite decisions. Respondents had high levels of trust in the service and perceived that the major benefits for care-recipients were around social interaction and meaningful activity, with resultant improved well-being. Carers wanted day respite experiences to include all levels of Maslow's hierarchy of needs, from the provision of physiological care and safety through to the higher levels of belongingness, love and esteem. Conclusion The study suggests carers need to trust that care-recipients will have quality experiences at day respite. This study was intended as a preliminary stage for further research and, while not generalizable, it does highlight key considerations in carers' use of day respite services. PMID:24906239
Measuring perceived ceiling height in a visual comparison task.
von Castell, Christoph; Hecht, Heiko; Oberfeld, Daniel
2017-03-01
When judging interior space, a dark ceiling is judged to be lower than a light ceiling. The method of metric judgments (e.g., on a centimetre scale) that has typically been used in such tasks may reflect a genuine perceptual effect or it may reflect a cognitively mediated impression. We employed a height-matching method in which perceived ceiling height had to be matched with an adjustable pillar, thus obtaining psychometric functions that allowed for an estimation of the point of subjective equality (PSE) and the difference limen (DL). The height-matching method developed in this paper allows for a direct visual match and does not require metric judgment. It has the added advantage of providing superior precision. Experiment 1 used ceiling heights between 2.90 m and 3.00 m. The PSE proved sensitive to slight changes in perceived ceiling height. The DL was about 3% of the physical ceiling height. Experiment 2 found similar results for lower (2.30 m to 2.50 m) and higher (3.30 m to 3.50 m) ceilings. In Experiment 3, we additionally varied ceiling lightness (light grey vs. dark grey). The height matches showed that the light ceiling appeared significantly higher than the darker ceiling. We therefore attribute the influence of ceiling lightness on perceived ceiling height to a direct perceptual rather than a cognitive effect.
NASA Technical Reports Server (NTRS)
Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; Scola, Salvatore; Tobin, Steven; McLeod, Shawn; Mannu, Sergio; Guglielmo, Corrado; Moeller, Timothy
2013-01-01
The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle in 2015. A detailed thermal model of the SAGE III payload has been developed in Thermal Desktop (TD). Several novel methods have been implemented to facilitate efficient payload-level thermal analysis, including the use of a design of experiments (DOE) methodology to determine the worst-case orbits for SAGE III while on ISS, use of TD assemblies to move payloads from the Dragon trunk to the Enhanced Operational Transfer Platform (EOTP) to its final home on the Expedite the Processing of Experiments to Space Station (ExPRESS) Logistics Carrier (ELC)-4, incorporation of older models in varying unit sets, ability to change units easily (including hardcoded logic blocks), case-based logic to facilitate activating heaters and active elements for varying scenarios within a single model, incorporation of several coordinate frames to easily map to structural models with differing geometries and locations, and streamlined results processing using an Excel-based text file plotter developed in-house at LaRC. This document presents an overview of the SAGE III thermal model and describes the development and implementation of these efficiency-improving analysis methods.
Buxton, Eric C; De Muth, James E
2013-01-01
Constraints in geography and time require cost efficiencies in professional development for pharmacists. Distance learning, with its growing availability and lower intrinsic costs, will likely become more prevalent. The objective of this nonexperimental, postintervention study was to examine the perceptions of pharmacists attending a continuing education program. One group participated in the live presentation, whereas the second group joined via a simultaneous webcast. After the presentation, both groups were surveyed with identical questions concerning their perceptions of their learning environment, course content, and utility to their work. Comparisons across group responses to the summated scales were conducted through the use of Kruskal-Wallis tests. Analysis of the data showed that the distance and local groups were demographically similar, that both groups were satisfied with the presentation method and with the audio and visual quality, and that both felt they would be able to apply what they learned in their practice. However, the local group was significantly more satisfied with the learning experience. Distance learning does provide a viable and more flexible method for pharmacy professional development, but it does not yet replace the traditional learning environment in all facets of learner preference. Copyright © 2013 Elsevier Inc. All rights reserved.
Brunier, Elisabeth; Le Chapellier, Michel; Dejean, Pierre Henri
2012-01-01
The aims of this paper are to present the concept and results of an innovative educational model based on ergonomics involvement in industrial projects. First, we present the Cross-disciplinary Problem-solving Workshop (CPW) by answering three questions. 1) What is a CPW? A partnership between universities and one or several companies, whose purposes are, first, to increase health, well-being, company team competencies, and competitiveness, and second, to train the "IPOD generation" to include risk prevention in design. 2) How does it work? The CPW allows cooperation between experience and new insight through inductive methods. This model follows the Piaget (1) philosophy of linking the concrete world to abstraction through a learning system that associates realization and abstraction. 3) Is it successful? To answer this third question, we show examples of studies and models produced during CPWs. It appears that CPWs produce visible results in companies, such as new process designs and new methods, as well as changes in lectures. However, some less visible results remain unclear: How do company personnel evolve during and after a CPW? Does the CPW motivate our future engineers enough to continuously improve their skills in risk prevention and innovative design?
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
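The claimed advantage of distributed over clustered testing can be illustrated with a toy simulation: with the same total number of runs, a quadratic response-surface model fitted to distributed points averages out model misspecification better than one fitted to a few repeated conditions. This sketch is not the paper's simulation; the "true" response (which includes a cubic term the quadratic RSM cannot capture) and the noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_response(x):
    # hypothetical "truth" with a cubic term the quadratic model cannot capture
    return 1.0 + 2.0 * x - 1.5 * x**2 + 2.0 * x**3

def fit_quadratic(x, y):
    X = np.column_stack([np.ones_like(x), x, x**2])  # second-order RSM model
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def mean_rmse(x_points, n_trials=200, noise=0.1):
    """Average prediction RMSE of the fitted surface over the design region."""
    grid = np.linspace(0.0, 1.0, 101)
    errs = []
    for _ in range(n_trials):
        y = true_response(x_points) + rng.normal(0.0, noise, x_points.size)
        c = fit_quadratic(x_points, y)
        pred = c[0] + c[1] * grid + c[2] * grid**2
        errs.append(np.sqrt(np.mean((pred - true_response(grid)) ** 2)))
    return float(np.mean(errs))

# 12 runs total: clustered (3 conditions x 4 repeats) vs distributed (12 conditions)
rmse_clustered = mean_rmse(np.repeat([0.0, 0.5, 1.0], 4))
rmse_distributed = mean_rmse(np.linspace(0.0, 1.0, 12))
print(f"clustered RMSE:   {rmse_clustered:.3f}")
print(f"distributed RMSE: {rmse_distributed:.3f}")
```

The clustered fit interpolates its three support conditions and so cannot register the lack of fit between them, which is one way to see why spreading the same test budget over many conditions pays off.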
Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš
2015-09-04
Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as, for example, gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued) and some classes of Petri net models can be encoded with the approach.
Magneto Caloric Effect in Ni-Mn-Ga alloys: First Principles and Experimental studies
NASA Astrophysics Data System (ADS)
Odbadrakh, Khorgolkhuu; Nicholson, Don; Brown, Gregory; Rusanu, Aurelian; Rios, Orlando; Hodges, Jason; Safa-Sefat, Athena; Ludtka, Gerard; Eisenbach, Markus; Evans, Boyd
2012-02-01
Understanding the magneto-caloric effect (MCE) in alloys with real technological potential is important to the development of viable MCE-based products. We report results of a computational and experimental investigation of candidate MCE materials, Ni-Mn-Ga alloys. The Wang-Landau statistical method is used in tandem with the Locally Self-consistent Multiple Scattering (LSMS) method to explore magnetic states of the system. A classical Heisenberg Hamiltonian is parametrized based on these states and used in obtaining the density of magnetic states. The Curie temperature, isothermal entropy change, and adiabatic temperature change are then calculated from the density of states. Experiments to observe the structural and magnetic phase transformations were performed at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) on alloys of Ni-Mn-Ga and Fe-Ni-Mn-Ga-Cu. Data from the observations are discussed in comparison with the computational studies. This work was sponsored by the Laboratory Directed Research and Development Program (ORNL), by the Mathematical, Information, and Computational Sciences Division; Office of Advanced Scientific Computing Research (US DOE), and by the Materials Sciences and Engineering Division; Office of Basic Energy Sciences (US DOE).
Luria-Delbrück, revisited: the classic experiment does not rule out Lamarckian evolution
NASA Astrophysics Data System (ADS)
Holmes, Caroline M.; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya
2017-10-01
We re-examined data from the classic Luria-Delbrück fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms (as would happen for bacteria with CRISPR-Cas immunity). Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbrück paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbrück experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics.
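The statistical signature at issue can be sketched with a toy fluctuation simulation: under the Darwinian model, mutations arise during growth and produce a heavy-tailed "jackpot" distribution of resistant counts across parallel cultures, while a purely Lamarckian model (resistance induced only at plating) gives Poisson counts. This illustrates the classic argument only, not the authors' Bayesian model selection; all rates and sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

def darwinian_culture(generations=17, mu=1e-5):
    """Mutations arise at random during growth, so cultures that mutate
    early yield 'jackpots' of resistant cells."""
    wild, mutants = 1, 0
    for _ in range(generations):
        new_mutants = rng.binomial(wild, mu)  # mutations during division
        wild = 2 * wild - new_mutants
        mutants = 2 * mutants + new_mutants   # mutant lineages keep doubling
    return mutants

n_cultures = 500
darwin = np.array([darwinian_culture() for _ in range(n_cultures)])

# Lamarckian model: resistance induced only at plating -> Poisson counts,
# with the rate chosen to match the Darwinian mean
lamarck = rng.poisson(darwin.mean(), size=n_cultures)

fano_darwin = darwin.var() / darwin.mean()
fano_lamarck = lamarck.var() / lamarck.mean()
print(f"Darwinian Fano factor:  {fano_darwin:.1f}")   # far above 1 (jackpots)
print(f"Lamarckian Fano factor: {fano_lamarck:.1f}")  # near 1 (Poisson)
```

The abstract's point is that discriminating the pure models this way is not the whole story: a combined model with both mechanisms can also reproduce an over-dispersed count distribution, so the fluctuation data alone cannot rule out a Lamarckian contribution.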
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
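The Monte Carlo propagation described above can be sketched as follows: perturb both the factor settings and the responses of a small hypothetical DOE, re-fit the model for each draw, and read coefficient uncertainties from the spread of the fitted coefficients. The design, coefficients, and noise levels here are assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical two-factor DOE in coded units (not the paper's data)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0], [0, 0]], dtype=float)
true_coef = np.array([10.0, 2.0, -1.0])  # assumed intercept, b1, b2
sigma_x, sigma_y = 0.05, 0.2             # assumed input / response uncertainties

def fit(Xs, y):
    A = np.column_stack([np.ones(len(Xs)), Xs])  # first-order DOE model
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

y_nominal = true_coef[0] + X @ true_coef[1:]

# Monte Carlo: jointly perturb factor settings and responses, re-fit each draw
coefs = np.array([
    fit(X + rng.normal(0.0, sigma_x, X.shape),
        y_nominal + rng.normal(0.0, sigma_y, y_nominal.shape))
    for _ in range(2000)
])

coef_sd = coefs.std(axis=0)
print("Monte Carlo coefficient std devs:", coef_sd)
```

These Monte Carlo standard deviations can then be compared with the standard errors reported by an ordinary regression fit, which is the comparison the abstract draws.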
A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.
Huang, Shiping; Wu, Zhifeng; Misra, Anil
2017-12-11
Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently and simultaneously eliminate bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, within which Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of avoiding local extrema, singularities, and the choice of initial value. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
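The Newton-type localization step can be sketched with a plain Gauss-Newton iteration on simulated range data. This is not the paper's Modified Newton's Method; the anchor layout, the starting point, and the noise-free ranges are all assumptions for illustration:

```python
import numpy as np

def localize(anchors, ranges, x0, iters=20):
    """Gauss-Newton solution of min_x sum_i (||x - a_i|| - r_i)^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)  # predicted distances
        residual = d - ranges
        J = (x - anchors) / d[:, None]           # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x = x - step
    return x

# four anchors at the corners of a 10 x 10 area; target inside
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - target, axis=1)  # noise-free for the sketch

estimate = localize(anchors, ranges, x0=[5.0, 5.0])  # start at the area center
print(estimate)  # recovers [3., 7.]
```

Starting from inside a bounded region, as the paper's geometric intersection model provides, keeps the iteration away from the singular configurations (zero distances, collinear geometry) that make the raw Newton step ill-conditioned.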
Vezhnovets', T A
2013-12-01
The aim of our study was to examine the influence of the age and management experience of executives in healthcare institutions on their style of decision-making. A psychological study of 144 executives was conducted. We found that the age of executives in healthcare institutions does not affect the style of managerial decision-making, while experience in a leadership position does. It was also established that the more experienced a leader is, the more often he or she makes decisions in an authoritative, autonomous, or marginal style, and the less management experience a leader has, the more likely the use of an indulgent or situational style. Moreover, the authoritarian style is typical of younger executives, while the marginal and autonomous styles are typical of older executives.
The student perspective of high school laboratory experiences
NASA Astrophysics Data System (ADS)
Lambert, R. Mitch
High school science laboratory experiences are an accepted teaching practice across the nation despite a lack of research evidence to support them. The purpose of this study was to examine the perspective of students---stakeholders often ignored---on these experiences. Insight into the students' perspective was explored progressively using a grounded theory methodology. Field observations of science classrooms led to an open-ended survey of high school science students, garnering 665 responses. Twelve student interviews then focused on the data and questions evolving from the survey. The student perspective on laboratory experiences revealed varied information based on individual experience. Concurrent analysis of the data revealed that although most students like (348/665) or sometimes like (270/665) these experiences, some consistent factors yielded negative experiences and prompted suggestions for improvement. The category of responses that emerged as the core idea focused on student understanding of the experience. Students desire to understand the why do, the how to, and the what it means of laboratory experiences. Lacking any one of these, the experience loses educational value for them. This single recurring theme crossed the boundaries of age, level in school, gender, and even the student view of lab experiences as positive or negative. This study suggests reflection on the current laboratory activities in which science teachers engage their students. Is the activity appropriate (as opposed to being merely a favorite), does it encourage learning, does it fit, does it operate at the appropriate level of inquiry, and finally what can science teachers do to integrate these activities into the classroom curriculum more effectively? Simply stated, what can teachers do so that students understand what to do, what's the point, and how that point fits into what they are learning outside the laboratory?
Expanding perspective on music therapy for symptom management in cancer care.
Potvin, Noah; Bradt, Joke; Kesslick, Amy
2015-01-01
Symptom management is a frequently researched treatment topic in music therapy and cancer care. Representations in the literature of music interventions for symptom management, however, have often overlooked the human experiences shaping those symptoms. This may result in music therapy being perceived as a linear intervention process that does not take into account underlying experiences that contribute to symptom experiences. This study explored patient experiences underlying symptoms and symptom management in cancer care, and examined the role of music therapy in that clinical process. This study analyzed semi-structured, open-ended exit interviews obtained from 30 participants during a randomized controlled trial investigating the differential impact of music therapy versus music medicine interventions on symptom management in participants with cancer. Interviews were conducted by a research assistant not involved with the clinical interventions. Exit interview transcripts for 30 participants were analyzed using an inductive, latent, constructivist method of thematic analysis. Three themes-Relaxation, Therapeutic relationship, and Intrapersonal relating-capture elements of the music therapy process that (a) modified participants' experiences of adjustments in their symptoms and (b) highlighted the depth of human experience shaping their symptoms. These underlying human experiences naturally emerged in the therapeutic setting, requiring the music therapist's clinical expertise for appropriate support. Symptom management extends beyond fluctuation in levels and intensity of a surface-level symptom to incorporate deeper lived experiences. The authors provide recommendations for clinical work, entry-level training as related to symptom management, implications for evidence-based practice in music therapy, and methodology for future mixed methods research. © the American Music Therapy Association 2015. All rights reserved. 
Design of experiments on 135 cloned poplar trees to map environmental influence in greenhouse.
Pinto, Rui Climaco; Stenlund, Hans; Hertzberg, Magnus; Lundstedt, Torbjörn; Johansson, Erik; Trygg, Johan
2011-01-31
To find and ascertain phenotypic differences, minimal variation between biological replicates is always desired. Variation between replicates can originate from genetic transformation but also from environmental effects in the greenhouse. Design of experiments (DoE) has been used in field trials for many years and has proven its value, but it is underused within functional genomics, including greenhouse experiments. We propose a DoE-based strategy to estimate the effect of environmental factors, with the ultimate goal of minimizing variation between biological replicates. DoE can be analyzed in many ways; we present a graphical solution together with solutions based on classical statistics as well as the newly developed OPLS methodology. In this study, we used DoE to evaluate the influence of plant-specific factors (plant size, shoot type, plant quality, and amount of fertilizer) and rotation of plant positions on the height and section area of 135 cloned wild-type poplar trees grown in the greenhouse. Statistical analysis revealed that plant position was the main contributor to variability among biological replicates and that applying a plant rotation scheme could reduce this variation. Copyright © 2010 Elsevier B.V. All rights reserved.
Burke, J M; Soli, F; Miller, J E; Terrill, T H; Wildeus, S; Shaik, S A; Getz, W R; Vanguru, M
2010-03-25
Widespread anthelmintic resistance in small ruminants has necessitated alternative means of gastrointestinal nematode (GIN) control. The objective was to determine the effectiveness of copper oxide wire particles (COWP) administered as a gelatin capsule or in a feed supplement to control GIN in goats. In four separate experiments, peri-parturient does (n=36), yearling does (n=25), weaned kids (n=72), and yearling bucks (n=16) were randomly assigned to remain untreated or administered 2g COWP in a capsule (in Experiments 1, 2, and 3) or feed supplement (all experiments). Feces and blood were collected every 7 days between Days 0 and 21 (older goats) or Day 42 (kids) for fecal egg counts (FEC) and blood packed cell volume (PCV) analyses. A peri-parturient rise in FEC was evident in the untreated does, but not the COWP-treated does (COWP x date, P<0.02). In yearling does, FEC of the COWP-treated does tended to be lower than the untreated (COWP, P<0.02). FEC of COWP-treated kids were reduced compared with untreated kids (COWP x date, P<0.001). FEC of treated and untreated bucks were similar, but Haemonchus contortus was not the predominant nematode in these goats. However, total worms were reduced in COWP-fed bucks (P<0.03). In summary, it appeared that COWP in the feed was as effective as COWP in a gelatin capsule to reduce FEC in goats. COWP administration may have a limited effect where H. contortus is not the predominant nematode.
Optimizing an experimental design for an electromagnetic experiment
NASA Astrophysics Data System (ADS)
Roux, Estelle; Garcia, Xavier
2013-04-01
Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we obtain about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines linearized inverse theory (to evaluate the quality of a given design via the objective function) with stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be applied before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we obtain about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
Capsule endoscopy in pediatrics: a 10-years journey.
Oliva, Salvatore; Cohen, Stanley A; Di Nardo, Giovanni; Gualdi, Gianfranco; Cucchiara, Salvatore; Casciani, Emanuele
2014-11-28
Video capsule endoscopy (CE) for evaluation of the esophagus (ECE), small bowel (SBCE), and colon (CCE) is particularly useful in pediatrics, because this imaging modality does not require ionizing radiation, deep sedation, or general anesthesia. The risk of capsule retention appears to depend on indication rather than age and parallels the adult experience by indication, making SBCE a relatively safe procedure with a significant diagnostic yield. The newest indication, assessment of mucosal change, greatly enhances and expands its potential benefit. The diagnostic role of CE extends beyond the small bowel: the use of ECE may also enhance our knowledge of esophageal disease and assist patient care, and CCE is a novel, minimally invasive, and painless endoscopic technique allowing exploration of the colon without the need for sedation, rectal intubation, or gas insufflation. The limited data on ECE and CCE in pediatrics do not yet allow the same conclusions regarding efficacy; however, both appear to provide safe methods to assess and monitor mucosal change in their respective areas with little discomfort. Moreover, although experience has been limited, the patency capsule may help lessen the potential for capsule retention, and newly researched protocols for bowel cleaning may further enhance CE's diagnostic yield. However, further research is needed to optimize the use of the various CE procedures in pediatric populations.
Chiou, Rocco; Rich, Anina N; Rogers, Sebastian; Pearson, Joel
2018-08-01
Individuals with grapheme-colour synaesthesia experience anomalous colours when reading achromatic text. These unusual experiences have been said to resemble 'normal' colour perception or colour imagery, but studying the nature of synaesthesia remains difficult. In the present study, we report novel evidence that synaesthetic colour impacts conscious vision in a way that is different from both colour perception and imagery. Presenting 'normal' colour prior to binocular rivalry induces a location-dependent suppressive bias reflecting local habituation. By contrast, a grapheme that evokes synaesthetic colour induces a facilitatory bias reflecting priming that is not constrained to the inducing grapheme's location. This priming does not occur in non-synaesthetes and does not result from response bias. It is sensitive to diversion of visual attention away from the grapheme, but resistant to sensory perturbation, reflecting a reliance on cognitive rather than sensory mechanisms. Whereas colour imagery in non-synaesthetes causes local priming that relies on the locus of imagined colour, imagery in synaesthetes caused global priming not dependent on the locus of imagery. These data suggest a unique psychophysical profile of high-level colour processing in synaesthetes. Our novel findings and method will be critical to testing theories of synaesthesia and visual awareness. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Arafa, Mona G.; Ayoub, Bassam M.
2017-01-01
Niosomes entrapping pregabalin (PG) were prepared using Span 60 and cholesterol in different molar ratios by the hydration method; PG remaining in the hydrating solution was separated from the vesicles by freeze centrifugation. Optimization of this nano-based carrier of PG was achieved: a Quality by Design strategy was successfully employed to obtain PG-loaded niosomes with the desired properties. The optimal particle size, drug release, and entrapment efficiency were attained with the Minitab® program using design of experiment (DOE), which predicted the best parameters by investigating the combined effect of different factors simultaneously. A Pareto chart was used in the screening step to exclude the insignificant variables, while response surface methodology (RSM) was used in the optimization step to study the significant factors. The best formula was selected to prepare topical hydrogels loaded with niosomal PG using HPMC and Carbopol 934. Mechanical and rheological tests verified that addition of the vesicles to the gel matrix significantly affected the gel network. In vitro release and ex vivo permeation experiments were carried out. Delivery of PG molecules followed Higuchi kinetics with non-Fickian diffusion. The present work will be of interest to the pharmaceutical industry as a controlled transdermal alternative to the conventional oral route.
The use of bisphosphonates does not contraindicate orthodontic and other types of treatment!
Consolaro, Alberto
2014-01-01
Bisphosphonates have been increasingly used not only to treat bone diseases and conditions such as osteopenia and osteoporosis, but also in oncotherapy. The use of bisphosphonates induces fear and caution in clinicians, reactions associated with controversy resulting from a lack of in-depth knowledge of their mechanisms of action and the lack of a more accurate assessment of their side effects. Disclosure of scientific and clinical knowledge greatly contributes to professionals' discernment and inner balance, especially for orthodontists; fear does not lead to awareness. For these reasons, we present an article that focuses on this matter, adapted from journals of different dental specialties, as mentioned in the footnote. There is no scientific evidence demonstrating that bisphosphonates are directly involved in the etiopathogenic mechanisms of osteonecrosis and osteomyelitis of the jaw. Their use has nonetheless been treated as contraindicating or limiting dental treatment involving bone tissue; such restrictions, however, are based on professional opinion, case reports, personal experience, or experimental trials with flawed methods. Additional studies will always be necessary; however, in-depth knowledge of bone biology is of paramount importance in offering an opinion about the clinical use of bisphosphonates and its further implications. Based on bone biopathology, this article aims to help lay the groundwork for this matter. PMID:25279517
Kim, Nam Ah; An, In Bok; Lee, Sang Yeol; Park, Eun-Seok; Jeong, Seong Hoon
2012-09-01
In this study, the structural stability of hen egg white lysozyme in solution at various pH levels and in different types of buffers, including acetate, phosphate, histidine, and Tris, was investigated by means of differential scanning calorimetry (DSC). Reasonable pH values were selected from the buffer ranges and analyzed statistically through design of experiment (DoE). Four factors were used to characterize the thermograms: the calorimetric enthalpy (ΔH), the temperature at maximum heat flux (Tm), the van't Hoff enthalpy (ΔHv), and the apparent activation energy of the protein solution (Eapp). It was possible to calculate Eapp through mathematical elaboration of the Lumry-Eyring model by changing the scan rate; the transition temperature Tm increased when the scan rate was faster. When comparing the Tm, ΔHv, ΔH, and Eapp of lysozyme across the pH ranges and buffers with different priorities, lysozyme in acetate buffer from pH 4.767 (scenario 9) to pH 4.969 (scenario 11) exhibited the highest thermodynamic stability. Through this experiment, we found a significant difference in the thermal stability of lysozyme across the various pH ranges and buffers, and also a new approach to investigating the physical stability of proteins by DoE.
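The scan-rate approach to Eapp can be sketched numerically. The abstract derives Eapp from the Lumry-Eyring model; as a simpler stand-in, this sketch uses the closely related Kissinger-type relation, with an activation energy, scan rates, and intercept constant that are all hypothetical values chosen for illustration.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def kissinger_Ea(scan_rates, Tm):
    """Apparent activation energy from the scan-rate dependence of Tm,
    via the Kissinger-type relation ln(beta/Tm^2) = -Ea/(R*Tm) + C:
    the slope of ln(beta/Tm^2) versus 1/Tm is -Ea/R."""
    y = np.log(scan_rates / Tm**2)
    slope, _ = np.polyfit(1.0 / Tm, y, 1)
    return -slope * R

# Hypothetical data: generate Tm values exactly consistent with
# Ea = 300 kJ/mol, then check that the fit recovers it.
Ea_true = 300e3                        # J/mol (illustrative value)
beta = np.array([0.5, 1.0, 2.0, 4.0])  # scan rates, K/min
C = 91.4                               # arbitrary intercept constant
Tm = np.full_like(beta, 350.0)         # initial guess, K
for _ in range(200):                   # fixed-point solve for Tm(beta)
    Tm = Ea_true / (R * (C - np.log(beta / Tm**2)))

print(round(kissinger_Ea(beta, Tm) / 1e3, 1), "kJ/mol")  # ≈ 300.0
```

The same fit applied to measured (beta, Tm) pairs from DSC runs at several scan rates gives the experimental Eapp.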
The cloud radiation impact from optics simulation and airborne observation
NASA Astrophysics Data System (ADS)
Melnikova, Irina; Kuznetsov, Anatoly; Gatebe, Charles
2017-02-01
The analytical approach of inverse asymptotic formulas of radiative transfer theory is used for solving inverse problems of cloud optics. The method is advantageous because it does not impose strict constraints and is not tied to a desired solution. Observations were accomplished in extended stratus cloudiness above a homogeneous ocean surface. Data from NASA's Cloud Absorption Radiometer (CAR) during two airborne experiments (SAFARI-2000 and ARCTAS-2008) were analyzed. The analytical method of inverse asymptotic formulas was used to retrieve cloud optical parameters (optical thickness, single scattering albedo, and the asymmetry parameter of the phase function) and ground albedo in all 8 spectral channels independently. Because the method is free from a priori restrictions and is not linked to prescribed parameters, it has been applied to data sets of different origin and observation geometry. Results obtained from different airborne, satellite, and ground radiative experiments appeared consistent and showed common features in the values of the cloud parameters and their spectral dependence (Vasiluev, Melnikova, 2004; Gatebe et al., 2014). The optical parameters retrieved here are used for calculation of the radiative divergence, reflected and transmitted irradiance, and heating rates in the cloudy atmosphere, which agree with previous observational data.
Furnham, A
2000-12-01
This study looked at the relationship between ratings of the perceived effectiveness of 24 methods for telling the future, 39 complementary medicine (CM) therapies, and 12 specific attitude statements about science and medicine. A total of 159 participants took part. The results showed that the participants were deeply sceptical of the effectiveness of the methods for telling the future, which loaded onto meaningful and interpretable factors. Participants were much more positive about particular, but not all, specialties of complementary medicine, and these also yielded a meaningful factor structure. Finally, the 12 attitude-to-science/medicine statements revealed four factors: scepticism of medicine; the importance of psychological factors; patient protection; and the importance of scientific evaluation. Regression analysis showed that belief in the overall effectiveness of different ways of predicting the future was best predicted by belief in the effectiveness of the CM therapies. Although interest in the occult was associated with interest in CM, participants were able to distinguish between the two, and displayed scepticism about the effectiveness of methods of predicting the future and some CM therapies. Copyright 2000 Harcourt Publishers Ltd.
System calibration method for Fourier ptychographic microscopy
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic errors arise from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts, making it difficult to identify the dominant error source from a degraded reconstruction without prior knowledge. In addition, the systematic error is generally a mixture of various error sources in real situations, which cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step, followed by re-estimation of the accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
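The annealing loop at the heart of such a calibration can be sketched generically. The two "LED-offset" parameters and the quadratic error metric below are toy stand-ins, not the SC-FPM reconstruction metric; the decreasing temperature plays the role the abstract assigns to the adaptive step-size strategy.

```python
import math
import random

def simulated_annealing(metric, x0, step=0.1, T0=1.0, cooling=0.99, iters=2000):
    """Generic annealing loop for calibrating mixed systematic parameters:
    propose a perturbation, keep it if the error metric improves, and
    occasionally accept worse states to escape local minima."""
    random.seed(0)  # deterministic for this sketch
    x, fx = list(x0), metric(x0)
    T = T0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = metric(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
        T *= cooling  # cooling schedule (step-size adaptation analogue)
    return x, fx

# Toy stand-in for the FPM error metric: hypothetical "true" misalignment
# parameters are (0.3, -0.2); the metric is the mismatch from them.
true_params = (0.3, -0.2)
metric = lambda p: (p[0] - true_params[0])**2 + (p[1] - true_params[1])**2

est, err = simulated_annealing(metric, [0.0, 0.0])
print([round(v, 2) for v in est], round(err, 4))
```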
Preferred skin color enhancement for photographic color reproduction
NASA Astrophysics Data System (ADS)
Zeng, Huanzhao; Luo, Ronnier
2011-01-01
Skin tones are the most important colors in the memory color category, and reproducing skin colors pleasingly is an important factor in photographic color reproduction. Moving skin colors toward a preferred skin color center improves the perceived quality of skin color reproduction. Several methods that morph skin colors into a smaller preferred skin color region have been reported in the past. In this paper, a new approach is proposed to further improve skin color enhancement. An ellipsoid skin color model is applied to compute skin color probabilities for skin color detection and to determine a weight for skin color adjustment. Preferred skin color centers determined through psychophysical experiments were applied for the color adjustment, with separate centers for dark, medium, and light skin colors so that each range is adjusted differently; skin colors are morphed toward their preferred color centers. A special processing step is applied to avoid contrast loss in highlights, and a 3-D interpolation method is applied to fix a potential contouring problem and to improve color processing efficiency. A psychophysical experiment validates that the proposed skin color enhancement effectively identifies skin colors, improves skin color preference, and does not objectionably affect preferred skin colors in original images.
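The weighted-morph idea can be sketched as follows. The ellipsoid center, inverse covariance, preferred center, and sample colors below are hypothetical CIELAB-like values chosen for illustration, not the paper's measured data, and the exponential membership weight is one plausible choice of probability function.

```python
import numpy as np

def skin_probability(c, center, M):
    """Ellipsoid skin-color model: a Mahalanobis-style distance in the
    color space gives a soft membership weight in (0, 1]."""
    d2 = (c - center) @ M @ (c - center)
    return float(np.exp(-0.5 * d2))

def enhance(c, center, M, preferred, strength=0.6):
    """Morph a color toward the preferred skin-color center, weighted by
    its skin probability so non-skin colors are left untouched."""
    w = skin_probability(c, center, M)
    return c + strength * w * (preferred - c)

# Hypothetical CIELAB-like values (L*, a*, b*)
center = np.array([65.0, 18.0, 20.0])     # detected skin ellipsoid center
M = np.diag([1/100.0, 1/25.0, 1/25.0])    # inverse covariance of ellipsoid
preferred = np.array([68.0, 16.0, 18.0])  # preferred skin-color center

skin = np.array([63.0, 19.0, 21.0])       # near the ellipsoid center
sky = np.array([70.0, -10.0, -30.0])      # far outside: weight ~ 0

print(enhance(skin, center, M, preferred).round(2))  # pulled toward preferred
print(enhance(sky, center, M, preferred).round(2))   # essentially unchanged
```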
NASA Astrophysics Data System (ADS)
Deb, Pradip
2010-07-01
As a fundamental basis of all natural science and technology, Physics is a key subject in many science teaching institutions around the world. Physics teaching and learning is a most important issue today because of its complexity and its fast-growing applications in many new fields. The laws of Physics are global, but the teaching and learning methods of Physics differ greatly among countries and cultures. When I first came to Australia for higher education about 11 years ago, with an undergraduate and a graduate degree in Physics from a university in Bangladesh, I found the Physics education system in Australia very different from what I had experienced in Bangladesh. After earning two graduate degrees from two Australian universities and gaining a few years' experience teaching Physics in Australian universities, I compare these two different Physics education experiences in this paper and try to answer the question: does it all depend on resources, on the internal culture of the society, or on both? Undergraduate and graduate level Physics syllabi, resources and teaching methods, examination and assessment systems, teacher-student relationships, and research cultures are discussed and compared between the two countries.
Effectiveness and value of massage skills training during pre-registration nurse education.
Cook, Neal F; Robinson, Jacqueline
2006-10-01
The integration of complementary and alternative medicine (CAM) interventions into healthcare practice is becoming more popular, and these interventions are frequently accessed by patients. In response, various disciplines have integrated CAM techniques into the education of their practitioners, but this varies widely, as does its success. Students' experiences of such education at the pre-registration stage are largely unknown in the UK, and methods by which to achieve effective learning in this arena are largely unreported in the literature. This study had three specific aims: to examine the perspectives of pre-registration nursing students on being taught massage skills during pre-registration nurse education; to identify the learning and development that occurs during massage skills training; and to identify methods of enhancing the provision of such skills training and the experience of it. This paper demonstrates the value of integrating complementary therapies into nurse education, developing the holistic approach of student nurses and their concept of caring. In addition, it contributes significantly to the knowledge base on the effectiveness and value of CAM education in nurse preparation, highlighting the high value students place on CAM education and demonstrating notable development in the preparation of holistic practitioners. The method utilised also yielded ways to improve the delivery of such education, and demonstrates how creative teaching methods can motivate students and enhance effective learning.
Modeling Adaptive Educational Methods with IMS Learning Design
ERIC Educational Resources Information Center
Specht, Marcus; Burgos, Daniel
2007-01-01
The paper describes a classification system for adaptive methods developed in the area of adaptive educational hypermedia based on four dimensions: What components of the educational system are adapted? To what features of the user and the current context does the system adapt? Why does the system adapt? How does the system get the necessary…
Science Teaching Efficacy Beliefs and the Lived Experience of Preservice Elementary Teachers
NASA Astrophysics Data System (ADS)
Kettler, Karen A.
The current study utilized a mixed methods approach to examine the science teaching efficacy beliefs (STEB) of preservice elementary teachers as they participated in a Science Methods course. The following questions were addressed using quantitative survey data and qualitative interviews: What are the STEB of preservice elementary teachers as they progress through a Science Methods course? How do the STEB of preservice elementary teachers with higher and lower personal science teaching efficacy (PSTE) beliefs change as they progress through a Science Methods course? What is the nature of the lived experiences of preservice elementary teachers with higher and lower PSTE beliefs as they progress through a Science Methods course? And how does the meaning developed during the lived experience of preservice elementary teachers with higher and lower PSTE beliefs influence their STEB? The participants (n = 21) included preservice elementary teachers registered for a Science Methods course as part of the "Block" semester, during their final year of teacher preparation prior to the student teaching experience. Quantitative data were obtained via Science Teaching Efficacy Belief Instrument-form B (STEBI-B) surveys taken at the beginning and end of the Science Methods course. These data were used to categorize participants into low, medium, and high efficacy groups, depending on how they scored in relation to one another. Qualitative data were obtained concurrently through in-depth interviews with four "lower" efficacy participants and four "higher" efficacy participants, conducted after the "pre" survey and before the "post" survey, utilizing transcendental phenomenological methodology. Results showed a significant difference between pre- and post-survey data, indicating that the participants, as a whole, experienced an increase in PSTE during the Science Methods course (p<0.001).
An examination of the specific subgroups (low, medium, and high efficacy) showed a significant difference between the pre- and post-PSTE scores for individuals with low (p = 0.005) and medium (p = 0.004) efficacy, but not for those with high efficacy (p = 0.184). The phenomenological interview data revealed five themes with regard to the experience of those with lower and higher efficacy: The power of realistic learning experiences, informal field experiences; The power of authentic teaching experiences; Modeling, the second-hand experience; The necessity of forming relationships; and Assessments and feedback as meaningful work. The composite textural descriptions of the interview data revealed that while low efficacy participants found the course "boring" and "repetitive," and found the assessments and feedback ineffectual, they enjoyed specific aspects of the course, including the field and teaching experiences, as they were more receptive to these experiences. The structural descriptions of the low efficacy participants revealed that their previous negative experiences with science educators impacted their perceptions of their experiences in the course and their beliefs about science education. The high efficacy participants found the activities in the course to be "frustrating," "random," and "pointless," as these individuals had experienced similar activities during previous science courses. Because the high efficacy participants had generally positive previous experiences with science education and had high expectations for both the Science Methods course and the teacher, they were extremely critical of the course and were less receptive to learning during course activities. The overall essence of the experience for both efficacy groups was a need for connectedness with the science content, the assessments, the elementary students, and the teacher of the course.
NASA Technical Reports Server (NTRS)
Phillips, Warren F.
1989-01-01
The results obtained show that it is possible to control light-weight robots with flexible links in a manner that produces good response time and does not induce unacceptable link vibrations. However, deflections induced by gravity cause large static position errors with such a control system. For this reason, it is not possible to use this control system for controlling motion in the direction of gravity. The control system does, on the other hand, have potential for use in space. However, in-space experiments will be needed to verify its applicability to robots moving in three dimensions.
NASA Astrophysics Data System (ADS)
Tschauner, O.; Asimow, P. D.; Ahrens, T. J.; Kostandova, N.; Sinogeikin, S.
2007-12-01
We report the first observation of the high-pressure silicate phase wadsleyite in the recovery products of a shock experiment. Wadsleyite was detected by micro-X ray diffraction and EBSD. Wadsleyite grew from melt which formed by chemical reaction of periclase and silica during shock. Our findings show that the growth rate of high pressure silicate phases in shock-generated melts can be of the order of m/s and is probably not diffusion controlled. Our finding has important implications for the time scale of shock events recorded by meteorites and indicates that the presence of high pressure silicates found in shocked meteorites does not necessarily imply large impactor sizes. This work was supported by the NNSA Cooperative Agreement DOE-FC88-01NV14049 and NASA/Goddard grants under awards NNG04GP57G and NNG04GI07G. Use of the HPCAT facility was supported by DOE-BES, DOE-NNSA, NSF, DOD -TACOM, and the W.M. Keck Foundation. APS is supported by DOE-BES under Contract No. W-31-109-Eng-38.
Choe, Seungho; Hecht, Karen A.; Grabe, Michael
2008-01-01
Continuum electrostatic approaches have been extremely successful at describing the charged nature of soluble proteins and how they interact with binding partners. However, it is unclear whether continuum methods can be used to quantitatively understand the energetics of membrane protein insertion and stability. Recent translation experiments suggest that the energy required to insert charged peptides into membranes is much smaller than predicted by present continuum theories. Atomistic simulations have pointed to bilayer inhomogeneity and membrane deformation around buried charged groups as two critical features that are neglected in simpler models. Here, we develop a fully continuum method that circumvents both of these shortcomings by using elasticity theory to determine the shape of the deformed membrane and then subsequently uses this shape to carry out continuum electrostatics calculations. Our method does an excellent job of quantitatively matching results from detailed molecular dynamics simulations at a tiny fraction of the computational cost. We expect that this method will be ideal for studying large membrane protein complexes. PMID:18474636
A quasi-Lagrangian finite element method for the Navier-Stokes equations in a time-dependent domain
NASA Astrophysics Data System (ADS)
Lozovskiy, Alexander; Olshanskii, Maxim A.; Vassilevski, Yuri V.
2018-05-01
The paper develops a finite element method for the Navier-Stokes equations of incompressible viscous fluid in a time-dependent domain. The method builds on a quasi-Lagrangian formulation of the problem. The paper provides stability and convergence analysis of the fully discrete (finite-difference in time and finite-element in space) method. The analysis does not assume any CFL time-step restriction; rather, it needs only mild conditions of the form $\Delta t\le C$, where $C$ depends only on problem data, and $h^{2m_u+2}\le c\,\Delta t$, where $m_u$ is the polynomial degree of the velocity finite element space. Both conditions result from the numerical treatment of practically important non-homogeneous boundary conditions. The theoretically predicted convergence rate is confirmed by a set of numerical experiments. Further, we apply the method to simulate a flow in a simplified model of the left ventricle of a human heart, where the ventricle wall dynamics is reconstructed from a sequence of contrast-enhanced Computed Tomography images.
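The two mild conditions on the discretization parameters can be checked mechanically. The sketch below is an illustration only: the constants C and c depend on problem data and are hypothetical placeholders here.

```python
# Admissibility check for the two mild conditions from the analysis:
#   (1) dt <= C          and   (2) h^(2*m_u + 2) <= c * dt,
# where m_u is the polynomial degree of the velocity finite elements.
# C and c are placeholder constants; in practice they depend on the problem.

def step_is_admissible(dt, h, m_u, C=1.0, c=1.0):
    """Return True if the pair (dt, h) satisfies both conditions."""
    return dt <= C and h ** (2 * m_u + 2) <= c * dt

# Example with P2 velocity elements (m_u = 2), so condition (2) reads h^6 <= c*dt:
print(step_is_admissible(0.01, 0.1, 2))  # h^6 = 1e-6 <= 0.01 and dt <= C -> True
```

Note that condition (2) couples the mesh size to the time step from below, the opposite direction of a CFL restriction.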
A Low-Storage-Consumption XML Labeling Method for Efficient Structural Information Extraction
NASA Astrophysics Data System (ADS)
Liang, Wenxin; Takahashi, Akihiro; Yokota, Haruo
Recently, labeling methods to extract and reconstruct the structural information of XML data, which is important for many applications such as XPath query and keyword search, have become more attractive. To achieve efficient structural information extraction, in this paper we propose the C-DO-VLEI code, a novel update-friendly bit-vector encoding scheme based on register-length bit operations combined with the properties of Dewey Order numbers, which cannot be implemented in other relevant existing schemes such as ORDPATH. Meanwhile, the proposed method also achieves lower storage consumption because it requires neither a prefix schema nor any reserved codes for node insertion. We performed experiments to evaluate and compare the performance and storage consumption of the proposed method with those of the ORDPATH method. Experimental results show that the execution times for extracting depth information and parent node labels using the C-DO-VLEI code are about 25% and 15% less, respectively, and the average label size using the C-DO-VLEI code is about 24% smaller, compared with ORDPATH.
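The structural-information extraction being benchmarked (depth and parent labels) can be illustrated with plain dotted Dewey Order labels. This is a sketch of the underlying idea only, not the C-DO-VLEI bit-vector encoding itself:

```python
# Plain Dewey Order labels encode structure directly: "1.3.2" is the
# second child of "1.3", which is the third child of "1". Depth and
# parent label fall out of the label with no document access.

def depth(label):
    """Depth of the node, with the root at depth 1."""
    return label.count(".") + 1

def parent(label):
    """Label of the parent node, or None for the root."""
    head, sep, _ = label.rpartition(".")
    return head if sep else None

print(depth("1.3.2"))   # 3
print(parent("1.3.2"))  # 1.3
```

Schemes such as C-DO-VLEI and ORDPATH compress labels of this kind into bit vectors so that the same two queries become bit operations.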
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
NASA Astrophysics Data System (ADS)
Bulunuz, Mizrap
Inquiry-based science instruction is a major goal of science education reform. However, there is little research examining how preservice elementary teachers might be motivated to teach through inquiry. This quantitative study was designed to examine the role of background experiences and an inquiry science methods course on interest in science and interest in teaching science. The course included many activities and assignments at varying levels of inquiry, designed to teach content and inquiry methods and to model effective teaching. The study involved analyses of surveys completed by students in the course on their experiences with science before, during, and at the end of the course. The following questions guided the design of this study and analysis of the data: (1) What science background experiences (school, home, and informal education) do participants have and how do those experiences affect initial interest in science? (2) Among the hands-on activities in the methods course, is there a relationship between level of inquiry of the activity and the motivational quality (interesting, fun, and learning) of the activity? (3) Does the course affect participants' interest and attitude toward science? (4) What aspects of the course contribute to participants' interest in teaching science and choice to teach science? Descriptive and inferential analysis of a background survey revealed that participants with high and low initial interest in science differed significantly on remembering about elementary school science and involvement in science related activities in childhood/youth. Analysis of daily ratings of each hands-on activity on motivational qualities (fun, interest, and learning) indicated that there were significant differences in motivational quality of the activities by level of inquiry with higher levels of inquiry rated more positively. 
Pre/post surveys indicated that participants increased in interest in science and a number of variables reflecting more positive feelings about science and science teaching. Regression analysis found that the best predictors for interest in teaching science were experiencing fun activities in the science methods course followed by the interest participants brought to the course. This study highlights the motivational aspects of the methods course in developing interest in science and interest in teaching science.
Final Report: Archiving Data to Support Data Synthesis of DOE Sponsored Elevated CO2 Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Megonigal, James; Lu, Meng
Over the last three decades DOE made a large investment in field-scale experiments in order to understand the role of terrestrial ecosystems in the global carbon cycle, and to forecast how carbon cycling will change over the next century. The Smithsonian Environmental Research Center received one of the first awards in this program and managed two long-term studies (25 years and 10 years) with a total of approximately $10 million of support from DOE, and many more millions leveraged from the Smithsonian Institution and agencies such as NSF. The present DOE grant was based on the premise that such a large investment demands a proper synthesis effort so that the full potential of these experiments is realized through data analysis and modeling. The goal of this grant was to archive legacy data from two major elevated carbon dioxide experiments in DOE databases, and to engage in synthesis activities using these data. Both goals were met. All datasets deemed a high priority for data synthesis and modeling were prepared for archiving and analysis. Many of these datasets were deposited in DOE's CDIAC, while others are being held at the Oak Ridge National Lab and the Smithsonian Institution until they can be received by DOE's new ESS-DIVE system at Berkeley Lab. Most of the effort was invested in researching and re-constituting high-quality data sets from a 30-year elevated CO2 experiment. Using these data, the grant produced products that are already benefiting climate change science, including the publication of new coastal wetland allometry equations based on 9,771 observations, public posting of dozens of datasets, metadata and supporting codes from long-term experiments at the Global Change Research Wetland, and publication of two synthetic data papers on scrub oak forest responses to elevated CO2.
In addition, three papers are in review or nearing submission reporting unexpected long-term patterns in ecosystem responses to elevated CO2 and nitrogen in a coastal wetland.
Lapchuk, Anatoliy; Prygun, Olexandr; Fu, Minglei; Le, Zichun; Xiong, Qiyuan; Kryuchyn, Andriy
2017-06-26
We present the first general theoretical description of speckle suppression efficiency based on an active diffractive optical element (DOE). The approach is based on spectral analysis of diffracted beams and a coherent matrix. Analytical formulae are obtained for the dispersion of speckle suppression efficiency using different DOE structures and different DOE activation methods. We show that a one-sided 2D DOE structure has a smaller speckle suppression range than a two-sided 1D DOE structure. Both DOE structures have a sufficient speckle suppression range to suppress low-order speckles in the entire visible range, but only the two-sided 1D DOE can suppress higher-order speckles. We also show that a linearly shifted 2D DOE in a laser projector with a large numerical aperture has higher effective speckle suppression efficiency than methods using switching or step-wise shifted DOE structures. The generalized theoretical models elucidate the mechanism and practical realization of speckle suppression.
Inductive Learning: Does Interleaving Exemplars Affect Long-Term Retention?
ERIC Educational Resources Information Center
Zulkiply, Norehan; Burt, Jennifer S.
2013-01-01
Purpose: The present study investigated whether or not the benefits of interleaving of exemplars from several categories vary with retention interval in inductive learning. Methodology: Two experiments were conducted using paintings (Experiment 1) and textual materials (Experiment 2), and the experiments used a mixed factorial design. Forty…
Does clinical experience affect the reproducibility of cervical vertebrae maturation method?
Rongo, Roberto; Valleta, Rosa; Bucci, Rosaria; Bonetti, Giulio Alessandri; Michelotti, Ambrosina; D'Antò, Vincenzo
2015-09-01
To assess interobserver and intraobserver reproducibility of the cervical vertebrae maturation method (CVMM) among three panels of judges with different levels of orthodontic experience (OE). Fifty individual lateral cephalograms of good quality with complete visualization of cervical vertebrae 1 to 4 were selected. Thirty clinicians, divided according to their OE into three groups (junior group, JU, OE ≤ 1 year; postgraduate group, PG, 2 ≤ OE ≤ 4 years; specialist group, SP, OE ≥ 7 years), evaluated the cephalograms in two sessions (T1 and T2) 3 weeks apart. Kendall's W and weighted Cohen's kappa (κ) coefficients were computed to assess interobserver and intraobserver agreement. The level of significance was set at P < .05. For both the interobserver and the intraobserver datasets, the percentage of perfect agreement (PPA) and the number of stages apart for each disagreement were calculated. Kendall's W at T1 was SP = 0.61, PG = 0.70, and JU = 0.87; at T2 it was SP = 0.78, PG = 0.85, and JU = 0.86. The percentage of total interobserver perfect agreement (Inter-PPA) was 42.3% at T1 and 46.3% at T2. The JU group had the highest Cohen's κ coefficient at 0.78, while the PG and SP groups each had coefficients of 0.64. The percentage of total intraobserver perfect agreement (Intra-PPA) was 54.2%. The reproducibility of the method was not improved by the level of orthodontic experience. The group with the lowest level of orthodontic experience had the best performance.
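The weighted κ statistic used here credits near-miss stagings more than large disagreements. A minimal, generic implementation with linear weights (not the paper's exact computation, which may differ in weighting scheme) can be sketched as:

```python
# Linearly weighted Cohen's kappa between two raters assigning ordinal
# stages 0..n_stages-1. Disagreements are weighted by |i - j|, so being
# one stage apart is penalized less than being three stages apart.
from collections import Counter

def weighted_kappa(r1, r2, n_stages):
    n = len(r1)
    obs = Counter(zip(r1, r2))                # observed rating pairs
    p1, p2 = Counter(r1), Counter(r2)         # marginal stage counts
    num = den = 0.0
    for i in range(n_stages):
        for j in range(n_stages):
            w = abs(i - j) / (n_stages - 1)               # linear weight
            num += w * obs.get((i, j), 0) / n             # observed disagreement
            den += w * (p1.get(i, 0) / n) * (p2.get(j, 0) / n)  # chance disagreement
    return 1.0 - num / den

# Two raters staging five cephalograms on a 6-stage scale, disagreeing
# by one stage on the last case:
print(round(weighted_kappa([0, 1, 2, 3, 3], [0, 1, 2, 3, 2], 6), 3))  # 0.839
```

Perfect agreement yields κ = 1, and κ = 0 means agreement no better than chance, which is why the JU group's 0.78 indicates better intraobserver consistency than the 0.64 of the other groups.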
The morbidity experience of air traffic control personnel, 1967-1977.
DOT National Transportation Integrated Search
1978-04-01
The morbidity experience of 28,086 air traffic controllers has been examined from 1967-77 with particular emphasis given the potential effects of job demands on ATC Health. The morbidity experience of air traffic controllers does not appear excessive...
[The contact-free control of the functional state: the experience of its practical use].
Frolov, M V
1992-01-01
Application of the contactless monitoring method creates no psychological or physical discomfort for the subject and allows diagnostics to be carried out continuously over long periods, either overtly or covertly. These properties make the method effective in practical use. This paper presents data from studies of the functional state of a human operator based on eyelid-movement parameters recorded contactlessly in infrared light, together with the results of diagnosing patients with depression from the characteristics of their speech recorded with a microphone. All of these data were obtained in practical application.
Optical sampling by laser cavity tuning.
Hochrein, Thomas; Wilk, Rafal; Mei, Michael; Holzwarth, Ronald; Krumbholz, Norman; Koch, Martin
2010-01-18
Most time-resolved optical experiments rely either on external mechanical delay lines or on two synchronized femtosecond lasers to achieve a defined temporal delay between two optical pulses. Here, we present a new method which does not require any external delay lines and uses only a single femtosecond laser. It is based on the cross-correlation of an optical pulse with a subsequent pulse from the same laser. Temporal delay between these two pulses is achieved by varying the repetition rate of the laser. We validate the new scheme by a comparison with a cross-correlation measurement carried out with a conventional mechanical delay line.
Symmetry structure in neutron deficient xenon nuclei
NASA Astrophysics Data System (ADS)
Govil, I. M.
1998-12-01
The paper describes measurements of the lifetimes of the excited states in the ground state band of the neutron-deficient Xe nuclei (122,124Xe) by the recoil distance method (RDM). The lifetime of the 2+ state in 122Xe agrees with the RDM measurements, but for 124Xe it does not; it agrees instead with the earlier Coulomb-excitation experiment. The experimental results are compared with existing theories to understand the changes in the symmetry structure of the Xe nuclei as the neutron number decreases from N=76 (130Xe) to N=64 (118Xe).
International Land Model Benchmarking (ILAMB) Workshop Report, Technical Report DOE/SC-0186
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M.; Koven, Charles D.; Kappel-Aleks, Gretchen
2016-11-01
As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
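The alternating-minimization idea can be sketched on a deliberately simplified model: observations y_i ≈ c_i · a_i · q, where q is the scalar release rate, a_i the model-predicted concentration per unit release, and c_i a per-measurement correction factor regularized toward 1. The objective and update rules below are illustrative assumptions, not the paper's formulation.

```python
# Alternate between (1) solving for the release rate q by least squares
# with the corrections c fixed, and (2) updating each correction c_i by
# a ridge-style closed form (penalty lam pulls c_i toward 1) with q fixed.
# Objective: sum_i (y_i - c_i*a_i*q)^2 + lam * sum_i (c_i - 1)^2.

def alternating_fit(a, y, lam=0.5, iters=50):
    c = [1.0] * len(a)
    q = 1.0
    for _ in range(iters):
        q = (sum(ci * ai * yi for ci, ai, yi in zip(c, a, y))
             / sum((ci * ai) ** 2 for ci, ai in zip(c, a)))
        c = [(ai * q * yi + lam) / ((ai * q) ** 2 + lam)
             for ai, yi in zip(a, y)]
    return q, c

a = [1.0, 2.0, 0.5]   # model-predicted concentration per unit release (synthetic)
y = [2.1, 3.8, 1.1]   # observations (synthetic; roughly y = 2*a with noise)
q, c = alternating_fit(a, y)
print(round(q, 2))
```

Each sub-problem is convex given the other variable, so the objective decreases monotonically; the real method additionally structures the corrections as a coefficient matrix covering deterministic and stochastic deviations.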
NASA Astrophysics Data System (ADS)
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) 
were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties/limitations of different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it also does not fully validate PP, since we do not learn whether we have the right predictors and whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2) and 3), which will be open for participation during the present year.
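The fold layout of Experiment 1 follows directly from the description: 1979-2008 split into five consecutive 6-year blocks, each serving once as the test period.

```python
# 5-fold cross-validation with consecutive 6-year test blocks over 1979-2008.
years = list(range(1979, 2009))              # 30 years inclusive
folds = [years[i:i + 6] for i in range(0, 30, 6)]

for k, test in enumerate(folds, 1):
    train = [y for y in years if y not in test]
    print(f"fold {k}: test {test[0]}-{test[-1]}, train on {len(train)} years")
```

Using consecutive blocks rather than random years keeps each test period temporally coherent, which matters when validating temporal aspects of the downscaled series.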
Kormány, Róbert; Molnár, Imre; Fekete, Jenő
2017-02-20
An older method for terazosin was reworked in order to reduce the analysis time from 90 min (2×45 min) to below 5 min. The method in the European Pharmacopoeia (Ph.Eur.) investigates the specified impurities separately. The reason for the separate methods is that the retention of two impurities is not adequate in reversed phase, not even with 100% water. Therefore ion-pair chromatography has to be applied, and since those two impurities absorb at low UV wavelengths, they had to be analyzed by a different method than the other specified impurities. In our new method we could improve the retention by elevating the pH, using a new type of stationary phase available for high-pH applications. A detection wavelength could also be selected that is appropriate for the detection and quantification of all impurities. Method development is the bottleneck of liquid chromatography even today, when more and more fast chromatographic systems are used. Expert knowledge with intelligent programs is available to reduce the time of method development and to offer extra information about the robustness of the separation. Design of Experiments (DoE) for simultaneous optimization of gradient time (tG), temperature (T) and ternary eluent composition (tC) requires 12 experiments. A good alternative way to identify a certain peak in different chromatograms is the molecular mass of the compound, due to its high specificity. Liquid Chromatography-Mass Spectrometry (LC-MS) is now a routine technique and is increasingly available in laboratories. In our experiment the DryLab4 method development software (Version 4.2) was used for resolution and retention modeling. In recent versions of the software, (m/z) MS data can be used alongside the UV-peak-area-tracking technology. The modelled and measured chromatograms showed excellent correlation. The average retention time deviation was ca. 0.5 s and there was no difference between the predicted and measured Rs,crit values. Copyright © 2016. 
Published by Elsevier B.V.
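A 12-run design over three factors, as in the tG/T/tC optimization above, is typically a 2 × 2 × 3 grid: two levels each of gradient time and temperature, three levels of ternary composition. The level values below are illustrative placeholders, not the study's actual settings.

```python
# Enumerate a 2 x 2 x 3 full-factorial design: 12 runs covering every
# combination of the three factor levels.
from itertools import product

tG_levels = [15, 45]        # gradient time, min (assumed levels)
T_levels = [30, 60]         # column temperature, deg C (assumed levels)
tC_levels = [0, 50, 100]    # ternary composition, % second modifier (assumed)

runs = list(product(tG_levels, T_levels, tC_levels))
print(len(runs))            # 12
for i, (tg, t, tc) in enumerate(runs, 1):
    print(f"run {i:2d}: tG={tg} min, T={t} C, tC={tc} %")
```

Retention-modeling software fits its model to the responses from these 12 runs and then predicts chromatograms anywhere inside the design space.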
Camouflaging endothelial cells: does it prolong graft survival?
Stuhlmeier, K M; Lin, Y
1999-08-05
Camouflaging antigens on the surface of cells seems an appealing way to prevent activation of the immune system. We explored the possibility of preventing hyperacute rejection by chemically camouflaging endothelial cells (EC). In vitro as well as in vivo experiments were performed. First, the ability of mPEG coating to prevent antibody-antigen interactions was evaluated. Second, we tested the degree to which mPEG coating prevents activation of EC by stimuli such as TNF-alpha and LPS. Third, in vivo experiments were performed to test the ability of mPEG coating to prolong xenograft survival. We demonstrate that binding of several antibodies to EC or serum proteins can be inhibited by mPEG. Furthermore, binding of TNF-alpha as well as LPS to EC is blocked since mPEG treatment of EC inhibits the subsequent up-regulation of E-selectin by these stimuli. However, in vivo experiments revealed that currently this method alone is not sufficient to prevent hyperacute rejection.
Meta-analysis of field experiments shows no change in racial discrimination in hiring over time.
Quillian, Lincoln; Pager, Devah; Hexel, Ole; Midtbøen, Arnfinn H
2017-10-10
This study investigates change over time in the level of hiring discrimination in US labor markets. We perform a meta-analysis of every available field experiment of hiring discrimination against African Americans or Latinos (n = 28). Together, these studies represent 55,842 applications submitted for 26,326 positions. We focus on trends since 1989 (n = 24 studies), when field experiments became more common and improved methodologically. Since 1989, whites receive on average 36% more callbacks than African Americans, and 24% more callbacks than Latinos. We observe no change in the level of hiring discrimination against African Americans over the past 25 years, although we find modest evidence of a decline in discrimination against Latinos. Accounting for applicant education, applicant gender, study method, occupational groups, and local labor market conditions does little to alter this result. Contrary to claims of declining discrimination in American society, our estimates suggest that levels of discrimination remain largely unchanged, at least at the point of hire.
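A headline figure such as "36% more callbacks" is a ratio of the two groups' callback rates. The counts below are made up purely to illustrate the arithmetic; they are not data from any study in the meta-analysis.

```python
# Relative callback rate between two applicant groups in a resume audit.
def callback_ratio(calls_a, apps_a, calls_b, apps_b):
    """Ratio of group A's callback rate to group B's."""
    return (calls_a / apps_a) / (calls_b / apps_b)

# Hypothetical audit: 340 callbacks from 2000 applications with
# white-sounding names vs 250 from 2000 with Black-sounding names.
r = callback_ratio(340, 2000, 250, 2000)
print(f"{(r - 1) * 100:.0f}% more callbacks")  # 36% more callbacks
```

A meta-analysis then pools such ratios across studies, weighting each by its precision, before testing whether the pooled ratio trends over time.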
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodnarczuk, M.
In this paper, I describe a conceptual framework that uses DOE Order 5700.6C and more than 140 other DOE Orders as an integrated management system -- but I describe it within the context of the broader sociological and cultural issues of doing research at DOE funded facilities. The conceptual framework has two components. The first involves an interpretation of the 10 criteria of DOE 5700.6C that is tailored for a research environment. The second component involves using the 10 criteria as functional categories that orchestrate and integrate the other DOE Orders into a total management system. The Fermilab approach aims at reducing (or eliminating) the redundancy and overlap within the DOE Orders system at the contractor level.
Is multiple-sequence alignment required for accurate inference of phylogeny?
Höhl, Michael; Ragan, Mark A
2007-04-01
The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. 
Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
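The word-based (k-mer) methods compared above share a common core: count fixed-length substrings and compute a distance between the resulting frequency vectors. The sketch below illustrates that core idea only; it is not any specific method benchmarked in the study.

```python
# Alignment-free k-mer distance: count k-mers, normalize to frequencies,
# and take the Euclidean distance between the two frequency vectors.
from collections import Counter
from math import sqrt

def kmer_freqs(seq, k):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

def kmer_distance(s1, s2, k=3):
    f1, f2 = kmer_freqs(s1, k), kmer_freqs(s2, k)
    words = set(f1) | set(f2)
    return sqrt(sum((f1.get(w, 0) - f2.get(w, 0)) ** 2 for w in words))

print(kmer_distance("ACGTACGTAC", "ACGTACGTAC"))  # 0.0
```

No positional correspondence between the sequences is ever computed, which is what makes such methods fast but, as the study finds, generally less accurate than distances estimated from multiple alignments.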
Recoil distance lifetime measurements in 122,124Xe
NASA Astrophysics Data System (ADS)
Govil, I. M.; Kumar, A.; Iyer, H.; Li, H.; Garg, U.; Ghugre, S. S.; Johnson, T.; Kaczarowski, R.; Kharraja, B.; Naguleswaran, S.; Walpe, J. C.
1998-02-01
Lifetimes of the low-lying excited states in 122,124Xe were measured using the recoil-distance Doppler-shift technique. The reactions 110Pd(16O,4n)122Xe and 110Pd(18O,4n)124Xe at a beam energy of 66 MeV were used for this experiment. The lifetimes of the 2+, 4+, 6+, and 8+ states of the ground state band were extracted using the computer code LIFETIME, including corrections due to side feeding and nuclear deorientation effects. The lifetime of the 2+ state in 122Xe agrees with the recoil distance method (RDM) measurements, but for 124Xe it does not; it agrees instead with the Coulomb-excitation experiment. The measured B(E2) values for both nuclei are compared with the standard algebraic and multishell models.
Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.; Wang, C. W.
1986-01-01
Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, we design an experiment and solve the same problem under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, we present results of practical, rather than statistical, significance. Our implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than our implementation of a gradient search procedure does with our objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.
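A minimal sketch of the cyclic coordinate idea described above: maximize the minimum margin over several test points by trying moves along one coordinate at a time, shrinking the step when a full cycle fails to improve. The objective here is an invented two-paraboloid stand-in for the minimum-C/I computation, not the synthesis model from the study.

```python
def cyclic_coordinate_search(f, x0, step=1.0, shrink=0.5, tol=1e-3, max_iter=200):
    """Maximize f by trying +/- step moves along each coordinate in turn."""
    x = list(x0)
    s = step
    for _ in range(max_iter):
        improved = False
        for d in range(len(x)):
            for delta in (s, -s):
                trial = x[:]
                trial[d] += delta
                if f(trial) > f(x):
                    x, improved = trial, True
        if not improved:
            s *= shrink          # shrink the step when a full cycle stalls
            if s < tol:
                break
    return x

def min_margin(x):
    # toy stand-in for "minimum C/I over test points": min of two concave bowls
    return min(10 - (x[0] - 3) ** 2 - (x[1] - 1) ** 2,
               10 - (x[0] - 2) ** 2 - (x[1] - 2) ** 2)

best = cyclic_coordinate_search(min_margin, [0.0, 0.0])
```

On this nonsmooth max-min objective the search stalls at a kink (value 9.0 at (2, 1)), short of the true optimum of 9.5 at (2.5, 1.5), which matches the abstract's caveat that neither procedure is likely to find a globally optimal solution.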
On the micro-indentation of plant cells in a tissue context
NASA Astrophysics Data System (ADS)
Mosca, Gabriella; Sapala, Aleksandra; Strauss, Soeren; Routier-Kierzkowska, Anne-Lise; Smith, Richard S.
2017-02-01
The effect of geometry on cell stiffness measured with micro-indentation techniques has been explored in single cells; however, it is unclear whether results on single cells can be readily transferred to indentation experiments performed on a tissue in vivo. Here we explored this question by using simulation models of osmotic treatments and micro-indentation experiments on 3D multicellular tissues with the finite element method. We found that the cellular context does affect measured cell stiffness, and that several cells of context in each direction are required for optimal results. We applied the model to micro-indentation data obtained with cellular force microscopy on the sepal of A. thaliana, and found that differences in measured stiffness could be explained by cellular geometry and do not necessarily indicate differences in cell wall material properties or turgor pressure.
NASA Astrophysics Data System (ADS)
Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence
2017-06-01
We present a novel method for optimizing parameters of the Ignition and Growth (I&G) reactive flow model for high explosives. The I&G model can yield accurate predictions of experimental observations; however, calibrating the model is a time-consuming task, especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for the HMX-based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between calculated and experimental shock time of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC LLNL-ABS-724898.
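The differential evolution step can be sketched in a few lines. The code below fits two parameters of an invented arrival-time surrogate to synthetic gauge data by minimizing the squared time-of-arrival error; the surrogate function and its parameters (v, k) are assumptions standing in for the ALE3D simulation and are not the I&G model.

```python
import random

def differential_evolution(loss, bounds, pop=20, f=0.7, cr=0.9, gens=150, seed=3):
    """Minimal DE/rand/1/bin loop: mutate, crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    errs = [loss(x) for x in xs]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            trial = []
            for d in range(dim):
                v = xs[a][d] + f * (xs[b][d] - xs[c][d]) if rng.random() < cr else xs[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))   # clip to the bounds
            e = loss(trial)
            if e <= errs[i]:                        # greedy one-to-one selection
                xs[i], errs[i] = trial, e
    i = min(range(pop), key=errs.__getitem__)
    return xs[i], errs[i]

# hypothetical surrogate: arrival time at gauge depth x for speed v, growth k
def arrival(x, v, k):
    return x / (v + k * x)

depths = [1.0, 2.0, 3.0, 4.0]
measured = [arrival(x, 2.0, 0.5) for x in depths]   # synthetic "experiment"

def loss(params):
    v, k = params
    return sum((arrival(x, v, k) - t) ** 2 for x, t in zip(depths, measured))

best, err = differential_evolution(loss, bounds=[(0.5, 5.0), (0.0, 2.0)])
```

With the synthetic data generated at v = 2.0, k = 0.5, the DE loop recovers both parameters; in the study the same greedy loop wraps full hydrocode runs, so each `loss` call is far more expensive.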
DOT National Transportation Integrated Search
2016-12-01
This study set out to examine the following diverse questions regarding cooperative adaptive cruise control (CACC) use: - Does CACC reduce driver workload relative to manual gap control? - Does CACC increase the probability of driver distraction rela...
Adaptive design of visual perception experiments
NASA Astrophysics Data System (ADS)
O'Connor, John D.; Hixson, Jonathan; Thomas, James M., Jr.; Peterson, Matthew S.; Parasuraman, Raja
2010-04-01
Meticulous experimental design may not always prevent confounds from affecting experimental data acquired during visual perception experiments. Although experimental controls reduce the potential effects of foreseen sources of interference, interaction, or noise, they are not always adequate for preventing the confounding effects of unforeseen forces. Visual perception experimentation is vulnerable to unforeseen confounds because of the nature of the associated cognitive processes involved in the decision task. Some confounds are beyond the control of experimentation, such as what a participant does immediately prior to experimental participation, or the participant's attitude or emotional state. Other confounds may occur through ignorance of practical control methods on the part of the experiment's designer. The authors conducted experiments related to experimental fatigue and initially achieved significant results that were, upon re-examination, attributable to a lack of adequate controls. Re-examination of the original results and the processes and events that led to them yielded a second experimental design with more experimental controls and significantly different results. The authors propose that designers of visual perception experiments can benefit from planning to use a test-fix-test or adaptive experimental design cycle, so that unforeseen confounds in the initial design can be remedied.
A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments
NASA Astrophysics Data System (ADS)
Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue
2013-03-01
The Penn State ``Cyber Wind Facility'' (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which ``cyber field experiments'' are designed and ``cyber data'' collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the ``facility'' is akin to a high-tech wind tunnel with controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is built from state-of-the-art, high-accuracy geometry and grid design, numerical methods, and high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in separated boundary layer, blade, and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments that can capture wider ranges of meteorological events, but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.
Shi, Lei; Wan, Youchuan; Gao, Xianjun
2018-01-01
In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
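The GATS idea can be sketched on a toy objective: a prematurity index triggers tabu search on the fitter individuals and heavy mutation on the weaker ones. Everything below is a hypothetical stand-in (the fitness function is just the number of selected features, and the prematurity index is one simple choice), not the authors' classifier-driven implementation.

```python
import random

def fitness(bits):
    # toy stand-in for the classification accuracy of a feature subset
    return sum(bits)

def prematurity(scores):
    # hypothetical prematurity index: fraction of the population tied at the best
    return scores.count(max(scores)) / len(scores)

def tabu_search(bits, steps=8, tabu_len=4):
    """Local refinement: best admissible single-bit flip, recent flips tabu."""
    cur, best = bits[:], bits[:]
    tabu = []
    for _ in range(steps):
        moves = [i for i in range(len(cur)) if i not in tabu]
        if not moves:
            break
        flip = max(moves, key=lambda i: fitness(cur[:i] + [1 - cur[i]] + cur[i + 1:]))
        cur[flip] = 1 - cur[flip]
        tabu = (tabu + [flip])[-tabu_len:]
        if fitness(cur) > fitness(best):
            best = cur[:]
    return best

def ga_ts(n=12, pop_size=10, gens=20, threshold=0.5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        if prematurity([fitness(p) for p in pop]) >= threshold:
            half = pop_size // 2
            pop[:half] = [tabu_search(p) for p in pop[:half]]   # refine the fit
            for p in pop[half:]:                                # diversify the rest
                for i in range(n):
                    if rng.random() < 0.3:
                        p[i] = 1 - p[i]
        else:
            for i in range(pop_size // 2, pop_size):            # one-point crossover
                a = pop[rng.randrange(pop_size // 2)]
                b = pop[rng.randrange(pop_size // 2)]
                cut = rng.randrange(1, n)
                pop[i] = a[:cut] + b[cut:]
    return max(pop, key=fitness)
```

The key design choice mirrored here is that tabu search is spent only on high-fitness individuals, while low-fitness ones receive high-probability mutation to restore diversity after premature convergence is detected.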
Improved lossless intra coding for H.264/MPEG-4 AVC.
Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J
2006-09-01
A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
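The sample-wise DPCM idea can be shown in a few lines: each reconstructed sample serves as the predictor for the next, and only residuals are stored. This is a one-dimensional, left-neighbor-only sketch with an assumed initial predictor of 128; it is not the H.264/AVC multidirectional predictor, and the block-based residual entropy coding stage is omitted.

```python
def dpcm_encode(samples, init=128):
    """Sample-wise DPCM: predict each sample from the previous reconstruction."""
    pred, residuals = init, []
    for s in samples:
        residuals.append(s - pred)
        pred = s          # lossless: the reconstruction equals the sample itself
    return residuals

def dpcm_decode(residuals, init=128):
    pred, out = init, []
    for r in residuals:
        s = pred + r
        out.append(s)
        pred = s
    return out
```

Because prediction always uses the exactly reconstructed previous sample, the decoder reproduces the input bit for bit, which is what makes the scheme lossless; the apparent pipeline dependency (each sample waits on its neighbor) is the complexity concern the abstract addresses.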
Measuring particle charge in an rf dusty plasma
NASA Astrophysics Data System (ADS)
Fung, Jerome; Liu, Bin; Goree, John; Nosenko, Vladimir
2004-11-01
A dusty plasma is an ionized gas containing micron-size particles of solid matter. A particle gains a large negative charge by collecting electrons and ions from the plasma. In a gas discharge, particles can be levitated by the sheath electric field above a horizontal planar electrode. Most dusty plasma experiments require a knowledge of the particle charge, which is a key parameter for all interactions with other particles and the plasma electric field. Several methods have been developed in the literature to measure the charge. The vertical resonance method uses Langmuir probe measurements of the ion density and video camera measurements of the amplitude of vertical particle oscillations, which are excited by modulating the rf voltage. Here, we report a new method that is a variation of the vertical resonance method. It uses the plasma potential and particle height, which can be measured more accurately than the ion density. We tested this method and compared the resulting charge to values obtained using the original resonance method as well as sound speed methods. Work supported by an NSF REU grant, NASA and DOE.
Smallwood, Jonathan; Nind, Louise; O'Connor, Rory C
2009-03-01
Two experiments employed experience sampling to examine the factors associated with a prospective and retrospective focus during mind wandering. Experiment One explored the contribution of working memory and indicated that participants generally prospect when the task does not require continuous monitoring. Experiment Two demonstrated that in the context of reading, interest in what was read suppressed both past- and future-related task-unrelated thought. Moreover, in disinterested individuals the temporal focus during mind wandering depended on the amount of experience with the topic matter: less experienced individuals tended to prospect, while more experienced individuals tended to retrospect. Together these results suggest that during mind wandering participants are inclined to prospect as long as the task does not require their undivided attention, and raise the intriguing possibility that autobiographical associations with the current task environment have the potential to cue the disinterested mind.
Does the knowledge of emergency contraception affect its use among high school adolescents?
Chofakian, Christiane Borges do Nascimento; Borges, Ana Luiza Vilela; Sato, Ana Paula Sayuri; Alencar, Gizelton Pereira; Santos, Osmara Alves dos; Fujimori, Elizabeth
2016-01-01
This study aimed to test how knowledge on emergency contraception (according to age at sexual initiation, type of school, and knowing someone that has already used emergency contraception) influences the method's use. This was a cross-sectional study in a probabilistic sample of students 15-19 years of age enrolled in public and private middle schools in a medium-sized city in Southeast Brazil (n = 307). Data were collected in 2011 using a self-administered questionnaire. A structural equations model was used for the data analysis. Considering age at sexual initiation and type of school, knowledge of emergency contraception was not associated with its use, but knowing someone that had used the method showed a significant mean effect on use of emergency contraception. Peer group conversations on emergency contraception appear to have greater influence on use of the method than knowledge itself, economic status, or sexual experience.
Application of the implicit MacCormack scheme to the PNS equations
NASA Technical Reports Server (NTRS)
Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.
1983-01-01
The two-dimensional parabolized Navier-Stokes equations are solved using MacCormack's (1981) implicit finite-difference scheme. It is shown that this method for solving the parabolized Navier-Stokes equations does not require the inversion of block tridiagonal systems of algebraic equations and allows the original explicit scheme to be employed in those regions where implicit treatment is not needed. The finite-difference algorithm is discussed and the computational results for two laminar test cases are presented. Results obtained using this method for the case of a flat plate boundary layer are compared with those obtained using the conventional Beam-Warming scheme, as well as those obtained from a boundary layer code. The computed results for a more severe test of the method, the hypersonic flow past a 15 deg compression corner, are found to compare favorably with experiment and a numerical solution of the complete Navier-Stokes equations.
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which does not need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
NASA Astrophysics Data System (ADS)
Kuruliuk, K. A.; Kulesh, V. P.
2016-10-01
An optical videogrammetry method using one digital camera for non-contact measurements of geometric shape parameters, position, and motion of models and structural elements of aircraft in experimental aerodynamics was developed. Tests using this method to measure six components (three linear and three angular) of the real position of a helicopter device in wind tunnel flow were conducted. The distance between the camera and the test object was 15 meters. It was shown in practice that, under aerodynamic experiment conditions, the instrumental measurement error (standard deviation) for angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at the minimum rotor thrust the deviations are systematic and generally lie within ±0.2°. Deviations of the angle values grow with increasing rotor thrust.
On an additive partial correlation operator and nonparametric estimation of graphical models.
Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu
2016-09-01
We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.
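For readers unfamiliar with the classical quantity being generalized, the sketch below computes an ordinary linear partial correlation by correlating regression residuals. The additive operator in the paper extends this beyond linear relations; the toy data here are invented (X and Y both driven by Z, plus orthogonal perturbations) and are not from the DREAM4 analysis.

```python
def mean(v):
    return sum(v) / len(v)

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def residualize(y, z):
    # residuals of the least-squares regression of y on z (with intercept)
    mz, my = mean(z), mean(y)
    beta = sum((a - mz) * (b - my) for a, b in zip(z, y)) / sum((a - mz) ** 2 for a in z)
    return [b - my - beta * (a - mz) for a, b in zip(z, y)]

def partial_corr(x, y, z):
    # correlation of X and Y after removing the linear effect of Z from each
    return corr(residualize(x, z), residualize(y, z))

# toy data: X and Y are strongly correlated only because both depend on Z
z = list(range(200))
nx = [(1, -1)[i % 2] for i in range(200)]
ny = [(1, 1, -1, -1)[i % 4] for i in range(200)]
x = [2 * zi + nx[i] for i, zi in enumerate(z)]
y = [-zi + ny[i] for i, zi in enumerate(z)]
```

Here `corr(x, y)` is close to -1 while `partial_corr(x, y, z)` is near zero: conditioning on Z removes the spurious dependence, which is the behavior the additive partial correlation operator reproduces without assuming linearity or Gaussianity.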
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarm, Samuel C.; Mburu, Sarah N.; Kolli, Ratna P.
Cast duplex stainless steel piping in light water nuclear reactors experiences thermal aging embrittlement during operational service. Interest in extending the operational life to 80 years requires an increased understanding of the microstructural evolution and corresponding changes in mechanical behavior. We analyze the evolution of the microstructure during thermal aging of cast CF-3 and CF-8 stainless steels using electron microscopy and atom probe tomography. The evolution of the mechanical properties is measured concurrently by mechanical methods such as tensile tests, Charpy V-notch tests, and instrumented nanoindentation. A microstructure-based finite element method model is developed and utilized in conjunction with the characterization results in order to correlate the local stress-strain effects in the microstructure with the bulk measurements. This work is supported by the DOE Nuclear Energy University Programs (NEUP), contract number DE-NE0000724.
Silver, John R
2011-08-01
What's known on the subject? and What does the study add? Prior to the First World War, traumatic injuries to the spinal cord rapidly led to death from severe infections of the bladder. During the Second World War, Ludwig Guttmann resurrected the use of intermittent catheterisation at Stoke Mandeville Hospital, by meticulous attention to detail and was so successful, that this method was introduced into general urological practice. Historical review of the management of the bladder in patients with spinal injuries. Spinal injury patients--literature review--personal experience at Stoke Mandeville Hospital. Review of the different methods of catheterisation from the 19th century to today. Methods learned from the management of the bladder of spinal injuries patients were adopted into mainstream urology. © 2011 THE AUTHOR; BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.
Guo, Feng; Zhou, Weijie; Li, Peng; Mao, Zhangming; Yennawar, Neela H; French, Jarrod B; Huang, Tony Jun
2015-06-01
Advances in modern X-ray sources and detector technology have made it possible for crystallographers to collect usable data on crystals of only a few micrometers or less in size. Despite these developments, sample handling techniques have significantly lagged behind and often prevent the full realization of current beamline capabilities. In order to address this shortcoming, a surface acoustic wave-based method for manipulating and patterning crystals is developed. This method, which does not damage the fragile protein crystals, can precisely manipulate and pattern micrometer and submicrometer-sized crystals for data collection and screening. The technique is robust, inexpensive, and easy to implement. This method not only promises to significantly increase efficiency and throughput of both conventional and serial crystallography experiments, but will also make it possible to collect data on samples that were previously intractable. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Make no mistake—errors can be controlled*
Hinckley, C
2003-01-01
Traditional quality control methods identify "variation" as the enemy. However, the control of variation by itself can never achieve the remarkably low non-conformance rates of world class quality leaders. Because the control of variation does not achieve the highest levels of quality, an inordinate focus on these techniques obscures key quality improvement opportunities and results in unnecessary pain and suffering for patients, and embarrassment, litigation, and loss of revenue for healthcare providers. Recent experience has shown that mistakes are the most common cause of problems in health care as well as in other industrial environments. Excessive product and process complexity contributes to both excessive variation and unnecessary mistakes. The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing. Using these mistake proofing techniques, virtually every mistake and non-conformance can be controlled at a fraction of the cost of traditional quality control methods. PMID:14532368
Compensation of Horizontal Gravity Disturbances for High Precision Inertial Navigation
Cao, Juliang; Wu, Meiping; Lian, Junxiang; Cai, Shaokun; Wang, Lin
2018-01-01
Horizontal gravity disturbances are an important factor that affects the accuracy of inertial navigation systems in long-duration ship navigation. In this paper, from the perspective of the coordinate system and vector calculation, the effects of horizontal gravity disturbance on the initial alignment and navigation calculation are simultaneously analyzed. Horizontal gravity disturbances cause the navigation coordinate frame built in initial alignment to not be consistent with the navigation coordinate frame in which the navigation calculation is implemented. The mismatching of coordinate frame violates the vector calculation law, which will have an adverse effect on the precision of the inertial navigation system. To address this issue, two compensation methods suitable for two different navigation coordinate frames are proposed, one of the methods implements the compensation in velocity calculation, and the other does the compensation in attitude calculation. Finally, simulations and ship navigation experiments confirm the effectiveness of the proposed methods. PMID:29562653
Stabilisation of bank slopes that are prone to liquefaction in ecologically sensitive areas.
Nestler, P; Stoll, R D
2001-01-01
A consequence of lignite stripping in the Lusatia mining district (East Germany) is the backfilling of dumps that mainly consist of low-compacted fine and medium-grained sands. When the ground-water table, which had been lowered while stripping the coal, is rising again, these dumps might be affected by a settlement flow due to the liquefaction of soils. Common methods for stabilisation as, for instance, blasting or vibrator-jetting deep compaction, are not very useful in ecologically sensitive areas, where dumps have been afforested and embankment areas of residual lakes have developed into highly valuable biotopes. A new so-called air-impulse method in combination with directional horizontal drilling has been developed, which does not have a considerably negative impact on the vegetation during compaction. The experience gained during the first employment of this method at the lake "Katja", a residual lake of lignite stripping, is presented in this paper.
Vat rates on medical devices: foreign experience and Ukrainian practice.
Pashkov, Vitalii; Hutorova, Nataliia; Harkusha, Andrii
2017-01-01
In Ukraine, differentiated VAT rates are a matter of debate. To date, the Cabinet has approved a list of medical products that has been changed three times, each revision altering the VAT rates for specific products. The European Union provides a different method of regulating VAT rates on medical devices. This demonstrates the relevance of this study. Comparative analysis of Ukrainian and European Union legislation, based on dialectical, comparative, analytic, synthetic, and comprehensive research methods, was used in this article. In Ukraine, the general VAT rate for all business activities is 20%. For medical devices, however, the Tax Code of Ukraine provides special rules: a 7% VAT rate applies to supplies of medical products into Ukraine and imports into the customs territory of Ukraine, per the list approved by the Cabinet. The list is organized by medical product name and nomenclature code, which does not correspond to European experience or Council Directive 2006/112/EC. In our opinion, reduced VAT rates should be established for all medical devices that are in a stream of commerce and have all necessary documents proving their quality and safety, and that fall under the definition of medical devices.
Lafrenière, Nelson M; Mudrik, Jared M; Ng, Alphonsus H C; Seale, Brendon; Spooner, Neil; Wheeler, Aaron R
2015-04-07
There is great interest in the development of integrated tools allowing for miniaturized sample processing, including solid phase extraction (SPE). We introduce a new format for microfluidic SPE relying on C18-functionalized magnetic beads that can be manipulated in droplets in a digital microfluidic platform. This format provides the opportunity to tune the amount (and potentially the type) of stationary phase on-the-fly, and allows the removal of beads after the extraction (to enable other operations in same device-space), maintaining device reconfigurability. Using the new method, we employed a design of experiments (DOE) operation to enable automated on-chip optimization of elution solvent composition for reversed phase SPE of a model system. Further, conditions were selected to enable on-chip fractionation of multiple analytes. Finally, the method was demonstrated to be useful for online cleanup of extracts from dried blood spot (DBS) samples. We anticipate this combination of features will prove useful for separating a wide range of analytes, from small molecules to peptides, from complex matrices.
Daskalakis, Markos I; Magoulas, Antonis; Kotoulas, Georgios; Katsikis, Ioannis; Bakolas, Asterios; Karageorgis, Aristomenis P; Mavridou, Athena; Doulia, Danae; Rigas, Fotis
2014-08-01
Bacterially induced calcium carbonate precipitation by a Cupriavidus metallidurans isolate was investigated to develop an environmentally friendly method for the restoration and preservation of ornamental stones. Biomineralization performance was assessed in a growth medium via a Design of Experiments (DoE) approach using, as design factors, the temperature, growth medium concentration, and inoculum concentration. The optimum conditions were determined with the aid of consecutive experiments based on response surface methodology (RSM) and were successfully validated thereafter. Statistical analysis can be utilized as a tool for screening bacterial bioprecipitation, as it considerably reduced the experimental time and effort needed for bacterial evaluation. Analytical methods provided insight into the biomineral characteristics, and sonication tests proved that our isolate could create a solid new layer of vaterite on a marble substrate withstanding sonication forces. C. metallidurans ACA-DC 4073 provided a compact vaterite layer on the marble substrate with morphological characteristics that assisted in its differentiation. The latter proved valuable when spraying a minimal amount of inoculated medium on the marble substrate under conditions close to an in situ application. A sufficient and clearly distinguishable layer was identified.
Knight, Andrew W.; Chiarizia, Renato; Soderholm, L.
2017-05-10
In this paper, the extraction behavior of a quaternary alkylammonium salt extractant was investigated for its selectivity for trivalent actinides over trivalent lanthanides in nitrate and thiocyanate media. The selectivity was evaluated by solvent extraction experiments through radiochemical analysis of 241Am and 152/154Eu. Solvent extraction distribution and slope-analysis experiments were performed with americium(III) and europium(III) with respect to the ligand (nitrate and thiocyanate), extractant, and metal (europium only) concentrations. Further evaluation of the equilibrium expression that governs the extraction process indicated the appropriate use of the saturation method for estimation of the aggregation state of quaternary ammonium extractants in the organic phase. From the saturation method, we observed an average aggregation number of 5.4 ± 0.8 and 8.5 ± 0.9 monomers/aggregate for nitrate and thiocyanate, respectively. Through a side-by-side comparison of the nitrate and thiocyanate forms, we discuss the potential role of the aggregation in the increased selectivity for trivalent actinides over trivalent lanthanides in thiocyanate media.
Video-mediated communication to support distant family connectedness.
Furukawa, Ryoko; Driessnack, Martha
2013-02-01
It can be difficult to maintain family connections with geographically distant members. However, advances in computer-human interaction (CHI) systems, including video-mediated communication (VMC), are emerging. While VMC does not completely substitute for physical face-to-face communication, it appears to provide a sense of virtual copresence through the addition of visual and contextual cues to verbal communication between family members. The purpose of this study was to explore current patterns of VMC use, experiences, and family functioning among self-identified VMC users separated geographically from their families. A total of 341 participants (ages 18 to above 70) completed an online survey and Family APGAR. Ninety-six percent of the participants reported that VMC was the most common communication method used, and 60% used VMC at least once a week. The most common reason cited for using VMC over other methods of communication was the addition of visual cues. A significant difference between the Family APGAR scores and the number of positive comments about VMC experience was also found. This exploratory study provides insight into the acceptance of VMC and its usefulness in maintaining connections with distant family members.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knight, Andrew W.; Chiarizia, Renato; Soderholm, L.
In this paper, the extraction behavior of a quaternary alkylammonium salt extractant was investigated for its selectivity for trivalent actinides over trivalent lanthanides in nitrate and thiocyanate media. The selectivity was evaluated by solvent extraction experiments through radiochemical analysis of 241Am and 152/154Eu. Solvent extraction distribution and slope-analysis experiments were performed with americium(III) and europium(III) with respect to the ligand (nitrate and thiocyanate), extractant, and metal (europium only) concentrations. Further evaluation of the equilibrium expression that governs the extraction process indicated the appropriate use of the saturation method for estimation of the aggregation state of quaternary ammonium extractants in the organic phase. From the saturation method, we observed an average aggregation number of 5.4 ± 0.8 and 8.5 ± 0.9 monomers/aggregate for nitrate and thiocyanate, respectively. Through a side-by-side comparison of the nitrate and thiocyanate forms, we discuss the potential role of the aggregation in the increased selectivity for trivalent actinides over trivalent lanthanides in thiocyanate media.
Face recognition in newly hatched chicks at the onset of vision.
Wood, Samantha M W; Wood, Justin N
2015-04-01
How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces-for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision. (c) 2015 APA, all rights reserved.
Effects of Learning Experience on Forgetting Rates of Item and Associative Memories
ERIC Educational Resources Information Center
Yang, Jiongjiong; Zhan, Lexia; Wang, Yingying; Du, Xiaoya; Zhou, Wenxi; Ning, Xueling; Sun, Qing; Moscovitch, Morris
2016-01-01
Are associative memories forgotten more quickly than item memories, and does the level of original learning differentially influence forgetting rates? In this study, we addressed these questions by having participants learn single words and word pairs once (Experiment 1), three times (Experiment 2), and six times (Experiment 3) in a massed…
Perceptions of the Physical Education Doctoral Experience: Does Previous Teaching Experience Matter?
ERIC Educational Resources Information Center
Richards, K. Andrew R.; McLoughlin, Gabriella M.; Gaudreault, Karen Lux; Shiver, Victoria Nicole
2018-01-01
In the United States, physical education doctoral programs place great stock in recruiting students who have prior in-service teaching experience. However, little is known about how this experience influences perceptions of doctoral education. We conducted this cross-sectional, exploratory study to develop an initial understanding of how prior…
Nozari, Nazbanou; Woodard, Kristina; Thompson-Schill, Sharon L.
2014-01-01
Cathodal Transcranial Direct Current Stimulation (C-tDCS) has been reported, across different studies, to facilitate or hinder performance, or simply to have no tangible effect on behavior. This discrepancy is most prominent when C-tDCS is used to alter a cognitive function, questioning the assumption that cathodal stimulation always compromises performance. In this study, we aimed to study the effect of two variables on performance in a simple cognitive task (letter Flanker), when C-tDCS was applied to the left prefrontal cortex (PFC): (1) the time of testing relative to stimulation (during or after), and (2) the nature of the cognitive activity during stimulation in case of post-tDCS testing. In three experiments, we had participants either perform the Flanker task during C-tDCS (Experiment 1), or after C-tDCS. When the Flanker task was administered after C-tDCS, we varied whether during stimulation subjects were engaged in activities that posed low (Experiment 2) or high (Experiment 3) demands on the PFC. Our findings show that the nature of the task during C-tDCS has a systematic influence on the outcome, while timing per se does not. PMID:24409291
NASA Astrophysics Data System (ADS)
Gay, Andrea
This study investigated the introduction of curriculum innovations into an introductory organic chemistry laboratory course. Pre-existing experiments in a traditional course were re-written in a broader societal context. Additionally, a new laboratory notebook methodology was introduced, using the Decision/Explanation/Observation/Inference (DEOI) format that required students to explicitly describe the purpose of procedural steps and the meanings of observations. Experts in organic chemistry, science writing, and chemistry education examined the revised curriculum and deemed it appropriate. The revised curriculum was introduced into two sections of organic chemistry laboratory at Columbia University. Field notes were taken during the course, students and teaching assistants were interviewed, and completed student laboratory reports were examined to ascertain the impact of the innovations. The contextualizations were appreciated for making the course more interesting; for lending a sense of purpose to the study of chemistry; and for aiding in students' learning. Both experts and students described a preference for more extensive connections between the experiment content and the introduced context. Generally, students preferred the DEOI method to journal-style laboratory reports believing it to be more efficient and more focused on thinking than stylistic formalities. The students claimed that the DEOI method aided their understanding of the experiments and helped scaffold their thinking, though some students thought that the method was over-structured and disliked the required pre-laboratory work. The method was used in two distinct manners; recursively writing and revising as intended and concept contemplation only after experiment completion. The recursive use may have been influenced by TA attitudes towards the revisions and seemed to engender a sense of preparedness. 
Students' engagement with the contextualizations and the DEOI method highlight the need for laboratory curricula that center on the best means to engage students in understanding, rather than simply providing the best examples for transmitting content.
Burns, Gully A P C; Dasigi, Pradeep; de Waard, Anita; Hovy, Eduard H
2016-01-01
Automated machine-reading biocuration systems typically use sentence-by-sentence information extraction to construct meaning representations for use by curators. This does not directly reflect the typical discourse structure used by scientists to construct an argument from the experimental data available within an article, and is therefore less likely to correspond to representations typically used in biomedical informatics systems (let alone to the mental models that scientists have). In this study, we develop Natural Language Processing methods to locate, extract, and classify the individual passages of text from articles' Results sections that refer to experimental data. In our domain of interest (molecular biology studies of cancer signal transduction pathways), individual articles may contain as many as 30 small-scale individual experiments describing a variety of findings, upon which authors base their overall research conclusions. Our system automatically classifies discourse segments in these texts into seven categories (fact, hypothesis, problem, goal, method, result, implication) with an F-score of 0.68. These segments describe the essential building blocks of scientific discourse to (i) provide context for each experiment, (ii) report experimental details and (iii) explain the data's meaning in context. We evaluate our system on text passages from articles that were curated in molecular biology databases (the Pathway Logic Datum repository, the Molecular Interaction MINT and INTACT databases) linking individual experiments in articles to the type of assay used (coprecipitation, phosphorylation, translocation etc.). We use supervised machine learning techniques on text passages containing unambiguous references to experiments to obtain baseline F1 scores of 0.59 for MINT, 0.71 for INTACT and 0.63 for Pathway Logic.
Although preliminary, these results support the notion that targeting information extraction methods to experimental results could provide accurate, automated methods for biocuration. We also suggest the need for finer-grained curation of experimental methods used when constructing molecular biology databases. © The Author(s) 2016. Published by Oxford University Press.
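The F-scores reported in this abstract combine precision and recall. As a reference point, here is a minimal sketch of the F1 computation from raw classification counts; the TP/FP/FN numbers in the example are illustrative, not taken from the paper:

```python
def f1(tp, fp, fn):
    """F1 score (harmonic mean of precision and recall) from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 59 true positives, 41 false positives, 41 false negatives.
print(round(f1(59, 41, 41), 2))  # -> 0.59
```

With balanced false positives and false negatives, F1 equals both precision and recall; the scores diverge as the error types become unbalanced.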
Multi-parametric survey for archaeology: how and why, or how and why not?
NASA Astrophysics Data System (ADS)
Hesse, Albert
1999-03-01
Many papers or conference presentations, particularly over the last ten years, have referred to multi-parametric geophysical surveys and integrated interpretations in archaeological prospection. Several experiments of this kind have been undertaken by our laboratory, with mostly fascinating results, but our experience leads us to be rather suspicious of the over-systematic choice of extreme solutions and we would recommend an appropriate and balanced choice, within the limits of the budget available for an operation, between the two following procedures: 1) Routine survey with an extremely large variety of instruments: this allows a better understanding of the underground situation than survey with a single instrument but reduces the area that can be surveyed. A limited number of specific circumstances should lead one to adopt this option. They include: previous knowledge or equally previous ignorance of the targets under investigation, preliminary selection of the most efficient method on a scientific and economic basis, comparative experiments for the validation of new tools, specific detection of targets of different nature into the ground as well as uncertainty about the efficiency of each available method for the actual nature of the investigated site. 2) Survey of a much larger area with only one method, chosen because it is particularly fast and efficient: there is an obvious value in extensive exploration in order to evaluate the size, distribution and limits of a large number of archaeological features. The strict selection of appropriate methods, chosen to meet the aims of a project should consider not only geophysics but all kinds of conventional or non-conventional archaeological methods as well, brought together to permit an integrated interpretation. 
This highly specialized job does not fall within the normal experience of exploration geophysicists who usually deal with geological features or most field archaeologists who are mainly involved in excavations. It must be undertaken by particularly trained operators, whether they belong to private companies (under appropriate official control) or to public organizations.
Single and Double Photoionization of Mg
NASA Astrophysics Data System (ADS)
Abdel-Naby, Shahin; Pindzola, M. S.; Colgan, J.
2014-05-01
Single and double photoionization cross sections for Mg are calculated using a time-dependent close-coupling method. The correlation between the two 3s subshell electrons of Mg is obtained by relaxation of the close-coupled equations in imaginary time. An implicit method is used to propagate the close-coupled equations in real time to obtain single and double ionization cross sections for Mg. Energy and angle triple differential cross sections for double photoionization at equal energy sharing of E1 = E2 = 16.4 eV are compared with Elettra experiments and previous theoretical calculations. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California, NICS in Knoxville, Tennessee, and OLCF in Oak Ridge, Tennessee.
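The "relaxation in imaginary time" step mentioned in the abstract can be illustrated on a toy problem: repeatedly applying a small imaginary-time step damps excited-state components, so an arbitrary starting vector relaxes to the ground state. The 2x2 Hamiltonian below is invented purely for illustration and is unrelated to the actual close-coupled Mg equations:

```python
import numpy as np

# Toy imaginary-time relaxation: each Euler step multiplies the state by
# (I - dtau*H), which damps high-energy components fastest; renormalizing
# after each step leaves the ground-state eigenvector.
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])          # hypothetical 2x2 Hamiltonian
psi = np.array([1.0, 1.0])          # arbitrary starting vector
dtau = 0.1                          # imaginary-time step size
for _ in range(2000):
    psi = psi - dtau * (H @ psi)    # first-order Euler step in imaginary time
    psi /= np.linalg.norm(psi)      # renormalize each step

ground_energy = psi @ H @ psi       # Rayleigh quotient -> lowest eigenvalue
```

After convergence, `ground_energy` agrees with `np.linalg.eigvalsh(H)[0]`; the same principle, applied to the full close-coupled radial equations, yields the correlated two-electron ground state.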
Knowledge acquisition and rapid prototyping of an expert system: Dealing with real world problems
NASA Technical Reports Server (NTRS)
Bailey, Patrick A.; Doehr, Brett B.
1988-01-01
The knowledge engineering and rapid prototyping phases of an expert system that does fault handling for a Solid Amine, Water Desorbed CO2 removal assembly for the Environmental Control and Life Support System for space based platforms are addressed. The knowledge acquisition phase for this project was interesting because it could not follow the textbook examples. As a result of this, a variety of methods were used during the knowledge acquisition task. The use of rapid prototyping and the need for a flexible prototype suggested certain types of knowledge representation. By combining various techniques, a representative subset of faults and a method for handling those faults was achieved. The experiences should prove useful for developing future fault handling expert systems under similar constraints.
Virtual reality for pain and anxiety management in children
Arane, Karen; Behboudi, Amir; Goldman, Ran D.
2017-01-01
Question: Pain and anxiety are common in children who need procedures such as administering vaccines or drawing blood. Recent reports have described the use of virtual reality (VR) as a method of distraction during such procedures. How does VR work in reducing pain and anxiety in pediatric patients and what are the potential uses for it? Answer: Recent studies explored using VR with pediatric patients undergoing procedures ranging from vaccinations and intravenous injections to laceration repair and dressing changes for burn wounds. Interacting with immersive VR might divert attention, leading to a slower response to incoming pain signals. Preliminary results have shown that VR is effective, either alone or in combination with standard care, in reducing the pain and anxiety patients experience compared with standard care or other distraction methods. PMID:29237632
NASA Astrophysics Data System (ADS)
Muttalib, M. Firdaus A.; Chen, Ruiqi Y.; Pearce, S. J.; Charlton, Martin D. B.
2017-11-01
In this paper, we demonstrate the optimization of reactive-ion etching (RIE) parameters for the fabrication of tantalum pentoxide (Ta2O5) waveguide with chromium (Cr) hard mask in a commercial OIPT Plasmalab 80 RIE etcher. A design of experiment (DOE) using Taguchi method was implemented to find optimum RF power, mixture of CHF3 and Ar gas ratio, and chamber pressure for a high etch rate, good selectivity, and smooth waveguide sidewall. It was found that the optimized etch condition obtained in this work were RF power = 200 W, gas ratio = 80 %, and chamber pressure = 30 mTorr with an etch rate of 21.6 nm/min, Ta2O5/Cr selectivity ratio of 28, and smooth waveguide sidewall.
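Taguchi-style DOE analyses like the one above typically rank parameter settings by a signal-to-noise (S/N) ratio; for a response to be maximized, such as etch rate, the larger-is-better form is used. A minimal sketch with made-up replicate data, not the authors' measurements:

```python
import numpy as np

# Hypothetical etch-rate replicates (nm/min) at three RF power settings.
# Values are invented for illustration only.
runs = {
    "RF=150W": [18.2, 17.9, 18.5],
    "RF=200W": [21.4, 21.8, 21.6],
    "RF=250W": [20.1, 19.7, 20.4],
}

def sn_larger_is_better(y):
    """Taguchi larger-is-better signal-to-noise ratio in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Pick the setting with the highest S/N ratio (high mean, low spread).
best = max(runs, key=lambda k: sn_larger_is_better(runs[k]))
print(best)
```

In a full Taguchi study the same S/N statistic is averaged over the rows of an orthogonal array to estimate each factor's effect; for selectivity or sidewall roughness targets, the nominal-is-best or smaller-is-better forms would be used instead.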
NASA Astrophysics Data System (ADS)
Ishikawa, Kaoru; Nakamura, Taro; Osumi, Hisashi
A reliable control method is proposed for a multiple-loop control system. After a feedback loop failure, such as when a sensor breaks down, the control system becomes unstable and exhibits large fluctuations even if it has a disturbance observer. To cope with this problem, the proposed method uses an equivalent transfer function (ETF) for active redundancy compensation after the loop failure. The ETF is designed so that the transfer function of the whole system is unchanged before and after the loop failure. In this paper, the characteristics of a reliable control system that uses an ETF and a disturbance observer are examined in an experiment with a DC servo motor, for a current feedback loop failure in the position servo system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, Gary A.
The US Department of Energy (DOE) Nuclear Energy Research Initiative funded the design and construction of the Seven Percent Critical Experiment (7uPCX) at Sandia National Laboratories. The start-up of the experiment facility and the execution of the experiments described here were funded by the DOE Nuclear Criticality Safety Program. The 7uPCX is designed to investigate critical systems with fuel for light water reactors in the enrichment range above 5% 235U. The 7uPCX assembly is a water-moderated and -reflected array of aluminum-clad square-pitched U(6.90%)O2 fuel rods.
Public Health Policy and Experience of the 2009 H1N1 Influenza Pandemic in Pune, India
Purohit, Vidula; Kudale, Abhay; Sundaram, Neisha; Joseph, Saju; Schaetti, Christian; Weiss, Mitchell G.
2018-01-01
Background: Prior experience and the persisting threat of influenza pandemic indicate the need for global and local preparedness and public health response capacity. The pandemic of 2009 highlighted the importance of such planning and the value of prior efforts at all levels. Our review of the public health response to this pandemic in Pune, India, considers the challenges of integrating global and national strategies in local programmes and lessons learned for influenza pandemic preparedness. Methods: Global, national and local pandemic preparedness and response plans have been reviewed. In-depth interviews were undertaken with district health policy-makers and administrators who coordinated the pandemic response in Pune. Results: In the absence of a comprehensive district-level pandemic preparedness plan, the response had to be improvised. Media reporting of the influenza pandemic and inaccurate information that was reported at times contributed to anxiety in the general public and to widespread fear and panic. Additional challenges included inadequate public health services and reluctance of private healthcare providers to treat people with flu-like symptoms. Policy-makers developed a response strategy that they referred to as the Pune plan, which relied on powers sanctioned by the Epidemic Act of 1897 and resources made available by the union health ministry, state health department and a government diagnostic laboratory in Pune. Conclusion: The World Health Organization’s (WHO’s) global strategy for pandemic control focuses on national planning, but state-level and local experience in a large nation like India shows how national planning may be adapted and implemented. The priority of local experience and requirements does not negate the need for higher level planning. It does, however, indicate the importance of local adaptability as an essential feature of the planning process. 
Experience and the implicit Pune plan that emerged are relevant for pandemic preparedness and other public health emergencies. PMID:29524939
Training Synesthetic Letter-color Associations by Reading in Color
Colizoli, Olympia; Murre, Jaap M. J.; Rouw, Romke
2014-01-01
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure. PMID:24638033
The role of figure-ground segregation in change blindness.
Landman, Rogier; Spekreijse, Henk; Lamme, Victor A F
2004-04-01
Partial report methods have shown that a large-capacity representation exists for a few hundred milliseconds after a picture has disappeared. However, change blindness studies indicate that very limited information remains available when a changed version of the image is presented subsequently. What happens to the large-capacity representation? New input after the first image may interfere, but this is likely to depend on the characteristics of the new input. In our first experiment, we show that a display containing homogeneous image elements between changing images does not render the large-capacity representation unavailable. Interference occurs when these new elements define objects. On that basis we introduce a new method to produce change blindness: The second experiment shows that change blindness can be induced by redefining figure and background, without an interval between the displays. The local features (line segments) that defined figures and background were swapped, while the contours of the figures remained where they were. Normally, changes are easily detected when there is no interval. However, our paradigm results in massive change blindness. We propose that in a change blindness experiment, there is a large-capacity representation of the original image when it is followed by a homogeneous interval display, but that change blindness occurs whenever the changed image forces resegregation of figures from the background.
Patterned mist deposition of tri-colour CdSe/ZnS quantum dot films toward RGB LED devices
NASA Astrophysics Data System (ADS)
Pickering, S.; Kshirsagar, A.; Ruzyllo, J.; Xu, J.
2012-06-01
In this experiment a technique of mist deposition was explored as a way to form patterned ultra-thin films of CdSe/ZnS core/shell nanocrystalline quantum dots using colloidal solutions. The objective of this study was to investigate the feasibility of mist deposition as a patterning method for creating multicolour quantum dot light emitting diodes. Mist deposition was used to create three rows of quantum dot light emitting diodes on a single device, with each row having a separate colour. The colours chosen were red, green and yellow with corresponding peak wavelengths of 620 nm, 558 nm, and 587 nm. The results obtained from this experiment show that it is possible to create multicolour devices on a single substrate. The peak brightness values obtained in this experiment for the red, green, and yellow devices were 508 cd/m2, 507 cd/m2, and 665 cd/m2, respectively. Similar LED brightness is important in display technologies using colloidal quantum dots in a precursor solution, to ensure one colour does not dominate the emitted spectrum. Results obtained in terms of brightness were superior to those achieved with inkjet deposition. This study has shown that mist deposition is a viable method for patterned deposition applied to quantum dot light emitting diode display technologies.
NASA Astrophysics Data System (ADS)
McKinstry, Chris
The present article describes a possible method for the automatic discovery of a universal human semantic-affective hyperspatial approximation of the human subcognitive substrate - the associative network which French (1990) asserts is the ultimate foundation of the human ability to pass the Turing Test - that does not require a machine to have direct human experience or a physical human body. This method involves automatic programming - such as Koza's genetic programming (1992) - guided in the discovery of the proposed universal hypergeometry by feedback from a Minimum Intelligent Signal Test or MIST (McKinstry, 1997) constructed from a very large number of human validated probabilistic propositions collected from a large population of Internet users. It will be argued that though a lifetime of human experience is required to pass a rigorous Turing Test, a probabilistic propositional approximation of this experience can be constructed via public participation on the Internet, and then used as a fitness function to direct the artificial evolution of a universal hypergeometry capable of classifying arbitrary propositions. A model of this hypergeometry will be presented; it predicts Miller's "Magical Number Seven" (1956) as the size of human short-term memory from fundamental hypergeometric properties. A system that can lead to the generation of novel propositions or "artificial thoughts" will also be described.
ERIC Educational Resources Information Center
Peterson, Christopher
2009-01-01
Positive psychology is a deliberate correction to the focus of psychology on problems. Positive psychology does not deny the difficulties that people may experience but does suggest that sole attention to disorder leads to an incomplete view of the human condition. Positive psychologists concern themselves with four major topics: (1) positive…
Aluminum Data Measurements and Evaluation for Criticality Safety Applications
NASA Astrophysics Data System (ADS)
Leal, L. C.; Guber, K. H.; Spencer, R. R.; Derrien, H.; Wright, R. Q.
2002-12-01
The Defense Nuclear Facility Safety Board (DNFSB) Recommendation 93-2 motivated the US Department of Energy (DOE) to develop a comprehensive criticality safety program to maintain and to predict the criticality of systems throughout the DOE complex. To implement the response to the DNFSB Recommendation 93-2, a Nuclear Criticality Safety Program (NCSP) was created including the following tasks: Critical Experiments, Criticality Benchmarks, Training, Analytical Methods, and Nuclear Data. The Nuclear Data portion of the NCSP consists of a variety of differential measurements performed at the Oak Ridge Electron Linear Accelerator (ORELA) at the Oak Ridge National Laboratory (ORNL), data analysis and evaluation using the generalized least-squares fitting code SAMMY in the resolved, unresolved, and high energy ranges, and the development and benchmark testing of complete evaluations for a nuclide for inclusion into the Evaluated Nuclear Data File (ENDF/B). This paper outlines the work performed at ORNL to measure, evaluate, and test the nuclear data for aluminum for applications in criticality safety problems.
[Psychosomatics is "expensive"].
Hnízdil, J; Savlík, J
2005-01-01
Experience shows that the number of diseases whose recognition and treatment lie beyond the limits of classical medicine, yet fall fully within the competence of a psychosomatic approach, is rising. This is not a matter of the classical division into organic and functional defects, but of the possibility of a complex approach. The principle of "diagnosis per exclusionem" is still valid, as is the fact that the means of medicine end at its biological limitations. We stress in this article that psychosomatic diseases or psychosomatic patients as such do not exist, and that psychosomatics is not an independent specialization. Psychosomatics, as the inseparable unity of psychic and somatic activities, is inherent in every human being. A complex biopsychosocial (psychosomatic) approach is a way of thinking and working that considers the human being in the unrepeatable oneness and context of his life. It does not mean underestimating objective biological findings and the results of instrumental investigation, but embedding them in the complex network of circumstances of the patient's life in order to choose the most appropriate methods of care for the individual in health and disease.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Making self-assessment a "cultural norm" at the DOE Office of Energy Research (ER) laboratories has been a tremendous challenge. In an effort to provide a forum for the ER laboratories to share their self-assessment program implementation experiences, the Lawrence Berkeley Laboratory hosted a Self-Assessment Workshop in July 1993. The workshop was organized to cover such areas as: DOE's vision of self-assessment; what makes a workable program; line management experiences; how to identify root causes and trends; integrating quality assurance, conduct of operations, and self-assessment; and going beyond environment, safety, and health. Individuals from the ER laboratories wishing to participate in the workshop were invited to speak on topics of their choice. The workshop was organized to cover general topics in morning presentations to all attendees and to cover selected topics at afternoon breakout sessions. This report summarizes the presentations and breakout discussions.
Motion onset does not capture attention when subsequent motion is "smooth".
Sunny, Meera Mary; von Mühlenen, Adrian
2011-12-01
Previous research on the attentional effects of moving objects has shown that motion per se does not capture attention. However, in later studies it was argued that the onset of motion does capture attention. Here, we show that this motion-onset effect critically depends on motion jerkiness--that is, the rate at which the moving stimulus is refreshed. Experiment 1 used search displays with a static, a motion-onset, and an abrupt-onset stimulus, while systematically varying the refresh rate of the moving stimulus. The results showed that motion onset only captures attention when subsequent motion is jerky (8 and 17 Hz), not when it is smooth (33 and 100 Hz). Experiment 2 replaced motion onset with continuous motion, showing that motion jerkiness does not affect how continuous motion is processed. These findings do not support accounts that assume a special role for motion onset, but they are in line with the more general unique-event account.
Jadhav, Sushant B; Mane, Rahul M; Narayanan, Kalyanraman L; Bhosale, Popatrao N
2016-10-17
A novel, stability-indicating, reverse phase high-performance liquid chromatography (RP-HPLC) method was developed to determine the S-isomer of linagliptin (LGP) in linagliptin and metformin hydrochloride (MET HCl) tablets (LGP-MET HCl) by implementing design of experiment (DoE), i.e., a two-level, full factorial design (2³ + 3 centre points = 11 experiments), to understand the critical method parameters (CMP) and their relation with the critical method attribute (CMA), and to ensure robustness of the method. The separation of the S-isomer, LGP and MET HCl in the presence of their impurities was achieved on a Chiralpak IA-3 (amylose tris(3,5-dimethylphenylcarbamate), immobilized on 3 µm silica gel) stationary phase (250 × 4.6 mm, 3 µm) using isocratic elution and a detector wavelength of 225 nm, with a flow rate of 0.5 mL/min, an injection volume of 10 µL, a sample cooler at 5 °C and a column oven temperature of 25 °C. Ethanol:Methanol:Monoethanolamine (EtOH:MeOH:MEA) in the ratio of 60:40:0.2 v/v/v was used as the mobile phase. The developed method was validated in accordance with International Council for Harmonisation (ICH) guidelines and was applied for the estimation of the S-isomer of LGP in LGP-MET HCl tablets. The same method can also be extended for the estimation of the S-isomer in LGP dosage forms.
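The design described, a two-level full factorial in three factors plus three centre points, can be enumerated mechanically. A sketch using hypothetical factor names (the authors' actual critical method parameters are not listed here):

```python
from itertools import product

# 2^3 full factorial (coded levels -1/+1) plus 3 centre points (level 0).
# Factor names are illustrative placeholders, not the paper's actual CMPs.
factors = ["flow_rate", "column_temp", "detector_wavelength"]
corner_runs = [dict(zip(factors, levels)) for levels in product([-1, +1], repeat=3)]
centre_runs = [dict(zip(factors, (0, 0, 0))) for _ in range(3)]
design = corner_runs + centre_runs

print(len(design))  # 8 corner runs + 3 centre points = 11 experiments
```

The corner runs estimate main effects and interactions; the replicated centre points provide a pure-error estimate and a check for curvature, which is how such a design supports robustness claims.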
A Method to Constrain Genome-Scale Models with 13C Labeling Data
García Martín, Héctor; Kumar, Vinay Satish; Weaver, Daniel; Ghosh, Amit; Chubukov, Victor; Mukhopadhyay, Aindrila; Arkin, Adam; Keasling, Jay D.
2015-01-01
Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems. PMID:26379153
Accelerating Vaccine Formulation Development Using Design of Experiment Stability Studies.
Ahl, Patrick L; Mensch, Christopher; Hu, Binghua; Pixley, Heidi; Zhang, Lan; Dieter, Lance; Russell, Ryann; Smith, William J; Przysiecki, Craig; Kosinski, Mike; Blue, Jeffrey T
2016-10-01
Vaccine drug product thermal stability often depends on formulation input factors and how they interact. Scientific understanding and professional experience typically allow vaccine formulators to accurately predict the thermal stability output based on formulation input factors such as pH, ionic strength, and excipients. Thermal stability predictions, however, are not enough for regulators: stability claims must be supported by experimental data. The Quality by Design approach of Design of Experiment (DoE) is well suited to describing formulation outputs such as thermal stability in terms of formulation input factors. A DoE approach, particularly at elevated temperatures that induce accelerated degradation, can provide empirical understanding of how vaccine formulation input factors and their interactions affect vaccine stability output performance. This is possible even when a clear scientific understanding of particular formulation stability mechanisms is lacking. A DoE approach was used in an accelerated 37 °C stability study of an aluminum-adjuvanted Neisseria meningitidis serogroup B vaccine. Formulation stability differences were identified only 15 days into the study. We believe this study demonstrates the power of combining DoE methodology with accelerated stress stability studies to accelerate and improve vaccine formulation development programs, particularly during the preformulation stage. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Does Copper Metal React with Acetic Acid?
ERIC Educational Resources Information Center
DeMeo, Stephen
1997-01-01
Describes an activity that promotes analytical thinking and problem solving. Gives students experience with important scientific processes that can be generalized to other new laboratory experiences. Provides students with the opportunity to hypothesize answers, control variables by designing an experiment, and make logical deductions based on…
Does Cu(acac)2 Quench Benzene Fluorescence? A Physical Chemistry Experiment.
ERIC Educational Resources Information Center
Marciniak, Bronislaw
1986-01-01
Describes a laboratory experiment in which benzene fluorescence is quenched by bis(acetylacetonato) copper(II). Discusses how this experiment can demonstrate a special technique used in the field of photochemistry. Provides an outline of the experimental procedure and discusses its results. (TW)
ERIC Educational Resources Information Center
Gino, Francesca; Argote, Linda; Miron-Spektor, Ella; Todorova, Gergana
2010-01-01
How does prior experience influence team creativity? We address this question by examining the effects of task experience acquired directly and task experience acquired vicariously from others on team creativity in a product-development task. Across three laboratory studies, we find that direct task experience leads to higher levels of team…
Negotiating parenthood: Experiences of economic hardship among parents with cognitive difficulties.
Fernqvist, Stina
2015-09-01
People with cognitive difficulties often have scarce economic resources, and parents with cognitive difficulties are no exception. In this article, parents' experiences are put forth and discussed, for example, how does economic hardship affect family life? How do the parents experience support, what kind of strain does the scarce economy put on their situation and how are their children coping? The data consist of interviews with parents living in this often problematic situation. Experiences of poverty and how it can be related to - and understood in the light of - cognitive difficulties and notions of parenthood and children's agency are scarcely addressed in the current research. The findings suggest that experiences of poverty are often associated with the limitations caused by cognitive difficulties. Poverty may thus be articulated as one aspect of the stigma they can experience due to their impairments, not least in relation to their children and naturalized discourses on parenthood. © The Author(s) 2015.
Outstanding questions: physics beyond the Standard Model.
Ellis, John
2012-02-28
The Standard Model of particle physics agrees very well with experiment, but many important questions remain unanswered, among them are the following. What is the origin of particle masses and are they due to a Higgs boson? How does one understand the number of species of matter particles and how do they mix? What is the origin of the difference between matter and antimatter, and is it related to the origin of the matter in the Universe? What is the nature of the astrophysical dark matter? How does one unify the fundamental interactions? How does one quantize gravity? In this article, I introduce these questions and discuss how they may be addressed by experiments at the Large Hadron Collider, with particular attention to the search for the Higgs boson and supersymmetry.
Understanding the perspectives of health service staff on the Friends and Family Test.
Leggat, Sandra G
2016-06-01
Objectives The present study was designed to determine what staff consider when asked to respond to the Friends and Family Test question. Methods Over 300 health service staff responded to an online questionnaire exploring whether they would recommend treatment at their organisation to friends and family (Friends and Family Test). Results Staff identified staff attitudes and behaviours, the busyness of the health service and quality of care as themes that affected their recommendation. A considerable number of staff also identified factors largely outside the control of the health service as influencing their response. Conclusions The majority of respondents based their perceptions on personal expectations, with smaller numbers citing personal experience and hearsay. Staff would need to see changes both in the quality of care and management practice to amend their recommendation on the Friends and Family Test. What is known about the topic? The Friends and Family Test is seen as a useful tool to gather the opinions of patients and staff on the patient experience, yet there has been little validation of this question. What does this paper add? The present study suggests that, as currently worded, the question does not reliably report staff perceptions regarding patient experience. The study illustrates that the relationship with the organisation and perceptions of effective management are linked to staff responses. What are the implications for practitioners? The Family and Friends Test question may need to be more clearly focused to gather the desired information. Improvement on this indicator is only likely to be seen when management teams are meeting the expectations of staff for good management practice.
Robust reconstruction of a signal from its unthresholded recurrence plot subject to disturbances
NASA Astrophysics Data System (ADS)
Sipers, Aloys; Borm, Paul; Peeters, Ralf
2017-02-01
To make valid inferences from recurrence plots for time-delay embedded signals, two underlying key questions are: (1) to what extent does an unthresholded recurrence plot (URP) carry the same information as the signal that generated it, and (2) how does the information change when the URP gets distorted. We studied the first question in our earlier work [1], where it was shown that the URP admits the reconstruction of the underlying signal (up to its mean and a choice of sign) if and only if an associated graph is connected. Here we refine this result and we give an explicit condition in terms of the embedding parameters and the discrete Fourier spectrum of the URP. We also develop a method for the reconstruction of the underlying signal which overcomes several drawbacks that earlier approaches had. To address the second question we investigate robustness of the proposed reconstruction method under disturbances. We carry out two simulation experiments, characterized by a broad-band and a narrow-band spectrum, respectively. For each experiment we choose a noise level and two different pairs of embedding parameters. The conventional binary recurrence plot (RP) is obtained from the URP by thresholding and zero-one conversion, which can be viewed as severe distortion acting on the URP. Typically the reconstruction of the underlying signal from an RP is expected to be rather inaccurate. However, by introducing the concept of a multi-level recurrence plot (MRP) we propose to bridge the information gap between the URP and the RP, while still achieving a high data compression rate. We demonstrate the working of the proposed reconstruction procedure on MRPs, indicating that MRPs with just a few discretization levels can usually capture signal properties and morphologies more accurately than conventional RPs.
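As a rough illustration of the objects involved (not the authors' reconstruction method), the sketch below computes an unthresholded recurrence plot for a time-delay embedded signal and discretizes it into a small number of levels; with a single threshold this degenerates to the conventional binary RP. The signal and embedding parameters are arbitrary choices:

```python
import numpy as np

def unthresholded_rp(x, dim=2, tau=1):
    """URP: pairwise distances between time-delay embedded vectors of x."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    return np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)

def multilevel_rp(urp, levels):
    """MRP: discretize the URP into `levels` bins via quantile thresholds;
    levels=2 gives a binary RP thresholded at the median distance."""
    thresholds = np.quantile(urp, np.linspace(0, 1, levels + 1)[1:-1])
    return np.digitize(urp, thresholds)

x = np.sin(np.linspace(0, 4 * np.pi, 50))
urp = unthresholded_rp(x)
rp = multilevel_rp(urp, 2)
```

Increasing `levels` interpolates between the binary RP and the full URP, which is the information-gap idea the abstract describes.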
Compressed Natural Gas (CNG) Transit Bus Experience Survey: April 2009--April 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, R.; Horne, D. B.
2010-09-01
This survey was commissioned by the U.S. Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) to collect and analyze experiential data and information from a cross-section of U.S. transit agencies with varying degrees of compressed natural gas (CNG) bus and station experience. This information will be used to assist DOE and NREL in determining areas of success and areas where further technical or other assistance might be required, and to assist them in focusing on areas judged by the CNG transit community as priority items.
Cottam, S; Paul, S N; Doughty, O J; Carpenter, L; Al-Mousawi, A; Karvounis, S; Done, D J
2011-09-01
Introduction. Hearing voices occurs in people without psychosis. Why hearing voices is such a key pathological feature of psychosis whilst remaining a manageable experience in nonpsychotic people is yet to be understood. We hypothesised that religious voice hearers would interpret voices in accordance with their beliefs and therefore experience less distress. Methods. Three voice hearing groups, which comprised: 20 mentally healthy Christians, 15 Christian patients with psychosis, and 14 nonreligious patients with psychosis. All completed (1) questionnaires with rating scales measuring the perceptual and emotional aspects of hallucinated voices, and (2) a semistructured interview to explore whether religious belief is used to make sense of the voice hearing experience. Results. The three groups had perceptually similar experiences when hearing the voices. Mentally healthy Christians appeared to assimilate the experience with their religious beliefs (schematic processing) resulting in positive interpretations. Christian patients tended not to assimilate the experience with their religious beliefs, frequently reporting nonreligious interpretations that were predominantly negative. Nearly all participants experienced voices as powerful, but mentally healthy Christians reported the power of voices positively. Conclusion. Religious belief appeared to have a profound, beneficial influence on the mentally healthy Christians' interpretation of hearing voices, but had little or no influence in the case of Christian patients.
Potential Experimental Topics for EGS Collab Experiment 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; Mattson, Earl; Blankenship, Douglas
To facilitate the success of FORGE, the DOE GTO has initiated a new research effort, the EGS Collab project, which will utilize readily accessible underground facilities that can refine our understanding of rock mass response to stimulation and provide a test bed at intermediate (~10 m) scale for the validation of thermal-hydrological-mechanical-chemical modeling approaches as well as novel monitoring tools. The first two experiments, Experiments 1 and 2, are planned to be performed under different stress/fracture conditions and will evaluate different stimulation processes: Experiment 1 will focus on hydrofracturing of a competent rock mass, while Experiment 2 will concentrate on hydroshearing of a rock mass that contains natural fractures. Experiment 3, scheduled to begin in 2019, will build on the lessons learned in Experiments 1 and 2 and will investigate alternate stimulation and operation methods to improve heat extraction in an EGS reservoir. This paper evaluates experiments that could potentially be conducted in Experiment 3. The two technical parameters defining energy extracted from EGS reservoirs with the highest economic uncertainty and risk are the production well flow rate and the reservoir thermal drawdown rate. A review of historical and currently ongoing EGS studies found that over half of the projects identified heat extraction challenges during their operation associated with these two parameters, as well as some additional secondary issues. At present, no EGS reservoir has continuously produced flow rates on the order of 80 kg/s. Short circuiting (i.e., early thermal breakthrough) has been identified in numerous cases. In addition, working fluid loss (i.e., the difference between injected and extracted fluid mass, relative to the injected mass) has been as high as 90%.
Finally, the engineering aspects of operating a true EGS multi-fracture reservoir, such as repairing or modifying fractures and controlling working fluid fluxes within multiple fractures for effective EGS fracture management, have not been sufficiently studied. To examine issues such as these, EGS Collab Experiment 3 may be conducted in the testbeds prepared for Experiments 1 and 2 by improving the previously performed stimulations, or conducted at a new site using an alternate stimulation method. Potential experiments may include using different stimulation and working fluids, evaluating different stimulation methods, using proppants to enhance permeability, and other high-risk, high-reward methods that can be evaluated in the 10-m-scale environment.
Occurrence and prediction of sigma phase in fuel cladding alloys for breeder reactors. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anantatmula, R.P.
1982-01-01
In sodium-cooled fast reactor systems, fuel cladding materials will be exposed for several thousand hours to liquid sodium. Satisfactory performance of the materials depends in part on the sodium compatibility and phase stability of the materials. This paper mainly deals with the phase stability aspect, with particular emphasis on sigma phase formation of the cladding materials upon extended exposures to liquid sodium. A new method of predicting sigma phase formation is proposed for austenitic stainless steels and predictions are compared with the experimental results on fuel cladding materials. Excellent agreement is obtained between theory and experiment. The new method is different from the empirical methods suggested for superalloys and does not suffer from the same drawbacks. The present method uses the Fe-Cr-Ni ternary phase diagram for predicting the sigma-forming tendencies and exhibits a wide range of applicability to austenitic stainless steels and heat-resistant Fe-Cr-Ni alloys.
Ratiometric Raman Spectroscopy for Quantification of Protein Oxidative Damage
Jiang, Dongping; Yanney, Michael; Zou, Sige; Sygula, Andrzej
2009-01-01
A novel ratiometric Raman spectroscopic (RMRS) method has been developed for quantitative determination of protein carbonyl levels. Oxidized bovine serum albumin (BSA) and oxidized lysozyme were used as model proteins to demonstrate this method. The technique involves conjugation of protein carbonyls with dinitrophenyl hydrazine (DNPH), followed by drop coating deposition Raman (DCDR) spectral acquisition. The RMRS method is easy to implement as it requires only one conjugation reaction and a single spectral acquisition, and does not require sample calibration. Characteristic peaks from both the protein and DNPH moieties are obtained in a single spectral acquisition, allowing the protein carbonyl level to be calculated from the peak intensity ratio. Detection sensitivity for the RMRS method is ~0.33 pmol carbonyl/measurement. Fluorescence- and/or immunoassay-based techniques only detect a signal from the labeling molecule and thus yield no structural or quantitative information for the modified protein, while the RMRS technique provides protein identification and protein carbonyl quantification in a single experiment. PMID:19457432
Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle
NASA Astrophysics Data System (ADS)
Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun
2018-05-01
The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates roughness modeling behavior with the γ-Reθ shear stress transport model, which includes modeling of flow transition and surface roughness effects. The roughness effects are modeled to simulate sand grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour as part of an automatic design evaluation process is presented. A Design of Experiments (DoE) was performed and a surrogate model was built by the Kriging method. The new nacelle design process demonstrates that significant improvements of both the mean and the variance of the efficiency are achieved, and that the proposed method can be applied successfully to laminar flow nacelle design.
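The DoE-plus-surrogate step can be illustrated generically. The sketch below is a stand-in: the paper uses a Kriging surrogate driven by a real CFD solver, both replaced here with simple placeholders (`expensive_cfd` is an invented toy response, and the surrogate is a quadratic response surface fitted by least squares rather than Kriging):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_cfd(x):
    """Toy stand-in for one non-deterministic CFD evaluation."""
    return 1.0 + 0.5 * x[0] - 0.3 * x[1] + 0.2 * x[0] * x[1]

# Design of experiments: random samples over the 2D design space [0, 1]^2
X = rng.random((20, 2))
y = np.array([expensive_cfd(x) for x in X])

def features(X):
    """Quadratic response-surface basis: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Fit the surrogate once; the optimizer then queries it instead of the solver
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
pred = features(X) @ coef
```

The optimizer (EGO in the paper) would then search this cheap surrogate rather than the expensive solver, refitting as new design points are evaluated.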
Simplified Dynamic Analysis of Grinders Spindle Node
NASA Astrophysics Data System (ADS)
Demec, Peter
2014-12-01
The contribution deals with the simplified dynamic analysis of a surface grinding machine spindle node. The dynamic analysis is based on the transfer matrix method, which is essentially a matrix form of the method of initial parameters. The advantage of the described method, despite its seemingly complex mathematical apparatus, is primarily that it does not require costly commercial finite element software to solve the problem. All calculations can be made, for example, in MS Excel, which is advantageous especially in the initial stages of designing the spindle node, for a rapid assessment of the suitability of its design. After the entire structure of the spindle node is detailed, it is then necessary to perform a refined dynamic analysis in an FEM environment, which requires the necessary skills and experience and is therefore economically demanding. This work was developed within grant project KEGA No. 023TUKE-4/2012 Creation of a comprehensive educational - teaching material for the article Production technique using a combination of traditional and modern information technology and e-learning.
Tu, Junchao; Zhang, Liyan
2018-01-12
A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under a machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection and 3D reconstruction experiments were conducted to test the proposed method, and good results were obtained.
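The closed-form ELM training referred to in this abstract reduces to one linear least-squares solve for the output weights after fixing a random hidden layer. The sketch below is a generic ELM on synthetic data; the control-signal-to-beam-vector mapping is invented for illustration and is not the paper's data set:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, T, n_hidden=50):
    """Extreme learning machine: random fixed hidden layer, output
    weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None) # single linear solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Invented toy mapping: 2D control signals -> 3D beam direction vectors
X = rng.random((200, 2))
T = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]])
W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta)
```

Because only `beta` is learned, training cost is that of one least-squares problem, which is the speed advantage the abstract mentions over iterative model fitting.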
34 CFR 643.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION TALENT SEARCH How Does the Secretary Make a... under its expiring Talent Search project. This information includes performance reports, audit reports... reentry of participants to programs of postsecondary education. (Approved by the Office of Management and...
Tavakoli, Behnoosh; Zhu, Quing
2013-01-01
Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
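The two-step strategy (a global search seeding a local gradient method) can be sketched on a toy misfit function. Everything here is a simplification: the objective stands in for the DOT data misfit, plain random search stands in for the genetic algorithm, and simple gradient descent stands in for conjugate gradients:

```python
import numpy as np

rng = np.random.default_rng(2)

def misfit(p):
    """Toy multimodal objective standing in for the DOT data misfit."""
    return np.sin(3 * p[0]) * np.cos(3 * p[1]) + (p[0] - 0.6) ** 2 + (p[1] + 0.4) ** 2

# Step 1: global search over the parameter box (GA stand-in)
candidates = rng.uniform(-2, 2, size=(2000, 2))
x0 = min(candidates, key=misfit)

# Step 2: local refinement from the global estimate
def grad(p, h=1e-6):
    """Central-difference gradient of the misfit."""
    e = np.eye(2) * h
    return np.array([(misfit(p + e[i]) - misfit(p - e[i])) / (2 * h) for i in range(2)])

x = x0.copy()
for _ in range(500):
    x -= 0.05 * grad(x)
```

The point of the seeding step is that the local method, started from `x0`, refines within the correct basin instead of stalling at a spurious local minimum.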
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
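As a small illustration of the basis machinery (not the authors' full exponentials-plus-harmonics algorithm), a smooth decaying harmonic of the kind arising in fluorescence-lifetime data can be represented noniteratively by a single least-squares projection onto discrete Chebyshev polynomials; the signal and degree here are arbitrary choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Decaying harmonic (exponential times cosine) sampled at Chebyshev nodes
t = np.cos(np.pi * (np.arange(64) + 0.5) / 64)
y = np.exp(-1.5 * (t + 1)) * np.cos(6 * t)

# Discrete Chebyshev transform: one non-iterative least-squares pass
coeffs = C.chebfit(t, y, deg=20)
y_hat = C.chebval(t, coeffs)
```

Because the transform is linear and single-pass, evaluating it per pixel is cheap enough for image-rate analysis, which is the speed argument the abstract makes.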
Magnetic Stirrer Method for the Detection of Trichinella Larvae in Muscle Samples.
Mayer-Scholl, Anne; Pozio, Edoardo; Gayda, Jennifer; Thaben, Nora; Bahn, Peter; Nöckler, Karsten
2017-03-03
Trichinellosis is a debilitating disease in humans and is caused by the consumption of raw or undercooked meat of animals infected with the nematode larvae of the genus Trichinella. The most important sources of human infections worldwide are game meat and pork or pork products. In many countries, the prevention of human trichinellosis is based on the identification of infected animals by means of the artificial digestion of muscle samples from susceptible animal carcasses. There are several methods based on the digestion of meat but the magnetic stirrer method is considered the gold standard. This method allows the detection of Trichinella larvae by microscopy after the enzymatic digestion of muscle samples and subsequent filtration and sedimentation steps. Although this method does not require special and expensive equipment, internal controls cannot be used. Therefore, stringent quality management should be applied throughout the test. The aim of the present work is to provide detailed handling instructions and critical control points of the method to analysts, based on the experience of the European Union Reference Laboratory for Parasites and the National Reference Laboratory of Germany for Trichinella.
An extended algebraic reconstruction technique (E-ART) for dual spectral CT.
Zhao, Yunsong; Zhao, Xing; Zhang, Peng
2015-03-01
Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. But it is generally difficult to reconstruct images from polychromatic projections acquired by DSCT, because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system problem, and then extends the classic ART method to solve the nonlinear system. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for different X-ray spectra. Another feature is its high degree of parallelism, which makes the method suitable for acceleration on GPUs (graphics processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High-quality images are reconstructed with the proposed method from the polychromatic projections of DSCT. The reconstructed images are still satisfactory even if there are certain errors in the estimated X-ray spectra.
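For reference, the classic linear ART that this paper extends is the Kaczmarz row-action scheme. A minimal dense-matrix sketch on a toy consistent system (standing in for monochromatic projection data; the nonlinear polychromatic extension is not shown) is:

```python
import numpy as np

def art(A, b, n_sweeps=500, relax=1.0):
    """Classic ART (Kaczmarz): sweep over the rows of A, projecting the
    current estimate onto each ray's hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x = x + relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(12, 6))   # toy system matrix standing in for ray sums
x_true = rng.random(6)
b = A @ x_true                 # consistent "monochromatic" data
x_rec = art(A, b)
```

The extension in the paper replaces the linear hyperplane projection with an update derived from the nonlinear polychromatic projection model, while keeping the same row-by-row (hence highly parallelizable) structure.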
Fast 3D elastic micro-seismic source location using new GPU features
NASA Astrophysics Data System (ADS)
Xue, Qingfeng; Wang, Yibo; Chang, Xu
2016-12-01
In this paper, we describe new GPU features and their applications in passive seismic (micro-seismic) location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Unlike traditional ray-based methods, wave equation methods, such as the one used in this paper, have a remarkable advantage in adapting to low signal-to-noise conditions and do not need a person to select the data. However, because of their conspicuous computational cost, these methods are not widely used in industrial fields. To make the method practical, we implement imaging-like wave equation micro-seismic location in 3D elastic media and use GPUs to accelerate our algorithm. We also introduce some new GPU features into the implementation to solve the data transfer and GPU utilization problems. Numerical and field data experiments show that our method can achieve a more than 30% performance improvement in the GPU implementation just by using these new features.
Animal models for studying transport across the blood-brain barrier.
Bonate, P L
1995-01-01
There are many reasons for wishing to determine the rate of uptake of a drug from blood into brain parenchyma. However, when faced with doing so for the first time, choosing a method can be a formidable task. There are at least 7 methods from which to choose: indicator dilution, brain uptake index, microdialysis, external registration, PET scanning, in situ perfusion, and compartmental modeling. Each method has advantages and disadvantages. Some methods require very little equipment while others require equipment that can cost millions of dollars. Some methods require very little technical experience whereas others require complex surgical manipulation. The mathematics alone for the various methods range from simple algebra to complex integral calculus and differential equations. Like most things in science, as the complexity of the technique increases, so does the quantity of information it provides. This review is meant to serve as a starting point for the researcher who wishes to study transport and uptake across the blood-brain barrier in animal models. An overview of the mathematical theory, as well as an introduction to the techniques, is presented.
Erren, Thomas C; Shaw, David M; Wild, Ursula; Groß, J Valérie
2017-01-01
A thought experiment places Henry Ford and Thomas Alva Edison in a modern regulatory environment. In a utopian occupational world devoid of night-shifts or artificial light, Ford wants to experiment with "working through the night". To support Ford's project, Edison offers his patented electric lamps to "turn nights into days". An ethics committee [EC] does not approve the night-work experiment and Utopia's Food and Drug Administration [FDA] does not approve the potential medical device as safe for use by humans. According to the EC and FDA, complex effects on circadian biology and thus safety of work and light at night are not understood. The thought experiment conveys that we should pay more attention to possible risks of work and light at chronobiologically unusual times.
Geophysical Fluid Dynamics Laboratory Open Days at the Woods Hole Oceanographic Institution
NASA Astrophysics Data System (ADS)
Hyatt, Jason; Cenedese, Claudia; Jensen, Anders
2015-11-01
This event was hosted for one week in each of two consecutive years, 2013 and 2014. It targeted postdocs, graduate students, K-12 students, and local community participation. The Geophysical Fluid Dynamics Laboratory at the Woods Hole Oceanographic Institution hosted 10 hands-on demonstrations and displays, with something for all ages, to share the excitement of fluid mechanics and oceanography. The demonstrations/experiments spanned as many fluid mechanics problems as possible in all fields of oceanography and gave insight into using fluids laboratory experiments as a research tool. The chosen experiments were 'simple' yet exciting for a 6-year-old child, a high school student, a graduate student, and a postdoctoral fellow from different disciplines within oceanography. The laboratory is a perfect environment in which to create excitement and stimulate curiosity. Even what we consider 'simple' experiments can fascinate and generate interesting questions from both a 6-year-old child and a physics professor. How does an avalanche happen? How does a bathtub vortex form? What happens to waves when they break? How does a hurricane move? Hands-on activities in the fluid dynamics laboratory helped students of all ages in answering these and other intriguing questions. The laboratory experiments/demonstrations were accompanied by 'live' videos to assist in the interpretation of the demonstrations. Posters illustrated the oceanographic/scientific applicability and the locations on Earth where the dynamics in the experiments occur. Support was given by the WHOI Doherty Chair in Education.
Properties of pTRM in Multidomain Grains and Their Implications for Palaeointensity Measurements
NASA Astrophysics Data System (ADS)
Biggin, A. J.; Michalk, D. M.
2009-05-01
As a consequence of their ubiquity in natural materials, much effort has been expended on trying to understand how 'multidomain' (sensu lato) grains behave in palaeointensity experiments. The known properties of multidomain thermoremanence (MD TRM) will be reviewed here and their implications for various types of palaeointensity experiments will be considered. The Dekkers-Boehnel and (quasi-)perpendicular palaeointensity methods tend to produce more accurate measurements from samples containing MD remanences than do Thellier-Thellier protocols. This is because they apply only a single type of thermal remagnetisation treatment and avoid the interleaving of demagnetisation and remagnetisation treatments, which always produces non-ideal behaviour when MD grains are present in the sample. However, this benefit of using a single-heating technique does not apply if the TRM of the sample being measured carries a secondary (e.g. viscous) overprint. A kinematic model of MD TRM predicts that, if a substantial demagnetisation treatment is required to isolate the primary TRM of a sample, then even single-heating methods will produce non-ideal behaviour in the experiment. This effect probably explains why some recently made palaeointensity measurements performed using the Dekkers-Boehnel method on Mexican lavas appeared to produce over-high results. One way around this problem might be to perform the measurements of the remanence in the experiment at temperature, instead of always cooling the sample to room temperature. This could enable the optimal experimental behaviour to be preserved in spite of a significant overprint, but it requires specialist equipment which is not available in all labs. In many palaeointensity experiments, it is simply not possible to avoid all the non-ideal effects associated with MD grains.
Furthermore, there is the potential for sources of bias other than MD effects to impact a palaeointensity experiment (thermochemical alteration being the most obvious), and the design of the experiment should take these into account as well. Nonetheless, there are some steps that can be followed in any experiment to reduce the bias that MD effects might introduce into the palaeointensity, and these will be outlined.
A Numerical, Literal, and Converged Perturbation Algorithm
NASA Astrophysics Data System (ADS)
Wiesel, William E.
2017-09-01
The KAM theorem and von Zeipel's method are applied to a perturbed harmonic oscillator, and it is noted that the KAM methodology does not allow for necessary frequency or angle corrections, while von Zeipel's does. The KAM methodology can be carried out with purely numerical methods, since its generating function does not contain momentum dependence. The KAM iteration is extended to allow for frequency and angle changes, and in the process can apparently be applied successfully to degenerate systems normally ruled out by the classical KAM theorem. Convergence is observed to be geometric, not exponential, but it proceeds smoothly to machine precision. The algorithm produces a converged perturbation solution by numerical methods, while still retaining literal variable dependence, at least in the vicinity of a given trajectory.
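The role of the frequency corrections discussed above can be illustrated on a classical textbook case rather than the author's algorithm: the first-order Lindstedt-Poincaré frequency correction for a Duffing-type perturbed harmonic oscillator, checked against direct numerical integration. The parameter values are arbitrary.

```python
# Perturbed harmonic oscillator x'' + x + eps*x**3 = 0 (Duffing form).
# First-order Lindstedt-Poincare gives omega = sqrt(1 + 0.75*eps*a**2);
# we verify it against an RK4-measured period.  Illustrative only.

import math

def duffing_half_period(eps, a, dt=1e-4):
    """Integrate from (x=a, v=0) with RK4; return the time of the first
    upward velocity zero-crossing, which is half the period T/2."""
    def acc(x):
        return -(x + eps * x**3)
    x, v, t = a, 0.0, 0.0
    while True:
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
        x_new = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v_new = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
        if v < 0.0 <= v_new:          # velocity crosses zero upward at T/2
            return t
        x, v = x_new, v_new

eps, a = 0.1, 1.0
T_numeric = 2.0 * duffing_half_period(eps, a)
omega_corrected = math.sqrt(1.0 + 0.75 * eps * a**2)  # first-order correction
T_perturb = 2.0 * math.pi / omega_corrected
print(T_numeric, T_perturb)
```

Without the frequency correction (omega = 1), the predicted period would be off by several percent; with it, the two periods agree to well under one percent, which is the kind of correction the abstract argues must be accommodated.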
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-12-01
The purpose of this Handbook is to establish general training program guidelines for training personnel in developing training for operation, maintenance, and technical support personnel at Department of Energy (DOE) nuclear facilities. Table-top job analysis (TTJA) is not the only method of job analysis; however, when conducted properly, TTJA can be cost effective, efficient, and self-validating, and represents an effective method of defining job requirements. Table-top job analysis is suggested in the DOE Training Accreditation Program manuals as an acceptable alternative to traditional methods of analyzing job requirements. DOE 5480-20A strongly endorses and recommends it as the preferred method for analyzing jobs for positions addressed by the Order.
ERIC Educational Resources Information Center
Barton, Georgina M.; Hartwig, Kay A.; Cain, Melissa
2015-01-01
This paper explores the practicum experience of international students studying in a teacher education course. Much research has investigated the experience of international students during their degree experience but there is limited research that has addressed the practicum; a key component of teacher education. The research that does exist…
Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P
2013-01-01
Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by a lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports.
In addition, ISPOR members contributed to developing a consensus report by submitting written comments during the review process and oral comments during two forum presentations at the ISPOR 16th and 17th Annual International Meetings held in Baltimore (2011) and Washington, DC (2012). Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
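One elementary step in constructing such designs, enumerating the full factorial candidate set from attributes and levels, can be sketched as follows. The attributes and levels are hypothetical examples, not drawn from the report.

```python
# Enumerating the full factorial candidate set for a discrete-choice
# experiment.  The attributes and levels below are made-up examples.

from itertools import product

attributes = {
    "cost":      [10, 25, 50],
    "wait_time": ["1 week", "1 month"],
    "efficacy":  ["60%", "80%", "95%"],
}

# Every combination of one level per attribute is one candidate profile.
profiles = [dict(zip(attributes, combo))
            for combo in product(*attributes.values())]

print(len(profiles))   # 3 * 2 * 3 = 18 candidate profiles

# In the full factorial, each level appears equally often (level balance).
cost_count = [p["cost"] for p in profiles].count(10)
print(cost_count)      # 18 / 3 = 6
```

Practical designs are usually fractions of this candidate set chosen to identify the choice-model parameters efficiently, which is the problem the report's comparison of design approaches addresses.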
Does retrieval practice enhance learning and transfer relative to restudy for term-definition facts?
Pan, Steven C; Rickard, Timothy C
2017-09-01
In many pedagogical contexts, term-definition facts that link a concept term (e.g., "vision") with its corresponding definition (e.g., "the ability to see") are learned. Does retrieval practice involving retrieval of the term (given the definition) or the definition (given the term) enhance subsequent recall, relative to restudy of the entire fact? Moreover, does any benefit of retrieval practice for the term transfer to later recall of the definition, or vice versa? We addressed those questions in 4 experiments. In each, subjects first studied term-definition facts and then trained on two thirds of the facts using multiple-choice tests with feedback. Half of the test questions involved recalling terms; the other half involved recalling definitions. The remaining facts were either not trained (Experiment 1) or restudied (Experiments 2-4). A 48-hr delayed multiple-choice (Experiments 1-2) or short answer (Experiments 3a-4) final test assessed recall of all terms or all definitions. Replicating and extending prior research, retrieval practice yielded improved recall and positive transfer relative to no training. Relative to restudy, however, retrieval practice consistently enhanced subsequent term retrieval, enhanced subsequent definition retrieval only after repeated practice, and consistently yielded at best minimal positive transfer in either direction. Theoretical and practical implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Final report for the DOE Early Career Award #DE-SC0003912
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayaraman, Arthi
This DOE-supported early career project was aimed at developing computational models, theory, and simulation methods that would then be used to predict assembly and morphology in polymer nanocomposites. In particular, the focus was on composites in active layers of devices, containing conducting polymers that act as electron donors and nanoscale additives that act as electron acceptors. During the course of this work, we developed first-of-their-kind molecular models to represent conducting polymers, enabling simulations at the experimentally relevant length and time scales. We validated these models by comparison with experimentally observed morphologies. Furthermore, using these models and molecular dynamics simulations on graphical processing units (GPUs), we predicted the molecular-level design features in polymers and additives that lead to morphologies with optimal features for charge-carrier behavior in solar cells. Additionally, we computationally predicted new design rules for better dispersion of additives in polymers, which have been confirmed through experiments. Achieving dispersion in polymer nanocomposites is valuable for controlling the macroscopic properties of the composite. The results obtained during the course of this DOE-funded project enable optimal design of higher-efficiency organic electronic and photovoltaic devices and, through the engineering of such devices, improvements in everyday life.
David, Matthias; Borde, Theda; Siedentopf, Friederike
2012-06-01
How large is the number of immigrant women being treated for hyperemesis gravidarum (HG) among the in-patients of a university hospital in Germany? Does migration have an impact on the psychosocial state of HG patients? Does acculturation have an impact on psychosocial distress in HG patients? The following methods were used: a retrospective evaluation of all in-patients with HG from 1/1997 to 11/2009; an inquiry of a consecutively surveyed group (from 2007 to 2009) of HG in-patients with a questionnaire set comprising socio-demographic data, a questionnaire on psychic distress (SCL-90-R), and a questionnaire on migration/acculturation; and a comparison of German patients and patients with immigration backgrounds, as well as among immigrant groups. During the 13-year study period, 4.5 times more immigrants were treated for HG than native German patients. Compared to the age-standardized resident population, the number of women with immigration backgrounds is over-proportionally high. The HG patients scored high on the SCL-90-R scale "somatization" without showing a higher level of psychic distress than the native patients. Experience of migration is an etiological cofactor for HG. The grade of acculturation does not have a significant influence on the psychic well-being of HG patients.
A numerical analysis of a magnetocaloric refrigerator with a 16-layer regenerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Mingkan; Abdelaziz, Omar; Momen, Ayyoub Mehdizadeh
A numerical analysis was conducted to study a room temperature magnetocaloric refrigerator with a 16-layer parallel plates active magnetic regenerator (AMR). Sixteen layers of LaFeMnSiH having different Curie temperatures were employed as magnetocaloric material (MCM) in the regenerator. Measured properties data was used. A transient one dimensional (1D) model was employed, in which a unique numerical method was developed to significantly accelerate the simulation speed of the multi-layer AMR system. As a result, the computation speed of a multi-layer AMR case was very close to the single-layer configuration. The performance of the 16-layer AMR system in different frequencies and utilizations has been investigated using this model. To optimize the layer length distribution of the 16-layer MCMs in the regenerator, a set of 137 simulations with different MCM distributions based on the Design of Experiments (DoE) method was conducted and the results were analyzed. The results show that the 16-layer AMR system can operate up to 84% of Carnot cycle COP at a temperature span of 41 K, which cannot be obtained using an AMR with fewer layers. Here, the DoE results indicate that for a 16-layer AMR system, the uniform distribution is very close to the optimized design.
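How candidate layer-length distributions for a DoE sweep of this kind might be generated can be sketched as follows. The sampling scheme (random positive weights normalized to sum to one) is an assumption for illustration only; the study's actual design matrix is not reproduced here.

```python
# Sketch: generating 137 candidate layer-length distributions for a
# 16-layer regenerator DoE study.  The sampling scheme is an assumption.

import random

def random_layer_fractions(n_layers=16, rng=random.Random(42)):
    """Draw positive weights and normalize so the fractions sum to 1.
    Note: the default rng is shared across calls, so successive calls
    continue the same random sequence and yield distinct rows."""
    w = [rng.random() + 1e-9 for _ in range(n_layers)]
    s = sum(w)
    return [x / s for x in w]

design = [random_layer_fractions() for _ in range(137)]

# The uniform distribution, which the study found near-optimal, is the
# natural baseline case to include in such a sweep.
uniform = [1.0 / 16.0] * 16
print(len(design), sum(uniform))
```

Each row is a valid layer-length allocation (non-negative fractions summing to one), which is the constraint any such design over 16 layers must respect.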
Subjective measures of unconscious knowledge.
Dienes, Zoltán
2008-01-01
The chapter gives an overview of the use of subjective measures of unconscious knowledge. Unconscious knowledge is knowledge we have, and could very well be using, but we are not aware of. Hence appropriate methods for indicating unconscious knowledge must show that the person (a) has knowledge but (b) does not know that she has it. One way of determining awareness of knowing is by taking confidence ratings after making judgments. If the judgments are above baseline but the person believes they are guessing (guessing criterion) or confidence does not relate to accuracy (zero-correlation criterion) there is evidence of unconscious knowledge. The way these methods can deal with the problem of bias is discussed, as is the use of different types of confidence scales. The guessing and zero-correlation criteria show whether or not the person is aware of knowing the content of the judgment, but not whether the person is aware of what any knowledge was that enabled the judgment. Thus, a distinction is made between judgment and structural knowledge, and it is shown how the conscious status of the latter can also be assessed. Finally, the use of control over the use of knowledge as a subjective measure of judgment knowledge is illustrated. Experiments using artificial grammar learning and a serial reaction time task explore these issues.
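The guessing and zero-correlation criteria described above can be sketched on synthetic trial data. The data, the binary "guess" confidence coding, and the simple confidence contrast standing in for the zero-correlation criterion are illustrative assumptions, not taken from the chapter.

```python
# Illustrative sketch of two subjective measures of unconscious knowledge.
# Trial data and coding choices are synthetic assumptions.

def guessing_criterion(trials, chance=0.5):
    """Accuracy on trials the subject labelled as a pure guess
    (confidence == 0), expressed relative to chance."""
    guesses = [t for t in trials if t["confidence"] == 0]
    if not guesses:
        return None
    acc = sum(t["correct"] for t in guesses) / len(guesses)
    return acc - chance            # > 0 suggests unconscious knowledge

def zero_correlation_criterion(trials):
    """Mean confidence when correct minus mean confidence when wrong.
    A value near zero means confidence does not track accuracy."""
    right = [t["confidence"] for t in trials if t["correct"]]
    wrong = [t["confidence"] for t in trials if not t["correct"]]
    if not right or not wrong:
        return None
    return sum(right) / len(right) - sum(wrong) / len(wrong)

# Synthetic example: above-chance accuracy while claiming to guess.
trials = ([{"confidence": 0, "correct": True}] * 13 +
          [{"confidence": 0, "correct": False}] * 7)
print(guessing_criterion(trials))        # 13/20 - 0.5 = 0.15
print(zero_correlation_criterion(trials))
```

Here accuracy is 15 percentage points above chance while every trial was labelled a guess, the pattern the chapter treats as evidence of unconscious judgment knowledge.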
Enhancing detection of steady-state visual evoked potentials using individual training data.
Wang, Yijun; Nakanishi, Masaki; Wang, Yu-Te; Jung, Tzyy-Ping
2014-01-01
Although the performance of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) has improved gradually over the past decades, it still does not meet the requirement of a high communication speed in many applications. A major challenge is the interference of spontaneous background EEG activities in discriminating SSVEPs. An SSVEP BCI using frequency coding typically does not have a calibration procedure, since the frequency of SSVEPs can be recognized by power spectral density analysis (PSDA). However, the detection rate can be degraded by spontaneous EEG activities within the same frequency range, because phase information of SSVEPs is ignored in frequency detection. To address this problem, this study proposed to incorporate individual SSVEP training data into canonical correlation analysis (CCA) to improve the frequency detection of SSVEPs. An eight-class SSVEP dataset recorded from 10 subjects in a simulated online BCI experiment was used for performance evaluation. Compared to the standard CCA method, the proposed method obtained significantly improved detection accuracy (95.2% vs. 88.4%, p<0.05) and information transfer rates (ITRs) (104.6 bits/min vs. 89.1 bits/min, p<0.05). The results suggest that the use of individual SSVEP training data can significantly improve the detection rate and thereby facilitate the implementation of a high-speed BCI.
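A simplified, single-channel illustration of reference-based SSVEP frequency detection is sketched below; with one channel, the standard CCA reduces to a least-squares fit of sine/cosine references. The study's actual contribution, folding individual training data into CCA, is not reproduced, and all signal parameters are invented.

```python
# Single-channel sketch of SSVEP frequency detection with sinusoidal
# references (the one-channel special case of the standard CCA method).
# Signal parameters are synthetic assumptions.

import numpy as np

def detect_frequency(signal, fs, candidate_freqs, n_harmonics=2):
    """Return the candidate frequency whose sine/cosine reference set
    explains the most variance in the signal."""
    t = np.arange(len(signal)) / fs
    best_f, best_r2 = None, -1.0
    for f in candidate_freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs += [np.sin(2*np.pi*h*f*t), np.cos(2*np.pi*h*f*t)]
        X = np.column_stack(refs)
        coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
        resid = signal - X @ coef
        r2 = 1.0 - resid.var() / signal.var()
        if r2 > best_r2:
            best_f, best_r2 = f, r2
    return best_f

# One second of a noisy 12 Hz "SSVEP" at 250 Hz sampling.
fs = 250.0
t = np.arange(int(fs)) / fs
eeg = (np.sin(2*np.pi*12.0*t)
       + 0.5*np.random.default_rng(0).standard_normal(t.size))
detected = detect_frequency(eeg, fs, [8.0, 10.0, 12.0, 15.0])
print(detected)
```

Spontaneous EEG components near a stimulation frequency inflate the fit for the wrong candidate, which is exactly the weakness the paper's training-data-augmented CCA targets.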
NASA Astrophysics Data System (ADS)
Zhuang, Jyun-Rong; Lee, Yee-Ting; Hsieh, Wen-Hsin; Yang, An-Shik
2018-07-01
Selective laser melting (SLM) shows a positive prospect as an additive manufacturing (AM) technique for the fabrication of 3D parts with complicated structures. A transient thermal model was developed with the finite element method (FEM) to simulate the thermal behavior, predicting the time evolution of the temperature field and the melt pool dimensions of Ti6Al4V powder during SLM. The FEM predictions were then compared with published experimental measurements and calculation results for model validation. This study applied a design of experiments (DOE) scheme together with the response surface method (RSM) to conduct a regression analysis based on four processing parameters (namely, the laser power, scanning speed, preheating temperature, and hatch space) for predicting the dimensions of the melt pool in SLM. The preliminary RSM results were used to quantify the effects of those parameters on the melt pool size. A process window was then constructed from two criteria on the width and depth of the molten pool, screening out impractical combinations of the four parameters and delimiting their practical ranges. The FEM simulations confirmed the good accuracy of the critical RSM models in predicting melt pool dimensions for three typical SLM working scenarios.
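The response-surface step can be sketched for two of the four parameters on synthetic data. The generating surface, noise level, and parameter ranges below are assumptions for illustration, not values from the study.

```python
# Sketch of a second-order response-surface fit in two processing
# parameters (laser power P, scan speed v) against melt-pool widths.
# The "true" surface, noise, and ranges are synthetic assumptions.

import numpy as np

rng = np.random.default_rng(1)
P = rng.uniform(100, 300, 40)     # laser power, W (assumed range)
v = rng.uniform(0.2, 1.2, 40)     # scan speed, m/s (assumed range)

# Assumed generating surface for the observations, plus measurement noise.
width = 50 + 0.4*P - 30*v + 0.001*P**2 + 10*v**2 - 0.05*P*v
width += rng.normal(0, 0.5, P.size)

# Design matrix for the full quadratic (second-order) RSM model.
X = np.column_stack([np.ones_like(P), P, v, P**2, v**2, P*v])
coef, *_ = np.linalg.lstsq(X, width, rcond=None)

pred = X @ coef
r2 = 1 - ((width - pred)**2).sum() / ((width - width.mean())**2).sum()
print(r2)
```

Once fitted, such a polynomial surface is cheap to evaluate everywhere in the parameter space, which is what makes the process-window screening over all four parameters tractable.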
ERIC Educational Resources Information Center
Rice, Jennifer King
2013-01-01
Teacher experience has long been a central pillar of teacher workforce policies in U.S. school systems. The underlying assumption behind many of these policies is that experience promotes effectiveness, but is this really the case? What does existing evidence tell us about how, why, and for whom teacher experience matters? This policy brief…
Adolescent Sexual Debut and Later Delinquency
ERIC Educational Resources Information Center
Armour, Stacy; Haynie, Dana L.
2007-01-01
Does sexual debut (i.e., experiencing sexual intercourse for the first time) increase the risks of participating in later delinquent behavior? Does this risk increase if adolescents experience early sexual debut relative to the timing experienced by one's peers? Although many factors have been linked to sexual debut, little research has examined…
13 CFR 120.703 - How does an organization apply to become an Intermediary?
Code of Federal Regulations, 2010 CFR
2010-01-01
... area; (6) The applicant's experience and qualifications in providing marketing, management, and... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false How does an organization apply to become an Intermediary? 120.703 Section 120.703 Business Credit and Assistance SMALL BUSINESS...
34 CFR 664.31 - What selection criteria does the Secretary use?
Code of Federal Regulations, 2013 CFR
2013-07-01
... foreign languages and area studies. (iii) The extent to which direct experience abroad is necessary to... POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION FULBRIGHT-HAYS GROUP PROJECTS ABROAD PROGRAM How Does the... Fulbright Foreign Scholarship Board Group Projects Abroad for funding under this part. (a) Plan of operation...