Theoretical analysis of Lumry-Eyring models in differential scanning calorimetry
Sanchez-Ruiz, Jose M.
1992-01-01
A theoretical analysis of several protein denaturation models (Lumry-Eyring models) that include a rate-limited step leading to an irreversibly denatured state of the protein (the final state) has been carried out. The differential scanning calorimetry transitions predicted for these models can be broadly classified into four groups: situations A, B, C, and C′. (A) The transition is calorimetrically irreversible but the rate-limited, irreversible step takes place with significant rate only at temperatures slightly above those corresponding to the transition. Equilibrium thermodynamics analysis is permissible. (B) The transition is distorted by the occurrence of the rate-limited step; nevertheless, it contains thermodynamic information about the reversible unfolding of the protein, which could be obtained upon the appropriate data treatment. (C) The heat absorption is entirely determined by the kinetics of formation of the final state and no thermodynamic information can be extracted from the calorimetric transition; the rate-determining step is the irreversible process itself. (C′) same as C, but, in this case, the rate-determining step is a previous step in the unfolding pathway. It is shown that ligand and protein concentration effects on transitions corresponding to situation C (strongly rate-limited transitions) are similar to those predicted by equilibrium thermodynamics for simple reversible unfolding models. It has been widely held in recent literature that experimentally observed ligand and protein concentration effects support the applicability of equilibrium thermodynamics to irreversible protein denaturation. The theoretical analysis reported here disfavors this claim. PMID:19431826
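As an illustration of the strongly rate-limited limit (situation C), the sketch below simulates the apparent excess heat capacity for a minimal two-state irreversible model N → F with first-order Arrhenius kinetics under a linear temperature scan. The rate parameters and enthalpy are arbitrary assumptions chosen only to produce a representative asymmetric transition, not values from the paper.

```python
import numpy as np

# Minimal sketch: two-state irreversible model N -> F scanned at constant rate v.
# dx_N/dT = -(k(T)/v) * x_N, with Arrhenius k(T); apparent excess Cp = dH * (k/v) * x_N.
R = 8.314          # J / (mol K)
Ea = 300e3         # activation energy (assumed), J/mol
T_star = 333.0     # temperature where k = 1/min (assumed), K
dH = 400e3         # denaturation enthalpy (assumed), J/mol
v = 1.0            # scan rate, K/min

T = np.linspace(300.0, 360.0, 2000)
k = np.exp(-(Ea / R) * (1.0 / T - 1.0 / T_star))   # rate constant, 1/min (k = 1 at T_star)
# Fraction of native protein remaining: x_N(T) = exp(-(1/v) * integral of k dT)
xN = np.exp(-np.cumsum(k) * (T[1] - T[0]) / v)
Cp_excess = dH * (k / v) * xN                      # J / (mol K)

Tm_app = T[np.argmax(Cp_excess)]
print(f"apparent Tm = {Tm_app:.1f} K, peak excess Cp = {Cp_excess.max()/1e3:.1f} kJ/(mol K)")
```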
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Modeling and estimation of ground-level solar radiation face several problems: the equation of time, the Earth-Sun distance equation, the solar declination, and the calculation of surface irradiance. Because many studies have reported that these theoretical equations alone do not yield accurate radiation estimates, many authors have corrected them by calibration against field pyranometers (solarimeters) or against satellite data, the latter being a much weaker technique because it does not differentiate between radiation and radiant kinetic effects. Taking advantage of a properly calibrated ground weather station in the Susques Salar, Jujuy Province, Republic of Argentina, the variable in question was modeled through the following process: 1. theoretical modeling; 2. graphical study of the theoretical and measured data; 3. primary calibration adjustment through hourly segmentation of the data, horizontal shifting, and addition of an asymptotic constant; 4. analysis of the scatter plot and contrast of the series. These steps yielded the following modeling results. Step one: theoretical data were generated. Step two: the theoretical data were shifted by 5 hours. Step three: an asymptote was applied to all negative emissivity values, and the Excel Solver algorithm was used for least-squares minimization between measured and modeled values, giving new asymptote values and the corresponding reformulation of the theoretical data; a constant value was added by month over the set time range (4:00 pm to 6:00 pm). Step four: the coefficients of the modeling equation gave monthly correlations between measured and theoretical data ranging from 0.7 to 0.9.
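A generic sketch of the step-three least-squares adjustment is shown below, assuming a simple scale-offset-floor correction fitted to hourly data; the actual study used Excel Solver with asymptote and monthly constant terms, and the series here are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

# Generic sketch of calibrating a theoretical irradiance series against station data.
# Assumed correction form: scaled, offset theoretical curve with an asymptotic floor;
# the actual study adjusted asymptotes and monthly constants with Excel Solver.
def corrected(params, ghi_theoretical):
    scale, offset, floor = params
    return np.maximum(scale * ghi_theoretical + offset, floor)

def residuals(params, ghi_theoretical, ghi_measured):
    return corrected(params, ghi_theoretical) - ghi_measured

# Hourly series (synthetic placeholders for model output and station measurements).
hours = np.arange(24)
ghi_theoretical = np.clip(1000 * np.sin(np.pi * (hours - 6) / 12), 0, None)
ghi_measured = 0.85 * ghi_theoretical + np.random.normal(0, 20, size=hours.size)

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0],
                    args=(ghi_theoretical, ghi_measured))
r = np.corrcoef(corrected(fit.x, ghi_theoretical), ghi_measured)[0, 1]
print("fitted scale/offset/floor:", fit.x, " correlation:", round(r, 3))
```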
Scheydt, Stefan; Needham, Ian; Behrens, Johann
2017-01-01
Background: Within the scope of the research project on sensory overload and stimulus regulation, a theoretical framework model of the nursing care of patients with sensory overload in psychiatry was developed. In a second step, this theoretical model was to be theoretically condensed and, if necessary, modified. Aim: Empirical verification as well as modification, enhancement and theoretical condensation of the framework model of the nursing care of patients with sensory overload in psychiatry. Method: Analysis of 8 expert interviews by summarizing and structuring content analysis methods based on Meuser and Nagel (2009) as well as Mayring (2010). Results: The developed framework model (Scheydt et al., 2016b) could be empirically verified, theoretically condensed and extended by one category (perception modulation). Thus, four categories of nursing care of patients with sensory overload in inpatient psychiatry can be described: removal from stimuli, modulation of environmental factors, perceptual modulation, and helping patients to help themselves (coping support). Conclusions: Based on the methodological approach, a relatively well-saturated, credible conceptualization of a theoretical model for describing the nursing care of patients with sensory overload in inpatient psychiatry could be worked out. In further steps, these measures have to be further developed, implemented and evaluated regarding their efficacy.
Characteristics of camel-gate structures with active doping channel profiles
NASA Astrophysics Data System (ADS)
Tsai, Jung-Hui; Lour, Wen-Shiung; Laih, Lih-Wen; Liu, Rong-Chau; Liu, Wen-Chau
1996-03-01
In this paper, we demonstrate the influence of the channel doping profile on the performance of camel-gate field effect transistors (CAMFETs). For comparison, single and tri-step doping channel structures with identical doping-thickness products are employed, while other parameters are kept unchanged. The results of a theoretical analysis show that the single doping channel FET with a lightly doped active layer has a higher barrier height and drain-source saturation current; however, the transconductance is decreased. For a tri-step doping channel structure, it is found that the output drain-source saturation current and the barrier height are enhanced. Furthermore, the relatively voltage-independent performance is improved. Two CAMFETs with single and tri-step doping channel structures have been fabricated and discussed. The devices exhibit nearly voltage-independent transconductances of 144 mS/mm and 222 mS/mm for the single and tri-step doping channel CAMFETs, respectively. The operating gate voltage may extend to ±1.5 V for a tri-step doping channel CAMFET. In addition, drain current densities of >750 and 405 mA/mm are obtained for the tri-step and single doping CAMFETs, respectively. These experimental results are consistent with the theoretical analysis.
Step Permeability on the Pt(111) Surface
NASA Astrophysics Data System (ADS)
Altman, Michael
2005-03-01
Surface morphology will be affected, or even dictated, by kinetic limitations that may be present during growth. Asymmetric step attachment is recognized to be an important and possibly common cause of morphological growth instabilities. However, the impact of this kinetic limitation on growth morphology may be hindered by other factors such as the rate limiting step and step permeability. This strongly motivates experimental measurements of these quantities in real systems. Using low energy electron microscopy, we have measured step flow velocities in growth on the Pt(111) surface. The dependence of step velocity upon adjacent terrace width clearly shows evidence of asymmetric step attachment and step permeability. Step velocity is modeled by solving the diffusion equation simultaneously on several adjacent terraces subject to boundary conditions at intervening steps that include asymmetric step attachment and step permeability. This analysis allows a quantitative evaluation of step permeability and the kinetic length, which characterizes the rate limiting step continuously between diffusion and attachment-detachment limited regimes. This work provides information that is greatly needed to set physical bounds on the parameters that are used in theoretical treatments of growth. The observation that steps are permeable even on a simple metal surface should also stimulate more experimental measurements and theoretical treatments of this effect.
NASA Astrophysics Data System (ADS)
Liu, Lixian; Mandelis, Andreas; Huan, Huiting; Melnikov, Alexander
2016-10-01
A step-scan differential Fourier transform infrared photoacoustic spectroscopy (DFTIR-PAS) system based on a commercial FTIR spectrometer was developed theoretically and experimentally for air contaminant monitoring. The configuration comprises two identical, small-size, low-resonance-frequency T cells, satisfying the conflicting requirements of low chopping frequency and limited space in the sample compartment. Carbon dioxide (CO2) IR absorption spectra were used to demonstrate the capability of the DFTIR-PAS method to detect ambient pollutants. A linear amplitude response to CO2 concentrations from 100 to 10,000 ppmv was observed, leading to a theoretical detection limit of 2 ppmv. The differential mode was able to suppress coherent noise, thereby giving the DFTIR-PAS method a better signal-to-noise ratio and a lower theoretical detection limit than the single mode. The results indicate that it is possible to use step-scan DFTIR-PAS with T cells as a quantitative method for high-sensitivity analysis of ambient contaminants.
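A theoretical detection limit of this kind is commonly obtained by extrapolating a linear amplitude-versus-concentration calibration down to the noise floor. The sketch below shows that generic 3-sigma calculation with made-up numbers, not the paper's data.

```python
import numpy as np

# Generic detection-limit estimate from a linear PA-amplitude calibration (assumed data).
conc = np.array([100, 300, 1000, 3000, 10000], dtype=float)   # ppmv CO2 (assumed)
amp = np.array([0.21, 0.63, 2.1, 6.2, 20.5])                  # PA amplitude, a.u. (assumed)
noise_sigma = 0.0014                                           # baseline noise, a.u. (assumed)

slope, intercept = np.polyfit(conc, amp, 1)
lod_3sigma = 3 * noise_sigma / slope      # concentration giving 3x the baseline noise
print(f"slope = {slope:.2e} a.u./ppmv, 3-sigma detection limit ~ {lod_3sigma:.1f} ppmv")
```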
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Chris, E-mail: cyuan@uwm.edu; Wang, Endong; Zhai, Qiang
Temporal homogeneity of inventory data is one of the major problems in life cycle assessment (LCA). Addressing temporal homogeneity of life cycle inventory data is important in reducing the uncertainties and improving the reliability of LCA results. This paper attempts to present a critical review and discussion on the fundamental issues of temporal homogeneity in conventional LCA and propose a theoretical framework for temporal discounting in LCA. Theoretical perspectives for temporal discounting in life cycle inventory analysis are discussed first, based on the key elements of a scientific mechanism for temporal discounting. Then generic procedures for performing temporal discounting in LCA are derived and proposed based on the nature of the LCA method and the identified key elements of a scientific temporal discounting method. A five-step framework is proposed and reported in detail based on the technical methods and procedures needed to perform temporal discounting in life cycle inventory analysis. Challenges and possible solutions are also identified and discussed for the technical procedure and scientific accomplishment of each step within the framework. Highlights: • A critical review of the temporal homogeneity problem of life cycle inventory data • A theoretical framework for performing temporal discounting on inventory data • Methods provided to accomplish each step of the temporal discounting framework.
Analysis of stability for stochastic delay integro-differential equations.
Zhang, Yu; Li, Longsuo
2018-01-01
In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the split-step backward Euler method preserves mean-square stability without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. The numerical experiments further verify the theoretical results.
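A minimal sketch of the two schemes on a scalar linear delay test equation is given below (the integral term of the full equation is omitted for brevity); the coefficients, step size, and history are arbitrary assumptions, not the paper's test problem.

```python
import numpy as np

# Sketch: Euler-Maruyama vs split-step backward Euler for a scalar linear SDDE
# dX(t) = (a*X(t) + b*X(t - tau)) dt + (c*X(t) + d*X(t - tau)) dW(t)
# (the integro term of the full SDIDE is omitted here for brevity).
a, b, c, d, tau = -4.0, 1.0, 0.5, 0.2, 1.0      # assumed coefficients
h = 0.25                                        # step size; tau must be a multiple of h
m = int(round(tau / h))
n_steps, n_paths = 400, 2000
rng = np.random.default_rng(0)

def simulate(split_step):
    X = np.ones((n_paths, n_steps + m + 1))     # constant history X(t) = 1 for t <= 0
    for n in range(m, n_steps + m):
        dW = rng.normal(0.0, np.sqrt(h), n_paths)
        Xd = X[:, n - m]                        # delayed value X(t - tau)
        if split_step:                          # split-step backward Euler
            Xstar = (X[:, n] + h * b * Xd) / (1.0 - h * a)   # implicit drift step
            X[:, n + 1] = Xstar + (c * Xstar + d * Xd) * dW
        else:                                   # Euler-Maruyama
            X[:, n + 1] = X[:, n] + h * (a * X[:, n] + b * Xd) + (c * X[:, n] + d * Xd) * dW
    return np.mean(X[:, -1] ** 2)               # sample mean-square value at the end

print("E|X_T|^2  EM:", simulate(False), " SSBE:", simulate(True))
```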
Raedeke, Thomas D; Dlugonski, Deirdre
2017-12-01
This study was designed to compare a low versus high theoretical fidelity pedometer intervention applying social-cognitive theory on step counts and self-efficacy. Fifty-six public university employees participated in a 10-week randomized controlled trial with 2 conditions that varied in theoretical fidelity. Participants in the high theoretical fidelity condition wore a pedometer and participated in a weekly group walk followed by a meeting to discuss cognitive-behavioral strategies targeting self-efficacy. Participants in the low theoretical fidelity condition met for a group walk and also used a pedometer as a motivational tool and to monitor steps. Step counts were assessed throughout the 10-week intervention and after a no-treatment follow-up (20 weeks and 30 weeks). Self-efficacy was measured preintervention and postintervention. Participants in the high theoretical fidelity condition increased daily steps by 2,283 from preintervention to postintervention, whereas participants in the low fidelity condition demonstrated minimal change during the same time period (p = .002). Individuals attending at least 80% of the sessions in the high theoretical fidelity condition showed an increase of 3,217 daily steps (d = 1.03), whereas low attenders increased by 925 (d = 0.40). Attendance had minimal impact in the low theoretical fidelity condition. Follow-up data revealed that step counts were at least somewhat maintained. For self-efficacy, participants in the high, compared with those in the low, theoretical fidelity condition showed greater improvements. Findings highlight the importance of basing activity promotion efforts on theory. The high theoretical fidelity intervention that included cognitive-behavioral strategies targeting self-efficacy was more effective than the low theoretical fidelity intervention, especially for those with high attendance.
NASA Astrophysics Data System (ADS)
Liu, Y.; Gao, B.; Gong, M.
2017-06-01
In this paper, we propose the use of a step-heterojunction emitter spacer (SHES) and an InGaN sub-quantum well in AlGaN/GaN/AlGaN double-barrier resonant tunnelling diodes (RTDs). A theoretical analysis of the RTD with the SHES and the InGaN sub-quantum well is presented, indicating that the negative differential resistance (NDR) characteristic is improved. The simulation results at room temperature (peak current density JP = 82.67 mA/μm², peak-to-valley current ratio PVCR = 3.38, and intrinsic negative differential resistance RN = -0.147 Ω) verify the improvement of the NDR characteristic brought about by the SHES and the InGaN sub-quantum well. Both the theoretical analysis and the simulation results show that the device performance, especially the average oscillator output power, is greatly improved, reaching 2.77 mW/μm². The resistive cut-off frequency also benefits from the relatively small RN. Our work provides an important alternative to current approaches in designing new-structure GaN-based RTDs for practical high-frequency and high-power applications.
Exploring patient satisfaction predictors in relation to a theoretical model.
Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil
2013-01-01
The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors (person-related conditions, external objective care conditions, and patients' perception of actual care received, "PR") in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from patients' perspective; Sense of coherence; Big five personality trait; and Emotional stress reaction questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as the dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR (p < 0.05). Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistically significantly to the model. Patients rated both quality of care and satisfaction highly. The paper shows that the theoretical model, using an emotion-oriented approach to assess patient satisfaction, can explain 54 per cent of patient satisfaction in a statistically significant manner.
Kuklja, M M; Kotomin, E A; Merkle, R; Mastrikov, Yu A; Maier, J
2013-04-21
Solid oxide fuel cells (SOFCs) have been under intensive investigation since the 1980s, as these devices open the way for ecologically clean direct conversion of chemical energy into electricity, avoiding the efficiency limitation imposed by Carnot's cycle on thermochemical conversion. However, the practical development of SOFCs faces a number of unresolved fundamental problems, in particular concerning the kinetics of the electrode reactions, especially the oxygen reduction reaction. We review recent experimental and theoretical achievements in the current understanding of cathode performance by exploring and comparing mostly three materials: (La,Sr)MnO3 (LSM), (La,Sr)(Co,Fe)O3 (LSCF) and (Ba,Sr)(Co,Fe)O3 (BSCF). Special attention is paid to a critical evaluation of the advantages and disadvantages of BSCF, which shows the best cathode kinetics known so far for oxides. We demonstrate that it is the combined experimental and theoretical analysis of all major elementary steps of the oxygen reduction reaction that allows us to predict the rate-determining steps for a given material under specific operational conditions and thus to control and improve SOFC performance.
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
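For orientation, the sketch below applies the canonical equal-step three-frame phase-shifting formula (phase shifts of -2π/3, 0, +2π/3); the paper itself analyzes a variable three-step algorithm, so this is only an illustrative baseline with synthetic, assumed intensities.

```python
import numpy as np

# Canonical equal-step three-frame phase retrieval, phase shifts of -2*pi/3, 0, +2*pi/3.
# I_k = A + B*cos(phi + delta_k); the paper's variable three-step algorithm generalizes this.
def three_step_phase(i1, i2, i3):
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check on a known phase profile with additive noise (assumed values).
rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 256)
phi_true = 1.2 * np.sin(x)
A, B, sigma = 1.0, 0.6, 0.01
deltas = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
i1, i2, i3 = (A + B * np.cos(phi_true + d) + rng.normal(0, sigma, x.size) for d in deltas)
phi_est = three_step_phase(i1, i2, i3)
print("rms phase error [rad]:", np.sqrt(np.mean((phi_est - phi_true) ** 2)))
```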
Effect of deposition rate and NNN interactions on adatoms mobility in epitaxial growth
NASA Astrophysics Data System (ADS)
Hamouda, Ajmi B. H.; Mahjoub, B.; Blel, S.
2017-07-01
This paper provides a detailed analysis of the surface diffusion problem during epitaxial step-flow growth using a simple theoretical model for the diffusion equation of the adatom concentration. Within this framework, an analytical expression for the adatom mobility as a function of the deposition rate and the next-nearest-neighbor (NNN) interactions is derived and compared with the effective mobility computed from kinetic Monte Carlo (kMC) simulations. For the 'small' step velocities, or relatively weak deposition rates, commonly used for copper growth, excellent quantitative agreement with the theoretical prediction is found. The effective adatom mobility is shown to decrease exponentially with the NNN interaction strength and to increase roughly linearly with the deposition rate F. The effective step stiffness and the adatom mobility are also shown to be closely related to the concentration of kinks.
Nursing management of sensory overload in psychiatry – development of a theoretical framework model
Scheydt, Stefan; Needham, Ian; Nielsen, Gunnar H; Behrens, Johann
2016-09-01
Background: The concept of “removal from stimuli” has already been examined in a Delphi study. However, some knowledge gaps remained open, which have now been further investigated. Aim: Examination of the concept of “management of sensory overload in inpatient psychiatry”, including its sub-concepts and specific measures. Method: Analysis of qualitative data about “removal from stimuli” by content analysis according to Mayring. Results: A theoretical description and definition of the concept could be achieved. In addition, sub-concepts (removal from stimuli, modulation of environmental factors, helping patients to help themselves) could be identified, theoretically defined and complemented by possible specific measures. Conclusions: The conceptual descriptions provide a further step toward raising professionals' awareness of the subject area. Furthermore, we created a theoretical basis for further empirical studies.
Abstract Interpreters for Free
NASA Astrophysics Data System (ADS)
Might, Matthew
In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
Modeling of optical mirror and electromechanical behavior
NASA Astrophysics Data System (ADS)
Wang, Fang; Lu, Chao; Liu, Zishun; Liu, Ai Q.; Zhang, Xu M.
2001-10-01
This paper presents finite element (FE) simulation and theoretical analysis of novel MEMS fiber-optical switches actuated by electrostatic attraction. FE simulations of the switches under static and dynamic loading are first carried out to reveal the mechanical characteristics: the minimum or critical switching voltages, the natural frequencies, mode shapes, and response under different levels of electrostatic attraction load. To validate the FE simulation results, a theoretical (or analytical) model is then developed for one specific switch, i.e., Plate_40_104. Good agreement is found between the FE simulation and the analytical results. From both the FE simulation and the theoretical analysis, the critical switching voltage for Plate_40_104 is derived to be 238 V for a switching angle of 12 degrees. The critical switching-on and switching-off times are 431 μs and 67 μs, respectively. The present study not only develops good FE and analytical models, but also demonstrates step by step a method to simplify a real optical switch structure, with reference to the FE simulation results, for analytical purposes. With the FE and analytical models, it is easy to obtain any information about the mechanical behavior of the optical switches, which is helpful in yielding an optimized design.
Day, Charles A.; Kraft, Lewis J.; Kang, Minchul; Kenworthy, Anne K.
2012-01-01
Fluorescence recovery after photobleaching (FRAP) is a powerful, versatile and widely accessible tool to monitor molecular dynamics in living cells that can be performed using modern confocal microscopes. Although the basic principles of FRAP are simple, quantitative FRAP analysis requires careful experimental design, data collection and analysis. In this review we discuss the theoretical basis for confocal FRAP, followed by step-by-step protocols for FRAP data acquisition using a laser scanning confocal microscope for (1) measuring the diffusion of a membrane protein, (2) measuring the diffusion of a soluble protein, and (3) analysis of intracellular trafficking. Finally, data analysis procedures are discussed and an equation for determining the diffusion coefficient of a molecular species undergoing pure diffusion is presented. PMID:23042527
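As a concrete instance of the final analysis step, the sketch below estimates a diffusion coefficient from a recovery curve using the commonly cited half-time approximation for a circular bleach spot (D ≈ 0.224 w²/t½). The synthetic data, bleach radius, and single-exponential fit form are assumptions for illustration; the review's own pure-diffusion equation should be preferred for real analyses.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: estimate D from a normalized FRAP recovery curve via the half-time approximation
# D ~ 0.224 * w^2 / t_half (Soumpasis-type result for a circular bleach spot, pure diffusion).
# Synthetic data below stand in for background-corrected, normalized recovery intensities.
w = 1.0                                  # bleach-spot radius, micrometers (assumed)
t = np.linspace(0, 30, 150)              # seconds
F_true = 0.9 * (1 - np.exp(-t / 4.0))    # mobile fraction 0.9, tau 4 s (assumed)
F_obs = F_true + np.random.normal(0, 0.02, t.size)

def recovery(t, mobile, tau):            # empirical single-exponential fit form
    return mobile * (1 - np.exp(-t / tau))

(mobile, tau), _ = curve_fit(recovery, t, F_obs, p0=[1.0, 1.0])
t_half = tau * np.log(2.0)               # time to half of the recovered plateau
D = 0.224 * w**2 / t_half
print(f"mobile fraction {mobile:.2f}, t_half {t_half:.2f} s, D ~ {D:.3f} um^2/s")
```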
Analysis of high voltage step-up nonisolated DC-DC boost converters
NASA Astrophysics Data System (ADS)
Alisson Alencar Freitas, Antônio; Lessa Tofoli, Fernando; Junior, Edilson Mineiro Sá; Daher, Sergio; Antunes, Fernando Luiz Marcelo
2016-05-01
A high voltage step-up nonisolated DC-DC converter based on coupled inductors, suitable for photovoltaic (PV) system applications, is proposed in this paper. Considering that numerous approaches exist to extend the voltage conversion ratio of DC-DC converters that do not use transformers, a detailed comparison is also presented among the proposed converter and other popular topologies such as the conventional boost converter and the quadratic boost converter. The qualitative analysis of the coupled-inductor-based topology is developed so that a design procedure can be obtained, from which an experimental prototype is implemented to validate the theoretical assumptions.
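To make the comparison concrete, the sketch below evaluates the ideal static voltage gains of the two reference topologies named in the abstract (conventional boost, 1/(1-D); quadratic boost, 1/(1-D)²) alongside a generic coupled-inductor boost gain (1 + N·D)/(1-D). The last expression is a common textbook form used here only for illustration and is not necessarily the proposed converter's derived gain.

```python
import numpy as np

# Ideal (lossless, CCM) static voltage gains versus duty cycle D.
# The coupled-inductor expression is a generic textbook form with turns ratio N,
# used for illustration only; the proposed converter's gain may differ.
D = np.linspace(0.1, 0.8, 8)
N = 2.0                                   # assumed turns ratio

gain_boost = 1.0 / (1.0 - D)              # conventional boost
gain_quadratic = 1.0 / (1.0 - D) ** 2     # quadratic boost
gain_coupled = (1.0 + N * D) / (1.0 - D)  # generic coupled-inductor boost

for d, g1, g2, g3 in zip(D, gain_boost, gain_quadratic, gain_coupled):
    print(f"D={d:.1f}  boost={g1:5.2f}  quadratic={g2:6.2f}  coupled(N=2)={g3:6.2f}")
```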
DEVELOPMENT OF COMPUTATIONAL TOOLS FOR OPTIMAL IDENTIFICATION OF BIOLOGICAL NETWORKS
Following the theoretical analysis and computer simulations, the next step for the development of SNIP will be a proof-of-principle laboratory application. Specifically, we have obtained a synthetic transcriptional cascade (harbored in Escherichia coli...
NASA Technical Reports Server (NTRS)
Patel, D. K.; Czarnecki, K. R.
1975-01-01
A theoretical investigation of the pressure distributions and drag characteristics was made for forward facing steps in turbulent flow at supersonic speeds. An approximate solution technique proposed by Uebelhack has been modified and extended to obtain a more consistent numerical procedure. A comparison of theoretical calculations with experimental data generally indicated good agreement over the experimentally available range of ratios of step height to boundary layer thickness from 7 to 0.05.
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Wilson, T. G.
1976-01-01
State-plane analysis techniques are employed to study the voltage step-up energy storage dc-to-dc converter. Within this framework, an example converter operating under the influence of a constant on-time and a constant frequency controller is examined. Qualitative insight gained through this approach is used to develop a conceptual free-running control law for the voltage step-up converter which can achieve steady-state operation in one on/off cycle of control. Digital computer simulation data are presented to illustrate and verify the theoretical discussion.
Optical CAD Utilization for the Design and Testing of a LED Streetlamp.
Jafrancesco, David; Mercatelli, Luca; Fontani, Daniela; Sansoni, Paola
2017-08-24
The design and testing of LED lamps are vital steps toward broader use of LED lighting for outdoor illumination and traffic signalling. The characteristics of LED sources, in combination with the need to limit light pollution and power consumption, require a precise optical design. In particular, in every step of the process, it is important to closely compare theoretical or simulated results with measured data (obtained from a prototype). This work examines the various possibilities for using an optical CAD (Lambda Research TracePro) to design and check a LED lamp for outdoor use. This analysis includes the simulations and testing on a prototype as an example; data acquired by measurement are inserted into the same simulation software, making it easy to compare theoretical and actual results.
Yardley, Sarah; Brosnan, Caragh; Richardson, Jane
2013-01-01
Theoretical integration is a necessary element of study design if clarification of experiential learning is to be achieved. There are few published examples demonstrating how this can be achieved. This methodological article provides a worked example of research methodology that achieved clarification of authentic early experiences (AEEs) through a bi-directional approach to theory and data. Bi-directional refers to our simultaneous use of theory to guide and interrogate empirical data and the use of empirical data to refine theory. We explain the five steps of our methodological approach: (1) understanding the context; (2) critique on existing applications of socio-cultural models to inform study design; (3) data generation; (4) analysis and interpretation and (5) theoretical development through a novel application of Metis. These steps resulted in understanding of how and why different outcomes arose from students participating in AEE. Our approach offers a mechanism for clarification without which evidence-based effective ways to maximise constructive learning cannot be developed. In our example it also contributed to greater theoretical understanding of the influence of social interactions. By sharing this example of research undertaken to develop both theory and educational practice we hope to assist others seeking to conduct similar research.
Vocabulary, Grammar, Sex, and Aging
ERIC Educational Resources Information Center
Moscoso del Prado Martín, Fermín
2017-01-01
Understanding the changes in our language abilities along the lifespan is a crucial step for understanding the aging process both in normal and in abnormal circumstances. Besides controlled experimental tasks, it is equally crucial to investigate language in unconstrained conversation. I present an information-theoretical analysis of a corpus of…
ERIC Educational Resources Information Center
Raedeke, Thomas D.; Dlugonski, Deirdre
2017-01-01
Purpose: This study was designed to compare a low versus high theoretical fidelity pedometer intervention applying social-cognitive theory on step counts and self-efficacy. Method: Fifty-six public university employees participated in a 10-week randomized controlled trial with 2 conditions that varied in theoretical fidelity. Participants in the…
Development and Validation of Cognitive Screening Instruments.
ERIC Educational Resources Information Center
Jarman, Ronald F.
The author suggests that most research on the early detection of learning disabilities is characterized by an ineffective and atheoretical method of selecting and validating tasks. An alternative technique is proposed, based on a neurological theory of cognitive processes, whereby task analysis is a first step, with empirical analyses as…
Effect of Profilin on Actin Critical Concentration: A Theoretical Analysis
Yarmola, Elena G.; Dranishnikov, Dmitri A.; Bubb, Michael R.
2008-01-01
To explain the effect of profilin on actin critical concentration in a manner consistent with thermodynamic constraints and available experimental data, we built a thermodynamically rigorous model of actin steady-state dynamics in the presence of profilin. We analyzed previously published mechanisms theoretically and experimentally and, based on our analysis, suggest a new explanation for the effect of profilin. It is based on a general principle of indirect energy coupling. The fluctuation-based process of exchange diffusion indirectly couples the energy of ATP hydrolysis to actin polymerization. Profilin modulates this coupling, producing two basic effects. The first is based on the acceleration of exchange diffusion by profilin, which indicates, paradoxically, that a faster rate of actin depolymerization promotes net polymerization. The second is an affinity-based mechanism similar to the one suggested in 1993 by Pantaloni and Carlier although based on indirect rather than direct energy coupling. In the model by Pantaloni and Carlier, transformation of chemical energy of ATP hydrolysis into polymerization energy is regulated by direct association of each step in the hydrolysis reaction with a corresponding step in polymerization. Thus, hydrolysis becomes a time-limiting step in actin polymerization. In contrast, indirect coupling allows ATP hydrolysis to lag behind actin polymerization, consistent with experimental results. PMID:18835900
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
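A minimal sketch of the general idea follows: bootstrap resampling of patient-level inputs feeds a Monte Carlo run of a decision model, and the distribution of the outcome measure is summarized. The two-strategy cost/effect model and all numbers are invented placeholders, not the H. pylori model from the paper.

```python
import numpy as np

# Minimal probabilistic sensitivity analysis via the bootstrap (illustrative placeholders).
# Patient-level inputs are resampled with replacement; each resample feeds one model run.
rng = np.random.default_rng(42)
cost_obs = rng.gamma(shape=2.0, scale=400.0, size=200)     # observed per-patient costs (fake)
cure_obs = rng.binomial(1, 0.85, size=200)                  # observed cure indicator (fake)

def model(mean_cost, cure_rate):
    # Toy decision model: incremental cost and effect of "treat" vs "no treat".
    inc_cost = mean_cost - 150.0                            # comparator cost assumed 150
    inc_effect = cure_rate - 0.60                           # comparator cure rate assumed 0.60
    return inc_cost, inc_effect

icers = []
for _ in range(5000):
    idx = rng.integers(0, 200, 200)                         # bootstrap resample
    inc_cost, inc_eff = model(cost_obs[idx].mean(), cure_obs[idx].mean())
    icers.append(inc_cost / inc_eff)
lo, hi = np.percentile(icers, [2.5, 97.5])
print(f"ICER 95% interval (cost per extra cure): {lo:.0f} to {hi:.0f}")
```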
NASA Astrophysics Data System (ADS)
He, Jing-bo; Ding, Jian; Feng, Li; Ren, Jian-wen; Tang, Wei; Yang, Cheng; Wang, Jing-jin; Song, Yun-ting
2017-05-01
The wide deployment of power electronic equipment in modern power systems may affect grid structure and system operation because of its diverse dynamic characteristics. In this paper, the impact of static var compensators (SVCs) on out-of-step oscillation is investigated based on the equal area criterion, taking the SVC's admittance effect into account. First, the variation pattern of the bus voltage at the SVC connection point is established. Then the equation accounting for the admittance effect is derived, which explains the ability of the SVC to suppress out-of-step oscillation. The SVC's impact on migration of the out-of-step oscillation centre (OSOC) is discussed based on the expression for the OSOC's electrical location. Moreover, the influence of the SVC's response speed and capacity on its effect is examined by qualitative analysis. Finally, simulations on a two-end equivalent test system are carried out to verify the correctness of the theoretical analysis. It is found that the capacity and response speed of the SVC have a significant effect on the out-of-step oscillation, while the SVC has no distinct influence on the location of the OSOC.
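For orientation only, the sketch below contrasts the classic single-machine power-angle curve P = EV sin(δ)/X with the idealized curve obtained when the midpoint voltage is held constant by shunt compensation, P = 2EV sin(δ/2)/X, a textbook illustration of how midpoint var support raises the transient-stability margin; it is not the paper's two-end equivalent derivation including the SVC admittance.

```python
import numpy as np

# Textbook single-machine illustration: power-angle curves without and with ideal
# midpoint voltage support (a stylized stand-in for strong shunt var compensation).
E = V = 1.0          # per-unit voltage magnitudes (assumed equal)
X = 0.5              # total transfer reactance, per unit (assumed)
delta = np.linspace(0, np.pi, 181)

p_plain = E * V * np.sin(delta) / X                # no compensation
p_midpoint = 2 * E * V * np.sin(delta / 2.0) / X   # ideal midpoint voltage held at V

print(f"max transfer without support: {p_plain.max():.2f} pu at delta = 90 deg")
print(f"max transfer with ideal midpoint support: {p_midpoint.max():.2f} pu at delta = 180 deg")
```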
2011-01-01
Summary: A simple, efficient, and mild procedure for a solvent-free one-step synthesis of various 4,4′-diaminotriarylmethane derivatives in the presence of antimony trichloride as catalyst is described. Triarylmethane derivatives were prepared in good to excellent yields and characterized by elemental analysis, FTIR, and 1H and 13C NMR spectroscopic techniques. The structural and vibrational properties were investigated by performing theoretical calculations at the HF and DFT levels of theory with the standard HF/6-31G*, B3LYP/6-31G*, and B3LYP/cc-pVDZ methods, and good agreement was obtained between the experimental and theoretical results. PMID:21445373
Optical method for measuring the surface area of a threaded fastener
Douglas Rammer; Samuel Zelinka
2010-01-01
This article highlights major aspects of a new optical technique to determine the surface area of a threaded fastener; the theoretical framework has been reported elsewhere. Specifically, this article describes general surface area expressions used in the analysis, details of image acquisition system, and major image processing steps contained within the measurement...
USDA-ARS?s Scientific Manuscript database
An essential step to understanding the genomic biology of any organism is to comprehensively survey its transcriptome. We present the Bovine Gene Atlas (BGA), a compendium of over 7.2 million unique 20-base Illumina DGE tags representing 100 tissue transcriptomes collected primarily from L1 Dominette...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benito, R.M.; Nozik, A.J.
1985-07-18
A kinetic model was developed to describe the effects of light intensity on the photocorrosion of n-type semiconductor electrodes. The model is an extension of previous work by Gomes and co-workers that includes the possibility of multiple steps for the oxidation reaction of the reducing agent in the electrolyte. Six cases are considered where the semiconductor decomposition reaction is multistep (each step involves a hole); the oxidation reaction of the reducing agent is multistep (each step after the first involves a hole or a chemical intermediate), and the first steps of the competing oxidation reactions are reversible or irreversible. It was found, contrary to previous results, that the photostability of semiconductor electrodes could increase with increased light intensity if the desired oxidation reaction of the reducing agent in the electrolyte was multistep with the first step being reversible. 14 references, 5 figures, 1 table.
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
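As a small illustration of the QR machinery underlying such spectral analyses, the sketch below approximates Lyapunov exponents of a nonautonomous linear ODE by propagating an orthonormal frame one step at a time and accumulating the logarithms of the diagonal of R. The test system and step size are assumptions, and this is the generic discrete-QR recipe rather than the paper's specific method.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic discrete-QR approximation of Lyapunov exponents for x' = A(t) x.
# Propagate an orthonormal frame over each step, re-orthogonalize with QR, and
# average the logs of R's diagonal. Test matrix and step size are assumptions.
def A(t):
    return np.array([[-1.0 + 0.5 * np.sin(t), 2.0],
                     [0.0,                   -3.0 + 0.5 * np.cos(t)]])

def propagate(Q, t0, t1):
    # Integrate the matrix ODE Y' = A(t) Y with Y(t0) = Q.
    def rhs(t, y):
        return (A(t) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (t0, t1), Q.ravel(), rtol=1e-9, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

h, n_steps = 0.1, 2000
Q = np.eye(2)
log_r = np.zeros(2)
for k in range(n_steps):
    Y = propagate(Q, k * h, (k + 1) * h)
    Q, R = np.linalg.qr(Y)
    signs = np.sign(np.diag(R))
    Q, R = Q * signs, (R.T * signs).T          # keep R's diagonal positive
    log_r += np.log(np.abs(np.diag(R)))
print("approximate Lyapunov exponents:", log_r / (n_steps * h))
```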
Integrating 3D Printing into an Early Childhood Teacher Preparation Course: Reflections on Practice
ERIC Educational Resources Information Center
Sullivan, Pamela; McCartney, Holly
2017-01-01
This reflection on practice describes a case study integrating 3D printing into a creativity course for preservice teachers. The theoretical rationale is discussed, and the steps for integration are outlined. Student responses and reflections on the experience provide the basis for our analysis. Examples and resources are provided, as well as a…
ERIC Educational Resources Information Center
Voutsina, Chronoula
2016-01-01
Empirical research has documented how children's early counting develops into an increasingly abstract process, and initial counting procedures are reified as children develop and use more sophisticated counting. In this development, the learning of different oral counting sequences that allow children to count in steps bigger than one is seen as…
Range image segmentation using Zernike moment-based generalized edge detector
NASA Technical Reports Server (NTRS)
Ghosal, S.; Mehrotra, R.
1992-01-01
The authors proposed a novel Zernike moment-based generalized step edge detection method which can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all these parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate edge-based segmentation technique.
Seven Basic Steps to Solving Ethical Dilemmas in Special Education: A Decision-Making Framework
ERIC Educational Resources Information Center
Stockall, Nancy; Dennis, Lindsay R.
2015-01-01
This article presents a seven-step framework for decision making to solve ethical issues in special education. The authors developed the framework from the existing literature and theoretical frameworks of justice, critique, care, and professionalism. The authors briefly discuss each theoretical framework and then describe the decision-making…
NASA Astrophysics Data System (ADS)
Rohart, François
2017-01-01
In a previous paper [Rohart et al., Phys Rev A 2014;90(042506)], the influence of detection-bandwidth properties on observed line shapes in precision spectroscopy was theoretically modeled for the first time using the basic model of a continuous sweep of the laser frequency. Specific experiments confirmed general theoretical trends but also revealed several insufficiencies of the model in the case of stepped frequency scans. Consequently, inasmuch as up-to-date experiments use step-by-step frequency-swept lasers, a new model of the influence of the detection bandwidth is developed, including a realistic timing of signal sampling and frequency changes. Using Fourier transform techniques, the resulting time-domain apparatus function takes a simple analytical form that can be easily implemented in line-shape fitting codes without any significant increase in computation time. This new model is then considered in detail for detection systems characterized by 1st- and 2nd-order bandwidths, underlining the importance of the ratio of the detection time constant to the frequency-step duration, namely for the measurement of line frequencies. It also allows a straightforward analysis of the corresponding systematic deviations in retrieved line frequencies and broadenings. Finally, special attention is paid to the consequences of a finite detection bandwidth in Doppler Broadening Thermometry, namely to the experimental adjustments required for a spectroscopic determination of the Boltzmann constant at the 1-ppm level of accuracy. In this respect, the interest of implementing a Butterworth 2nd-order filter is emphasized.
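The sketch below illustrates the kind of systematic line-center shift a finite detection bandwidth can cause in a stepped scan: a first-order filter with time constant τ settles only partially during each frequency-step dwell, dragging the sampled profile toward earlier steps. The line shape, dwell time, and τ values are arbitrary assumptions, and the analytic apparatus function derived in the paper is what should be used for quantitative work.

```python
import numpy as np

# Effect of a 1st-order detection filter on a step-by-step frequency scan (toy model).
# At each step the filter output relaxes toward the true signal during the dwell time,
# and the value sampled at the end of the dwell is recorded.
def apparent_center(tau_over_dwell):
    nu = np.linspace(-5.0, 5.0, 201)                    # detuning steps (arbitrary units)
    true_signal = np.exp(-nu**2 / 2.0)                  # Gaussian line (assumed)
    alpha = 1.0 - np.exp(-1.0 / tau_over_dwell)         # settling factor per dwell
    y, out = 0.0, []
    for s in true_signal:
        y = y + alpha * (s - y)                          # 1st-order filter response
        out.append(y)
    out = np.array(out)
    return np.sum(nu * out) / np.sum(out)                # centroid estimate of line center

for r in (0.1, 0.5, 1.0, 2.0):
    print(f"tau/dwell = {r:3.1f} -> apparent center shift = {apparent_center(r):+.3f} detuning units")
```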
Aspects of CO2 laser engraving of printing cylinders.
Atanasov, P A; Maeno, K; Manolov, V P
1999-03-20
Results of the experimental and theoretical investigations of CO2 laser-engraved cylinders are presented. The processed surfaces of test samples are examined by a phase-stepping laser interferometer, a digital microscope, and a computer-controlled profilometer. Fourier analysis is performed on the patterns parallel to the axis of the laser-scribed test ceramic cylinders. The problem of the visually observed banding is discussed.
ERIC Educational Resources Information Center
Carver, John; Carver, Miriam Mayhew
This guide provides practical advice regarding implementation of the Policy Governance model for school boards. Chapter 1, "Setting the Stage," explores questions commonly raised by boards prior to implementation of the Policy Governance model. Chapter 2, "The Theoretical Foundation," reviews the key theoretical principles of…
Learning About Conflict and Conflict Management Through Drama in Nursing Education.
Arveklev, Susanna H; Berg, Linda; Wigert, Helena; Morrison-Helme, Morag; Lepp, Margret
2018-04-01
In the health care settings in which nurses work, involvement in some form of conflict is inevitable. The ability to manage conflicts is therefore necessary for nursing students to learn during their education. A qualitative analysis of 43 written group assignments was undertaken using a content analysis approach. Three main categories emerged from the analysis (to approach and integrate with the theoretical content, to step back and get an overview, and to concretize and practice), together with the overall theme, to learn by oscillating between closeness and distance. Learning about conflict and conflict management through drama enables nursing students to form new knowledge by oscillating between closeness and distance, engaging in both the fictional world and the real world at the same time. This helps students to form a personal understanding of theoretical concepts and a readiness for managing future conflicts. [J Nurs Educ. 2018;57(4):209-216.] Copyright 2018, SLACK Incorporated.
Valente, Thomas W; Pitts, Stephanie R
2017-03-20
The use of social network theory and analysis methods as applied to public health has expanded greatly in the past decade, yielding a significant academic literature that spans almost every conceivable health issue. This review identifies several important theoretical challenges that confront the field but also provides opportunities for new research. These challenges include (a) measuring network influences, (b) identifying appropriate influence mechanisms, (c) the impact of social media and computerized communications, (d) the role of networks in evaluating public health interventions, and (e) ethics. Next steps for the field are outlined and the need for funding is emphasized. Recently developed network analysis techniques, technological innovations in communication, and changes in theoretical perspectives to include a focus on social and environmental behavioral influences have created opportunities for new theory and ever broader application of social networks to public health topics.
Research in Computational Astrobiology
NASA Technical Reports Server (NTRS)
Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.
2003-01-01
We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods, and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.
NASA Astrophysics Data System (ADS)
Wang, Anbo; Miller, Mark S.; Gunther, Michael F.; Murphy, Kent A.; Claus, Richard O.
1993-03-01
A self-referencing technique compensating for fiber losses and source fluctuations in air-gap intensity-based optical fiber sensors is described and demonstrated. A resolution of 0.007 micron has been obtained over a measurement range of 0-250 microns for an intensity-based displacement sensor using this referencing technique. The sensor is shown to have minimal sensitivity to fiber bending losses and variations in the LED input power. A theoretical model for evaluation of step-index multimode optical fiber splice is proposed. The performance of the sensor as a displacement sensor agrees well with the theoretical analysis.
Theory of step on leading edge of negative corona current pulse
NASA Astrophysics Data System (ADS)
Gupta, Deepak K.; Mahajan, Sangeeta; John, P. I.
2000-03-01
Theoretical models taking into account different feedback source terms (e.g., ion-impact electron emission, photo-electron emission, field emission, etc) have been proposed for the existence and explanation of the shape of negative corona current pulse, including the step on the leading edge. In the present work, a negative corona current pulse with the step on the leading edge is obtained in the presence of ion-impact electron emission feedback source only. The step on the leading edge is explained in terms of the plasma formation process and enhancement of the feedback source. Ionization wave-like movement toward the cathode is observed after the step. The conditions for the existence of current pulse, with and without the step on the leading edge, are also described. A qualitative comparison with earlier theoretical and experimental work is also included.
Studies in Non-Equilibrium Statistical Mechanics.
1982-09-01
in the formalism, and this is used to simulate the effects of rotational states and collisions. At each stochastic step the energy changes in the...uses of this method. 10. A Scaling Theoretical Analysis of Vibrational Relaxation Experiments: Rotational Effects and Long-Range Collisions 0...in- elude rotational effects through the rotational energy gaps and the rotational distributions. The variables in this theory are a fundamental set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duwel, A.E.; Watanabe, S.; Trias, E.
1997-11-01
New resonance steps are found in the experimental current-voltage characteristics of long, discrete, one-dimensional Josephson junction arrays with open boundaries and in an external magnetic field. The junctions are underdamped, connected in parallel, and dc biased. Numerical simulations based on the discrete sine-Gordon model are carried out, and show that the solutions on the steps are periodic trains of fluxons, phase locked by a finite amplitude radiation. Power spectra of the voltages consist of a small number of harmonic peaks, which may be exploited for possible oscillator applications. The steps form a family that can be numbered by the harmonic content of the radiation, the first member corresponding to the Eck step. Discreteness of the arrays is shown to be essential for appearance of the higher order steps. We use a multimode extension of the harmonic balance analysis, and estimate the resonance frequencies, the ac voltage amplitudes, and the theoretical limit on the output power on the first two steps. © 1997 American Institute of Physics.
Developing a theoretical framework for complex community-based interventions.
Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana
2014-01-01
Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms how the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
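To make conclusion (1) tangible, the sketch below runs a one-parameter linear-reservoir model with an overflow threshold using fixed-step explicit Euler versus implicit Euler, and prints a sum-of-squares calibration objective over a grid of the recession parameter so the discretization sensitivity of the objective can be inspected. The model, forcing, and step sizes are invented for illustration and are far simpler than the six models analyzed in the paper.

```python
import numpy as np

# Toy illustration of time-stepping sensitivity in a conceptual model: a single linear
# reservoir with an overflow threshold, solved with fixed-step explicit Euler versus
# implicit Euler. The forcing, parameters, and step sizes are invented for illustration.
rng = np.random.default_rng(7)
precip = rng.gamma(0.8, 6.0, size=365)          # daily rainfall, mm (synthetic)
s_max = 150.0                                    # storage capacity, mm (assumed)

def simulate(k, dt, implicit):
    n_sub = int(round(1.0 / dt))                 # sub-steps per day
    s, q_daily = 40.0, []
    for p in precip:
        q_day = 0.0
        for _ in range(n_sub):
            if implicit:
                s = (s + dt * p) / (1.0 + dt * k)   # backward Euler drift step
            else:
                s = s + dt * (p - k * s)            # forward Euler step
            spill = max(s - s_max, 0.0)
            s -= spill
            q_day += dt * k * s + spill
        q_daily.append(q_day)
    return np.array(q_daily)

q_obs = simulate(0.35, 1.0 / 64.0, implicit=True)   # fine-step reference "observations"
for k in np.linspace(0.2, 0.5, 7):
    sse_exp = np.sum((simulate(k, 1.0, implicit=False) - q_obs) ** 2)
    sse_imp = np.sum((simulate(k, 1.0, implicit=True) - q_obs) ** 2)
    print(f"k={k:.2f}  SSE explicit(daily)={sse_exp:9.1f}  SSE implicit(daily)={sse_imp:9.1f}")
```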
Analysis of simple 2-D and 3-D metal structures subjected to fragment impact
NASA Technical Reports Server (NTRS)
Witmer, E. A.; Stagliano, T. R.; Spilker, R. L.; Rodal, J. J. A.
1977-01-01
Theoretical methods were developed for predicting the large-deflection elastic-plastic transient structural responses of metal containment or deflector (C/D) structures designed to cope with rotor-burst fragment impact attack. For two-dimensional C/D structures, both finite element and finite difference analysis methods were employed to analyze structural response produced by either prescribed transient loads or fragment impact. For the latter category, two time-wise step-by-step analysis procedures were devised to predict the structural responses resulting from a succession of fragment impacts: the collision force method (CFM), which utilizes an approximate prediction of the force applied to the attacked structure during fragment impact, and the collision imparted velocity method (CIVM), in which the impact-induced velocity increment acquired by a region of the impacted structure near the impact point is computed. The merits and limitations of these approaches are discussed. For the analysis of 3-D responses of C/D structures, only the CIVM approach was investigated.
Two-step complete polarization logic Bell-state analysis.
Sheng, Yu-Bo; Zhou, Lan
2015-08-26
The Bell state plays a significant role in fundamental tests of quantum mechanics, such as the nonlocality of the quantum world. Bell-state analysis is of vital importance in quantum communication. Existing Bell-state analysis protocols usually focus on Bell states encoded directly in physical qubits. In this paper, we describe an alternative approach to realize near-complete logic Bell-state analysis for the polarization-based concatenated Greenberger-Horne-Zeilinger (C-GHZ) state with two logic qubits. We show that the logic Bell states can be distinguished in two steps with the help of the parity-check measurement (PCM) constructed from the cross-Kerr nonlinearity. This approach can also be used to distinguish arbitrary C-GHZ states with N logic qubits. As recent theoretical and experimental work has shown that the C-GHZ state is robust in practical noisy environments, this protocol may be useful in future long-distance quantum communication based on logic-qubit entanglement.
The lost steps of infancy: symbolization, analytic process and the growth of the self.
Feldman, Brian
2002-07-01
In 'The Lost Steps' the Latin American novelist Alejo Carpentier describes the search by the protagonist for the origins of music among native peoples in the Amazon jungle. This metaphor can be utilized as a way of understanding the search for the pre-verbal origins of the self in analysis. The infant's experience of the tempo and rhythmicity of the mother/infant interaction and the bathing in words and sounds of the infant by the mother are at the core of the infant's development of the self. The infant observation method (Tavistock model) will be looked at as a way of developing empathy in the analyst to better understand infantile, pre-verbal states of mind. A case vignette from an adult analysis will be utilized to illustrate the theoretical concepts.
Theoretical NMR correlations based Structure Discussion.
Junker, Jochen
2011-07-28
The constitutional assignment of natural products by NMR spectroscopy is usually based on 2D NMR experiments like COSY, HSQC, and HMBC. The actual difficulty of the structure elucidation problem depends more on the type of the investigated molecule than on its size. The moment HMBC data are involved in the process, or a large number of heteroatoms is present, the possibility of multiple solutions fitting the same data set exists. A structure elucidation software can be used to find such alternative constitutional assignments and help in the discussion in order to find the correct solution, but this is rarely done. This article describes the use of theoretical NMR correlation data in the structure elucidation process with WEBCOCON, not for the initial constitutional assignments, but to define how well a suggested molecule could have been described by NMR correlation data. The results of this analysis can be used to decide on further steps needed to assure the correctness of the structural assignment. As a first step, the analysis of the deviation of carbon chemical shifts is performed, comparing the chemical shifts predicted for each possible solution with the experimental data. The application of this technique to three well-known compounds is shown. Using NMR correlation data alone for the description of the constitutions is not always enough, even when including 13C chemical shift prediction.
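A minimal sketch of the shift-deviation analysis described above, with made-up predicted and experimental 13C shifts rather than WEBCOCON output:

import numpy as np

experimental = np.array([170.2, 128.5, 115.9, 77.3, 55.1, 21.4])   # ppm, hypothetical

candidates = {
    "constitution A": np.array([169.8, 129.0, 116.5, 76.9, 54.7, 21.9]),
    "constitution B": np.array([175.4, 131.2, 109.8, 81.0, 49.3, 25.0]),
}

for name, predicted in candidates.items():
    mad = np.mean(np.abs(predicted - experimental))               # mean absolute deviation, ppm
    rms = np.sqrt(np.mean((predicted - experimental) ** 2))
    print(f"{name}: MAD = {mad:.2f} ppm, RMSD = {rms:.2f} ppm")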
ERIC Educational Resources Information Center
Sanders, Matthew; Mazzucchelli, Trevor; Studman, Lisa
2004-01-01
Stepping Stones Triple P is the first in a series of programs based on the Triple P--Positive Parenting Program that has been specifically designed for families who have a child with a disability. This paper presents the rationale, theoretical foundations, historical development and distinguishing features of the program. The multi-level…
Dancing with data: an example of acquiring theoretical sensitivity in a grounded theory study.
Hoare, Karen J; Mills, Jane; Francis, Karen
2012-06-01
Glaser suggested that the conceptual route from data collection to a grounded theory is a set of double-back steps: the route forward inevitably results in the analyst stepping back. Additionally, sidestepping, that is, leading participants down lines of inquiry and following data threads with other participants, is also characteristic of acquiring theoretical sensitivity, a key concept in grounded theory. Other ways of acquiring theoretical sensitivity comprise reading the literature, open coding, category building, and reflecting in memos, followed by doubling back on data collection once further lines of inquiry are opened up. This paper describes how we 'danced with data' in pursuit of heightened theoretical sensitivity in a grounded theory study of information use by nurses working in general practice in New Zealand, providing an example of how analytical tools are employed to theoretically sample emerging concepts. © 2012 Blackwell Publishing Asia Pty Ltd.
Fractal analysis of multiscale spatial autocorrelation among point data
De Cola, L.
1991-01-01
The analysis of spatial autocorrelation among point-data quadrats is a well-developed technique that has made limited but intriguing use of the multiscale aspects of pattern. In this paper are presented theoretical and algorithmic approaches to the analysis of aggregations of quadrats at or above a given density, in which these sets are treated as multifractal regions whose fractal dimension, D, may vary with phenomenon intensity, scale, and location. The technique is illustrated with Matui's quadrat house-count data, which yield measurements consistent with a nonautocorrelated simulated Poisson process but not with an orthogonal unit-step random walk. The paper concludes with a discussion of the implications of such analysis for multiscale geographic analysis systems.
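A generic box-counting estimate of a fractal dimension D for a two-dimensional point pattern (a sketch on synthetic data, not the multifractal procedure or Matui's data used in the paper):

import numpy as np

rng = np.random.default_rng(0)
points = rng.random((5000, 2))                    # synthetic point data in the unit square

eps_list = [1/4, 1/8, 1/16, 1/32, 1/64]
counts = []
for eps in eps_list:
    boxes = set(map(tuple, np.floor(points / eps).astype(int)))
    counts.append(len(boxes))                     # occupied boxes at this scale

slope, _ = np.polyfit(np.log(1.0 / np.array(eps_list)), np.log(counts), 1)
print("estimated D ≈", round(slope, 2))           # ≈ 2 for a space-filling Poisson pattern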
NASA Astrophysics Data System (ADS)
Lei, Chen; Pan, Zhang; Jianxiong, Chen; Tu, Yiliu
2018-04-01
It was experimentally demonstrated that plasma brightness cannot be used as a direct indicator of ablation depth in femtosecond laser machining, which makes depth measurement during the machining process difficult. Microchannel milling tests on silicon wafers were carried out in a micromachining center in order to determine the influence of process parameters on ablation depth. The test results showed that the defocusing distance had no significant impact on ablation depth within the effective LAV range; the reason for this is explained in this paper on the basis of theoretical analysis and simulation calculations. It was then shown that the ablation depth depends mainly on laser fluence, step distance and scanning velocity. Finally, the laser parameters governing microchannel ablation depth inside quartz glass were studied further, with a view to more efficient and lower-cost femtosecond laser processing.
Peer Interventions to Promote Health: Conceptual Considerations
Simoni, Jane M.; Franks, Julie C.; Lehavot, Keren; Yard, Samantha S.
2013-01-01
Peers have intervened to promote health since ancient times, yet few attempts have been made to describe theoretically their role and their interventions. After a brief overview of the history and variety of peer-based health interventions, a 4-part definition of peer interveners is presented here with a consideration of the dimensions of their involvement in health promotion. Then, a 2-step process is proposed as a means of conceptualizing peer interventions to promote health. Step 1 involves establishing a theoretical framework for the intervention’s main focus (i.e., education, social support, social norms, self-efficacy, and patient advocacy), and Step 2 involves identifying a theory that justifies the use of peers and might explain their impact. As examples, the following might be referred to: theoretical perspectives from the mutual support group and self-help literature, social cognitive and social learning theories, the social support literature, social comparison theory, social network approaches, and empowerment models. PMID:21729015
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
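A minimal sketch of the general idea of combining prior parameter information with geodetic observations in a least-squares estimate (a generic Bayes-type mixed estimator with hypothetical numbers; this is not one of the six estimators examined in the report):

import numpy as np

A = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [1.0, 1.0]])             # design matrix (hypothetical)
b = np.array([1.9, 1.1, 3.2])           # observed displacements (hypothetical)
R = 0.05 * np.eye(3)                    # observation covariance

x0 = np.array([1.0, 1.0])               # prior deformation parameters
P0 = 0.5 * np.eye(2)                    # prior covariance

Ri, P0i = np.linalg.inv(R), np.linalg.inv(P0)
N = A.T @ Ri @ A + P0i                  # combined normal matrix
x_hat = np.linalg.solve(N, A.T @ Ri @ b + P0i @ x0)
print("estimate with prior:", x_hat)
print("estimate from data only:", np.linalg.lstsq(A, b, rcond=None)[0])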
Bidirectional converter for high-efficiency fuel cell powertrain
NASA Astrophysics Data System (ADS)
Fardoun, Abbas A.; Ismail, Esam H.; Sabzali, Ahmad J.; Al-Saffar, Mustafa A.
2014-03-01
In this paper, a new wide-conversion-ratio step-up and step-down converter is presented. The proposed converter is derived from the conventional Single Ended Primary Inductor Converter (SEPIC) topology and is integrated with a capacitor-diode voltage multiplier, which offers a simple structure, reduced electromagnetic interference (EMI), and reduced voltage stresses on the semiconductors. Other advantages include: continuous input and output current, extended step-up and step-down voltage conversion ratio without extremely low or high duty cycle, simple control circuitry, and near-zero input and output ripple currents compared with other converter topologies. The low charging/discharging current ripple and wide gain features result in a longer life-span and lower cost of the energy storage battery system. In addition, the "near-zero" ripple capability improves fuel cell durability. Theoretical analysis results obtained with the proposed structure are compared with other bidirectional converter topologies. Simulation and experimental results are presented to verify the performance of the proposed bidirectional converter.
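For orientation only, the ideal continuous-conduction conversion ratio of the conventional SEPIC from which the converter is derived is M = D/(1 - D); the extended ratio of the proposed converter with the capacitor-diode multiplier is not reproduced here:

def sepic_gain(duty):
    # ideal conventional SEPIC: Vout/Vin = D / (1 - D)
    return duty / (1.0 - duty)

for d in (0.2, 0.5, 0.8):
    print(f"D = {d:.1f} -> Vout/Vin = {sepic_gain(d):.2f}")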
Dissociative Ionization of Benzene by Electron Impact
NASA Technical Reports Server (NTRS)
Huo, Winifred; Dateo, Christopher; Kwak, Dochan (Technical Monitor)
2002-01-01
We report a theoretical study of the dissociative ionization (DI) of benzene from the low-lying ionization channels. Our approach makes use of the fact that electron motion is much faster than nuclear motion, and DI is treated as a two-step process. The first step is electron-impact ionization resulting in an ion with the same nuclear geometry as the neutral molecule. In the second step the nuclei relax from the initial geometry and undergo unimolecular dissociation. For the ionization process we use the improved binary-encounter dipole (iBED) model. For the unimolecular dissociation step, we study the steepest descent reaction path to the minimum of the ion potential energy surface. The path is used to analyze the probability of unimolecular dissociation and to determine the product distributions. Our analysis of the dissociation products and the thresholds for their production is compared with the results of the dissociative photoionization measurements of Feng et al. The partial oscillator strengths from Feng et al. are then used in the iBED cross section calculations.
AMOEBA clustering revisited. [cluster analysis, classification, and image display program
NASA Technical Reports Server (NTRS)
Bryant, Jack
1990-01-01
A description of the clustering, classification, and image display program AMOEBA is presented. Using a difficult high resolution aircraft-acquired MSS image, the steps the program takes in forming clusters are traced. A number of new features are described here for the first time. Usage of the program is discussed. The theoretical foundation (the underlying mathematical model) is briefly presented. The program can handle images of any size and dimensionality.
A Stakeholder Analysis of the Navy’s Thirty-Year Shipbuilding Plan
2007-12-01
III. LITERATURE REVIEW AND THEORETICAL FRAMEWORK: Stakeholder theory confronts the traditional economic model of the... By completing these steps, the manager attains answers to basic questions about the stakeholder (Freeman, 1984).
NASA Astrophysics Data System (ADS)
Moret-Fernández, David; Angulo, Marta; Latorre, Borja; González-Cebollada, César; López, María Victoria
2017-04-01
Determination of the saturated hydraulic conductivity, Ks, and the α and n parameters of the van Genuchten (1980) water retention curve, θ(h), is fundamental to fully understand and predict soil water distribution. This work presents a new procedure to estimate the soil hydraulic properties from the inverse analysis of a single cumulative upward infiltration curve followed by an overpressure step at the end of the wetting process. Firstly, Ks is calculated by Darcy's law from the overpressure step. The soil sorptivity (S) is then estimated using the Haverkamp et al. (1994) equation. Next, a relationship between α and n, f(α,n), is calculated from the estimated S and Ks. The α and n values are finally obtained by inverse analysis of the experimental data after applying the f(α,n) relationship to the HYDRUS-1D model. The method was validated on theoretical synthetic curves for three different soils (sand, loam and clay), and subsequently tested on experimental sieved soils (sand, loam, clay loam and clay) of known hydraulic properties. A robust relationship was observed between the theoretical α and n values (R2 > 0.99) of the different synthetic soils and those estimated from inverse analysis of the upward infiltration curve. Consistent results were also obtained for the experimental soils (R2 > 0.85). These results demonstrate that this technique allows accurate estimates of the soil hydraulic properties for a wide range of textures, including clay soils.
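A minimal sketch of the first step only, computing Ks from Darcy's law for the constant-head overpressure stage with hypothetical values (the sorptivity estimate and the HYDRUS-1D inverse analysis are not reproduced):

# Darcy's law for the overpressure step: q = Ks * dH / L, so Ks = q * L / dH
sample_length_cm = 5.0          # soil column length L (hypothetical)
head_difference_cm = 20.0       # applied hydraulic head difference dH (hypothetical)
steady_flux_cm_s = 4.0e-4       # measured steady infiltration flux q (hypothetical)

Ks = steady_flux_cm_s * sample_length_cm / head_difference_cm
print(f"Ks ≈ {Ks:.2e} cm/s")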
Implementation of a cost-accounting model in a biobank: practical implications.
Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C
2014-01-01
Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use these resources more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences of the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real, active biobank, including a comprehensive explanation of how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of research and development governmental policies.
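A toy illustration of the kind of unit-cost calculation involved, using hypothetical cost categories and volumes rather than the paper's cost-accounting model:

annual_costs = {
    "staff": 120000.0,
    "consumables": 30000.0,
    "equipment_depreciation": 15000.0,
    "overheads": 20000.0,
}
samples_processed = 8000          # annual throughput (hypothetical)

unit_cost = sum(annual_costs.values()) / samples_processed
print(f"production cost per sample ≈ {unit_cost:.2f} (same currency units as the inputs)")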
NASA Astrophysics Data System (ADS)
Loganathan, B.; Chandraboss, V. L.; Senthilvelan, S.; Karthikeyan, B.
2016-01-01
We present a detailed analysis of surface-enhanced Raman scattering of 7-azaindole and L-cysteine adsorbed on a tailored Rh surface by using experimental data and density functional theoretical (DFT) calculations. DFT at the B3LYP/LanL2DZ level was used for the optimization of the ground state geometries and simulation of the surface-enhanced Raman spectra of the probe molecules adsorbed on a Rh6 cluster. 7-azaindole and L-cysteine adsorption at the shell interface was ascertained from first principles. In addition, the synthesized trimetallic AuPt core/Rh shell colloidal nanocomposites were characterized by UV-visible spectroscopy, high-resolution transmission and scanning electron microscopy, selected area electron diffraction pattern analysis, energy-dispersive X-ray spectroscopy, atomic force microscopy, confocal Raman microscopy, FT-Raman and surface-enhanced Raman spectroscopic analysis. This analysis serves as a first step in gaining an accurate understanding of specific interactions at the interface of organic molecules and biomolecules and in gaining knowledge of the surface composition of trimetallic Au/Pt/Rh colloidal nanocomposites.
NASA Astrophysics Data System (ADS)
Suhartono; Lee, Muhammad Hisyam; Rezeki, Sri
2017-05-01
Intervention analysis is a statistical model in the family of time series analysis which is widely used to describe the effect of an intervention caused by external or internal factors. An example of an external factor that often occurs in Indonesia is a disaster, whether natural or man-made. The main purpose of this paper is to provide the results of theoretical studies on the identification step for determining the order of a multi-input intervention analysis, used to evaluate the magnitude and duration of the impact of interventions on time series data. The theoretical results showed that the standardized residuals can properly be used as the response function for determining the order of the multi-input intervention model. These results are then applied to evaluate the impact of a disaster in a real case in Indonesia, i.e. the magnitude and duration of the impact of the Lapindo mud on the volume of vehicles on the highway. Moreover, the empirical results showed that the multi-input intervention model can accurately describe and explain the magnitude and duration of the impact of disasters on time series data.
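A sketch of the identification idea on simulated data (not the Lapindo case): fit an ARIMA model to the pre-intervention series, forecast across the intervention period, and inspect the standardized residuals as a candidate response function. The data, model order, and level shift below are assumptions for illustration:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n_pre, n_post = 80, 20
y_pre = 10 + 0.1 * np.cumsum(rng.normal(0, 0.3, n_pre)) + rng.normal(0, 1, n_pre)
y_post = y_pre[-1] + 5 + rng.normal(0, 1, n_post)        # level shift after the event

fit = ARIMA(y_pre, order=(1, 0, 0)).fit()
forecast = fit.forecast(steps=n_post)
sigma = np.std(fit.resid, ddof=1)

std_resid = (y_post - forecast) / sigma                   # response function candidates
print(np.round(std_resid, 2))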
The no conclusion intervention for couples in conflict.
Migerode, Lieven
2014-07-01
Dealing with difference is central to all couple therapy. This article presents an intervention designed to assist couples in handling conflict. Central to this approach is the acceptance that most conflicts cannot be solved; couples are in need of a different understanding of couple conflict. This understanding is found in the analysis of love in context and in relational dialectics. Couples are guided through different steps: deciding on the valence of the issue as individuals, helping them decide which differences can be resolved and which issues demand new ways of living with the inevitable, and the introduction of the suggested no conclusion dialogue. This article briefly describes the five-day intensive couple therapy program in which the no conclusion intervention is embedded. The theoretical foundation of the intervention, followed by a step-by-step description of the intervention, forms the major part of the article. A case vignette illustrates this approach. © 2012 American Association for Marriage and Family Therapy.
NASA Astrophysics Data System (ADS)
Elishakoff, I.; Sarlin, N.
2016-06-01
In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by some geometric figure (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated, resorting to the Chebyshev inequality, in order to take into account the forecasted data. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of the uncertain parameters in each hypothesized region. The results of the triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with a view to identifying the region that leads to the minimum of the maximum response. That response is identified as the result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on according the Honda Prize to J. Tinsley Oden, stating, among others, that "Dr. Oden refers to computational science as the 'third pillar' of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in the x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
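A simplified sketch of one of the five calculi: an axis-aligned interval box inflated via the Chebyshev inequality, with a hypothetical linear response evaluated at the box vertices; the minimum-area figures and the actual systems studied are not reproduced here:

import itertools
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal([1.0, 2.0], [0.1, 0.3], size=(50, 2))   # measured uncertain parameters (synthetic)

coverage = 0.95
k = 1.0 / np.sqrt(1.0 - coverage)          # Chebyshev: P(inside mu +/- k*sigma) >= 1 - 1/k^2
lo = data.mean(axis=0) - k * data.std(axis=0, ddof=1)
hi = data.mean(axis=0) + k * data.std(axis=0, ddof=1)

def response(p):                           # hypothetical linear system response
    return 3.0 * p[0] - 2.0 * p[1]

# for a linear response the maximum over a box is attained at a vertex
worst = max(abs(response(v)) for v in itertools.product(*zip(lo, hi)))
print("inflated box:", np.round(lo, 3), np.round(hi, 3), " max |response| =", round(worst, 3))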
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
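A toy comparison of the two projections on a small linear test problem with backward Euler (the problem, basis size, and time step are assumptions for illustration; this is not GNAT or the turbulent-flow cases discussed above):

import numpy as np

n, dt, steps = 100, 0.05, 200
main, off = -2.0 * np.ones(n), np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) * (n ** 2) / 50.0   # diffusion-like operator
x0 = np.sin(np.linspace(0, np.pi, n))

M = np.eye(n) - dt * A                     # backward-Euler system matrix: M x_{n+1} = x_n

# full-order solution and snapshots
X = [x0]
for _ in range(steps):
    X.append(np.linalg.solve(M, X[-1]))
X = np.array(X).T

V = np.linalg.svd(X, full_matrices=False)[0][:, :5]      # 5-mode POD basis

def march(reduced_solve):
    q = V.T @ x0
    for _ in range(steps):
        q = reduced_solve(q)
    return V @ q

# Galerkin: V^T r(V q) = 0   vs.   discrete-optimal: min_q || r(V q) ||
galerkin = march(lambda q: np.linalg.solve(V.T @ M @ V, V.T @ (V @ q)))
lspg = march(lambda q: np.linalg.lstsq(M @ V, V @ q, rcond=None)[0])

x_ref = X[:, -1]
print("Galerkin error:", np.linalg.norm(galerkin - x_ref))
print("discrete-optimal error:", np.linalg.norm(lspg - x_ref))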
Chou, Ting-Chao
2006-09-01
The median-effect equation derived from the mass-action law principle at equilibrium-steady state via mathematical induction and deduction for different reaction sequences and mechanisms and different types of inhibition has been shown to be the unified theory for the Michaelis-Menten equation, Hill equation, Henderson-Hasselbalch equation, and Scatchard equation. It is shown that dose and effect are interchangeable via defined parameters. This general equation for the single drug effect has been extended to the multiple drug effect equation for n drugs. These equations provide the theoretical basis for the combination index (CI)-isobologram equation that allows quantitative determination of drug interactions, where CI < 1, = 1, and > 1 indicate synergism, additive effect, and antagonism, respectively. Based on these algorithms, computer software has been developed to allow automated simulation of synergism and antagonism at all dose or effect levels. It displays the dose-effect curve, median-effect plot, combination index plot, isobologram, dose-reduction index plot, and polygonogram for in vitro or in vivo studies. This theoretical development, experimental design, and computerized data analysis have facilitated dose-effect analysis for single drug evaluation or carcinogen and radiation risk assessment, as well as for drug or other entity combinations in a vast field of disciplines of biomedical sciences. In this review, selected examples of applications are given, and step-by-step examples of experimental designs and real data analysis are also illustrated. The merging of the mass-action law principle with mathematical induction-deduction has been proven to be a unique and effective scientific method for general theory development. The median-effect principle and its mass-action law based computer software are gaining increased applications in biomedical sciences, from how to effectively evaluate a single compound or entity to how to beneficially use multiple drugs or modalities in combination therapies.
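A minimal sketch of the median-effect fit and the two-drug combination index with made-up dose-effect data; this illustrates the published equations, not the associated software:

import numpy as np

def median_effect_fit(doses, fa):
    # fit log(fa/fu) = m*log(D) - m*log(Dm); return (m, Dm)
    y = np.log(fa / (1.0 - fa))
    m, b = np.polyfit(np.log(doses), y, 1)
    return m, np.exp(-b / m)

def dose_for_effect(m, Dm, fa):
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

# hypothetical single-drug dose-effect data (fraction affected)
d1, fa1 = np.array([1, 2, 4, 8.0]), np.array([0.15, 0.35, 0.60, 0.85])
d2, fa2 = np.array([5, 10, 20, 40.0]), np.array([0.20, 0.40, 0.65, 0.80])
m1, Dm1 = median_effect_fit(d1, fa1)
m2, Dm2 = median_effect_fit(d2, fa2)

# a combination (2 + 10 dose units) observed to give fa = 0.70 (hypothetical)
fa_combo, dose1, dose2 = 0.70, 2.0, 10.0
CI = dose1 / dose_for_effect(m1, Dm1, fa_combo) + dose2 / dose_for_effect(m2, Dm2, fa_combo)
print(f"CI = {CI:.2f}  (<1 synergism, =1 additive, >1 antagonism)")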
The 5-Step Method: Principles and Practice
ERIC Educational Resources Information Center
Copello, Alex; Templeton, Lorna; Orford, Jim; Velleman, Richard
2010-01-01
This article includes a description of the 5-Step Method. First, the origins and theoretical basis of the method are briefly described. This is followed by a discussion of the general principles that guide the delivery of the method. Each step is then described in more detail, including the content and focus of each of the five steps that include:…
Abrupt climate change and extinction events
NASA Technical Reports Server (NTRS)
Crowley, Thomas J.
1988-01-01
There is a growing body of theoretical and empirical support for the concept of instabilities in the climate system, and indications that abrupt climate change may in some cases contribute to abrupt extinctions. Theoretical indications of instabilities can be found in a broad spectrum of climate models (energy balance models, a thermohaline model of deep-water circulation, atmospheric general circulation models, and coupled ocean-atmosphere models). Abrupt transitions can be of several types and affect the environment in different ways. There is increasing evidence for abrupt climate change in the geologic record, involving both interglacial-glacial scale transitions and the longer-term evolution of climate over the last 100 million years. Records from the Cenozoic clearly show that the long-term trend is characterized by numerous abrupt steps where the system appears to be rapidly moving to a new equilibrium state. The long-term trend probably is due to changes associated with plate tectonic processes, but the abrupt steps most likely reflect instabilities in the climate system as the slowly changing boundary conditions caused the climate to reach some critical threshold. A more detailed analysis of abrupt steps comes from high-resolution studies of glacial-interglacial fluctuations in the Pleistocene. Comparison of climate transitions with the extinction record indicates that many climate and biotic transitions coincide. The Cretaceous-Tertiary extinction is not a candidate for an extinction event due to instabilities in the climate system. It is quite possible that more detailed comparisons and analysis will indicate some flaws in the climate instability-extinction hypothesis, but at present it appears to be a viable candidate as an alternate mechanism for causing abrupt environmental changes and extinctions.
Quantitation in chiral capillary electrophoresis: theoretical and practical considerations.
D'Hulst, A; Verbeke, N
1994-06-01
Capillary electrophoresis (CE) represents a decisive step forward in stereoselective analysis. The present paper deals with the theoretical aspects of the quantitation of peak separation in chiral CE. Because peak shape is very different in CE with respect to high performance liquid chromatography (HPLC), the resolution factor Rs, commonly used to describe the extent of separation between enantiomers as well as unrelated compounds, is demonstrated to be of limited value for the assessment of chiral separations in CE. Instead, the conjunct use of a relative chiral separation factor (RCS) and the percent chiral separation (% CS) is advocated. An array of examples is given to illustrate this. The practical aspects of method development using maltodextrins--which have been proposed previously as a major innovation in chiral selectors applicable in CE--are documented with the stereoselective analysis of coumarinic anticoagulant drugs. The possibilities of quantitation using CE were explored under two extreme conditions. Using ibuprofen, it has been demonstrated that enantiomeric excess determinations are possible down to a 1% level of optical contamination and stereoselective determinations are still possible with a good precision near the detection limit, increasing sample load by very long injection times. The theoretical aspects of this possibility are addressed in the discussion.
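For reference, the conventional resolution factor that the paper argues is of limited value in CE is Rs = 2(t2 - t1)/(w1 + w2); the sketch below computes it from hypothetical migration times and baseline peak widths, and does not reproduce the RCS or %CS measures defined in the paper:

def resolution(t1, w1, t2, w2):
    # conventional resolution factor from migration times and baseline peak widths
    return 2.0 * (t2 - t1) / (w1 + w2)

print(f"Rs = {resolution(t1=12.1, w1=0.30, t2=12.4, w2=0.35):.2f}")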
Sensory overload: A concept analysis.
Scheydt, Stefan; Müller Staub, Maria; Frauenfelder, Fritz; Nielsen, Gunnar H; Behrens, Johann; Needham, Ian
2017-04-01
In the context of mental disorders sensory overload is a widely described phenomenon used in conjunction with psychiatric interventions such as removal from stimuli. However, the theoretical foundation of sensory overload as addressed in the literature can be described as insufficient and fragmentary. To date, the concept of sensory overload has not yet been sufficiently specified or analyzed. The aim of the study was to analyze the concept of sensory overload in mental health care. A literature search was undertaken using specific electronic databases, specific journals and websites, hand searches, specific library catalogues, and electronic publishing databases. Walker and Avant's method of concept analysis was used to analyze the sources included in the analysis. All aspects of the method of Walker and Avant were covered in this concept analysis. The conceptual understanding has become more focused, the defining attributes, influencing factors and consequences are described and empirical referents identified. The concept analysis is a first step in the development of a middle-range descriptive theory of sensory overload based on social scientific and stress-theoretical approaches. This specification may serve as a fundament for further research, for the development of a nursing diagnosis or for guidelines. © 2017 Australian College of Mental Health Nurses Inc.
NASA Astrophysics Data System (ADS)
Prasai, Binay; Wilson, A. R.; Wiley, B. J.; Ren, Y.; Petkov, Valeri
2015-10-01
The extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level is scrutinized and shown to be insufficient, and it is demonstrated how this can be improved by using a pragmatic approach involving straightforward experiments. In particular, 4 to 6 nm silica-supported Au100-xPdx (x = 30, 46 and 58) NPs explored for catalytic applications are characterized structurally by total scattering experiments, including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the Sutton-Chen (SC) method, an archetype of current theoretical modeling. The models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm water-dispersed Au particles explored for bio-medical applications and on 16 nm hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when "tuned up" against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs. Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design.
Design principles and optimal performance for molecular motors under realistic constraints
NASA Astrophysics Data System (ADS)
Tu, Yuhai; Cao, Yuansheng
2018-02-01
The performance of a molecular motor, characterized by its power output and energy efficiency, is investigated in the motor design space spanned by the stepping rate function and the motor-track interaction potential. Analytic results and simulations show that a gating mechanism that restricts forward stepping in a narrow window in configuration space is needed for generating high power at physiologically relevant loads. By deriving general thermodynamics laws for nonequilibrium motors, we find that the maximum torque (force) at stall is less than its theoretical limit for any realistic motor-track interactions due to speed fluctuations. Our study reveals a tradeoff for the motor-track interaction: while a strong interaction generates a high power output for forward steps, it also leads to a higher probability of wasteful spontaneous back steps. Our analysis and simulations show that this tradeoff sets a fundamental limit to the maximum motor efficiency in the presence of spontaneous back steps, i.e., loose-coupling. Balancing this tradeoff leads to an optimal design of the motor-track interaction for achieving a maximum efficiency close to 1 for realistic motors that are not perfectly coupled with the energy source. Comparison with existing data and suggestions for future experiments are discussed.
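A toy two-rate sketch of the power/efficiency tradeoff for a loosely coupled motor (load-dependent forward stepping that consumes fuel, and spontaneous back steps that do not recover it); the rates, step size, and free energy below are illustrative assumptions, not the paper's model:

import numpy as np

kT = 4.1         # pN*nm at room temperature
d = 8.0          # step size, nm (illustrative)
dmu = 80.0       # free energy per fuel molecule, pN*nm (illustrative)
k0f, k0b = 100.0, 0.5      # unloaded forward/backward rates, 1/s (illustrative)
theta = 0.5                 # load-sharing factor

for F in np.linspace(0, 2.5, 6):                     # opposing load, pN
    kf = k0f * np.exp(-F * d * theta / kT)
    kb = k0b * np.exp(F * d * (1 - theta) / kT)
    v = d * (kf - kb)                                # mean velocity, nm/s
    power = F * v                                    # mechanical power, pN*nm/s
    efficiency = power / (dmu * kf)                  # each forward step burns one fuel molecule
    print(f"F={F:.1f} pN  v={v:8.1f} nm/s  power={power:8.1f}  eff={efficiency:.3f}")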
Tian, Xiaohai; Huang, Yongping; Huang, Zhimin; Lei, Weici; Higata, Shugo
2003-10-01
The search for a land reclamation and utilization method adapted to regional natural conditions and economic level is a prime subject in the waterlogged areas of Southern China. A dish-like micro-zone, one of the typical waterlogged areas derived from a reclaimed lake, was chosen as the study region, and its biophysical characteristics and development models were investigated with the aim of producing a comprehensive development plan for the area. The results showed that, with the successive change in altitude across the sector of the land, the soil type, soil profile structure, underground water level, and soil temperature were characterized by five divergence steps. Analysis of the site and area of the individual divergence steps showed that the low land was unsuitable for rice planting, and that the land between upland and paddy should be expanded for rotation and reclaimed more thoroughly. After engineering consolidation of the land, the original five divergence steps were rehabilitated into four, and a stepwise utilization model focused on comprehensive agricultural development and improvement of farming systems was developed, which led to a marked increase in the economic returns of the area.
Qualitative Descriptive Methods in Health Science Research.
Colorafi, Karen Jiggins; Evans, Bronwynne
2016-07-01
The purpose of this methodology paper is to describe an approach to qualitative design known as qualitative descriptive that is well suited to junior health sciences researchers because it can be used with a variety of theoretical approaches, sampling techniques, and data collection strategies. It is often difficult for junior qualitative researchers to pull together the tools and resources they need to embark on a high-quality qualitative research study and to manage the volumes of data they collect during qualitative studies. This paper seeks to pull together much needed resources and provide an overview of methods. A step-by-step guide to planning a qualitative descriptive study and analyzing the data is provided, utilizing exemplars from the authors' research. This paper presents steps to conducting a qualitative descriptive study under the following headings: describing the qualitative descriptive approach, designing a qualitative descriptive study, steps to data analysis, and ensuring rigor of findings. The qualitative descriptive approach results in a summary in everyday, factual language that facilitates understanding of a selected phenomenon across disciplines of health science researchers. © The Author(s) 2016.
Theoretical analysis of factors controlling the nonalternating CO/C(2)H(4) copolymerization.
Haras, Alicja; Michalak, Artur; Rieger, Bernhard; Ziegler, Tom
2005-06-22
A [P-O]Pd catalyst based on o-alkoxy derivatives of diphenylphosphinobenzene sulfonic acid (I) has recently been shown by Drent et al. to perform nonalternating CO/C(2)H(4) copolymerization with subsequent incorporation of ethylene units into the polyketone chain. The origin of the nonalternation is investigated in a theoretical study of I, where calculated activation barriers and reaction heats of all involved elementary steps are used to generate a complete kinetic model. The kinetic model is able to account for the observed productivity and degree of nonalternation as a function of temperature. Consistent with the energy changes obtained for the real catalyst model, the selectivity toward a nonalternating distribution of both comonomers appears to be mainly a result of a strong destabilization of the Pd-acyl complex.
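A small illustration of how a kinetic model turns computed barriers into branching ratios via the Eyring equation; the barrier values below are hypothetical, not the ones computed in the study:

import numpy as np

R = 8.314e-3            # gas constant, kJ/(mol*K)

def eyring_ratio(dG1, dG2, T):
    # rate ratio k1/k2 of two competing elementary steps with barriers dG1, dG2 (kJ/mol)
    return np.exp(-(dG1 - dG2) / (R * T))

for T in (298.0, 338.0, 378.0):
    ratio = eyring_ratio(dG1=70.0, dG2=62.0, T=T)    # hypothetical competing insertion barriers
    print(f"T = {T:.0f} K  k1/k2 = {ratio:.3f}")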
Armendáriz-Vidales, Georgina; Frontana, Carlos
2014-09-07
An electrochemical and theoretical analysis of a series of shikonin derivatives in aprotic media is presented. Results showed that the first electrochemical reduction signal is a reversible monoelectronic transfer, generating a stable semiquinone intermediate; the corresponding E(I)⁰ values were correlated with calculated values of electroaccepting power (ω(+)) and adiabatic electron affinities (A(Ad)), obtained with BHandHLYP/6-311++G(2d,2p) and considering the solvent effect, revealing the influence of intramolecular hydrogen bonding and of the substituting group at position C-2 on the experimental reduction potential. For the second reduction step, the esterified compounds isobutyryl- and isovalerylshikonin presented a coupled chemical reaction following dianion formation. Analysis of the variation of the dimensionless cathodic peak potential (ξ(p)) as a function of the scan rate (v), together with complementary experiments in benzonitrile, suggested that this process follows a dissociative electron transfer, in which the rate of heterogeneous electron transfer is slow (~0.2 cm s(-1)) and the rate constant of the chemical process is at least 10(5) times larger.
Computational crystallization.
Altan, Irem; Charbonneau, Patrick; Snell, Edward H
2016-07-15
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
Low Temperature Kinetics of the First Steps of Water Cluster Formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourgalais, J.; Roussel, V.; Capron, M.
2016-03-01
We present a combined experimental and theoretical low temperature kinetic study of water cluster formation. Water cluster growth takes place in low temperature (23-69 K) supersonic flows. The observed kinetics of formation of water clusters are reproduced with a kinetic model based on theoretical predictions for the first steps of clusterization. The temperature- and pressure-dependent association and dissociation rate coefficients are predicted with an ab initio transition state theory based master equation approach over a wide range of temperatures (20-100 K) and pressures (10(-6) - 10 bar).
Integrating Behavioral Health Support into a Pediatric Setting: What Happens in the Exam Room?
ERIC Educational Resources Information Center
Cuno, Kate; Krug, Laura M.; Umylny, Polina
2015-01-01
This article presents an overview of the Healthy Steps for Young Children (Healthy Steps) program at Montefiore Medical Center, in the Bronx, NY. The authors review the theoretical underpinnings of this national program for the promotion of early childhood mental health. The Healthy Steps program at Montefiore is integrated into outpatient…
[Preliminary application of content analysis to qualitative nursing data].
Liang, Shu-Yuan; Chuang, Yeu-Hui; Wu, Shu-Fang
2012-10-01
Content analysis is a methodology for objectively and systematically studying the content of communication in various formats. Content analysis in nursing research and nursing education is called qualitative content analysis. Qualitative content analysis is frequently applied to nursing research, as it allows researchers to determine categories inductively and deductively. This article examines qualitative content analysis in nursing research from theoretical and practical perspectives. We first describe how content analysis concepts such as unit of analysis, meaning unit, code, category, and theme are used. Next, we describe the basic steps involved in using content analysis, including data preparation, data familiarization, analysis unit identification, creating tentative coding categories, category refinement, and establishing category integrity. Finally, this paper introduces the concept of content analysis rigor, including dependability, confirmability, credibility, and transferability. This article elucidates the content analysis method in order to help professionals conduct systematic research that generates data that are informative and useful in practical application.
Predictors of posttraumatic stress symptoms following childbirth
2014-01-01
Background Posttraumatic stress disorder (PTSD) following childbirth has gained growing attention in recent years. Although a number of predictors of PTSD following childbirth have been identified (e.g., history of sexual trauma, emergency caesarean section, low social support), only very few studies have tested predictors derived from current theoretical models of the disorder. This study first aimed to replicate the association of PTSD symptoms after childbirth with predictors identified in earlier research. Second, cognitive predictors derived from Ehlers and Clark's (2000) model of PTSD were examined. Methods N = 224 women who had recently given birth completed an online survey. In addition to computing single correlations between PTSD symptom severities and variables of interest, in a hierarchical multiple regression analysis posttraumatic stress symptoms were predicted by (1) prenatal variables, (2) birth-related variables, (3) postnatal social support, and (4) cognitive variables. Results Wellbeing during pregnancy and age were the only prenatal variables contributing significantly to the explanation of PTSD symptoms in the first step of the regression analysis. In the second step, the birth-related variables peritraumatic emotions and wellbeing during childbed significantly increased the explanation of variance. Despite showing significant bivariate correlations, social support entered in the third step did not predict PTSD symptom severities over and above the variables included in the first two steps. However, with the exception of peritraumatic dissociation, all cognitive variables emerged as powerful predictors and increased the amount of variance explained from 43% to a total of 68%. Conclusions The findings suggest that the prediction of PTSD following childbirth can be improved by focusing on variables derived from a current theoretical model of the disorder. PMID:25026966
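A generic sketch of hierarchical (blockwise) regression with the R-squared change per step, on simulated data with an assumed block structure, not the study's variables:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 224
prenatal = rng.normal(size=(n, 2))                 # e.g. wellbeing in pregnancy, age (simulated)
birth = rng.normal(size=(n, 2))                    # e.g. peritraumatic emotions, wellbeing (simulated)
support = rng.normal(size=(n, 1))
cognitive = rng.normal(size=(n, 3))
y = (0.3 * prenatal[:, 0] + 0.5 * birth[:, 0]
     + 0.8 * cognitive[:, 0] + rng.normal(size=n))  # synthetic symptom severity

blocks = [("prenatal", prenatal), ("birth-related", birth),
          ("social support", support), ("cognitive", cognitive)]

X, r2_prev = np.empty((n, 0)), 0.0
for name, block in blocks:
    X = np.hstack([X, block])
    r2 = sm.OLS(y, sm.add_constant(X)).fit().rsquared
    print(f"step adds {name:14s}  R2 = {r2:.2f}  delta R2 = {r2 - r2_prev:.2f}")
    r2_prev = r2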
Unraveling the sequence-dependent polymorphic behavior of d(CpG) steps in B-DNA.
Dans, Pablo Daniel; Faustino, Ignacio; Battistini, Federica; Zakrzewska, Krystyna; Lavery, Richard; Orozco, Modesto
2014-10-01
We have made a detailed study of one of the most surprising sources of polymorphism in B-DNA: the high twist/low twist (HT/LT) conformational change in the d(CpG) base pair step. Using extensive computations, complemented with database analysis, we were able to characterize the twist polymorphism in the d(CpG) step in all the possible tetranucleotide environment. We found that twist polymorphism is coupled with BI/BII transitions, and, quite surprisingly, with slide polymorphism in the neighboring step. Unexpectedly, the penetration of cations into the minor groove of the d(CpG) step seems to be the key element in promoting twist transitions. The tetranucleotide environment also plays an important role in the sequence-dependent d(CpG) polymorphism. In this connection, we have detected a previously unexplored intramolecular C-H···O hydrogen bond interaction that stabilizes the low twist state when 3'-purines flank the d(CpG) step. This work explains a coupled mechanism involving several apparently uncorrelated conformational transitions that has only been partially inferred by earlier experimental or theoretical studies. Our results provide a complete description of twist polymorphism in d(CpG) steps and a detailed picture of the molecular choreography associated with this conformational change. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Lau, Sofie Rosenlund; Traulsen, Janine M
Qualitative approaches represent an important contributor to health care research. However, several researchers argue that contemporary qualitative research does not live up to its full potential. By presenting a snapshot of contemporary qualitative research in the field of social and administrative pharmacy, this study challenges contributors to the field by asking: Are we ready to accept the challenge and take qualitative research one step further? The purpose of this study was to initiate a constructive dialogue on the need for increased transparency in qualitative data analysis, including explicitly reflecting upon theoretical perspectives affecting the research process. Content analysis was used to evaluate levels of theoretical visibility and analysis transparency in selected qualitative research articles published in Research in Social and Administrative Pharmacy between January 2014 and January 2015. In 14 out of 21 assessed papers, the use of theory was found to be Seemingly Absent (lowest level of theory use), and the data analyses did not include any interpretive endeavors. Only two papers consistently applied theory throughout the entire study and clearly took the data analyses from a descriptive to an interpretive level. It was found that the aim of the majority of assessed papers was to change or modify a given practice, which however, resulted in a lack of both theoretical underpinnings and analysis transparency. This study takes the standpoint that theory and high-quality analysis go hand-in-hand. Based on the content analysis, articles that were deemed to be high in quality were explicit about the theoretical framework of their study and transparent in how they analyzed their data. It was found that theory contributed to the transparency of how the data were analyzed and interpreted. Two ways of improving contemporary qualitative research in the field of social and administrative pharmacy are discussed: engaging with social theory and establishing close collaboration with social scientists. Copyright © 2016 Elsevier Inc. All rights reserved.
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
Use of Intervention Mapping to Enhance Health Care Professional Practice: A Systematic Review.
Durks, Desire; Fernandez-Llimos, Fernando; Hossain, Lutfun N; Franco-Trigo, Lucia; Benrimoj, Shalom I; Sabater-Hernández, Daniel
2017-08-01
Intervention Mapping is a planning protocol for developing behavior change interventions, the first three steps of which are intended to establish the foundations and rationales of such interventions. This systematic review aimed to identify programs that used Intervention Mapping to plan changes in health care professional practice. Specifically, it provides an analysis of the information provided by the programs in the first three steps of the protocol to determine their foundations and rationales of change. A literature search was undertaken in PubMed, Scopus, SciELO, and DOAJ using "Intervention Mapping" as keyword. Key information was gathered, including theories used, determinants of practice, research methodologies, theory-based methods, and practical applications. Seventeen programs aimed at changing a range of health care practices were included. The social cognitive theory and the theory of planned behavior were the most frequently used frameworks in driving change within health care practices. Programs used a large variety of research methodologies to identify determinants of practice. Specific theory-based methods (e.g., modelling and active learning) and practical applications (e.g., health care professional training and facilitation) were reported to inform the development of practice change interventions and programs. In practice, Intervention Mapping delineates a three-step systematic, theory- and evidence-driven process for establishing the theoretical foundations and rationales underpinning change in health care professional practice. The use of Intervention Mapping can provide health care planners with useful guidelines for the theoretical development of practice change interventions and programs.
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. Furthermore, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
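As a hedged illustration of the distinction drawn in the abstract above — Galerkin projects the time-continuous equations onto the reduced basis, whereas LSPG minimizes the fully discrete residual at every time step — the following sketch contrasts the two for a linear ODE with implicit Euler time stepping. All names and sizes (A, Phi, dt, nsteps) are illustrative assumptions, not the GNAT implementation or the paper's test problem.

```python
import numpy as np

# Minimal sketch: Galerkin vs. LSPG projection for dx/dt = A x with basis Phi.
rng = np.random.default_rng(0)
n, k = 50, 5                                        # full and reduced dimensions
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal reduced basis
x0 = rng.standard_normal(n)
dt, nsteps = 0.1, 20

def galerkin_rom(x0, dt, nsteps):
    # Galerkin: project the time-continuous equation, then discretize.
    Ar = Phi.T @ A @ Phi                            # reduced operator
    q = Phi.T @ x0
    for _ in range(nsteps):
        q = np.linalg.solve(np.eye(k) - dt * Ar, q)  # implicit Euler in reduced space
    return Phi @ q

def lspg_rom(x0, dt, nsteps):
    # LSPG: minimize the *discrete* residual r(q) = (I - dt*A) Phi q - Phi q_old
    # in the least-squares sense at every time step.
    q = Phi.T @ x0
    J = (np.eye(n) - dt * A) @ Phi                  # residual Jacobian (constant here)
    for _ in range(nsteps):
        b = Phi @ q                                 # previous full-state reconstruction
        q, *_ = np.linalg.lstsq(J, b, rcond=None)
    return Phi @ q

print(np.linalg.norm(galerkin_rom(x0, dt, nsteps) - lspg_rom(x0, dt, nsteps)))
```

The script prints the norm of the difference between the two reduced-order predictions, which is one simple way to see that the two projections generally do not coincide at finite time steps.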
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawai, Kotaro, E-mail: s135016@stn.nagaokaut.ac.jp; Sakamoto, Moritsugu; Noda, Kohei
2016-03-28
A diffractive optical element with a three-dimensional liquid crystal (LC) alignment structure for advanced control of polarized beams was fabricated by a highly efficient one-step photoalignment method. This study is of great significance because two different two-dimensional, continuous and complex alignment patterns can be produced on the two alignment films by simultaneously irradiating an empty glass cell, composed of two unaligned photocrosslinkable polymer LC films, with three-beam polarized interference. The polarization azimuth, ellipticity, and rotation direction of the beams diffracted from the resultant LC grating varied widely depending on the two-dimensional diffraction position and the polarization states of the incident beams. These polarization diffraction properties are well explained by theoretical analysis based on Jones calculus.
Nuclear Physics Laboratory technical progress report, November 1, 1972-- November 1, 1973
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1973-11-01
The experimental program was divided into the areas of nuclear physics (charged-particle experiments, gamma-ray experiments and beta decay, neutron time-of-flight experiments, x-ray fluorescence analysis, other activities), intermediate energy physics, and apparatus and facility development. The energy-loss spectrograph, rotating-beam neutron time-of-flight spectrometer, and cyclotron, and the research done using these facilities, are described. The theoretical program has concentrated on the effects of two-step processes in nuclear reactions. The trace element analysis program continued, and a neutron beam for cancer therapy is being developed. Lists of publications and personnel are also included. (RWR)
Li, Juan; Gu, Honghong; Wu, Caihong; Du, Lijuan
2014-11-28
In this study, the Cu(OAc)2- and [PdCl2(PhCN)2]-catalyzed syntheses of benzimidazoles from amidines were theoretically investigated using density functional theory calculations. For the Cu-catalyzed system, our calculations supported a four-step pathway involving C-H activation of an arene with Cu(II) via concerted metalation-deprotonation (CMD), followed by oxidation of the Cu(II) intermediate and deprotonation of the imino group by Cu(III), and finally reductive elimination from Cu(III). In our calculations, the barriers for the CMD step and the oxidation step are the same. The results are different from the ones reported by Fu et al. in which the whole reaction mechanism includes three steps and the CMD step is rate determining. On the basis of the calculation results for the [PdCl2(PhCN)2]-catalyzed system, C-H bond breaking by CMD occurs first, followed by the rate-determining C-N bond formation and N-H deprotonation. A Pd(III) species is not involved in the [PdCl2(PhCN)2]-catalyzed syntheses of benzimidazoles from amidines.
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
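A minimal sketch, under assumed data and hyperparameters, of the kind of kernel-based stochastic gradient step the abstract describes: pairs of examples are sampled, a least-squares loss is applied to the difference of predicted scores, and the kernel expansion is updated one pair at a time. The Gaussian kernel, the step-size schedule, and the regularization constant lam are illustrative choices, not the paper's.

```python
import numpy as np

# Pairwise least-squares ranking by SGD on a kernel expansion:
#   loss = ( f(x_i) - f(x_j) - (y_i - y_j) )^2,  f(x) = sum_t alpha_t K(z_t, x).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + 0.1 * rng.standard_normal(200)   # toy relevance scores

def K(a, b, gamma=0.5):
    return float(np.exp(-gamma * np.sum((a - b) ** 2)))

centers, alphas = [], []          # kernel expansion built up by SGD
lam = 1e-3
for t in range(1, 501):
    i, j = rng.integers(0, len(X), size=2)
    fi = sum(a * K(z, X[i]) for a, z in zip(alphas, centers))
    fj = sum(a * K(z, X[j]) for a, z in zip(alphas, centers))
    err = (fi - fj) - (y[i] - y[j])
    eta = 0.5 / np.sqrt(t)
    alphas = [(1 - eta * lam) * a for a in alphas]        # shrinkage from the regularizer
    centers += [X[i], X[j]]
    alphas += [-eta * err, +eta * err]                    # gradient step on the sampled pair

# rough check: learned scores should order the first examples roughly like y does
f = np.array([sum(a * K(z, x) for a, z in zip(alphas, centers)) for x in X[:30]])
print("score/label correlation:", round(float(np.corrcoef(f, y[:30])[0, 1]), 2))
```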
Theoretical study of gas hydrate decomposition kinetics--model development.
Windmeier, Christoph; Oellrich, Lothar R
2013-10-10
In order to provide an estimate of the order of magnitude of intrinsic gas hydrate dissolution and dissociation kinetics, the "Consecutive Desorption and Melting Model" (CDM) is developed by applying only theoretical considerations. The process of gas hydrate decomposition is assumed to comprise two consecutive and repetitive quasi chemical reaction steps. These are desorption of the guest molecule followed by local solid body melting. The individual kinetic steps are modeled according to the "Statistical Rate Theory of Interfacial Transport" and the Wilson-Frenkel approach. All missing required model parameters are directly linked to geometric considerations and a thermodynamic gas hydrate equilibrium model.
Design of c-band telecontrol transmitter local oscillator for UAV data link
NASA Astrophysics Data System (ADS)
Cao, Hui; Qu, Yu; Song, Zuxun
2018-01-01
A C-band local oscillator for an Unmanned Aerial Vehicle (UAV) data link radio frequency (RF) transmitter unit, with high stability, high precision and light weight, was designed in this paper. Based on the highly integrated broadband phase-locked loop (PLL) chip HMC834LP6GE, the system performed fractional-N control by programming internal modules to achieve low phase noise and fine frequency resolution. Simulation and testing methods were combined to optimize and select the loop filter parameters to ensure the high precision and stability of the frequency synthesis output. Theoretical analysis and engineering prototype measurements showed that the local oscillator had a stable output frequency, accurate frequency steps, high spurious suppression and low phase noise, and met the design requirements. The proposed design idea and research method provide theoretical guidance for engineering practice.
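For orientation only, a sketch of the generic fractional-N frequency arithmetic that such a PLL-based local oscillator relies on; the divider values and modulus below are assumptions, not the HMC834LP6GE register settings.

```python
# Generic fractional-N synthesis arithmetic (illustrative numbers only):
#   f_vco = f_pfd * (N_int + N_frac / MOD),  resolution = f_pfd / MOD.
f_ref = 50e6          # reference input [Hz] (assumed)
R = 1                 # reference divider (assumed)
f_pfd = f_ref / R     # phase-frequency detector comparison frequency

N_int, N_frac, MOD = 100, 5_000_000, 2 ** 24   # illustrative divider settings

f_vco = f_pfd * (N_int + N_frac / MOD)
print(f"output frequency  : {f_vco / 1e9:.6f} GHz")
print(f"step (resolution) : {f_pfd / MOD:.3f} Hz")
```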
A graph theoretical perspective of a drug abuse epidemic model
NASA Astrophysics Data System (ADS)
Nyabadza, F.; Mukwembi, S.; Rodrigues, B. G.
2011-05-01
A drug use epidemic can be represented by a finite number of states and transition rules that govern the dynamics of drug use in each discrete time step. This paper investigates the spread of drug use in a community where some users are in treatment and others are not in treatment, citing South Africa as an example. In our analysis, we consider the neighbourhood prevalence of each individual, i.e., the proportion of the individual’s drug user contacts who are not in treatment amongst all of his or her contacts. We introduce parameters α∗, β∗ and γ∗, depending on the neighbourhood prevalence, which govern the spread of drug use. We examine how changes in α∗, β∗ and γ∗ affect the system dynamics. Simulations presented support the theoretical results.
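A deliberately simplified sketch, under assumed states and rates, of a discrete-time spread process on a contact network in which each individual's transition probabilities depend on their neighbourhood prevalence; the transition rules and the constants alpha, beta, gamma below stand in for the paper's prevalence-dependent parameters α∗, β∗ and γ∗, whose exact forms are not reproduced here.

```python
import random

# States: 'S' susceptible, 'U' user not in treatment, 'T' user in treatment.
# Neighbourhood prevalence of node i = fraction of i's contacts in state 'U'.
random.seed(0)
N, p_edge = 300, 0.03
adj = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p_edge:       # random symmetric contact network
            adj[i].append(j)
            adj[j].append(i)

state = ['U' if random.random() < 0.05 else 'S' for _ in range(N)]
alpha, beta, gamma = 0.4, 0.1, 0.05         # initiation, treatment entry, relapse (assumed)

def neighbourhood_prevalence(i):
    return sum(state[j] == 'U' for j in adj[i]) / len(adj[i]) if adj[i] else 0.0

for _ in range(50):                          # discrete time steps
    new = state[:]
    for i in range(N):
        q = neighbourhood_prevalence(i)
        if state[i] == 'S' and random.random() < alpha * q:
            new[i] = 'U'                     # initiation driven by untreated contacts
        elif state[i] == 'U' and random.random() < beta:
            new[i] = 'T'                     # enters treatment
        elif state[i] == 'T' and random.random() < gamma * q:
            new[i] = 'U'                     # relapse, again prevalence-dependent
    state = new

print({s: state.count(s) for s in 'SUT'})
```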
The min-conflicts heuristic: Experimental and theoretical results
NASA Technical Reports Server (NTRS)
Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip
1991-01-01
This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
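Since the abstract describes the min-conflicts repair procedure itself (start from a complete assignment, repeatedly move a conflicted variable to its least-conflicting value), a short self-contained sketch on n-queens may help; the problem size and step budget are arbitrary.

```python
import random

# Min-conflicts repair on n-queens: one queen per column; a repair moves one
# conflicted queen to the row in its column that minimizes its conflicts.
def conflicts(rows, col, row):
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=100_000):
    rows = [random.randrange(n) for _ in range(n)]        # random initial assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                                   # solution found
        col = random.choice(conflicted)
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
    return None

solution = min_conflicts(50)
print("solved" if solution else "no solution within step budget")
```

The same repair loop scales to very large n, which is the behaviour the abstract reports for the million-queens problem.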
Van Kesteren, Nicole M C; Kok, Gerjo; Hospers, Harm J; Schippers, Jan; De Wildt, Wencke
2006-12-01
The objective of this study was to describe the application of a systematic process-Intervention Mapping-to developing a theory- and evidence-based intervention to promote sexual health in HIV-positive men who have sex with men (MSM). Intervention Mapping provides a framework that gives program planners a systematic method for decision-making in each phase of intervention development. In Step 1, we focused on the improvement of two health-promoting behaviors: satisfactory sexual functioning and safer sexual behavior. These behaviors were then linked with selected personal and external determinants, such as attitudes and social support, to produce a set of proximal program objectives. In Step 2, theoretical methods were identified to influence the proximal program objectives and were translated into practical strategies. Although theoretical methods were derived from various theories, self-regulation theory and a cognitive model of behavior change provided the main framework for selecting the intervention methods. The main strategies chosen were bibliotherapy (i.e., the use of written material to help people solve problems or change behavior) and motivational interviewing. In Step 3, the theoretical methods and practical strategies were applied in a program that comprised a self-help guide, a motivational interviewing session and a motivational interviewing telephone call, both delivered by specialist nurses in HIV treatment centers. In Step 4, implementation was anticipated by developing a linkage group to ensure involvement of program users in the planning process and conducting additional research to understand how to implement our program better. In Step 5, program evaluation was anticipated based on the planning process from the previous Intervention Mapping steps.
Gemignani, Jessica; Middell, Eike; Barbour, Randall L; Graber, Harry L; Blankertz, Benjamin
2018-04-04
The statistical analysis of functional near infrared spectroscopy (fNIRS) data based on the general linear model (GLM) is often made difficult by serial correlations, high inter-subject variability of the hemodynamic response, and the presence of motion artifacts. In this work we propose to extract information on the pattern of hemodynamic activations without using any a priori model for the data, by classifying the channels as 'active' or 'not active' with a multivariate classifier based on linear discriminant analysis (LDA). This work is developed in two steps. First we compared the performance of the two analyses, using a synthetic approach in which simulated hemodynamic activations were combined with either simulated or real resting-state fNIRS data. This procedure allowed for exact quantification of the classification accuracies of GLM and LDA. In the case of real resting-state data, the correlations between classification accuracy and demographic characteristics were investigated by means of a Linear Mixed Model. In the second step, to further characterize the reliability of the newly proposed analysis method, we conducted an experiment in which participants had to perform a simple motor task and data were analyzed with the LDA-based classifier as well as with the standard GLM analysis. The results of the simulation study show that the LDA-based method achieves higher classification accuracies than the GLM analysis, and that the LDA results are more uniform across different subjects and, in contrast to the accuracies achieved by the GLM analysis, have no significant correlations with any of the demographic characteristics. Findings from the real-data experiment are consistent with the results of the real-plus-simulation study, in that the GLM-analysis results show greater inter-subject variability than do the corresponding LDA results. The results obtained suggest that the outcome of GLM analysis is highly vulnerable to violations of theoretical assumptions, and that therefore a data-driven approach such as that provided by the proposed LDA-based method is to be favored.
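A hedged sketch of the channel-classification idea described above: features are extracted per epoch and a linear discriminant analysis classifier decides whether task and rest epochs are separable for a given channel. The synthetic "HbO-like" epochs and the three features are assumptions for illustration, not the authors' preprocessing.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_epochs = 120
labels = rng.integers(0, 2, n_epochs)              # 1 = task epoch, 0 = rest epoch

def epoch_features(is_task):
    t = np.linspace(0, 10, 100)
    hrf = 0.6 * np.exp(-(t - 5) ** 2 / 4) if is_task else 0.0
    sig = hrf + 0.3 * rng.standard_normal(t.size)  # noisy HbO-like trace for one channel
    slope = np.polyfit(t, sig, 1)[0]
    return [sig.mean(), slope, sig.max()]          # simple per-epoch features

X = np.array([epoch_features(bool(y)) for y in labels])

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, labels, cv=5).mean()
# a channel could then be labelled 'active' if task and rest epochs are separable
print(f"cross-validated task-vs-rest accuracy for this channel: {acc:.2f}")
```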
Program Evaluation Theory and Practice: A Comprehensive Guide
ERIC Educational Resources Information Center
Mertens, Donna M.; Wilson, Amy T.
2012-01-01
This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for…
Theoretical Chemistry Comes Alive: Full Partner with Experiment.
ERIC Educational Resources Information Center
Goddard, William A., III
1985-01-01
The expected thrust for theoretical chemistry in the next decade will be to combine knowledge of fundamental chemical steps/interactions with advances in chemical dynamics, irreversible statistical mechanics, and computer technology to produce simulations of chemical systems with reaction site competition. A sample simulation (using the enzyme…
A Step-by-Step Picture of Pulsed (Time-Domain) NMR.
ERIC Educational Resources Information Center
Schwartz, Leslie J.
1988-01-01
Discusses a method for teaching pulsed (time-domain) NMR principles that is as simple and pictorial as possible. Uses xyz-coordinate figures and presents theoretical explanations using a Fourier-transform spectrum. Assumes no previous knowledge of quantum mechanics on the part of students. Usable for undergraduates. (MVL)
Systematic text condensation: a strategy for qualitative analysis.
Malterud, Kirsti
2012-12-01
To present background, principles, and procedures for a strategy for qualitative analysis called systematic text condensation and discuss this approach compared with related strategies. Giorgi's psychological phenomenological analysis is the point of departure and inspiration for systematic text condensation. The basic elements of Giorgi's method and the elaboration of these in systematic text condensation are presented, followed by a detailed description of procedures for analysis according to systematic text condensation. Finally, similarities and differences compared with other frequently applied methods for qualitative analysis are identified, as the foundation of a discussion of strengths and limitations of systematic text condensation. Systematic text condensation is a descriptive and explorative method for thematic cross-case analysis of different types of qualitative data, such as interview studies, observational studies, and analysis of written texts. The method represents a pragmatic approach, although inspired by phenomenological ideas, and various theoretical frameworks can be applied. The procedure consists of the following steps: 1) total impression - from chaos to themes; 2) identifying and sorting meaning units - from themes to codes; 3) condensation - from code to meaning; 4) synthesizing - from condensation to descriptions and concepts. Similarities and differences comparing systematic text condensation with other frequently applied qualitative methods regarding thematic analysis, theoretical methodological framework, analysis procedures, and taxonomy are discussed. Systematic text condensation is a strategy for analysis developed from traditions shared by most of the methods for analysis of qualitative data. The method offers the novice researcher a process of intersubjectivity, reflexivity, and feasibility, while maintaining a responsible level of methodological rigour.
Lim, Kwang Soo; Baldoví, José J; Jiang, ShangDa; Koo, Bong Ho; Kang, Dong Won; Lee, Woo Ram; Koh, Eui Kwan; Gaita-Ariño, Alejandro; Coronado, Eugenio; Slota, Michael; Bogani, Lapo; Hong, Chang Seop
2017-05-01
Controlling the coordination sphere of lanthanoid complexes is a challenging critical step toward controlling their relaxation properties. Here we present the synthesis of hexacoordinated dysprosium single-molecule magnets, where tripodal ligands achieve a near-perfect octahedral coordination. We perform a complete experimental and theoretical investigation of their magnetic properties, including a full single-crystal magnetic anisotropy analysis. The combination of electrostatic and crystal-field computational tools (SIMPRE and CONDON codes) allows us to explain the static behavior of these systems in detail.
NASA Astrophysics Data System (ADS)
Rao, Lang; Cai, Bo; Yu, Xiao-Lei; Guo, Shi-Shang; Liu, Wei; Zhao, Xing-Zhong
2015-05-01
3D microelectrodes are fabricated in one step into a microfluidic droplet separator by filling conductive silver paste into PDMS microchambers. The advantages of 3D silver paste electrodes in promoting droplet sorting accuracy are systematically demonstrated by theoretical calculation, numerical simulation and experimental validation. The use of 3D electrodes also helps to decrease the droplet sorting voltage, guaranteeing that cells encapsulated in droplets undergoing chip-based sorting are at a better metabolic status for further potential cellular assays. Finally, target droplets containing single cells are selectively sorted out from the others by an appropriate electric pulse. This method provides a simple and inexpensive alternative for fabricating 3D electrodes, and it is expected that our 3D electrode-integrated microfluidic droplet separator platform can be widely used in single-cell operation and analysis.
Bulmer, Simon; Joseph, Jonathan
2015-01-01
The European Union is facing multiple challenges. Departing from mainstream theory, this article adopts a fresh approach to understanding integration. It does so by taking two theoretical steps. The first introduces the structure–agency debate in order to make explicit the relationship between macro-structures, the institutional arrangements at European Union level and agency. The second proposes that the state of integration should be understood as the outcome of contestation between competing hegemonic projects that derive from underlying social processes and that find their primary expression in domestic politics. These two steps facilitate an analysis of the key areas of contestation in the contemporary European Union, illustrated by an exploration of the current crisis in the European Union, and open up the development of an alternative, critical, theory of integration. PMID:29708125
NASA Technical Reports Server (NTRS)
Atreya, Arvind; Agrawal, Sanjay; Sacksteder, Kurt; Baum, Howard R.
1994-01-01
This paper presents the experimental and theoretical results for expanding methane and ethylene diffusion flames in microgravity. A small porous sphere made from a low-density and low-heat-capacity insulating material was used to uniformly supply fuel at a constant rate to the expanding diffusion flame. A theoretical model which includes soot and gas radiation is formulated, but only the problem pertaining to the transient expansion of the flame is solved, by assuming a constant-pressure, infinitely fast, one-step ideal-gas reaction and unity Lewis number. This is a first step toward quantifying the effect of soot and gas radiation on these flames. The theoretically calculated expansion rate is in good agreement with the experimental results. Both experimental and theoretical results show that as the flame radius increases, the flame expansion process becomes diffusion controlled and the flame radius grows as √t. Theoretical calculations also show that for a constant fuel mass injection rate a quasi-steady state is developed in the region surrounded by the flame and the mass flow rate at any location inside this region equals the mass injection rate.
Unraveling the role of protein dynamics in dihydrofolate reductase catalysis
Luk, Louis Y. P.; Javier Ruiz-Pernía, J.; Dawson, William M.; Roca, Maite; Loveridge, E. Joel; Glowacki, David R.; Harvey, Jeremy N.; Mulholland, Adrian J.; Tuñón, Iñaki; Moliner, Vicent; Allemann, Rudolf K.
2013-01-01
Protein dynamics have controversially been proposed to be at the heart of enzyme catalysis, but identification and analysis of dynamical effects in enzyme-catalyzed reactions have proved very challenging. Here, we tackle this question by comparing an enzyme with its heavy (15N, 13C, 2H substituted) counterpart, providing a subtle probe of dynamics. The crucial hydride transfer step of the reaction (the chemical step) occurs more slowly in the heavy enzyme. A combination of experimental results, quantum mechanics/molecular mechanics simulations, and theoretical analyses identify the origins of the observed differences in reactivity. The generally slightly slower reaction in the heavy enzyme reflects differences in environmental coupling to the hydride transfer step. Importantly, the barrier and contribution of quantum tunneling are not affected, indicating no significant role for “promoting motions” in driving tunneling or modulating the barrier. The chemical step is slower in the heavy enzyme because protein motions coupled to the reaction coordinate are slower. The fact that the heavy enzyme is only slightly less active than its light counterpart shows that protein dynamics have a small, but measurable, effect on the chemical reaction rate. PMID:24065822
Varieties of second modernity: the cosmopolitan turn in social and political theory and research.
Beck, Ulrich; Grande, Edgar
2010-09-01
The theme of this special issue is the necessity of a cosmopolitan turn in social and political theory. The question at the heart of this introductory chapter takes the challenge of 'methodological cosmopolitanism', already addressed in a Special Issue on Cosmopolitan Sociology in this journal (Beck and Sznaider 2006), an important step further: How can social and political theory be opened up, theoretically as well as methodologically and normatively, to a historically new, entangled Modernity which threatens its own foundations? How can it account for the fundamental fragility, the mutability of societal dynamics (of unintended side effects, domination and power), shaped by the globalization of capital and risks at the beginning of the twenty-first century? What theoretical and methodological problems arise and how can they be addressed in empirical research? In the following, we will develop this 'cosmopolitan turn' in four steps: firstly, we present the major conceptual tools for a theory of cosmopolitan modernities; secondly, we de-construct Western modernity by using examples taken from research on individualization and risk; thirdly, we address the key problem of methodological cosmopolitanism, namely the problem of defining the appropriate unit of analysis; and finally, we discuss normative questions, perspectives, and dilemmas of a theory of cosmopolitan modernities, in particular problems of political agency and prospects of political realization.
Montez, Jennifer Karas; Hummer, Robert A.; Hayward, Mark D.
2012-01-01
A vast literature has documented the inverse association between educational attainment and U.S. adult mortality risk, but given little attention to identifying the optimal functional form of the association. A theoretical explanation of the association hinges on our ability to empirically describe it. Using the 1979–1998 National Longitudinal Mortality Study for non-Hispanic white and black adults aged 25–100 years during the mortality follow-up period (N=1,008,215), we evaluated 13 functional forms across race-gender-age subgroups to determine which form(s) best captured the association. Results revealed that a functional form that includes a linear decline in mortality risk from 0–11 years of education, followed by a step-change reduction in mortality risk upon attainment of a high school diploma, at which point mortality risk resumes a linear decline but with a steeper slope than that prior to a high school diploma was generally preferred. The findings provide important clues for theoretical development of explanatory mechanisms: an explanation for the selected functional form may require integrating a credentialist perspective to explain the step-change reduction in mortality risk upon attainment of a high school diploma, with a human capital perspective to explain the linear declines before and after a high school diploma. PMID:22246797
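In our own notation, and only as a hedged reading of the verbal description above, the preferred functional form can be written as a piecewise-linear spline in years of education E with a credential step at the high-school diploma:

```latex
% One hedged reading of the preferred functional form (our notation):
% E = completed years of education, h(E) = mortality risk.
\[
  \log h(E) =
  \begin{cases}
    \beta_{0} + \beta_{1} E, & 0 \le E \le 11,\\[4pt]
    \beta_{0} + \beta_{1} E + \delta + \beta_{2}\,(E - 12), & E \ge 12,
  \end{cases}
\]
% with \delta < 0 the step-change reduction at the high-school diploma and
% \beta_{1} + \beta_{2} < \beta_{1} < 0, i.e. a steeper linear decline after
% the diploma than before it.
```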
NASA Astrophysics Data System (ADS)
Förtsch, Christian; Dorfner, Tobias; Baumgartner, Julia; Werner, Sonja; von Kotzebue, Lena; Neuhaus, Birgit J.
2018-04-01
The German National Education Standards (NES) for biology were introduced in 2005. The content part of the NES emphasizes fostering conceptual knowledge. However, there are hardly any indications of what such an instructional implementation could look like. We introduce a theoretical framework of an instructional approach to foster students' conceptual knowledge as demanded in the NES (Fostering Conceptual Knowledge) including instructional practices derived from research on single core ideas, general psychological theories, and biology-specific features of instructional quality. First, we aimed to develop a rating manual, which is based on this theoretical framework. Second, we wanted to describe current German biology instruction according to this approach and to quantitatively analyze its effectiveness. And third, we aimed to provide qualitative examples of this approach to triangulate our findings. In a first step, we developed a theoretically devised rating manual to measure Fostering Conceptual Knowledge in videotaped lessons. Data for quantitative analysis included 81 videotaped biology lessons of 28 biology teachers from different German secondary schools. Six hundred forty students completed a questionnaire on their situational interest after each lesson and an achievement test. Results from multilevel modeling showed significant positive effects of Fostering Conceptual Knowledge on students' achievement and situational interest. For qualitative analysis, we contrasted instruction of four teachers, two with high and two with low student achievement and situational interest using the qualitative method of thematic analysis. Qualitative analysis revealed five main characteristics describing Fostering Conceptual Knowledge. Therefore, implementing Fostering Conceptual Knowledge in biology instruction seems promising. Examples of how to implement Fostering Conceptual Knowledge in instruction are shown and discussed.
The instruments of higher order thinking skills
NASA Astrophysics Data System (ADS)
Ahmad, S.; Prahmana, R. C. I.; Kenedi, A. K.; Helsa, Y.; Arianil, Y.; Zainil, M.
2017-12-01
This research developed a standard instrument for measuring the Higher Order Thinking Skills (HOTS) of PGSD students. The research method used was developmental research with eight steps, namely theoretical study, operational definition, construct designation, dimensions and indicators, preparation of the instrument blueprint, preparation of the items, analysis of legibility and social desirability, field trials, and data analysis. In accordance with the type of data to be obtained in this study, the research instruments were a validation sheet, implementation observations, and a questionnaire. The results show that the instruments are valid and, according to expert review, feasible to use, and they have been tested on PGSD students, 60% of whom fell into the low category.
Growth habit and surface morphology of L-arginine phosphate monohydrate single crystals
NASA Astrophysics Data System (ADS)
Sangwal, K.; Veintemillas-Verdaguer, S.; Torrent-Burgués, J.
1995-10-01
The results of a study of the growth habit and the surface topography of L-arginine phosphate monohydrate (LAP) single crystals as a function of supersaturation are described and discussed. Apart from a change in the growth habit with supersaturation, it was observed that most of the as-grown faces of LAP exhibit isolated growth hillocks and macrohillocks and parallel bunched layers and that the formation of bunched layers is pronounced on faces showing macrohillocks. Observations of bunching of growth layers emitted by macrohillocks on the {100} faces revealed that, for the onset of bunching close to a macrospiral, there is a characteristic threshold distance whose value depends on the interstep distance and supersaturation, but is independent of step height. The theoretical habit of LAP deduced from PBC analysis showed that all faces exhibiting growth hillocks and macrohillocks are F faces. Analysis of the results on bunch formation revealed that growth of LAP takes place by the direct integration of growth entities at the growth steps, that the bunching is facilitated by an increasing value of the activation energy for their integration, and that the observed dependencies of threshold distance on interstep distance, supersaturation and step height are qualitatively in agreement with van der Eerden and Müller-Krumbhaar's theory of bunch formation.
Kim, Seyoung; Park, Sukyung
2012-01-10
Humans use equal push-off and heel strike work during the double support phase to minimize the mechanical work done on the center of mass (CoM) during the gait. Recently, a step-to-step transition was reported to occur over a period of time greater than that of the double support phase, which brings into question whether the energetic optimality is sensitive to the definition of the step-to-step transition. To answer this question, the ground reaction forces (GRFs) of seven normal human subjects walking at four different speeds (1.1-2.4 m/s) were measured, and the push-off and heel strike work for three differently defined step-to-step transitions were computed based on the force, work, and velocity. To examine the optimality of the work and the impulse data, a hybrid theoretical-empirical analysis is presented using a dynamic walking model that allows finite time for step-to-step transitions and incorporates the effects of gravity within this period. The changes in the work and impulse were examined parametrically across a range of speeds. The results showed that the push-off work on the CoM was well balanced by the heel strike work for all three definitions of the step-to-step transition. The impulse data were well matched by the optimal impulse predictions (R(2)>0.7) that minimized the mechanical work done on the CoM during the gait. The results suggest that the balance of push-off and heel strike energy is a consistent property arising from the overall gait dynamics, which implies an inherited oscillatory behavior of the CoM, possibly by spring-like leg mechanics. Copyright © 2011 Elsevier Ltd. All rights reserved.
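A hedged sketch (synthetic data, assumed mass and initial velocity) of how push-off and heel-strike work on the CoM can be computed from ground reaction forces, in the spirit of an individual-limbs analysis: CoM velocity is integrated from the total GRF, each limb's power is the dot product of that limb's GRF with the CoM velocity, and the positive trailing-limb and negative leading-limb power integrals over the transition give the two work quantities. The force traces below are not the study's data, nor its three transition definitions.

```python
import numpy as np

m, g, dt = 70.0, 9.81, 0.001
t = np.arange(0.0, 0.6, dt)

# toy GRFs, columns = [fore-aft, vertical], for the trailing and leading legs
F_trail = np.stack([ 80.0 * np.exp(-((t - 0.15) / 0.05) ** 2),
                     m * g * np.exp(-((t - 0.10) / 0.10) ** 2)], axis=1)
F_lead  = np.stack([-80.0 * np.exp(-((t - 0.35) / 0.05) ** 2),
                     m * g * np.exp(-((t - 0.40) / 0.10) ** 2)], axis=1)

a_com = (F_trail + F_lead - np.array([0.0, m * g])) / m        # CoM acceleration
v_com = np.cumsum(a_com, axis=0) * dt + np.array([1.2, 0.0])   # assumed initial CoM velocity

P_trail = np.einsum('ij,ij->i', F_trail, v_com)                # instantaneous limb powers [W]
P_lead  = np.einsum('ij,ij->i', F_lead,  v_com)

push_off    = np.clip(P_trail, 0, None).sum() * dt             # positive trailing-limb work [J]
heel_strike = np.clip(P_lead, None, 0).sum() * dt              # negative leading-limb work [J]
print(f"push-off ~ {push_off:.1f} J, heel strike ~ {heel_strike:.1f} J")
```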
Caffier, D; Gillet, C; Heurley, L P; Bourrelly, A; Barbier, F; Naveteur, J
2017-03-01
With reference to theoretical models regarding links between emotions and actions, the present study examined whether the lateral occurrence of an emotional stimulus influences spatial and temporal parameters of gait initiation in 18 younger and 18 older healthy adults. In order to simulate road-crossing hazard for pedestrians, slides of approaching cars were used and they were presented in counterbalanced order with threatening slides from the International Affective Picture System (IAPS) and control slides of safe walking areas. Each slide was presented on the left side of the participant once the first step was initiated. The results evidenced medio-lateral shifts to the left for the first step (right foot) and to the right for the second step (left foot). These shifts were both modulated by the slide contents in such a way that the resulting distance between the screen and the foot (right or left) was larger with the IAPS and traffic slides than with the control slides. The slides did not affect the base of support, step length, step velocity and time of double support. Advancing age influenced the subjective impact of the slides and gait characteristics, but did not modulate medio-lateral shifts. The data extend evidence of fast, emotional modulation of stepping, with theoretical and applied consequences.
Small Steps towards Student-Centred Learning
ERIC Educational Resources Information Center
Jacobs, George M.; Toh-Heng, Hwee Leng
2013-01-01
Student centred learning classroom practices are contrasted with those in teacher centred learning classrooms. The discussion focuses on the theoretical underpinnings of the former, and provides nine steps and tips on how to implement student centred learning strategies, with the aim of developing the 21st century skills of self-directed and…
An Ecological Approach to Learning Dynamics
ERIC Educational Resources Information Center
Normak, Peeter; Pata, Kai; Kaipainen, Mauri
2012-01-01
New approaches to emergent learner-directed learning design can be strengthened with a theoretical framework that considers learning as a dynamic process. We propose an approach that models a learning process using a set of spatial concepts: learning space, position of a learner, niche, perspective, step, path, direction of a step and step…
NASA Technical Reports Server (NTRS)
Curfman, Howard J., Jr.
1955-01-01
Through theoretical and analog results the effects of two nonlinear stability derivatives on the longitudinal motions of an aircraft have been investigated. Nonlinear functions of pitching-moment and lift coefficients with angle of attack were considered. Analog results of aircraft motions in response to step elevator deflections and to the action of the proportional control systems are presented. The occurrence of continuous hunting oscillations was predicted and demonstrated for the attitude stabilization system with proportional control for certain nonlinear pitching-moment variations and autopilot adjustments.
Simultaneously constraining the astrophysics of reionisation and the epoch of heating with 21CMMC
NASA Astrophysics Data System (ADS)
Greig, Bradley; Mesinger, Andrei
2018-05-01
We extend our MCMC sampler of 3D EoR simulations, 21CMMC, to perform parameter estimation directly on light-cones of the cosmic 21cm signal. This brings theoretical analysis one step closer to matching the expected 21-cm signal from next generation interferometers like HERA and the SKA. Using the light-cone version of 21CMMC, we quantify biases in the recovered astrophysical parameters obtained from the 21cm power spectrum when using the co-eval approximation to fit a mock 3D light-cone observation. While ignoring the light-cone effect does not bias the parameters under most assumptions, it can still underestimate their uncertainties. However, significant biases (~few - 10 σ) are possible if all of the following conditions are met: (i) foreground removal is very efficient, allowing large physical scales (k ~ 0.1 Mpc-1) to be used in the analysis; (ii) theoretical modelling is accurate to ~10 per cent in the power spectrum amplitude; and (iii) the 21cm signal evolves rapidly (i.e. the epochs of reionisation and heating overlap significantly).
Yardley, Sarah J; Watts, Kate M; Pearson, Jennifer; Richardson, Jane C
2014-01-01
In this article, we explore ethical issues in qualitative secondary analysis through a comparison of the literature with practitioner and participant perspectives. To achieve this, we integrated critical narrative review findings with data from two discussion groups: qualitative researchers and research users/consumers. In the literature, we found that theoretical debate ran parallel to practical action rather than being integrated with it. We identified an important and novel theme of relationships that was emerging from the perspectives of researchers and users. Relationships were significant with respect to trust, sharing data, transparency and clarity, anonymity, permissions, and responsibility. We provide an example of practice development that we hope will prompt researchers to re-examine the issues in their own setting. Informing the research community of research practitioner and user perspectives on ethical issues in the reuse of qualitative data is the first step toward developing mechanisms to better integrate theoretical and empirical work.
Broadband polygonal invisibility cloak for visible light
Chen, Hongsheng; Zheng, Bin
2012-01-01
Invisibility cloaks have recently become a topic of considerable interest thanks to the theoretical works of transformation optics and conformal mapping. The design of the cloak involves extreme values of material properties and spatially dependent parameter tensors, which are very difficult to implement. The realization of an isolated invisibility cloak in the visible light, which is an important step towards achieving a fully movable invisibility cloak, has remained elusive. Here, we report the design and experimental demonstration of an isolated polygonal cloak for visible light. The cloak is made of several elements, whose electromagnetic parameters are designed by a linear homogeneous transformation method. Theoretical analysis shows the proposed cloak can be rendered invisible to the rays incident from all the directions. Using natural anisotropic materials, a simplified hexagonal cloak which works for six incident directions is fabricated for experimental demonstration. The performance is validated in a broadband visible spectrum. PMID:22355767
A Unified Model of Knowledge Sharing Behaviours: Theoretical Development and Empirical Test
ERIC Educational Resources Information Center
Chennamaneni, Anitha; Teng, James T. C.; Raja, M. K.
2012-01-01
Research and practice on knowledge management (KM) have shown that information technology alone cannot guarantee that employees will volunteer and share knowledge. While previous studies have linked motivational factors to knowledge sharing (KS), we took a further step to thoroughly examine this theoretically and empirically. We developed a…
Stienen, Martin N; Netuka, David; Demetriades, Andreas K; Ringel, Florian; Gautschi, Oliver P; Gempt, Jens; Kuhlen, Dominique; Schaller, Karl
2016-10-01
Substantial country differences in neurosurgical training throughout Europe have recently been described, ranging from subjective rating of training quality to objective working hours per week. The aim of this study was to analyse whether these differences translate into the results of the written and oral part of the European Board Examination in Neurological Surgery (EBE-NS). Country-specific composite scores for satisfaction with quality of theoretical and practical training, as well as working hours per week, were obtained from an electronic survey distributed among European neurosurgical residents between June 2014 and March 2015. These were related to anonymous country-specific results of the EBE-NS between 2009 and 2016, using uni- and multivariate linear regression analysis. A total of n = 1025 written and n = 63 oral examination results were included. There was a significant linear relationship between the country-specific EBE-NS result in the written part and the country-specific composite score for satisfaction with quality of theoretical training [adjusted regression coefficient (RC) -3.80, 95 % confidence interval (CI) -5.43 to -2.17, p < 0.001], but not with practical training or working time. For the oral part, there was a linear relationship between the country-specific EBE-NS result and the country-specific composite score for satisfaction with quality of practical training (RC 9.47, 95 % CI 1.47-17.47, p = 0.021), however neither with satisfaction with quality of theoretical training nor with working time. With every one-step improvement on the country-specific satisfaction score for theoretical training, the score in the EBE-NS Part 1 increased by 3.8 %. With every one-step improvement on the country-specific satisfaction score for practical training, the score in the EBE-NS Part 2 increased by 9.47 %. Improving training conditions is likely to have a direct positive influence on the knowledge level of trainees, as measured by the EBE-NS. The effect of the actual working time on the theoretical and practical knowledge of neurosurgical trainees appears to be insignificant.
Prasai, Binay; Wilson, A R; Wiley, B J; Ren, Y; Petkov, Valeri
2015-11-14
The extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level was scrutinized and demonstrated to be insufficient, and it is shown how modeling can be improved by using a pragmatic approach involving straightforward experiments. In particular, 4 to 6 nm in size silica supported Au(100-x)Pd(x) (x = 30, 46 and 58) explored for catalytic applications is characterized structurally by total scattering experiments including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the archetypal for current theoretical modeling Sutton-Chen (SC) method. Models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm in size water-dispersed Au particles explored for bio-medical applications and 16 nm in size hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when "tuned up" against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs. Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design.
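As a hedged illustration of the experimental quantity the abstract leans on, the following sketch estimates a pair distribution function by histogramming interatomic distances of a toy fcc cluster; the lattice constant, cluster size and normalization are assumptions, and the cluster is not a model of the Au-Pd particles studied.

```python
import numpy as np

# Toy atomic PDF: histogram pair distances into g(r), then the reduced PDF
#   G(r) = 4*pi*r*rho0*(g(r) - 1).
a, cells = 4.08, 4                                   # Au-like fcc lattice constant [Angstrom]
basis = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])
pts = np.array([(basis + [i, j, k]) * a
                for i in range(cells) for j in range(cells) for k in range(cells)]).reshape(-1, 3)

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d = d[np.triu_indices(len(pts), k=1)]                # unique pair distances

r_max, dr = 12.0, 0.05
hist, edges = np.histogram(d, bins=int(r_max / dr), range=(0.0, r_max))
r = 0.5 * (edges[:-1] + edges[1:])

rho0 = len(pts) / (cells * a) ** 3                   # crude average number density
shell = 4 * np.pi * r ** 2 * dr * rho0 * len(pts) / 2
g = hist / shell                                     # pair correlation estimate (finite-size effects ignored)
G = 4 * np.pi * r * rho0 * (g - 1)                   # reduced PDF

print(f"nearest-neighbour distance {d.min():.3f} A; ideal fcc a/sqrt(2) = {a/np.sqrt(2):.3f} A")
```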
Analysis of the DFP/AFCS Systems for Compensating Gravity Distortions on the 70-Meter Antenna
NASA Technical Reports Server (NTRS)
Imbriale, William A.; Hoppe, Daniel J.; Rochblatt, David
2000-01-01
This paper presents the theoretical computations showing the expected performances for both systems. The basic analysis tool is a Physical Optics reflector analysis code that was ported to a parallel computer for faster execution times. There are several steps involved in computing the RF performance of the various systems. 1. A model of the RF distortions of the main reflector is required. This model is based upon measured holography maps of the 70-meter antenna obtained at 3 elevation angles. The holography maps are then processed (using an appropriate gravity mechanical model of the dish) to provide surface distortion maps at all elevation angles. 2. From the surface distortion maps, ray optics is used to determine the theoretical shape of the DFP that will exactly phase compensate the distortions. 3. From the theoretical shape and a NASTRAN mechanical model of the plate, the actuator positions that generate a surface that provides the best RMS fit to the theoretical model are selected. Using the actuator positions and the NASTRAN model provides an accurate description of the actual mirror shape. 4. Starting from the mechanical drawings of the feed, a computed RF feed pattern is generated. This pattern is expanded into a set of spherical wave modes so that a complete near field analysis of the reflector system can be obtained. 5. For the array feed, the excitation coefficients that provide the maximum gain are computed using a phase conjugate technique. The basic experimental geometry consisted of a dual-shaped 70-meter antenna system, a refocusing ellipse, a DFP and an array feed system. To provide physical insight into the system's performance, focal plane field plots are presented at several elevations. Curves of predicted performance are shown for the DFP system, monopulse tracking system, AFCS and combined DFP/AFCS system. The calculated results show that the combined DFP/AFCS system is capable of recovering the majority of the gain lost due to gravity distortion.
Topics in the optimization of millimeter-wave mixers
NASA Technical Reports Server (NTRS)
Siegel, P. H.; Kerr, A. R.; Hwang, W.
1984-01-01
A user oriented computer program for the analysis of single-ended Schottky diode mixers is described. The program is used to compute the performance of a 140 to 220 GHz mixer and excellent agreement with measurements at 150 and 180 GHz is obtained. A sensitivity analysis indicates the importance of various diode and mount characteristics on the mixer performance. A computer program for the analysis of varactor diode multipliers is described. The diode operates in either the reverse biased varactor mode or with substantial forward current flow where the conversion mechanism is predominantly resistive. A description and analysis of a new H-plane rectangular waveguide transformer is reported. The transformer is made quickly and easily in split-block waveguide using a standard slitting saw. It is particularly suited for use in the millimeter-wave band, replacing conventional electroformed stepped transformers. A theoretical analysis of the transformer is given and good agreement is obtained with measurements made at X-band.
Hydraulic studies of drilling microbores with supercritical steam, nitrogen and carbon dioxide
Ken Oglesby
2010-01-01
Hydraulic studies of drilling microbores at various depths and with various hole sizes, tubing, fluids and rates showed theoretical feasibility. WellFlo simulation reports covered Step 4: drilling 10,000-ft wells with supercritical steam, nitrogen and carbon dioxide; Step 5: drilling 20,000-ft wells with supercritical steam, nitrogen and carbon dioxide; and Step 6: drilling 30,000-ft wells with supercritical steam, nitrogen and carbon dioxide. Mehmet Karaaslan, MSI.
NASA Astrophysics Data System (ADS)
Einstein, T. L.; Pimpinelli, Alberto
2014-06-01
Spurred by theoretical predictions from Ferrari et al. (Phys Rev E 69:035102(R),
Two-level image authentication by two-step phase-shifting interferometry and compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-01-01
A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Match Pursuit CS algorithm reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
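One concrete, easily checked piece of the pipeline described above is the Arnold transform used for scrambling; a minimal sketch with its inverse is shown below (iteration count and image size are arbitrary).

```python
import numpy as np

# Arnold transform (cat map) on an N x N array: (x, y) -> ((x + y) mod N, (x + 2y) mod N).
def arnold(img, iterations=1):
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations=1):
    # undoes one forward iteration per loop pass
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = nxt
    return out

img = np.arange(64 * 64).reshape(64, 64)
assert np.array_equal(arnold_inverse(arnold(img, 5), 5), img)   # scrambling is exactly invertible
```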
NASA Astrophysics Data System (ADS)
Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries
2017-08-01
This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, as in Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid, have immediate practical applications.
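For context, a sketch of the conventional, fully explicit Courant limit on a nonuniform tensor-product grid, which is the baseline that the relaxed limits in the abstract improve upon; the relaxed ADHIE limit itself is not reproduced here and the grid spacings are illustrative.

```python
import numpy as np

# Conventional (conservative) explicit FDTD stability limit on a nonuniform grid:
#   dt <= 1 / ( c * sqrt(1/dx_min^2 + 1/dy_min^2 + 1/dz_min^2) ).
c0 = 299_792_458.0                     # speed of light in vacuum [m/s]
dx = np.geomspace(1e-6, 1e-3, 40)      # strongly graded mesh in x (multiscale geometry)
dy = np.full(50, 5e-4)
dz = np.full(50, 5e-4)

dt_max = 1.0 / (c0 * np.sqrt(1 / dx.min() ** 2 + 1 / dy.min() ** 2 + 1 / dz.min() ** 2))
print(f"conventional explicit time-step limit: {dt_max:.3e} s")
```

The smallest cell dictates the global time step, which is exactly why local implicitization of the fine-grid directions pays off in multiscale problems.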
Jorge-Botana, Guillermo; Olmos, Ricardo; Luzón, José M
2018-01-01
The aim of this paper is to describe and explain one useful computational methodology to model the semantic development of word representation: Word maturity. In particular, the methodology is based on the longitudinal word monitoring created by Kirylev and Landauer using latent semantic analysis for the representation of lexical units. The paper is divided into two parts. First, the steps required to model the development of the meaning of words are explained in detail. We describe the technical and theoretical aspects of each step. Second, we provide a simple example of application of this methodology with some simple tools that can be used by applied researchers. This paper can serve as a user-friendly guide for researchers interested in modeling changes in the semantic representations of words. Some current aspects of the technique and future directions are also discussed. WIREs Cogn Sci 2018, 9:e1457. doi: 10.1002/wcs.1457 This article is categorized under: Computer Science > Natural Language Processing Linguistics > Language Acquisition Psychology > Development and Aging. © 2017 Wiley Periodicals, Inc.
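A hedged sketch of the longitudinal monitoring idea: LSA-style word vectors are built from growing corpus slices, each slice space is rotated onto the full space with an orthogonal Procrustes alignment, and a word's "maturity" is tracked as the cosine to its adult vector. The toy term-document counts and the alignment shortcut are our assumptions, not the original procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab, docs, k = 300, 600, 25
counts = rng.poisson(0.3, size=(vocab, docs)).astype(float)   # toy term-document matrix

def lsa_vectors(term_doc, k):
    U, S, _ = np.linalg.svd(term_doc, full_matrices=False)
    return U[:, :k] * S[:k]                       # word vectors scaled by singular values

def procrustes_align(source, target):
    U, _, Vt = np.linalg.svd(source.T @ target)   # best rotation of source onto target
    return source @ (U @ Vt)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

adult = lsa_vectors(counts, k)                    # full ("adult") semantic space
word = 7                                          # index of the word being monitored
for frac in (0.2, 0.4, 0.6, 0.8, 1.0):
    partial = lsa_vectors(counts[:, : int(frac * docs)], k)
    aligned = procrustes_align(partial, adult)
    print(f"{int(frac * 100):3d}% of corpus: maturity ~ {cosine(aligned[word], adult[word]):.2f}")
```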
Direct estimation of tidally induced Earth rotation variations observed by VLBI
NASA Astrophysics Data System (ADS)
Englich, S.; Heinkelmann, R.; BOHM, J.; Schuh, H.
2009-09-01
The subject of our study is the investigation of periodical variations induced by solid Earth tides and ocean tides in Earth rotation parameters (ERP: polar motion, UT1) observed by VLBI. There are two strategies to determine the amplitudes and phases of Earth rotation variations from observations of space geodetic techniques. The common way is to derive time series of Earth rotation parameters first and to estimate amplitudes and phases in a second step. Results obtained by this means were shown in previous studies for zonal tidal variations (Englich et al., 2008a) and variations caused by ocean tides (Englich et al., 2008b). The alternative method is to estimate the tidal parameters directly within the VLBI data analysis procedure together with other parameters such as station coordinates, tropospheric delays, clocks, etc. The purpose of this work was the application of this direct method to a combined VLBI data analysis using the software packages OCCAM (Version 6.1, Gauss-Markov-Model) and DOGSCS (Gerstl et al., 2001). The theoretical basis and the preparatory steps for the implementation of this approach are presented here.
Further Progress Applying the Generalized Wigner Distribution to Analysis of Vicinal Surfaces
NASA Astrophysics Data System (ADS)
Einstein, T. L.; Richards, Howard L.; Cohen, S. D.
2001-03-01
Terrace width distributions (TWDs) can be well fit by the generalized Wigner distribution (GWD), generally better than by conventional Gaussians, and the GWD thus offers a convenient way to estimate the dimensionless elastic repulsion strength Ã from σ², the TWD variance (T.L. Einstein and O. Pierre-Louis, Surface Sci. 424, L299 (1999)). The GWD σ² accurately reproduces values for the two exactly soluble cases at small Ã and in the asymptotic limit. Taxing numerical simulations show that the GWD σ² interpolates well between these limits. Extensive applications have been made to experimental data, especially on Cu (M. Giesen and T.L. Einstein, Surface Sci. 449, 191 (2000)). Recommended analysis procedures are catalogued (H.L. Richards, S.D. Cohen, T.L. Einstein, and M. Giesen, Surf. Sci. 453, 59 (2000)). Extensions of the GWD for multistep distributions are tested, with good agreement for second-neighbor distributions, less good for third (T.L. Einstein, H.L. Richards, S.D. Cohen, and O. Pierre-Louis, Proc. ISSI-PDSC2000, cond-mat/0012xxxxx). Alternatively, step-step correlation functions, about which there is more theoretical information, should be measured.
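For reference, the standard GWD forms quoted in this line of work can be written as follows (our transcription; the relation between the exponent and Ã is quoted from memory of the GWD literature and should be checked against the cited papers):

```latex
% s = l / <l> is the terrace width normalized to its mean value.
\[
  P_{\varrho}(s) = a_{\varrho}\, s^{\varrho} \exp\!\bigl(-b_{\varrho} s^{2}\bigr),
  \qquad
  \int_{0}^{\infty} P_{\varrho}(s)\,\mathrm{d}s = 1,
  \qquad
  \int_{0}^{\infty} s\, P_{\varrho}(s)\,\mathrm{d}s = 1,
\]
% with a_rho and b_rho fixed by the two constraints above, and the exponent
% related to the dimensionless step-step repulsion strength (as recalled from
% the GWD literature) by
\[
  \tilde{A} = \tfrac{1}{4}\,\varrho\,(\varrho - 2),
\]
% so that the TWD variance \sigma^{2}, which decreases as \varrho grows,
% yields the estimate of \tilde{A} referred to above.
```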
Structural Studies of Silver Nanoparticles Obtained Through Single-Step Green Synthesis
NASA Astrophysics Data System (ADS)
Prasad Peddi, Siva; Abdallah Sadeh, Bilal
2015-10-01
Green synthesis of silver nanoparticles (AgNPs) has been among the most prominent topics in metallic-nanoparticle research for over a decade and a half, owing both to the simplicity of preparation and to the applicability of biological species, with extensive applications in medicine and biotechnology, to reduce and trap the particles. The current article uses Eclipta prostrata leaf extract as the biological species to cap the AgNPs through a single-step process. The characterization data obtained were used for the analysis of the sample structure. The article emphasizes the examination of particle shape and size and of the lattice parameters, and proposes a general scheme and a mathematical model for the analysis of their dependence. The data for the synthesized AgNPs have been used to advantage through the introduction of a structural shape factor for crystalline nanoparticles. The structural properties of the AgNPs proposed and evaluated through a theoretical model were consistent with the experimental results. This modus operandi gives scope for structural studies of ultrafine particles prepared using biological methods.
NASA Technical Reports Server (NTRS)
Karimi, Majid
1993-01-01
Understanding surface diffusion is essential in understanding surface phenomena, such as crystal growth, thin film growth, corrosion, physisorption, and chemisorption. Because of its importance, various experimental and theoretical efforts have been directed to understanding this phenomenon. The Field Ion Microscope (FIM) has been the major experimental tool for studying surface diffusion. The FIM has been employed by various research groups to study surface diffusion of adatoms. Because of limitations of the FIM, such studies are limited to a few surfaces: nickel, platinum, aluminum, iridium, tungsten, and rhodium. From the theoretical standpoint, various atomistic simulations are performed to study surface diffusion. In most of these calculations the Embedded Atom Method (EAM) along with the molecular statics (MS) simulation are utilized. The EAM is a semi-empirical approach for modeling the interatomic interactions. The MS simulation is a technique for minimizing the total energy of a system of particles with respect to the positions of its particles. One of the objectives of this work is to develop the EAM functions for Cu and use them in conjunction with the molecular statics (MS) simulation to study diffusion of a Cu atom on perfect as well as stepped Cu(100) surfaces. This will provide a test of the validity of the EAM functions on the Cu(100) surface and near stepped environments. In particular, we construct a terrace-ledge-kink (TLK) model and calculate the migration energies of an atom on a terrace, near a ledge site, near a kink site, and going over a descending step. We have also calculated formation energies of an atom on the bare surface, a vacancy in the surface, a stepped surface, and a stepped-kink surface. Our results are compared with the available experimental and theoretical results.
The Far Infrared Vibration-Rotation Spectrum of the Ammonia Dimer.
NASA Astrophysics Data System (ADS)
Loeser, Jennifer Gertrud
1995-11-01
The ammonia dimer has been shown to exhibit unusual weak bonding properties relative to those of the other prototypical second-row systems, the hydrogen fluoride dimer and the water dimer. The ultimate goal of the work initiated in this dissertation is to determine a complete intermolecular potential energy surface for the ammonia dimer. It is first necessary to observe its far-infrared vibration-rotation-tunneling (VRT) spectrum and to develop a group theoretical model that explains this spectrum in terms of the internal dynamics of the ammonia dimer. These first steps are the subject of this dissertation. First, the current understanding of the ammonia dimer system is reviewed. Group theoretical descriptions of the nature of the ammonia dimer VRT states are explained in detail. An overview of the experimental and theoretical studies of the ammonia dimer made during the last decade is presented. Second, progress on the analysis of the microwave and far-infrared spectrum of (ND3)2 below 13 cm^-1 is reported. These spectra directly measure the 'donor-acceptor' interchange splittings in (ND3)2 and determine some of the monomer umbrella inversion tunneling splittings. Third, new 80-90 cm^-1 far-infrared spectra of (NH3)2 are presented and a preliminary analysis is proposed. Most of the new excited VRT states have been assigned as tunneling sublevels of an out-of-plane intermolecular vibration.
Role-Playing Methods in the Classroom.
ERIC Educational Resources Information Center
Chesler, Mark; Fox, Robert
This book, one of three Teacher Resource Booklets on Classroom Social Relations and Learning developed at the Center for Research on Utilization of Scientific Knowledge at the University of Michigan, discusses the theoretical background of role playing and gives a step-by-step discussion of how to use role playing in the classroom. There are…
Development of a Global Lifelong Learning Index for Future Education
ERIC Educational Resources Information Center
Kim, JuSeuk
2016-01-01
Since the transition from industrial society to a knowledge-based society, the source of national competitiveness is also changing. In this context, lifelong education has become a new competitive strategy for countries. This study broadly consists of three steps. Step I features a theoretical review of global lifelong learning indices and a…
ERIC Educational Resources Information Center
Williams, Miriam F.
2012-01-01
The author uses game theoretical models to identify technical communication breakdowns encountered during the notoriously confusing Texas Two-Step voting and caucusing process. Specifically, the author uses narrative theory and game theory to highlight areas where caucus participants needed instructions to better understand the rules of the game…
Thermal, size and surface effects on the nonlinear pull-in of small-scale piezoelectric actuators
NASA Astrophysics Data System (ADS)
SoltanRezaee, Masoud; Ghazavi, Mohammad-Reza
2017-09-01
Electrostatically actuated miniature wires/tubes have many operational applications in high-tech industries. In this research, the nonlinear pull-in instability of piezoelectric thermal small-scale switches subjected to Coulomb and dissipative forces is analyzed using strain gradient and modified couple stress theories. The discretized governing equation is solved numerically by means of the step-by-step linearization method. The correctness of the formulated model and solution procedure is validated through comparison with experimental and several theoretical results. Herein, the length scale, surface energy, van der Waals attraction, and nonlinear curvature are considered in the present comprehensive model, and the thermo-electro-mechanical behavior of cantilever piezo-beams is discussed in detail. It is found that the piezoelectric actuation can be used as a design parameter to control the pull-in phenomenon. The obtained results are applicable in stability analysis, practical design, and control of actuated miniature intelligent devices.
Richardson, Miles
2017-04-01
In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.
Study of p-diaminobenzene Adsorption on Au(111) by Scanning Tunneling Microscopy
NASA Astrophysics Data System (ADS)
Zhou, Hui; Hu, Zonghai; Eom, Daejin; Rim, Kwang; Liu, Li; Flynn, George; Venkataraman, Latha; Morgante, Alberto; Heinz, Tony
2008-03-01
From the well-defined conductivity obtained for various individual diamino-substituted molecules spanning two gold contacts, as well as from theoretical analysis [1], researchers have suggested that amines adsorb preferentially to coordinatively unsaturated surface Au atoms through the N lone pair. To understand the nature of the amine binding, we have used an ultrahigh-vacuum scanning tunneling microscope (STM) to investigate the adsorption of p-diaminobenzene molecules on the reconstructed Au(111) surface. The STM topography images (taken at 4 K) show that the molecules adsorb preferentially to step edges, corresponding to sites of reduced Au atom coordination. The adsorbed molecules are found to display a distinctive orientation along the step edges. The two-lobe topographic structure of each molecule seen by STM is compatible with the previously calculated charge density of the HOMO level. [1] L. Venkataraman et al., Nano Lett. 7, 502 (2007).
Development of a booklet on insulin therapy for children with diabetes mellitus type 1.
Moura, Denizielle de Jesus Moreira; Moura, Nádya Dos Santos; Menezes, Luciana Catunda Gomes de; Barros, Ariane Alves; Guedes, Maria Vilani Cavalcante
2017-01-01
To describe the process of developing an educational booklet on insulin therapy for children with diabetes mellitus type 1. A methodological approach was used, in which the following steps were carried out: selection of the content and type of technology to be developed (for this step, an integrative review, an analysis of comments on blogs about diabetes mellitus type 1, and interviews with the children were performed), creation of images, and formatting and layout composition. The work resulted in the production of the final version of the educational booklet, which was titled Aplicando a insulina: a aventura de Beto [Applying insulin: Beto's adventure]. The process of developing the booklet was based on the active participation of the children and guided by the theoretical framework of Piagetian Constructivism. The resource is a facilitator for improving the knowledge and self-care practices of children with diabetes mellitus type 1.
Segarra, Ignacio; Gomez, Manuel
2014-12-01
We developed a pharmacology practicum assignment to introduce students to the research ethics and steps involved in a clinical trial. The assignment included literature review, critical analysis of bioethical situations, writing a study protocol and presenting it before a simulated ethics committee, a practice interview with a faculty member to obtain informed consent, and a student reflective assessment and self-evaluation. Students were assessed at various steps in the practicum; the learning efficiency of the activity was evaluated using an independent survey as well as students' reflective feedback. Most of the domains of Bloom's and Fink's taxonomies of learning were itemized and covered in the practicum. Students highly valued the translatability of theoretical concepts into practice as well as the approach to mimic professional practice. This activity was within a pharmacy program, but may be easily transferable to other medical or health sciences courses. © The Author(s) 2014.
In-plane free vibration analysis of cable arch structure
NASA Astrophysics Data System (ADS)
Zhao, Yueyu; Kang, Houjun
2008-05-01
The cable-stayed arch bridge is a new type of composite bridge that exploits the mechanical characteristics of both cable and arch. Based on the supporting members of cable-stayed arch bridges and on the erection of arch bridges by the cantilever construction method with tiebacks, we propose a novel mechanical model of the cable-arch structure. In this model, the equations governing vibrations of the cable-arch are derived from Hamilton's principle for dynamic problems in an elastic body about the equilibrium state. A program for solving the dynamic governing equations is then established using the transfer matrix method for free vibration of uniform and variable cross-sections, and the inherent characteristics of the cable-arch are investigated. A step-by-step analysis confirms that the program is accurate; meanwhile, the mechanical model and method are valuable and significant not only for theoretical research and calculation but also for engineering design.
Zuend, Stephan J; Jacobsen, Eric N
2007-12-26
The mechanism of the enantioselective cyanosilylation of ketones catalyzed by tertiary amino-thiourea derivatives was investigated using a combination of experimental and theoretical methods. The kinetic analysis is consistent with a cooperative mechanism in which both the thiourea and the tertiary amine of the catalyst are involved productively in the rate-limiting cyanide addition step. Density functional theory calculations were used to distinguish between mechanisms involving thiourea activation of ketone or of cyanide in the enantioselectivity-determining step. The strong correlation obtained between experimental and calculated ee's for a range of substrates and catalysts provides support for the most favorable calculated transition structures involving amine-bound HCN adding to thiourea-bound ketone. The calculations suggest that enantioselectivity arises from direct interactions between the ketone substrate and the amino-acid derived portion of the catalyst. On the basis of this insight, more enantioselective catalysts with broader substrate scope were prepared and evaluated experimentally.
ERIC Educational Resources Information Center
Andronaco, Julie A.; Shute, Rosalyn; McLachlan, Angus
2014-01-01
Asynchrony is a theoretical construct that views the intellectually gifted child as inherently vulnerable because of disparities arising from the mismatch between his or her chronological age and mental age. Such disparities, for example, between wanting to belong but being intellectually out of step with peers, are said to give rise to a…
D. Todd Jones-Farrand; Todd M. Fearer; Wayne E. Thogmartin; Frank R. Thompson; Mark D. Nelson; John M. Tirpak
2011-01-01
Selection of a modeling approach is an important step in the conservation planning process, but little guidance is available. We compared two statistical and three theoretical habitat modeling approaches representing those currently being used for avian conservation planning at landscape and regional scales: hierarchical spatial count (HSC), classification and...
Sneck, Sami; Saarnio, Reetta; Isola, Arja; Boigu, Risto
2016-01-01
Medication administration is an important task of registered nurses. According to previous studies, nurses lack theoretical knowledge and drug calculation skills, and knowledge-based mistakes do occur in clinical practice. Finnish health care organizations started to develop systematic verification processes for medication competence at the end of the last decade. No studies have yet been made of nurses' theoretical knowledge and drug calculation skills according to these online exams. The aim of this study was to describe the medication competence of Finnish nurses according to theoretical and drug calculation exams. A descriptive correlation design was adopted. Participants and settings: All nurses who participated in the online exam in three Finnish hospitals between 1.1.2009 and 31.05.2014 were selected for the study (n=2479). Quantitative methods such as Pearson's chi-squared tests, analysis of variance (ANOVA) with post hoc Tukey tests, and Pearson's correlation coefficient were used to test for relationships between dependent and independent variables. The majority of nurses mastered the theoretical knowledge needed in medication administration, but 5% of the nurses struggled with passing the drug calculation exam. Theoretical knowledge and drug calculation skills were better in acute care units than in the other units, and younger nurses achieved better results in both exams than their older colleagues. The differences found in this study were statistically significant, but not large. Nevertheless, even the smallest deficiency in theoretical knowledge and drug calculation skills should be addressed. It is important to identify the nurses who struggle in the exams and to plan targeted educational interventions to support them. The next step is to study whether verification of medication competence has an effect on patient safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
Some considerations in the combustion of AP/composite propellants
NASA Technical Reports Server (NTRS)
Kumar, R. N.
1972-01-01
Theoretical studies are presented on the time-independent and oscillatory combustion of nonmetallized AP/composite propellants. Three hypotheses are introduced: (1) The extent of propellant degradation at the vaporization step has to be specified through a scientific criterion. (2) The condensed phase degradation reaction of ammonium perchlorate to a vaporizable state is the overall rate-limiting step. (3) Gas phase combustion rate is controlled by the mixing rate of fuel and oxidizer vapors. In the treatment of oscillatory combustion, the assumption of quasi-steady fluctuations in the gas phase is used to supplement these hypotheses. In comparison with experimental data, this study predicts several of the observations including a few that remain inconsistent with theoretical results.
A calibration method of infrared LVF based spectroradiometer
NASA Astrophysics Data System (ADS)
Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin
2017-10-01
In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering spectral calibration and radiometric calibration. The spectral calibration process is as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, the 3.39 μm and 10.69 μm laser lines are used for spectral calibration validation, showing that the sought 0.1% accuracy or better is achieved. A new sub-region multi-point calibration method is used for radiometric calibration to improve accuracy; results show that the sought 1% accuracy or better is achieved.
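As a sketch of the two-stage spectral calibration outlined above (a theoretical motor-step-to-wavelength relation with a non-linearity term, followed by a check against known laser lines), the snippet below fits a low-order polynomial λ(step) to calibration points and then locates the motor steps predicted for the 3.39 μm and 10.69 μm reference lines. The calibration points, polynomial order, and step range are illustrative assumptions, not the authors' numbers.

```python
# Sketch of LVF spectral calibration: map stepping-motor step number to wavelength,
# then validate against known laser lines. Calibration points are made-up examples.
import numpy as np

steps = np.array([100, 400, 800, 1200, 1600, 2000])           # hypothetical motor steps
wavelengths_um = np.array([2.5, 4.0, 6.0, 8.1, 10.2, 12.4])   # hypothetical line positions

# The quadratic term stands in for the LVF non-linearity correction mentioned above.
coeffs = np.polyfit(steps, wavelengths_um, deg=2)
calib = np.poly1d(coeffs)

for ref_um in (3.39, 10.69):                                   # reference laser lines
    # invert the calibration numerically to find the step predicted for each line
    roots = (calib - ref_um).roots
    step_pred = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 2500]
    print(f"{ref_um} um -> predicted motor step(s): {step_pred}")

print("fitted lambda(step) polynomial coefficients:", coeffs)
```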
NASA Astrophysics Data System (ADS)
Kolesnikov, I. E.; Ivanova, T. Yu.; Ivanov, D. A.; Kireev, A. A.; Mamonova, D. V.; Golyeva, E. V.; Mikhailov, M. D.; Manshina, A. A.
2018-02-01
Associated luminescence/plasmonic nanoparticles were prepared in a single-step process as a result of laser illumination (low-intensity CW He-Cd laser) of a colloidal solution of YVO4:Eu3+@SiO2 mixed with a heterometallic supramolecular complex. The results of SEM-EDX analysis, absorption, steady-state luminescence, and luminescence decay measurements revealed the formation of associated nanohybrids with core/shell morphology. The obtained nanostructures demonstrated metal-enhanced luminescence with an enhancement factor of 1.6. The theoretical calculations revealed a strong correlation between the enhancement factor and the number of plasmonic nanoparticles.
Liquid-Crystal Point-Diffraction Interferometer for Wave-Front Measurements
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Creath, Katherine
1996-01-01
A new instrument, the liquid-crystal point-diffraction interferometer (LCPDI), is developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point-diffraction interferometer and adds to it a phase-stepping capability for quantitative interferogram analysis. The result is a compact, simple-to-align, environmentally insensitive interferometer capable of accurately measuring optical wave fronts with very high data density and with automated data reduction. We describe the theory and design of the LCPDI. A focus shift was measured with the LCPDI, and the results are compared with theoretical results.
Depicting the logic of three evaluation theories.
Hansen, Mark; Alkin, Marvin C; Wallace, Tanner Lebaron
2013-06-01
Here, we describe the development of logic models depicting three theories of evaluation practice: Practical Participatory (Cousins & Whitmore, 1998), Values-engaged (Greene, 2005a, 2005b), and Emergent Realist (Mark et al., 1998). We begin with a discussion of evaluation theory and the particular theories that were chosen for our analysis. We then outline the steps involved in constructing the models. The theoretical prescriptions and claims represented here follow a logic model template developed at the University of Wisconsin-Extension (Taylor-Powell & Henert, 2008), which also closely aligns with Mark's (2008) framework for research on evaluation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Energy Efficiency of a New Trifocal Intraocular Lens
NASA Astrophysics Data System (ADS)
Vega, F.; Alba-Bueno, F.; Millán, M. S.
2014-01-01
The light distribution among the far, intermediate, and near foci of a new trifocal intraocular lens (IOL) is experimentally determined, as a function of the pupil size, from image analysis. The concept of focus energy efficiency is introduced because, in addition to the theoretical diffraction efficiency of the focus, it accounts for other factors that are naturally present in the human eye, such as the level of spherical aberration (SA) upon the IOL, light scattering at the diffractive steps, and the depth of focus. The trifocal IOL is tested in vitro in two eye models: the aberration-free ISO model, and a so-called modified-ISO model that instead uses an artificial cornea with positive SA. The SA upon the IOL is measured with a Hartmann-Shack sensor and compared to the values of theoretical eye models. The results show, for large pupils, a notable reduction of the energy efficiency of the far and near foci of the trifocal IOL due to two facts: the level of SA upon the IOL is larger than the value the lens is able to compensate for, and there is significant light scattering at the diffractive steps. On the other hand, the energy efficiency of the intermediate focus for small pupils is enhanced by the contribution of the extended depth of focus of the near and far foci. Thus, while IOL manufacturers tend to provide just the theoretical diffraction efficiency of the foci to indicate what the performance of the lens would be in terms of light distribution among the foci, our results demonstrate that this is better described by the energy efficiency of the foci.
NASA Astrophysics Data System (ADS)
Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang
2017-05-01
This paper proposes an analytical method, based on the conformal mapping (CM) method, for accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The aim of the modulation function applied in the CM method is to transform the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method is able to predict the magnetic flux density and EC loss precisely.
NASA Astrophysics Data System (ADS)
Bratchikov, A. N.; Glukhov, I. P.
1992-02-01
An analysis is made of a theoretical model of an interference fiber channel for transmission of microwave signals. It is assumed that the channel consists of a multimode fiber waveguide with a step or graded refractive-index profile. A typical statistic of a longitudinal distribution of inhomogeneities is also assumed. Calculations are reported of the interference losses, the spectral profile of the output radio signal, the signal/noise ratio in the channel, and of the dependences of these parameters on: the type, diameter, and the length of the multimode fiber waveguide; the spectral width of the radiation source; the frequency offset between the interfering optical signals.
NASA Astrophysics Data System (ADS)
Lemanzyk, Thomas; Anding, Katharina; Linss, Gerhard; Rodriguez Hernández, Jorge; Theska, René
2015-02-01
The following paper deals with the classification of seeds and seed components of the South American Incanut plant and the modification of a machine to handle this task. Initially, the state of the art is illustrated. The research was carried out in Germany, with a relevant part in Peru and Ecuador. Theoretical considerations for an automatic analysis of the Incanut seeds were specified. The optimization of the analysis software and of the separation unit of the mechanical hardware is carried out using the recognition results. In a final step, the practical application of the analysis of the Incanut seeds is tested on a trial basis and rated on the basis of statistical values.
Generalized theory for seaplane impact
NASA Technical Reports Server (NTRS)
Milwitzky, Benjamin
1952-01-01
The motions, hydrodynamic loads, and pitching moments experienced by V-bottom seaplanes during step-landing impacts are analyzed, and the theoretical results are compared with experimental data. In the analysis, the primary flow about the immersed portion of a keeled hull or float is considered to occur in transverse flow planes, and the concept of virtual mass is applied to determine the reaction of the water to the motions of the seaplane. The entire immersion process is analyzed from the instant of initial contact until the seaplane rebounds from the water surface. The analysis is applicable to the complete range of initial contact conditions between the case of impacts where the resultant velocity is normal to the keel and the limiting condition of planing.
Inertial drives for micro- and nanorobots: two novel mechanisms
NASA Astrophysics Data System (ADS)
Zesch, Wolfgang; Buechi, Roland; Codourey, Alain; Siegwart, Roland Y.
1995-12-01
In micro or nanorobotics, high precision movement in two or more degrees of freedom is one of the main problems. Firstly, the positional precision has to be increased (< 10 nm) as the object sizes decrease. On the other hand, the workspace has to have macroscopic dimensions (1 cm3) to give high maneuverability to the system and to allow suitable handling at the micro/macro-world interface. As basic driving mechanisms for the ETHZ Nanorobot Project, two new piezoelectric devices have been developed. `Abalone' is a 3-dof system that relies on the impact drive principle. The 38 mm X 33 mm X 9 mm slider can be moved to each position and orientation in a horizontal plane within a theoretically infinite workspace. In the stepping mode it achieves a speed of 1 mm/s in translation and 7 deg/s in rotation. Within the actuator's local range of 6 micrometers fine positioning is possible with a resolution better than 10 nm. `NanoCrab' is a bearingless rotational micromotor relying on the stick-slip effect. This 10 mm X 7 mm X 7 mm motor has the advantage of a relatively high torque at low rotational speed and an excellent runout. While the maximum velocity is 60 rpm, it reaches its highest torque of 0.3 mNm at 2 rpm. Another benefit is the powerless holding torque of 0.9 mNm. With a typical step of 0.1 mrad and a local resolution 3 orders of magnitude better than the step angle, NanoCrab can be very precisely adjusted. Design and measurements of the characteristics of these two mechanisms will be presented and compared with the theoretical analysis of inertial drives presented in a companion paper. Finally their integration into the Nanorobot system will be discussed.
Wang, Ye; Huang, Zhi Xiang; Shi, Yumeng; Wong, Jen It; Ding, Meng; Yang, Hui Ying
2015-01-01
Transition-metal cobalt (Co) nanoparticles were designed as a catalyst to promote the conversion reaction of Sn to SnO2 during the delithiation process, which is commonly deemed an irreversible reaction. The designed nanocomposite, named SnO2/Co3O4/reduced-graphene-oxide (rGO), was synthesized by a simple two-step method composed of hydrothermal (1st step) and solvothermal (2nd step) synthesis processes. Compared to the pristine SnO2/rGO and SnO2/Co3O4 electrodes, the SnO2/Co3O4/rGO nanocomposites exhibit significantly enhanced electrochemical performance as the anode material of lithium-ion batteries (LIBs). The SnO2/Co3O4/rGO nanocomposites can deliver high specific capacities of 1038 and 712 mAh g−1 at current densities of 100 and 1000 mA g−1, respectively. In addition, the SnO2/Co3O4/rGO nanocomposites also deliver 641 mAh g−1 at a high current density of 1000 mA g−1 after 900 cycles, indicating ultra-long cycling stability under high current density. Through ex situ TEM analysis, the excellent electrochemical performance was attributed to the catalytic effect of the Co nanoparticles in promoting the conversion of Sn to SnO2 and the decomposition of Li2O during the delithiation process. Based on these results, we propose a new method of employing a catalyst to increase the capacity of an alloying-dealloying-type anode material beyond its theoretical value and to enhance the electrochemical performance. PMID:25776280
Gu, Di; Shao, Nan; Zhu, Yanji; Wu, Hongjun; Wang, Baohui
2017-01-05
The STEP concept has successfully been demonstrated for driving chemical reactions by the combined use of solar heat and electricity, minimizing fossil energy input while maximizing the rates of thermo- and electrochemical reactions in both thermodynamics and kinetics. This pioneering investigation shows experimentally that the STEP concept can be adapted and adopted efficiently for the degradation of nitrobenzene. By employing theoretical calculation and temperature-dependent cyclic voltammetry, the degradation potential of nitrobenzene was found to decrease markedly, while the current was greatly lifted, as the temperature increased. Compared with conventional electrochemical methods, high efficiency and a fast degradation rate were displayed, owing to the co-action of thermo- and electrochemical effects and the switch from indirect to direct electrochemical oxidation of nitrobenzene. A clear conclusion on the mechanism of nitrobenzene degradation by STEP can be schematically proposed and discussed by combining thermo- and electrochemistry, based on the analysis of the HPLC, UV-vis, and degradation data. This theory and experiment provide a pilot for the treatment of nitrobenzene wastewater with high efficiency, clean operation, and a low carbon footprint, driven by solar energy without any other input of energy or chemicals. Copyright © 2016 Elsevier B.V. All rights reserved.
CO activation pathways and the mechanism of Fischer–Tropsch synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojeda, Manuel; Nabar, Rahul P.; Nilekar, Anand U.
2010-06-15
Unresolved mechanistic details of monomer formation in Fischer–Tropsch synthesis (FTS) and of its oxygen rejection routes are addressed here by combining kinetic and theoretical analyses of elementary steps on representative Fe and Co surfaces saturated with chemisorbed CO. These studies provide experimental and theoretical evidence for hydrogen-assisted CO activation as the predominant kinetically-relevant step on Fe and Co catalysts at conditions typical of FTS practice. H2 and CO kinetic effects on FTS rates and oxygen rejection selectivity (as H2O or CO2) and density functional theory estimates of activation barriers and binding energies are consistent with H-assisted CO dissociation, but not with the previously accepted kinetic relevance of direct CO dissociation and chemisorbed carbon hydrogenation elementary steps. H-assisted CO dissociation removes O-atoms as H2O, while direct dissociation forms chemisorbed oxygen atoms that desorb as CO2. Direct CO dissociation routes are minor contributors to monomer formation on Fe and may become favored at high temperatures on alkali-promoted catalysts, but not on Co catalysts, which remove oxygen predominantly as H2O because of the preponderance of H-assisted CO dissociation routes. The merging of experiment and theory led to the clarification of persistent mechanistic issues previously unresolved by separate experimental and theoretical inquiries.
Using Movement to Teach Academics: The Mind and Body as One Entity
ERIC Educational Resources Information Center
Minton, Sandra
2008-01-01
This book is developed to help teach curriculum through the use of movement and dance, while giving students a chance to use their creative problem-solving skills. The text describes a step-by-step process through which instructor and students can learn to transform academic concepts into actions and dances. Theoretical information is also…
Mechanical Design Handbook for Elastomers
NASA Technical Reports Server (NTRS)
Darlow, M.; Zorzi, E.
1986-01-01
Mechanical Design Handbook for Elastomers reviews the state of the art in elastomer-damper technology, with particular emphasis on applications to high-speed rotor dampers. It is a self-contained reference but includes some theoretical discussion to help the reader understand how and why dampers are used for rotating machines. The handbook presents a step-by-step procedure for the design of elastomer dampers and detailed examples of actual elastomer damper applications.
NASA Technical Reports Server (NTRS)
Darlow, M.; Zorzi, E.
1981-01-01
A comprehensive guide for the design of elastomer dampers for application in rotating machinery is presented. Theoretical discussions, a step by step procedure for the design of elastomer dampers, and detailed examples of actual elastomer damper applications are included. Dynamic and general physical properties of elastomers are discussed along with measurement techniques.
Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment
ERIC Educational Resources Information Center
Prevost, Luanna B.; Lemons, Paula P.
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this…
González-Calabozo, Jose M; Valverde-Albacete, Francisco J; Peláez-Moreno, Carmen
2016-09-15
Gene Expression Data (GED) analysis poses a great challenge to the scientific community that can be framed into the Knowledge Discovery in Databases (KDD) and Data Mining (DM) paradigm. Biclustering has emerged as the machine learning method of choice to solve this task, but its unsupervised nature makes result assessment problematic. This is often addressed by means of Gene Set Enrichment Analysis (GSEA). We put forward a framework in which GED analysis is understood as an Exploratory Data Analysis (EDA) process where we provide support for continuous human interaction with data aiming at improving the step of hypothesis abduction and assessment. We focus on the adaptation to human cognition of data interpretation and visualization of the output of EDA. First, we give a proper theoretical background to bi-clustering using Lattice Theory and provide a set of analysis tools revolving around [Formula: see text]-Formal Concept Analysis ([Formula: see text]-FCA), a lattice-theoretic unsupervised learning technique for real-valued matrices. By using different kinds of cost structures to quantify expression we obtain different sequences of hierarchical bi-clusterings for gene under- and over-expression using thresholds. Consequently, we provide a method with interleaved analysis steps and visualization devices so that the sequences of lattices for a particular experiment summarize the researcher's vision of the data. This also allows us to define measures of persistence and robustness of biclusters to assess them. Second, the resulting biclusters are used to index external omics databases-for instance, Gene Ontology (GO)-thus offering a new way of accessing publicly available resources. This provides different flavors of gene set enrichment against which to assess the biclusters, by obtaining their p-values according to the terminology of those resources. We illustrate the exploration procedure on a real data example confirming results previously published. The GED analysis problem gets transformed into the exploration of a sequence of lattices enabling the visualization of the hierarchical structure of the biclusters with a certain degree of granularity. The ability of FCA-based bi-clustering methods to index external databases such as GO allows us to obtain a quality measure of the biclusters, to observe the evolution of a gene throughout the different biclusters it appears in, to look for relevant biclusters-by observing their genes and what their persistence is-to infer, for instance, hypotheses on their function.
Improved p-type conductivity in Al-rich AlGaN using multidimensional Mg-doped superlattices
Zheng, T. C.; Lin, W.; Liu, R.; Cai, D. J.; Li, J. C.; Li, S. P.; Kang, J. Y.
2016-01-01
A novel multidimensional Mg-doped superlattice (SL) is proposed to enhance vertical hole conductivity in conventional Mg-doped AlGaN SLs, which generally suffer from a large potential barrier for holes. Electronic structure calculations within the first-principles theoretical framework indicate that the densities of states (DOS) of the valence band near the Fermi level are more delocalized along the c-axis than in the conventional SL, and the potential barrier significantly decreases. The hole concentration is greatly enhanced in the barrier of the multidimensional SL. Detailed comparisons of partial charges and decomposed DOS reveal that the improvement of vertical conductance may be ascribed to the stronger pz hybridization between Mg and N. Based on the theoretical analysis, highly conductive p-type multidimensional Al0.63Ga0.37N/Al0.51Ga0.49N SLs are grown with identified steps via metalorganic vapor-phase epitaxy. The hole concentration reaches up to 3.5 × 1018 cm−3, while the corresponding resistivity is reduced to 0.7 Ω cm at room temperature, a tens-of-times improvement in conductivity compared with conventional SLs. High hole concentration can be maintained even at 100 K. High p-type conductivity in an Al-rich structural material is an important step for the future design of superior AlGaN-based deep-ultraviolet devices. PMID:26906334
Development of a College Transition and Support Program for Students with Autism Spectrum Disorder.
White, Susan W; Elias, Rebecca; Capriola-Hall, Nicole N; Smith, Isaac C; Conner, Caitlin M; Asselin, Susan B; Howlin, Patricia; Getzel, Elizabeth E; Mazefsky, Carla A
2017-10-01
Empirically based, consumer-informed programming to support students with Autism Spectrum Disorder (ASD) transitioning to college is needed. Informed by theory and research, the Stepped Transition in Education Program for Students with ASD (STEPS) was developed to address this need. The first level (Step 1) supports high school students and the second level (Step 2) is for postsecondary students with ASD. Herein, we review the extant research on transition supports for emerging adults with ASD and describe the development of STEPS, including its theoretical basis and how it was informed by consumer input. The impact of STEPS on promotion of successful transition into college and positive outcomes for students during higher education is currently being evaluated in a randomized controlled trial.
Maxwell, Annette E; Bastani, Roshan; Glenn, Beth A; Taylor, Victoria M; Nguyen, Tung T; Stewart, Susan L; Burke, Nancy J; Chen, Moon S
2014-05-01
Hepatitis B infection is 5 to 12 times more common among Asian Americans than in the general US population and is the leading cause of liver disease and liver cancer among Asians. The purpose of this article is to describe the step-by-step approach that we followed in community-based participatory research projects in 4 Asian American groups, conducted from 2006 through 2011 in California and Washington state to develop theoretically based and culturally appropriate interventions to promote hepatitis B testing. We provide examples to illustrate how intervention messages addressing identical theoretical constructs of the Health Behavior Framework were modified to be culturally appropriate for each community. Intervention approaches included mass media in the Vietnamese community, small-group educational sessions at churches in the Korean community, and home visits by lay health workers in the Hmong and Cambodian communities. Use of the Health Behavior Framework allowed a systematic approach to intervention development across populations, resulting in 4 different culturally appropriate interventions that addressed the same set of theoretical constructs. The development of theory-based health promotion interventions for different populations will advance our understanding of which constructs are critical to modify specific health behaviors.
Theoretically Founded Optimization of Auctioneer's Revenues in Expanding Auctions
NASA Astrophysics Data System (ADS)
Rabin, Jonathan; Shehory, Onn
The expanding auction is a multi-unit auction which provides the auctioneer with control over the outcome of the auction by means of dynamically adding items for sale. Previous research on the expanding auction has provided a numeric method to calculate a strategy that optimizes the auctioneer's revenue. In this paper, we analyze various theoretical properties of the expanding auction, and compare it to VCG, a multi-unit auction protocol known in the art. We examine the effects of errors in the auctioneer's estimation of the buyers' maximal bidding values and prove a theoretical bound on the ratio between the revenue yielded by the Informed Decision Strategy (IDS) and the post-optimal strategy. We also analyze the relationship between the auction step and the optimal revenue and introduce a method of computing this optimizing step. We further compare the revenues yielded by the use of IDS with an expanding auction to those of the VCG mechanism and determine the conditions under which the former outperforms the latter. Our work provides new insight into the properties of the expanding auction. It further provides theoretically founded means for optimizing the revenue of auctioneer.
Kazemian, Mohammad Amin; Habibi-Khorassani, Sayyed Mostafa; Maghsoodlu, Malek Taher; Ebrahimi, Ali
2014-04-01
In the present work, the proposed multiple mechanisms for the reaction between triphenyl phosphite and dimethyl acetylenedicarboxylate in the presence of an N-H acid such as aniline, generating phosphonate esters, have been investigated theoretically using ab initio molecular orbital theory in the gas phase. The profile of the potential energy surface was constructed at the HF/6-311G(d,p) level of theory. The kinetics of the gas-phase reaction was studied by evaluating the reaction paths of the various mechanisms. Among the 12 speculative proposed mechanisms {step₁, step₂ (with four possibilities), step₃ (with three possibilities), and step₄}, only the third speculative mechanism was recognized as a desirable mechanism. Theoretical kinetic data, including k and Ea, activation parameters (ΔG‡, ΔS‡, and ΔH‡), and thermodynamic parameters (ΔG°, ΔS°, and ΔH°), were calculated for each step of the various mechanisms. Step₁ of the desirable mechanism was identified as the rate-determining step. Comparison of the theoretical desirable mechanism with the rate law previously obtained by UV spectrophotometry experiments indicated that the results are in good agreement.
[Theoretical and practical considerations in rational polytherapy for epilepsy].
Rajna, Péter
2011-11-30
The author analyzes considerations of rational polytherapy for epilepsy. Among the theoretical aspects, he points to the differing effects of seizure-inhibitory drugs in epilepsy models, but did not find sufficient data to serve as the basis of any successful combination. Combinations of compounds with different modes of action are more promising. Rational polytherapy can also serve the tailored therapy of epileptic patients in daily routine. Some synergisms concerning drug interactions have already been proven. Based on a detailed analysis of side effects, there is a possibility of neutralizing side effects when anticonvulsants with side effects of opposite nature are combined. Considering both the side-effect profiles and the different (somatic and psychic) habits of the patients, we can create a special list of favourable combinations. Co-morbid states and their treatments play a significant role in the application of rational polytherapy. Combinations of anticonvulsants of lower potency but without drug interactions can be the choice in these cases. The non-epileptic indications of the anticonvulsants can also be utilized in polymorbid patients. Based on these theoretical and practical considerations, the author defines a ten-step cognitive preparation process for planning the optimal (poly)therapy. On a speculative basis, he suggests eight beneficial versions of seizure-inhibitory rational polytherapy.
NASA Astrophysics Data System (ADS)
Jones, D. B.; Limão-Vieira, P.; Mendes, M.; Jones, N. C.; Hoffmann, S. V.; da Costa, R. F.; Varella, M. T. do N.; Bettega, M. H. F.; Blanco, F.; García, G.; Ingólfsson, O.; Lima, M. A. P.; Brunger, M. J.
2017-05-01
We report on a combination of experimental and theoretical investigations into the structure of electronically excited para-benzoquinone (pBQ). Here synchrotron photoabsorption measurements are reported over the 4.0-10.8 eV range. The higher resolution obtained reveals previously unresolved pBQ spectral features. Time-dependent density functional theory calculations are used to interpret the spectrum and resolve discrepancies relating to the interpretation of the Rydberg progressions. Electron-impact energy loss experiments are also reported. These are combined with elastic electron scattering cross section calculations performed within the framework of the independent atom model-screening corrected additivity rule plus interference (IAM-SCAR + I) method to derive differential cross sections for electronic excitation of key spectral bands. A generalized oscillator strength analysis is also performed, with the obtained results demonstrating that a cohesive and reliable quantum chemical structure and cross section framework has been established. Within this context, we also discuss some issues associated with the development of a minimal orbital basis for the single configuration interaction strategy to be used for our high-level low-energy electron scattering calculations that will be carried out as a subsequent step in this joint experimental and theoretical investigation.
On the shape of martian dust and water ice aerosols
NASA Astrophysics Data System (ADS)
Pitman, K. M.; Wolff, M. J.; Clancy, R. T.; Clayton, G. C.
2000-10-01
Researchers have often calculated radiative properties of Martian aerosols using either Mie theory for homogeneous spheres or semi-empirical theories. Given that these atmospheric particles are randomly oriented, this approach seems fairly reasonable. However, the idea that randomly oriented nonspherical particles have scattering properties equivalent to even a select subset of spheres is demonstrably false (Bohren and Huffman 1983; Bohren and Koh 1985, Appl. Optics, 24, 1023). Fortunately, recent computational developments now enable us to directly compute scattering properties for nonspherical particles. We have combined a numerical approach for axisymmetric particle shapes, i.e., cylinders, disks, spheroids (Waterman's T-matrix approach as improved by Mishchenko and collaborators; cf. Mishchenko et al. 1997, JGR, 102, D14, 16,831), with a multiple-scattering radiative transfer algorithm to constrain the shape of water ice and dust aerosols. We utilize a two-stage iterative process. First, we empirically derive a scattering phase function for each aerosol component (starting with some 'guess') from radiative transfer models of MGS Thermal Emission Spectrometer Emission Phase Function (EPF) sequences (for details on this step, see Clancy et al., DPS 2000). Next, we perform a series of scattering calculations, adjusting our parameters to arrive at a 'best-fit' theoretical phase function. In this presentation, we provide details on the second step in our analysis, including the derived phase functions (for several characteristic EPF sequences) as well as the particle properties of the best-fit theoretical models. We provide a sensitivity analysis for the EPF model-data comparisons in terms of perturbations in the particle properties (i.e., range of axial ratios, sizes, refractive indices, etc.). This work is supported through NASA grant NAGS-9820 (MJW) and JPL contract no. 961471 (RTC).
A novel thermal acoustic device based on porous graphene
NASA Astrophysics Data System (ADS)
Tao, Lu-Qi; Liu, Ying; Tian, He; Ju, Zhen-Yi; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling
2016-01-01
A thermal acoustic (TA) device was fabricated by laser scribing technology. Polyimide (PI) can be converted into patterned porous graphene (PG) by laser irradiation in one step. The sound pressure level (SPL) of such a TA device is related to the laser power. A theoretical model of the TA effect was established to analyze the relationship between the SPL and the laser power. The theoretical results are in good agreement with the experimental results. It was found that PG has a flat frequency response in the range of 5-20 kHz. This novel TA device has the advantages of a one-step procedure, high flexibility, no mechanical vibration, low cost, and so on. It can find wide applications in speakers, multimedia, medical devices, earphones, consumer electronics, and many other areas.
Theoretical study on the ring-opening hydrolysis reactions of N-alkylmaleimide dimers
NASA Astrophysics Data System (ADS)
Tan, Xue-Jie; Wang, Chao; Guo, Xian-Kun
2018-01-01
On the basis of our previous experimental results, the ring-opening hydrolysis reaction mechanisms of two kinds of N-alkylmaleimide dimers, without or with the assistance of one or two water molecules, have been theoretically investigated in detail. All possible geometries were optimized using the B3LYP/6-311+G(d,p)//B3LYP/6-31+G(d) method in the gas phase and in ethanol solution. Calculated results show that every pathway is a four-step hydrolytic degradation process and that every step could occur in a concerted way, instead of the previously suggested asynchronous stepwise mechanism. Extra H2O or ethanol could act as a proton carrier. The results are consistent with experimental observations.
Alonso, Ariel; Van der Elst, Wim; Molenberghs, Geert; Buyse, Marc; Burzykowski, Tomasz
2016-09-01
In this work a new metric of surrogacy, the so-called individual causal association (ICA), is introduced using information-theoretic concepts and a causal inference model for a binary surrogate and true endpoint. The ICA has a simple and appealing interpretation in terms of uncertainty reduction and, in some scenarios, it seems to provide a more coherent assessment of the validity of a surrogate than existing measures. The identifiability issues are tackled using a two-step procedure. In the first step, the region of the parametric space of the distribution of the potential outcomes, compatible with the data at hand, is geometrically characterized. Further, in a second step, a Monte Carlo approach is proposed to study the behavior of the ICA on the previous region. The method is illustrated using data from the Collaborative Initial Glaucoma Treatment Study. A newly developed and user-friendly R package Surrogate is provided to carry out the evaluation exercise. © 2016, The International Biometric Society.
Theoretical study on the nitration of methane by acyl nitrate catalyzed by H-ZSM5 zeolite.
Silva, Alexander Martins; Nascimento, Marco Antonio Chaer
2008-09-25
A theoretical study on the nitration of methane by acyl nitrate catalyzed by HZSM-5 zeolite is reported. The zeolite was represented by a "double ring" 20T cluster. The calculations were performed at the DFT/X3LYP/6-31G** and MP2/6-31G** levels. The first step of the mechanism involves the protonation of the acyl nitrate by the zeolite and the formation of a nitronium-like ion. The reaction proceeds through a concerted step with the attack of the methane molecule by the nitronium-like ion and the simultaneous transfer of a proton from the methane molecule to the zeolite, thus reconstructing the acidic site. The activation energies for the first and second steps of this reaction are, respectively, 14.09 and 10.14 kcal/mol at the X3LYP/6-31G** level and 16.68 and 13.85 kcal/mol at the MP2/6-31G** level.
ERIC Educational Resources Information Center
Petrasek, Al, Jr.
This guide describes the standard operating job procedures for the tertiary chemical treatment - lime precipitation process of wastewater treatment plants. Step-by-step instructions are given for pre-start up, start-up, continuous operation, and shut-down procedures. In addition, some theoretical material is presented along with some relevant…
ERIC Educational Resources Information Center
Deal, Gerald A.; Montgomery, James A.
This guide describes standard operating job procedures for the grit removal process of wastewater treatment plants. Step-by-step instructions are given for pre-start up inspection, start-up, continuous operation, and shut-down procedures. A description of the equipment used in the process is given. Some theoretical material is presented. (BB)
Van den Bulcke, Marc; Lievens, Antoon; Barbau-Piednoir, Elodie; MbongoloMbella, Guillaume; Roosens, Nancy; Sneyers, Myriam; Casi, Amaya Leunda
2010-03-01
The detection of genetically modified (GM) materials in food and feed products is a complex multi-step analytical process involving screening, identification, and often quantification of the genetically modified organisms (GMOs) present in a sample. "Combinatory qPCR SYBRGreen screening" (CoSYPS) is a matrix-based approach for determining the presence of GM plant materials in products. The CoSYPS decision-support system (DSS) interprets the analytical results of SYBRGreen qPCR analysis based on four values: the Ct and Tm values and the LOD and LOQ for each method. A theoretical explanation of the different concepts applied in CoSYPS analysis is given (GMO Universe, "prime number tracing", matrix/combinatory approach) and documented using the RoundUp Ready soy GTS40-3-2 as an example. By applying a limited set of SYBRGreen qPCR methods and a newly developed "prime number"-based algorithm, the nature of the subsets of corresponding GMOs in a sample can be determined. Together, these analyses provide guidance for semi-quantitative estimation of GMO presence in a food and feed product.
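The abstract names a "prime number"-based algorithm for tracing GMOs from screening signals but does not spell it out; the sketch below is only an illustration of the general idea: each screening element is assigned a distinct prime, each GMO is encoded as the product of the primes of its elements, and divisibility tests reveal which GMOs are consistent with a positive screening pattern. The element names, the hypothetical maize event, and the listed element composition of GTS40-3-2 are illustrative placeholders, not the CoSYPS reference tables.

```python
# Illustration of "prime number tracing": encode each screening element as a prime,
# each GMO as the product of its elements' primes, and use divisibility to find
# which GMOs could explain a set of positive qPCR screening signals.
from math import prod

PRIMES = {"P-35S": 2, "T-NOS": 3, "CP4-EPSPS": 5, "CryIAb": 7}   # illustrative elements

GMO_TABLE = {
    "GTS40-3-2 (RoundUp Ready soy)": ["P-35S", "T-NOS", "CP4-EPSPS"],  # example composition
    "hypothetical Bt maize event":   ["P-35S", "T-NOS", "CryIAb"],
}

GMO_CODES = {name: prod(PRIMES[e] for e in elems) for name, elems in GMO_TABLE.items()}

def compatible_gmos(positive_elements):
    """Return GMOs whose full element set is contained in the positive signals."""
    signal_code = prod(PRIMES[e] for e in positive_elements)
    return [name for name, code in GMO_CODES.items() if signal_code % code == 0]

if __name__ == "__main__":
    detected = ["P-35S", "T-NOS", "CP4-EPSPS"]
    print("positive signals:", detected)
    print("compatible GMOs:", compatible_gmos(detected))
```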
McParland, Joanna L; Williams, Lynn; Gozdzielewska, Lucyna; Young, Mairi; Smith, Fraser; MacDonald, Jennifer; Langdridge, Darren; Davis, Mark; Price, Lesley; Flowers, Paul
2018-05-27
Changing public awareness of antimicrobial resistance (AMR) represents a global public health priority. A systematic review of interventions that targeted public AMR awareness and associated behaviour was previously conducted. Here, we focus on identifying the active content of these interventions and explore potential mechanisms of action. The project took a novel approach to intervention mapping utilizing the following steps: (1) an exploration of explicit and tacit theory and theoretical constructs within the interventions using the Theoretical Domains Framework (TDFv2), (2) retrospective coding of behaviour change techniques (BCTs) using the BCT Taxonomy v1, and (3) an investigation of coherent links between the TDF domains and BCTs across the interventions. Of 20 studies included, only four reported an explicit theoretical basis to their intervention. However, TDF analysis revealed that nine of the 14 TDF domains were utilized, most commonly 'Knowledge' and 'Environmental context and resources'. The BCT analysis showed that all interventions contained at least one BCT, and 14 of 93 (15%) BCTs were coded, most commonly 'Information about health consequences', 'Credible source', and 'Instruction on how to perform the behaviour'. We identified nine relevant TDF domains and 14 BCTs used in these interventions. Only 15% of BCTs have been applied in AMR interventions thus providing a clear opportunity for the development of novel interventions in this context. This methodological approach provides a useful way of retrospectively mapping theoretical constructs and BCTs when reviewing studies that provide limited information on theory and intervention content. Statement of contribution What is already known on this subject? Evidence of the effectiveness of interventions that target the public to engage them with AMR is mixed; the public continue to show poor knowledge and misperceptions of AMR. Little is known about the common, active ingredients of AMR interventions targeting the public and information on explicit theoretical content is sparse. Information on the components of AMR public health interventions is urgently needed to enable the design of effective interventions to engage the public with AMR stewardship behaviour. What does this study add? The analysis shows very few studies reported any explicit theoretical basis to the interventions they described. Many interventions share common components, including core mechanisms of action and behaviour change techniques. The analysis suggests components of future interventions to engage the public with AMR. © 2018 The Authors. British Journal of Health Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
The post-buckling behavior of a beam constrained by springy walls
NASA Astrophysics Data System (ADS)
Katz, Shmuel; Givli, Sefi
2015-05-01
The post-buckling behavior of a beam subjected to lateral constraints is of practical importance in a variety of applications, such as stent procedures, filopodia growth in living cells, endoscopic examination of internal organs, and deep drilling. Even though in reality the constraining surfaces are often deformable, the literature has focused mainly on rigid and fixed constraints. In this paper, we make a first step to bridge this gap through a theoretical and experimental examination of the post-buckling behavior of a beam constrained by a fixed wall and a springy wall, i.e. one that moves laterally against a spring. The response exhibited by the proposed system is much richer compared to that of the fixed-wall system, and can be tuned by choosing the spring stiffness. Based on small-deformation analysis, we obtained closed-form analytical solutions and quantitative insights. The accuracy of these results was examined by comparison to large-deformation analysis. We concluded that the closed-form solution of the small-deformation analysis provides an excellent approximation, except in the highest attainable mode. There, the system exhibits non-intuitive behavior and non-monotonous force-displacement relations that can only be captured by large-deformation theories. Although closed-form solutions cannot be derived for the large-deformation analysis, we were able to reveal general properties of the solution. In the last part of the paper, we present experimental results that demonstrate various features obtained from the theoretical analysis.
Perspective: Markov models for long-timescale biomolecular dynamics.
Schwantes, C R; McGibbon, R T; Pande, V S
2014-09-07
Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
NASA Astrophysics Data System (ADS)
Liu, Lixian; Mandelis, Andreas; Melnikov, Alexander; Michaelian, Kirk; Huan, Huiting; Haisch, Christoph
2016-07-01
Air pollutants have adverse effects on the Earth's climate system. There is an urgent need for cost-effective devices capable of recognizing and detecting various ambient pollutants. An FTIR photoacoustic spectroscopy (FTIR-PAS) method based on a commercial FTIR spectrometer, developed for air contamination monitoring, is presented. A resonant T-cell was determined to be the most appropriate resonator in view of the low-frequency requirement and space limitations in the sample compartment. Step-scan FTIR-PAS theory for a regular cylindrical resonator is described as a reference for predicting the vibration principles of the T-cell. Both the simulated amplitude and phase responses of the T-cell show good agreement with measurement data. Carbon dioxide IR absorption spectra were used to demonstrate the capacity of the FTIR-PAS method to detect ambient pollutants. The theoretical detection limit for carbon dioxide was found to be 4 ppmv. A linear response to carbon dioxide concentration was found in the range from 2500 ppmv to 5000 ppmv. The results indicate that it is possible to use step-scan FTIR-PAS with a T-cell as a quantitative method for analysis of ambient contaminants.
Wang, Xiaozhen; Lu, Tianjian; Yu, Xin; Jin, Jian-Ming; Goddard, Lynford L
2017-07-04
We studied the nanoscale thermal expansion of a suspended resistor both theoretically and experimentally and obtained consistent results. In the theoretical analysis, we used a three-dimensional coupled electrical-thermal-mechanical simulation and obtained the temperature and displacement field of the suspended resistor under a direct current (DC) input voltage. In the experiment, we recorded a sequence of images of the axial thermal expansion of the central bridge region of the suspended resistor at a rate of 1.8 frames/s by using epi-illumination diffraction phase microscopy (epi-DPM). This method accurately measured nanometer level relative height changes of the resistor in a temporally and spatially resolved manner. Upon application of a 2 V step in voltage, the resistor exhibited a steady-state increase in resistance of 1.14 Ω and in relative height of 3.5 nm, which agreed reasonably well with the predicted values of 1.08 Ω and 4.4 nm, respectively.
Kinetics of Al + H2O reaction: theoretical study.
Sharipov, Alexander; Titova, Nataliya; Starik, Alexander
2011-05-05
Quantum chemical calculations were carried out to study the reaction of the Al atom in the ground electronic state with the H(2)O molecule. Examination of the potential energy surface revealed that the Al + H(2)O → AlO + H(2) reaction must be treated as a complex process involving two steps: Al + H(2)O → AlOH + H and AlOH + H → AlO + H(2). Activation barriers for these elementary reaction channels were calculated at the B3LYP/6-311+G(3df,2p), CBS-QB3, and G3 levels of theory, and the corresponding rate constants were estimated using canonical variational theory. The theoretical analysis showed that the rate constant for the Al + H(2)O → products reaction measured by McClean et al. must be associated with the Al + H(2)O → AlOH + H reaction path only. The process of direct HAlOH formation was found to be negligible at pressures below 100 atm.
Lapchuk, Anatoliy; Prygun, Olexandr; Fu, Minglei; Le, Zichun; Xiong, Qiyuan; Kryuchyn, Andriy
2017-06-26
We present the first general theoretical description of speckle suppression efficiency based on an active diffractive optical element (DOE). The approach is based on spectral analysis of the diffracted beams and a coherent matrix. Analytical formulae are obtained for the dispersion of speckle suppression efficiency using different DOE structures and different DOE activation methods. We show that a one-sided 2D DOE structure has a smaller speckle suppression range than a two-sided 1D DOE structure. Both DOE structures have a sufficient speckle suppression range to suppress low-order speckles in the entire visible range, but only the two-sided 1D DOE can suppress higher-order speckles. We also show that a linear-shift 2D DOE in a laser projector with a large numerical aperture has a higher effective speckle suppression efficiency than methods using switching or step-wise shifted DOE structures. The generalized theoretical models elucidate the mechanism and practical realization of speckle suppression.
A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.
Ben Taieb, Souhaib; Atiya, Amir F
2016-01-01
Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
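To make the contrast between the two basic strategies concrete, the following minimal Python sketch produces recursive forecasts (one one-step-ahead model iterated over the horizon) and direct forecasts (a separate model per horizon); the linear autoregressive model, lag order, horizon and synthetic series are illustrative assumptions rather than choices made in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))   # synthetic time series
p, H = 5, 3                           # lag order and forecast horizon

def embed(series, lags, lead):
    """Build (X, target) pairs: `lags` past values -> value `lead` steps ahead."""
    X, t = [], []
    for i in range(lags, len(series) - lead + 1):
        X.append(series[i - lags:i])
        t.append(series[i + lead - 1])
    return np.array(X), np.array(t)

# Recursive strategy: a single one-step-ahead model, iterated H times.
Xr, tr = embed(y, p, 1)
one_step = LinearRegression().fit(Xr, tr)
window = list(y[-p:])
recursive = []
for _ in range(H):
    yhat = one_step.predict([window[-p:]])[0]
    recursive.append(yhat)
    window.append(yhat)               # feed the prediction back as an input

# Direct strategy: a separate model for each horizon h = 1..H.
direct = []
for h in range(1, H + 1):
    Xd, td = embed(y, p, h)
    direct.append(LinearRegression().fit(Xd, td).predict([y[-p:]])[0])

print("recursive:", recursive)
print("direct:   ", direct)
```

The recursive forecasts reuse their own predictions as inputs, which propagates bias across the horizon, whereas the direct forecasts avoid this at the cost of estimating H separate models and typically higher variance; this is the trade-off analyzed in the paper.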
The Definition, Rationale, and Effects of Thresholding in OCT Angiography.
Cole, Emily D; Moult, Eric M; Dang, Sabin; Choi, WooJhon; Ploner, Stefan B; Lee, ByungKun; Louzada, Ricardo; Novais, Eduardo; Schottenhamml, Julia; Husvogt, Lennart; Maier, Andreas; Fujimoto, James G; Waheed, Nadia K; Duker, Jay S
2017-01-01
To examine the definition, rationale, and effects of thresholding in OCT angiography (OCTA). A theoretical description of OCTA thresholding in combination with qualitative and quantitative analysis of the effects of OCTA thresholding in eyes from a retrospective case series. Four eyes were qualitatively examined: 1 from a 27-year-old control, 1 from a 78-year-old exudative age-related macular degeneration (AMD) patient, 1 from a 58-year-old myopic patient, and 1 from a 77-year-old nonexudative AMD patient with geographic atrophy (GA). One eye from a 75-year-old nonexudative AMD patient with GA was quantitatively analyzed. A theoretical thresholding model and a qualitative and quantitative description of the dependency of OCTA on thresholding level. Due to the presence of system noise, OCTA thresholding is a necessary step in forming OCTA images; however, thresholding can complicate the relationship between blood flow and OCTA signal. Thresholding in OCTA can cause significant artifacts, which should be considered when interpreting and quantifying OCTA images.
Steps wandering on the lysozyme and KDP crystals during growth in solution
NASA Astrophysics Data System (ADS)
Rashkovich, L. N.; Chernevich, T. G.; Gvozdev, N. V.; Shustin, O. A.; Yaminsky, I. V.
2001-10-01
We have applied atomic force microscopy to study, in solution, the time evolution of step roughness on crystal faces with high (potassium dihydrogen phosphate: KDP) and low (lysozyme) kink densities. It was found that the roughness increases with time, following a t^(1/4) time dependence. The step velocity does not depend on the distance between steps, which is why the experimental data were interpreted on the basis of the Voronkov theory, which assumes that the attachment and detachment of building units at the kinks is the major limitation for crystal growth. Within the framework of this theoretical model, the material parameters are calculated.
NASA Astrophysics Data System (ADS)
Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2003-11-01
CdTe quantum dots embedded in a glass matrix are grown using a two-step annealing method. The results of the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion and a higher volume ratio compared to the single-step annealed samples.
Mathematical, theoretical and experimental confirmations of IRS and IBS by R.M. Santilli
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohale, Ritesh L.
The objective of the present work is to put forward Santilli's experimental, physical and mathematical conception of IsoRedShift (IRS), IsoBlueShift (IBS) and NoIsoShift (NIS). Santilli has carried out a step-by-step isotopic lifting of the physical laws of special relativity, resulting in a new theory today known as Santilli isorelativity. In his 1991 hypothesis, Santilli established the requirement of treating light as electromagnetic waves propagating within a universal substratum.
NASA Astrophysics Data System (ADS)
Alizadeh Savareh, Behrouz; Emami, Hassan; Hajiabadi, Mohamadreza; Ghafoori, Mahyar; Majid Azimi, Seyed
2018-03-01
Manual analysis of brain tumor magnetic resonance images is usually accompanied by several problems, and a number of techniques have been proposed for brain tumor segmentation. This study searched popular databases for related work and surveyed the theoretical and practical aspects of convolutional neural networks in brain tumor segmentation. Based on our findings, details of the related studies, including the datasets used, evaluation parameters, preferred architectures and complementary steps, are analyzed. Deep learning, as a revolutionary idea in image processing, has achieved brilliant results in brain tumor segmentation as well; this trend is likely to continue until the next revolutionary idea emerges.
Onto the stability analysis of hyperbolic secant-shaped Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Sabari, S.; Murali, R.
2018-05-01
We analyze the stability of a hyperbolic secant-shaped attractive Bose-Einstein condensate in the absence of an external trapping potential. The appropriate theoretical model for the system is the nonlinear mean-field Gross-Pitaevskii equation with time-varying two-body interaction effects. Using the variational method, the stability of the system is analyzed under the influence of time-varying two-body interactions. Further, we confirm that the stability of the attractive condensate increases when the hyperbolic secant-shaped profile is considered instead of a Gaussian shape. The analytical results are compared with numerical simulations employing the split-step Crank-Nicholson method.
Guesmi, Hazar; Berthomieu, Dorothee; Bromley, Bryan; Coq, Bernard; Kiwi-Minsker, Lioubov
2010-03-28
The characterization of Fe/ZSM5 zeolite materials, the nature of the Fe-sites active in N(2)O direct decomposition, as well as the rate-limiting step are still a matter of debate. The mechanism of N(2)O decomposition on the binuclear oxo-hydroxo bridged extraframework iron core site [Fe(II)(mu-O)(mu-OH)Fe(II)](+) inside the ZSM-5 zeolite has been studied by combining theoretical and experimental approaches. The overall calculated path of N(2)O decomposition involves the oxidation of binuclear Fe(II) core sites by N(2)O (atomic alpha-oxygen formation) and the recombination of two surface alpha-oxygen atoms leading to the formation of molecular oxygen. Rate parameters computed using standard statistical mechanics and transition state theory reveal that the elementary catalytic steps involved in N(2)O decomposition are strongly dependent on temperature. This theoretical result was compared to the experimentally observed steady state kinetics of the N(2)O decomposition and temperature-programmed desorption (TPD) experiments. A switch of the reaction order with respect to N(2)O pressure from zero to one occurs at around 800 K, suggesting a change of the rate-determining step from alpha-oxygen recombination to alpha-oxygen formation. The TPD results on molecular oxygen desorption confirmed the proposed mechanism.
An Emerging Theoretical Model of Music Therapy Student Development.
Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E
2017-07-01
Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
The Kinematic Analysis of Flat Leverage Mechanism of the Third Class
NASA Astrophysics Data System (ADS)
Zhauyt, A.; Mamatova, G.; Abdugaliyeva, G.; Alipov, K.; Sakenova, A.; Alimbetov, A.
2017-10-01
When designing flat link mechanisms of high class, it is necessary to carry out strength calculations of the link mechanisms after the block diagrams and link linear dimensions have been defined, i.e. it is rational to choose their forms and to determine the section sizes. An algorithm for determining the link lengths of mechanisms of high classes (MHC) and their metric parameters by successive approximation is offered in this work. In this paper, educational and research software named GIM is presented. This software has been developed with the aim of addressing the difficulties students usually encounter when facing the kinematic analysis of mechanisms. A deep understanding of kinematic analysis is necessary to go a step further into the design and synthesis of mechanisms. In order to support and complement the theoretical lectures, the GIM software is used during practical exercises, serving as a complementary educational tool reinforcing the knowledge acquired by the students.
NASA Astrophysics Data System (ADS)
Liu, Zhengguang; Li, Xiaoli
2018-05-01
In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For the numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
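For orientation, the sketch below (Python) implements the standard L1 approximation of the Caputo derivative, whose O(τ^(2-α)) accuracy is the baseline that the transformative formulation above improves upon; it is not the scheme proposed in the article, and the test function and parameters are illustrative assumptions.

```python
import numpy as np
from math import gamma

def caputo_l1(u, tau, alpha):
    """Standard L1 approximation of the Caputo derivative of order 0 < alpha < 1
    on the uniform grid t_n = n*tau, given samples u[0..N]."""
    N = len(u) - 1
    k = np.arange(N)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)      # L1 weights
    d = np.zeros(N + 1)
    for n in range(1, N + 1):
        du = u[1:n + 1] - u[:n]                        # first differences of u
        d[n] = (tau ** (-alpha) / gamma(2 - alpha)) * np.dot(b[:n][::-1], du)
    return d

# quick check on u(t) = t^2, whose Caputo derivative is 2 t^(2-alpha) / Gamma(3-alpha)
alpha, tau = 0.5, 1e-3
t = np.arange(0, 1 + tau, tau)
numeric = caputo_l1(t ** 2, tau, alpha)
exact = 2 * t ** (2 - alpha) / gamma(3 - alpha)
print(abs(numeric[-1] - exact[-1]))   # small error, consistent with O(tau^(2-alpha))
```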
Review of finite fields: Applications to discrete Fourier, transforms and Reed-Solomon coding
NASA Technical Reports Server (NTRS)
Wong, J. S. L.; Truong, T. K.; Benjauthrit, B.; Mulhall, B. D. L.; Reed, I. S.
1977-01-01
An attempt is made to provide a step-by-step approach to the subject of finite fields. Rigorous proofs and highly theoretical materials are avoided. The simple concepts of groups, rings, and fields are discussed and developed more or less heuristically. Examples are used liberally to illustrate the meaning of definitions and theories. Applications include discrete Fourier transforms and Reed-Solomon coding.
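In the same heuristic spirit, the short Python sketch below illustrates arithmetic in GF(2^8), the finite field most commonly used for Reed-Solomon coding; the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) and the generator element 2 are conventional choices assumed here for illustration, not details taken from the report.

```python
def gf256_mul(a, b, prim=0x11D):
    """Multiply two elements of GF(2^8), each represented as an integer 0..255,
    by polynomial multiplication modulo the primitive polynomial `prim`."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^m) is bitwise XOR
        b >>= 1
        a <<= 1
        if a & 0x100:            # reduce whenever the degree reaches 8
            a ^= prim
    return result

# The nonzero elements form a cyclic group: successive powers of a generator
# (here 2) enumerate all 255 of them, the structure that both the finite-field
# discrete Fourier transform and Reed-Solomon codes rely on.
elem, seen = 1, set()
for _ in range(255):
    seen.add(elem)
    elem = gf256_mul(elem, 2)
print(len(seen))                 # 255
```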
One Step Forward, One Step Beck: A Contribution to the Ongoing Conceptual Debate in Youth Studies
ERIC Educational Resources Information Center
Roberts, Steven
2012-01-01
In a time of rapid and unprecedented social change, the concepts we use to make sense of the ways in which young people understand and interact with the world are very much under the microscope. Some researchers argue that we need to reinvigorate our conceptual repertoire, while others argue that our theoretical tool box still has the capacity to…
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps could be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
A novel thermal acoustic device based on porous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi
2016-01-15
A thermal acoustic (TA) device was fabricated by laser scribing technology. Polyimide (PI) can be converted into patterned porous graphene (PG) by laser irradiation in one step. The sound pressure level (SPL) of such a TA device is related to the laser power. A theoretical model of the TA effect was established to analyze the relationship between the SPL and the laser power. The theoretical results are in good agreement with the experimental results. It was found that PG has a flat frequency response in the range of 5-20 kHz. This novel TA device has the advantages of a one-step fabrication procedure, high flexibility, no mechanical vibration, low cost and so on. It can open wide applications in speakers, multimedia, medical devices, earphones, consumer electronics and many other areas.
2016-09-08
Accuracy Conserving (SIAC) filter when applied to nonuniform meshes; 2) Theoretical and numerical demonstration of the 2k+1 order accuracy of the SIAC... Establishing a more theoretical and numerical understanding of a computationally efficient scaling for the SIAC filter for nonuniform meshes [7]; 2... Li, "SIAC Filtering of DG Methods – Boundary and Nonuniform Mesh", International Conference on Spectral and Higher Order Methods (ICOSAHOM
Bastani, Roshan; Glenn, Beth A.; Taylor, Victoria M.; Nguyen, Tung T.; Stewart, Susan L.; Burke, Nancy J.; Chen, Moon S.
2014-01-01
Introduction Hepatitis B infection is 5 to 12 times more common among Asian Americans than in the general US population and is the leading cause of liver disease and liver cancer among Asians. The purpose of this article is to describe the step-by-step approach that we followed in community-based participatory research projects in 4 Asian American groups, conducted from 2006 through 2011 in California and Washington state to develop theoretically based and culturally appropriate interventions to promote hepatitis B testing. We provide examples to illustrate how intervention messages addressing identical theoretical constructs of the Health Behavior Framework were modified to be culturally appropriate for each community. Methods Intervention approaches included mass media in the Vietnamese community, small-group educational sessions at churches in the Korean community, and home visits by lay health workers in the Hmong and Cambodian communities. Results Use of the Health Behavior Framework allowed a systematic approach to intervention development across populations, resulting in 4 different culturally appropriate interventions that addressed the same set of theoretical constructs. Conclusions The development of theory-based health promotion interventions for different populations will advance our understanding of which constructs are critical to modify specific health behaviors. PMID:24784908
Brudnik, Katarzyna; Twarda, Maria; Sarzyński, Dariusz; Jodkowski, Jerzy T
2013-10-01
Ab initio calculations at the G3 level were used in a theoretical description of the kinetics and mechanism of the chlorine abstraction reactions from mono-, di-, tri- and tetra-chloromethane by chlorine atoms. The calculated profiles of the potential energy surface of the reaction systems show that the mechanism of the studied reactions is complex and the Cl-abstraction proceeds via the formation of intermediate complexes. The multi-step reaction mechanism consists of two elementary steps in the case of CCl4 + Cl, and three for the other reactions. Rate constants were calculated using the theoretical method based on the RRKM theory and the simplified version of the statistical adiabatic channel model. The temperature dependencies of the calculated rate constants can be expressed, in the temperature range of 200-3,000 K, as [Formula: see text]. The rate constants for the reverse reactions CH3/CH2Cl/CHCl2/CCl3 + Cl2 were calculated via the theoretically derived equilibrium constants. The kinetic equations [Formula: see text] allow a very good description of the reaction kinetics. The derived expressions are a substantial supplement to the kinetic data necessary to describe and model complex gas-phase reactions of importance in combustion and atmospheric chemistry.
Duggan, P S; Siegel, A W; Blass, D M; Bok, H; Coyle, J T; Faden, R; Finkel, J; Gearhart, J D; Greely, H T; Hillis, A; Hoke, A; Johnson, R; Johnston, M; Kahn, J; Kerr, D; King, P; Kurtzberg, J; Liao, S M; McDonald, J W; McKhann, G; Nelson, K B; Rao, M; Regenberg, A; Smith, K; Solter, D; Song, H; Sugarman, J; Traystman, R J; Vescovi, A; Yanofski, J; Young, W; Mathews, D J H
2009-05-01
The prospect of using cell-based interventions (CBIs) to treat neurological conditions raises several important ethical and policy questions. In this target article, we focus on issues related to the unique constellation of traits that characterize CBIs targeted at the central nervous system. In particular, there is at least a theoretical prospect that these cells will alter the recipients' cognition, mood, and behavior-brain functions that are central to our concept of the self. The potential for such changes, although perhaps remote, is cause for concern and careful ethical analysis. Both to enable better informed consent in the future and as an end in itself, we argue that early human trials of CBIs for neurological conditions must monitor subjects for changes in cognition, mood, and behavior; further, we recommend concrete steps for that monitoring. Such steps will help better characterize the potential risks and benefits of CBIs as they are tested and potentially used for treatment.
NASA Technical Reports Server (NTRS)
Stefanescu, Doru M.; Juretzko, Frank R.; Dhindaw, Brij K.; Catalina, Adrian; Sen, Subhayu; Curreri, Peter A.
1998-01-01
Results of the directional solidification experiments on Particle Engulfment and Pushing by Solidifying Interfaces (PEP) conducted on the space shuttle Columbia during the Life and Microgravity Science Mission are reported. Two pure aluminum (99.999%) 9 mm cylindrical rods, loaded with about 2 vol.% 500 micrometer diameter zirconia particles, were melted and resolidified in the microgravity (microg) environment of the shuttle. One sample was processed at step-wise increased solidification velocity, while the other at step-wise decreased velocity. It was found that a pushing-to-engulfment transition (PET) occurred in the velocity range of 0.5 to 1 micrometers/s. This is smaller than the ground-based PET velocity of 1.9 to 2.4 micrometers/s, demonstrating that natural convection increases the critical velocity. A previously proposed analytical model for PEP was further developed. A major effort to identify and produce data for the surface energy of the various interfaces required for the calculation was undertaken. The predicted critical velocity for PET was 0.775 micrometers/s.
Building high-quality assay libraries for targeted analysis of SWATH MS data.
Schubert, Olga T; Gillet, Ludovic C; Collins, Ben C; Navarro, Pedro; Rosenberger, George; Wolski, Witold E; Lam, Henry; Amodei, Dario; Mallick, Parag; MacLean, Brendan; Aebersold, Ruedi
2015-03-01
Targeted proteomics by selected/multiple reaction monitoring (S/MRM) or, on a larger scale, by SWATH (sequential window acquisition of all theoretical spectra) MS (mass spectrometry) typically relies on spectral reference libraries for peptide identification. Quality and coverage of these libraries are therefore of crucial importance for the performance of the methods. Here we present a detailed protocol that has been successfully used to build high-quality, extensive reference libraries supporting targeted proteomics by SWATH MS. We describe each step of the process, including data acquisition by discovery proteomics, assertion of peptide-spectrum matches (PSMs), generation of consensus spectra and compilation of MS coordinates that uniquely define each targeted peptide. Crucial steps such as false discovery rate (FDR) control, retention time normalization and handling of post-translationally modified peptides are detailed. Finally, we show how to use the library to extract SWATH data with the open-source software Skyline. The protocol takes 2-3 d to complete, depending on the extent of the library and the computational resources available.
Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia
2017-01-01
Introduction Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives This research protocol presents a conceptual model leading to formulating an empirically based bioethics framework for antibiotic use, AMR and designing ethically robust strategies to protect human health. Methods Mixed methods research will be used and operationalized into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results Being a study protocol, this article reports on planned and ongoing research. Conclusions Based on data collection, future findings and using a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) responsible use of antibiotics in healthcare and (ii) design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty to these approaches. PMID:28459355
On the Connection between Kinetic Monte Carlo and the Burton-Cabrera-Frank Theory
NASA Astrophysics Data System (ADS)
Patrone, Paul; Margetis, Dionisios; Einstein, T. L.
2013-03-01
In the many years since it was first proposed, the Burton-Cabrera-Frank (BCF) model of step-flow has been experimentally established as one of the cornerstones of surface physics. However, many questions remain regarding the underlying physical processes and theoretical assumptions that give rise to the BCF theory. In this work, we formally derive the BCF theory from an atomistic, kinetic Monte Carlo model of the surface in 1+1 dimensions with one step. Our analysis (i) shows how the BCF theory describes a surface with a low density of adsorbed atoms, and (ii) establishes a set of near-equilibrium conditions ensuring that the theory remains valid for all times. Support for PP was provided by the NIST-ARRA Fellowship Award No. 70NANB10H026 through UMD. Support for TLE and PP was also provided by the CMTC at UMD, with ancillary support from the UMD MRSEC. Support for DM was provided by NSF DMS0847587 at UMD.
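A minimal flavor of such an atomistic picture is sketched below in Python: non-interacting adatoms are deposited on a one-dimensional terrace, hop at a fixed rate, and are incorporated when they reach an absorbing step edge, so the time-averaged adatom density builds up a BCF-like terrace profile. The rates, lattice size and the non-interacting (low-density) simplification are assumptions made purely for illustration and do not reproduce the model analyzed in this work.

```python
import random

L, D, F, T = 30, 200.0, 0.2, 200.0   # terrace sites, hop rate, deposition flux per site, total time
adatoms = []                          # positions of non-interacting adatoms
t = 0.0
density = [0.0] * (L + 2)             # time-integrated occupation per site

while t < T:
    total_rate = F * L + D * len(adatoms)
    dt = random.expovariate(total_rate)
    for x in adatoms:                 # accumulate time-weighted density before the event
        density[x] += dt
    t += dt
    if random.random() < F * L / total_rate:
        adatoms.append(random.randint(1, L))     # deposition on a random terrace site
    else:
        i = random.randrange(len(adatoms))
        adatoms[i] += random.choice((-1, 1))     # a randomly chosen adatom hops
        if adatoms[i] in (0, L + 1):             # reached a step edge: incorporate it
            adatoms.pop(i)

profile = [rho / t for rho in density[1:L + 1]]
print(round(max(profile), 3))         # density peaks mid-terrace, roughly parabolic as in BCF
```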
Craig, Louise E; Taylor, Natalie; Grimley, Rohan; Cadilhac, Dominique A; McInnes, Elizabeth; Phillips, Rosemary; Dale, Simeon; O'Connor, Denise; Levi, Chris; Fitzgerald, Mark; Considine, Julie; Grimshaw, Jeremy M; Gerraty, Richard; Cheung, N Wah; Ward, Jeanette; Middleton, Sandy
2017-07-17
Theoretical frameworks and models based on behaviour change theories are increasingly used in the development of implementation interventions. Development of an implementation intervention is often based on the available evidence base and practical issues, i.e. feasibility and acceptability. The aim of this study was to describe the development of an implementation intervention for the T3 Trial (Triage, Treatment and Transfer of patients with stroke in emergency departments (EDs)) using theory to recommend behaviour change techniques (BCTs) and drawing on the research evidence base and practical issues of feasibility and acceptability. A stepped method for developing complex interventions based on theory, evidence and practical issues was adapted using the following steps: (1) Who needs to do what, differently? (2) Using a theoretical framework, which barriers and enablers need to be addressed? (3) Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? A researcher panel was convened to review the list of BCTs recommended for use and to identify the most feasible and acceptable techniques to adopt. Seventy-six barriers were reported by hospital staff who attended the workshops (step 1: thirteen TDF domains likely to influence the implementation of the T3 Trial clinical intervention were identified by the researchers; step 2: the researcher panellists then selected one third of the BCTs recommended for use as appropriate for the clinical context of the ED and, using the enabler workshop data, devised enabling strategies for each of the selected BCTs; and step 3: the final implementation intervention consisted of 27 BCTs). The TDF was successfully applied in all steps of developing an implementation intervention for the T3 Trial clinical intervention. The use of researcher panel opinion was an essential part of the BCT selection process to incorporate both research evidence and expert judgment. It is recommended that this stepped approach (theory, evidence and practical issues of feasibility and acceptability) is used to develop highly reportable implementation interventions. The classifying of BCTs using recognised implementation intervention components will facilitate generalisability and sharing across different conditions and clinical settings.
Impact of user influence on information multi-step communication in a micro-blog
NASA Astrophysics Data System (ADS)
Wu, Yue; Hu, Yong; He, Xiao-Hai; Deng, Ken
2014-06-01
User influence is generally considered one of the most critical factors affecting cascading information spreading. Based on this common assumption, this paper proposes a theoretical model to examine user influence on multi-step information communication in a micro-blog. The steps of information communication are divided into first-step and non-first-step communication, and user influence is classified into five dimensions. Actual data from the Sina micro-blog are collected to construct the model by means of a structural equation approach using the Partial Least Squares (PLS) technique. Our experimental results indicate that the dimensions of the number of fans and their authority significantly impact first-step information communication. Leader rank has a positive impact on both first-step and non-first-step communication. Moreover, global centrality and weight of friends are positively related to non-first-step information communication, but authority is found to have much less relation to it.
NASA Astrophysics Data System (ADS)
Zolfigol, Mohammad Ali; Kiafar, Mahya; Yarie, Meysam; Taherpour, Avat(Arman); Fellowes, Thomas; Nicole Hancok, Amber; Yari, Ako
2017-06-01
Experimental and computational studies of the synthesis of 2-amino-4,6-diphenylnicotinonitrile using HBF4 as an oxidizing promoter catalyst under mild and solvent-free conditions were carried out. The suggested anomeric-based oxidation (ABO) mechanism is supported by experimental and theoretical evidence. The theoretical study shows that the intermediate isomers with 5R- and 5S-chiral positions have suitable structures for aromatization through an anomeric-based oxidation in the final step of the mechanistic pathway.
Academic Provenance: Mapping Geoscience Students' Academic Pathways to their Career Trajectories
NASA Astrophysics Data System (ADS)
Houlton, H. R.; Gonzales, L. M.; Keane, C. M.
2011-12-01
Targeted recruitment and retention efforts for the geosciences have become increasingly important with the growing concerns about program visibility on campuses, and given that geoscience degree production remains low relative to the demand for new geoscience graduates. Furthermore, understanding the career trajectories of geoscience degree recipients is essential for proper occupational placement. A theoretical framework was developed by Houlton (2010) to focus recruitment and retention efforts. This "pathway model" explicitly maps undergraduate students' geoscience career trajectories, which can be used to refine existing methods for recruiting students into particular occupations. Houlton's (2010) framework identified three main student population groups: Natives, Immigrants or Refugees. Each student followed a unique pathway, which consisted of six pathway steps. Each pathway step was comprised of critical incidents that influenced students' overall career trajectories. An aggregate analysis of students' pathways (Academic Provenance Analysis) showed that different populations' pathways exhibited a deviation in career direction: Natives indicated intentions to pursue industry or government sectors, while Immigrants intended to pursue academic or research-based careers. We expanded on Houlton's (2010) research by conducting a follow-up study to determine if the original participants followed the career trajectories they initially indicated in the 2010 study. A voluntary, 5-question, short-answer survey was administered via email. We investigated students' current pathway steps, pathway deviations, students' goals for the near future and their ultimate career ambitions. This information may help refine Houlton's (2010) "pathway model" and may aid geoscience employers in recruiting the new generation of professionals for their respective sectors.
von Hansen, Yann; Mehlich, Alexander; Pelz, Benjamin; Rief, Matthias; Netz, Roland R
2012-09-01
The thermal fluctuations of micron-sized beads in dual trap optical tweezer experiments contain complete dynamic information about the viscoelastic properties of the embedding medium and-if present-macromolecular constructs connecting the two beads. To quantitatively interpret the spectral properties of the measured signals, a detailed understanding of the instrumental characteristics is required. To this end, we present a theoretical description of the signal processing in a typical dual trap optical tweezer experiment accounting for polarization crosstalk and instrumental noise and discuss the effect of finite statistics. To infer the unknown parameters from experimental data, a maximum likelihood method based on the statistical properties of the stochastic signals is derived. In a first step, the method can be used for calibration purposes: We propose a scheme involving three consecutive measurements (both traps empty, first one occupied and second empty, and vice versa), by which all instrumental and physical parameters of the setup are determined. We test our approach for a simple model system, namely a pair of unconnected, but hydrodynamically interacting spheres. The comparison to theoretical predictions based on instantaneous as well as retarded hydrodynamics emphasizes the importance of hydrodynamic retardation effects due to vorticity diffusion in the fluid. For more complex experimental scenarios, where macromolecular constructs are tethered between the two beads, the same maximum likelihood method in conjunction with dynamic deconvolution theory will in a second step allow one to determine the viscoelastic properties of the tethered element connecting the two beads.
NASA Astrophysics Data System (ADS)
Xie, Weichang; Hagemeier, Sebastian; Bischoff, Jörg; Mastylo, Rostyslav; Manske, Eberhard; Lehmann, Peter
2017-06-01
Optical profilers are mature instruments used in research and industry to study surface topography features. Although the corresponding standards are based on simple step height measurements, in practical applications these instruments are often used to study the fidelity of surface topography. In this context it is well-known that in certain situations a surface profile obtained by an optical profiler will differ from the real profile. With respect to practical applications such deviations often occur in the vicinity of steep walls and in cases of high aspect ratio. In this contribution we compare the transfer characteristics of different 3D optical profiler principles, namely white-light interferometry, focus sensing, and confocal microscopy. Experimental results demonstrate that the transfer characteristics do not only depend on the parameters of the optical measurement system (e. g. wavelength and coherence of light, numerical aperture, evaluated signal feature, polarization) but also on the properties of the measuring object such as step height, aspect ratio, material properties and homogeneity, rounding and steepness of the edges, surface roughness. As a result, typical artefacts such as batwings occur for certain parameter combinations, particularly at certain height-to-wavelength ratio (HWR) values. Understanding of the mechanisms behind these phenomena enables to reduce them by an appropriate parameter adaption. However, it is not only the edge artefacts, but also the position of an edge that may be changed due to the properties of the measuring object. In order to investigate the relevant effects theoretically, several models are introduced. These are based on either an extension of Richards-Wolf modeling or rigorous coupled wave analysis (RCWA). Although these models explain the experimental effects quite well they suffer from different limitations, so that a quantitative correspondence of theoretical modeling and experimental results is hard to achieve. Nevertheless, these models are used to study the characteristics of the measured signals occurring at edges of different step height compared to signals occurring at plateaus. Moreover, a special calibration sample with continuous step height variation was developed to reduce the impact of unknown sample properties. We analyzed the signals in both, the spatial and the spatial frequency domain, and found systematic signal changes that will be discussed. As a consequence, these simulations will help to interpret measurement results appropriately and to improve them by proper parameter settings and calibration and finally to increase the edge detection accuracy.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
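A minimal Python simulation of the "random chance" scenario is sketched below; the number of codes and the observation probability are illustrative assumptions, not values used in the study.

```python
import random

def sample_size_to_saturation(n_codes, p_observe, rng, max_sources=100000):
    """Draw information sources at random ('random chance' scenario); each code
    is observed in any given source with probability p_observe.  Return the
    number of sources needed before every code has been seen at least once."""
    seen = set()
    for n_sources in range(1, max_sources + 1):
        for code in range(n_codes):
            if rng.random() < p_observe:
                seen.add(code)
        if len(seen) == n_codes:
            return n_sources
    return None

rng = random.Random(42)
runs = [sample_size_to_saturation(n_codes=30, p_observe=0.1, rng=rng) for _ in range(500)]
print(sum(runs) / len(runs))   # mean sample size required for theoretical saturation
```

Lowering the observation probability in this sketch inflates the required sample size far more than increasing the number of codes does, which mirrors the conclusion above.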
On comparison of net survival curves.
Pavlič, Klemen; Perme, Maja Pohar
2017-05-02
Relative survival analysis is a subfield of survival analysis where competing risks data are observed, but the causes of death are unknown. A first step in the analysis of such data is usually the estimation of a net survival curve, possibly followed by regression modelling. Recently, a log-rank type test for comparison of net survival curves has been introduced, and the goal of this paper is to explore its properties and put this methodological advance into the context of the field. We build on the association between the log-rank test and the univariate or stratified Cox model and show the analogy in the relative survival setting. We study the properties of the methods using both theoretical arguments and simulations. We provide an R function to enable practical usage of the log-rank type test. Both the log-rank type test and its model alternatives perform satisfactorily under the null, even if the correlation between their p-values is rather low, implying that both approaches cannot be used simultaneously. The stratified version has a higher power in case of non-homogeneous hazards, but also carries a different interpretation. The log-rank type test and its stratified version can be interpreted in the same way as the results of an analogous semi-parametric additive regression model, despite the fact that no direct theoretical link can be established between the test statistics.
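For reference, the sketch below implements the classical two-sample log-rank test in Python; it is not the net-survival variant discussed above, nor the R function provided by the authors, and the toy data are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

def logrank(time, event, group):
    """Classical two-sample log-rank test; event is 1 for death, 0 for censoring,
    group is 0/1.  Returns the chi-square statistic and its p-value (1 df)."""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)

time  = [5, 6, 6, 2, 4, 4, 7, 9, 10, 3]
event = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(logrank(time, event, group))
```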
Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing
Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun
2016-01-01
Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to the step formation in manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface by using both experiments and simulations. The step evolutions at different stages in the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. It has been found that the structure on the polished surface was in accordance with some surface compositions (the model of single-atom steps: Al steps or O steps). PMID:27444267
Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing
NASA Astrophysics Data System (ADS)
Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun
2016-07-01
Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to the step formation in manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface by using both experiments and simulations. The step evolutions at different stages in the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. It has been found that the structure on the polished surface was in accordance with some surface compositions (the model of single-atom steps: Al steps or O steps).
NASA Astrophysics Data System (ADS)
Pang, Xiaomin; Wang, Xiaotao; Dai, Wei; Li, Haibing; Wu, Yinong; Luo, Ercang
2018-06-01
A compact and high efficiency cooler working at liquid hydrogen temperature has many important applications such as cooling superconductors and mid-infrared sensors. This paper presents a two-stage gas-coupled pulse tube cooler system with a completely co-axial configuration. A stepped warm displacer, working as the phase shifter for both stages, has been studied theoretically and experimentally in this paper. Comparisons with the traditional phase shifter (double inlet) are also made. Compared with the double inlet type, the stepped warm displacer has the advantages of recovering the expansion work from the pulse tube hot end (especially from the first stage) and easily realizing an appropriate phase relationship between the pressure wave and volume flow rate at the pulse tube hot end. Experiments are then carried out to investigate the performance. The pressure ratio at the compression space is maintained at 1.37, for the double inlet type, the system obtains 1.1 W cooling power at 20 K with 390 W acoustic power input and the relative Carnot efficiency is only 3.85%; while for the stepped warm displacer type, the system obtains 1.06 W cooling power at 20 K with only 224 W acoustic power input and the relative Carnot efficiency can reach 6.5%.
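The quoted relative Carnot efficiencies can be verified with a short calculation; the warm-end temperature is not stated above, so a value of roughly 293 K is assumed here for illustration.

```python
# Relative Carnot efficiency = (cooling power / input acoustic power) / Carnot COP,
# with Carnot COP = T_cold / (T_warm - T_cold).
T_cold, T_warm = 20.0, 293.0                    # K (warm-end value is an assumption)
cop_carnot = T_cold / (T_warm - T_cold)         # about 0.073

for label, q_cold, w_in in [("double inlet", 1.10, 390.0),
                            ("stepped warm displacer", 1.06, 224.0)]:
    rel = (q_cold / w_in) / cop_carnot
    print(f"{label}: {100 * rel:.1f}% of Carnot")   # roughly 3.9% and 6.5%, close to the quoted values
```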
Artificial dielectric stepped-refractive-index lens for the terahertz region.
Hernandez-Serrano, A I; Mendis, Rajind; Reichel, Kimberly S; Zhang, Wei; Castro-Camus, E; Mittleman, Daniel M
2018-02-05
In this paper we theoretically and experimentally demonstrate a stepped-refractive-index convergent lens made of a parallel stack of metallic plates for terahertz frequencies based on artificial dielectrics. The lens consists of a non-uniformly spaced stack of metallic plates, forming a mirror-symmetric array of parallel-plate waveguides (PPWGs). The operation of the device is based on the TE1 mode of the PPWG. The effective refractive index of the TE1 mode is a function of the frequency of operation and the spacing between the plates of the PPWG. By varying the spacing between the plates, we can modify the local refractive index of the structure in every individual PPWG that constitutes the lens, producing a stepped refractive index profile across the multi-stack structure. The theoretical and experimental results show that this structure is capable of focusing a 1 cm diameter beam to a line focus of less than 4 mm for the design frequency of 0.18 THz. This structure shows that the artificial-dielectric concept is an important technology for the fabrication of next generation terahertz devices.
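The working principle can be illustrated with the textbook expression for the TE1-mode effective index of a parallel-plate waveguide, n_eff = sqrt(1 - (c / (2 b f))^2) for plate spacing b above cutoff; the spacings in the Python sketch below are illustrative and are not the actual plate positions of the fabricated lens.

```python
import numpy as np

c = 299_792_458.0     # speed of light, m/s
f = 0.18e12           # design frequency quoted above, Hz

def n_eff_te1(spacing_m, freq_hz=f):
    """Effective refractive index of the TE1 mode of a parallel-plate waveguide
    with plate separation spacing_m; the mode cuts off at f_c = c / (2 b)."""
    fc = c / (2.0 * spacing_m)
    return float(np.sqrt(1.0 - (fc / freq_hz) ** 2)) if freq_hz > fc else 0.0

# Below the cutoff spacing (about 0.83 mm at 0.18 THz) the mode does not propagate;
# larger spacings give an index between 0 and 1, so varying the spacing across the
# stack produces the stepped refractive-index profile used by the lens.
for b_mm in (1.0, 1.5, 2.0, 3.0, 5.0):
    print(b_mm, round(n_eff_te1(b_mm * 1e-3), 3))
```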
NASA Astrophysics Data System (ADS)
Arslan, N. Burcu; Kazak, Canan; Aydın, Fatma
2012-04-01
The title molecule (C19H17N5O4S·H2O) was synthesized and characterized by IR and NMR spectroscopy, MS and single-crystal X-ray diffraction. The molecular geometry, vibrational frequencies and gauge-independent atomic orbital (GIAO) 1H and 13C NMR chemical shift values of the compound in the ground state have been calculated by using the density functional theory (DFT) method with the 6-31G(d) basis set, and compared with the experimental data. All the assignments of the theoretical frequencies were performed by potential energy distributions using the VEDA 4 program. The calculated results show that the optimized geometries can well reproduce the crystal structural parameters, and the theoretical vibrational frequencies and 1H and 13C NMR chemical shift values show good agreement with the experimental data. To determine conformational flexibility, the molecular energy profile of the title compound was obtained with respect to the selected torsion angle, which was varied from -180° to +180° in steps of 10°. In addition, the molecular electrostatic potential (MEP), frontier molecular orbital (FMO) analysis and thermodynamic properties of the compound were investigated by theoretical calculations.
Rasmussen, D. B.; Christensen, J. M.; Temel, B.; ...
2017-01-23
The reaction mechanism of dimethyl ether carbonylation to methyl acetate over mordenite was studied theoretically with periodic density functional theory calculations including dispersion forces, and experimentally in a fixed bed flow reactor at pressures between 10 and 100 bar, dimethyl ether concentrations in CO between 0.2 and 2.0%, and at a temperature of 438 K. The theoretical study showed that the reaction of CO with surface methyl groups, the rate-limiting step, is faster in the eight-membered side pockets than in the twelve-membered main channel of the zeolite; the subsequent reaction of dimethyl ether with surface acetyl to form methyl acetate was demonstrated to occur with low energy barriers in both the side pockets and in the main channel. The present analysis has thus identified a path where the entire reaction occurs favourably on a single site within the side pocket, in good agreement with previous experimental studies. The experimental study of the reaction kinetics was consistent with the theoretically derived mechanism and in addition revealed that the methyl acetate product inhibits the reaction – possibly by sterically hindering the attack of CO on the methyl groups in the side pockets.
Assessment of agricultural biomass potential to electricity generation in Riau Province
NASA Astrophysics Data System (ADS)
Papilo, P.; Kusumanto, I.; Kunaifi, K.
2017-05-01
Utilization of biomass as a source of electrical power is one potential solution that can be developed to increase the electrification ratio and to achieve national energy security. However, it is still difficult to determine the amount of potential energy that could be used for alternative power generation. Therefore, as a preliminary step in assessing the feasibility of biomass development as a power generation source, an analysis of the potential resources is required, especially for the main commodities from agricultural and plantation residues. This study aims to assess the potential of biomass-based supply from unutilized resources obtained from the residues of the agricultural and plantation sectors, such as rice straw and rice husk; dry straw and chaff of rice; corn stalks and cobs; stalks of cassava; and fiber, shell, empty fruit bunches, kernels and liquid wastes from palm oil factories. The research focuses on theoretical energy potential measurements using a statistical approach developed by Biomass Energy Europe (BEE). The assessment showed that the total theoretical biomass energy that can be produced is 77,466,754.8 GJ year-1. Theoretically, this potential is equivalent to an electricity generation of 21,518,542.8 MWh year-1.
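The two headline figures quoted above are mutually consistent, as a one-line unit conversion (1 MWh = 3.6 GJ) shows:

```python
total_gj_per_year = 77_466_754.8
total_mwh_per_year = total_gj_per_year / 3.6      # 1 MWh = 3.6 GJ
print(f"{total_mwh_per_year:,.1f} MWh per year")  # about 21,518,543 MWh, matching the quoted figure to within rounding
```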
Rodrigo, Olga; Caïs, Jordi; Monforte-Royo, Cristina
2017-07-01
In Spain, the introduction of the new Diploma in Nursing in 1977 saw the role of nurses shifting from that of medical assistants with technical skills to being independent members of the healthcare team with specific responsibility for providing professional nursing care. Here, we analyse the evolution of the nursing profession in Spain following the transfer of nurse education to universities, doing so through interviews with the first generation of academic tutors. This was a qualitative study using the method of analytic induction and based on the principles of grounded theory. Participants were selected by means of theoretical sampling and then underwent in-depth interviews. Steps were taken to ensure the credibility, transferability, dependability and confirmability of data. The main conclusion of the analysis is that there is a gap between a theoretical framework borrowed from the Anglo-American context and a nursing practice that, in Spain, has traditionally prioritised the application of technical procedures, a role akin to that of a medical assistant. It is argued that a key factor underlying the way in which nursing in Spain has evolved in recent decades is the lack of conceptual clarity regarding what the role of the professional nurse might actually entail in practice. © 2016 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Song, E-mail: yuessd@163.com; University of Chinese Academy of Sciences, Beijing 100049; Zhang, Zhao-chuan
In this paper, a sector steps approximation method is proposed to investigate the resonant frequencies of magnetrons with arbitrary side resonators. The arbitrary side resonator is substituted with a series of sector steps, in which the spatial harmonics of the electromagnetic field are also considered. By using the method of admittance matching between adjacent steps, as well as field continuity conditions between the side resonators and the interaction region, the dispersion equation of a magnetron with arbitrary side resonators is derived. Resonant frequencies of magnetrons with five common kinds of side resonators are calculated with the sector steps approximation method and computer simulation software, and the results are in good agreement. The relative error is less than 2%, which verifies the validity of the sector steps approximation method.
Binary tree eigen solver in finite element analysis
NASA Technical Reports Server (NTRS)
Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.
1993-01-01
This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of a number of associative operations on an arbitrary set of N elements requires on the order of O(log2 N) steps, compared to (N-1) steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed for the transputer to address parallel programming constructs and to provide the communications between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of the least-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
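As a rough illustration of the recursive-doubling idea (a generic sketch, not the OCCAM/transputer implementation; the pairing schedule below is hypothetical), N values are combined with an associative operation in about log2(N) rounds, each of which could run fully in parallel on a binary tree of processors:

```python
import math
import operator

def recursive_doubling_reduce(values, op=operator.add):
    """Combine N values with an associative operation in ~log2(N) rounds.
    Within each round, every pair that is `stride` apart could be combined
    simultaneously on a parallel machine such as a binary tree of transputers."""
    vals = list(values)
    n, stride, rounds = len(vals), 1, 0
    while stride < n:
        for i in range(0, n - stride, 2 * stride):
            vals[i] = op(vals[i], vals[i + stride])  # one combine per processor
        stride *= 2
        rounds += 1
    return vals[0], rounds

total, rounds = recursive_doubling_reduce(range(1, 33))
print(total, rounds, math.ceil(math.log2(32)))  # 528 5 5
```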
Predicting orogenic wedge styles as a function of analogue erosion law and material softening
NASA Astrophysics Data System (ADS)
Mary, Baptiste C. L.; Maillot, Bertrand; Leroy, Yves M.
2013-10-01
The evolution of a compressive frictional wedge on a weak, frictional and planar décollement, subjected to frontal accretion, is predicted with a two step method called sequential limit analysis. The first step consists in finding, with the kinematic approach of limit analysis, the length of the active décollement and the dips of the emerging ramp and of the conjugate shear plane composing the emerging thrust fold. The second step leads to a modification of the geometry, first, because of the thrust fold development due to compression and, second, because of erosion. Erosion consists in removing periodically any material above a fictitious line at a selected slope, as done in analogue experiments. This application of sequential limit analysis generalizes the critical Coulomb wedge theory since it follows the internal deformation development. With constant frictional properties, the deformation is mostly diffuse, a succession of thrust folds being activated so that the topographic slope reaches exactly the theoretical, critical value. Frictional weakening on the ramps results in a deformation style composed of thrust sheets and horses. Applying an erosion slope at the critical topographic value leads to exhumation in the frontal, central, or rear region of the wedge depending on the erosion period and the weakening. Erosion at slopes slightly above or below the critical value results in exhumation toward the foreland or the hinterland, respectively, regardless of the erosion period. Exhumation is associated with duplexes, imbricate fans, antiformal stacks, and major backthrusting. Comparisons with sandbox experiments confirm that the thickness, dips, vergence, and exhumation of thrust sheets can be reproduced with friction and erosion parameters within realistic ranges of values.
Emotional influences on locomotor behavior.
Naugle, Kelly M; Joyner, Jessica; Hass, Chris J; Janelle, Christopher M
2010-12-01
Emotional responses to appetitive and aversive stimuli motivate approach and avoidance behaviors essential for survival. The purpose of the current study was to determine the impact of specific emotional stimuli on forward, approach-oriented locomotion. Steady state walking was assessed while participants walked toward pictures varying in emotional content (erotic, happy people, attack, mutilation, contamination, and neutral). Step length and step velocity were calculated for the first two steps following picture onset. Exposure to the mutilation and contamination pictures shortened the lengths of step one and step two compared to the erotic pictures. Additionally, step velocity was greater during exposure to the erotic pictures compared to (1) the contamination and mutilation pictures for step one and (2) all other picture categories for step two. These findings suggest that locomotion is facilitated when walking toward approach-oriented emotional stimuli but compromised when walking toward aversive emotional stimuli. The data extend our understanding of fundamental interactions among motivational orientations, emotional reactions, and resultant actions. Theoretical and practical implications are discussed. Copyright © 2010 Elsevier Ltd. All rights reserved.
Cooperativity in plastic crystals
NASA Astrophysics Data System (ADS)
Pieruccini, Marco; Tombari, Elpidio
2018-03-01
A statistical mechanical model previously adopted for the analysis of the α -relaxation in structural glass formers is rederived within a general theoretical framework originally developed for systems approaching the ideal glassy state. The interplay between nonexponentiality and cooperativity is reconsidered in the light of energy landscape concepts. The method is used to estimate the cooperativity in orientationally disordered crystals, either from the analysis of literature data on linear dielectric response or from the enthalpy relaxation function obtained by temperature-modulated calorimetry. Knowledge of the specific heat step due to the freezing of the configurational or conformational modes at the glass transition is needed in order to properly account for the extent to which the relaxing system deviates from equilibrium during the rearrangement processes. A number of plastic crystals have been analyzed, and relatively higher cooperativities are found in the presence of hydrogen bonding interaction.
Peres Penteado, Alissa; Fábio Maciel, Rafael; Erbs, João; Feijó Ortolani, Cristina Lucia; Aguiar Roza, Bartira; Torres Pisa, Ivan
2015-01-01
The entire kidney transplantation process in Brazil is defined through laws, decrees, ordinances, and resolutions, but there is no defined theoretical map describing this process. From such a representation it is possible to perform analyses, such as identifying bottlenecks and the information and communication technologies (ICTs) that support the process. The aim of this study was to analyze and represent the kidney transplantation workflow using business process modeling notation (BPMN) and then to identify the ICTs involved in the process. This study was conducted in eight steps, including document analysis and professional evaluation. The results include the BPMN model of the kidney transplantation process in Brazil and the identification of ICTs. We found that there are considerable delays in the process because many different ICTs are involved, which can leave information poorly integrated.
Using performance measurement to drive improvement: a road map for change.
Galvin, Robert S; McGlynn, Elizabeth A
2003-01-01
Performance measures and reporting have not been adopted throughout the US health care system despite their central role in encouraging increased participation by consumers in decision-making. Understanding whether the failure of measurement and reporting to diffuse throughout the health system can be overcome is critical for determining future policy in this area. To create a conceptual framework for analyzing the current rate of adoption and evaluating alternatives for accelerating adoption, and to recommend a set of concrete steps that can be taken to increase the use of performance measurement and reporting. Review of three theoretic models (Rogers, Prochaska/DiClemente, Gladwell), examination of the literature on previous experiences with quality measurement and reporting, and interviews with select stakeholders. The three theoretic models provide a valuable framework for understanding why the use of performance measures is stalled ("the circle of unaccountability") and for generating ideas about concrete steps that could be taken to accelerate adoption. Six steps are recommended: (1) raise public awareness, (2) redesign measures and reports, (3) make the delivery of information timely, (4) require public reporting, (5) develop and implement systems to reward quality, and (6) actively court leaders. The recommended six steps are interconnected; action on all will be required to drive significant acceleration in rates of adoption of performance measurement and reporting. Leadership and coordination are necessary to ensure these steps are taken and that they work in concert with one another.
A mechanism for leader stepping
NASA Astrophysics Data System (ADS)
Ebert, U.; Carlson, B. E.; Koehn, C.
2013-12-01
The stepping of negative leaders is well observed, but not well understood. A major problem consists of the fact that the streamer corona is typically invisible within a thunderstorm, but determines the evolution of a leader. Motivated by recent observations of streamer and leader formation in the laboratory by T.M.P. Briels, S. Nijdam, P. Kochkin, A.P.J. van Deursen et al., by recent simulations of these processes by J. Teunissen, A. Sun et al., and by our theoretical understanding of the process, we suggest how laboratory phenomena can be extrapolated to lightning leaders to explain the stepping mechanism.
Study of CdTe quantum dots grown using a two-step annealing method
NASA Astrophysics Data System (ADS)
Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2006-02-01
High size dispersion, large average quantum dot radius and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra show that quantum dots grown using two-step annealing have a lower average radius, less size dispersion, a higher volume ratio and a larger decrease in bulk free energy compared to quantum dots grown conventionally.
Automated quantum operations in photonic qutrits
NASA Astrophysics Data System (ADS)
Borges, G. F.; Baldijão, R. D.; Condé, J. G. L.; Cabral, J. S.; Marques, B.; Terra Cunha, M.; Cabello, A.; Pádua, S.
2018-02-01
We report an experimental implementation of automated state transformations on spatial photonic qutrits following the theoretical proposal made by Baldijão et al. [Phys. Rev. A 96, 032329 (2017), 10.1103/PhysRevA.96.032329]. A qutrit state is simulated by using three Gaussian beams, and after the state operations the transformed state is finally expressed in terms of the basis states. The state transformation setup uses a spatial light modulator and a calcite-based interferometer. The results demonstrate the usefulness of the operation method. The experimental data show good agreement with theoretical predictions, opening possibilities for explorations in higher dimensions and in a wide range of applications. This is a necessary step in qualifying spatial photonic qudits as a competitive setup for experimental research on the implementation of quantum algorithms that demand a large number of steps.
The role of learning-related dopamine signals in addiction vulnerability.
Huys, Quentin J M; Tobler, Philippe N; Hasler, Gregor; Flagel, Shelly B
2014-01-01
Dopaminergic signals play a mathematically precise role in reward-related learning, and variations in dopaminergic signaling have been implicated in vulnerability to addiction. Here, we provide a detailed overview of the relationship between theoretical, mathematical, and experimental accounts of phasic dopamine signaling, with implications for the role of learning-related dopamine signaling in addiction and related disorders. We describe the theoretical and behavioral characteristics of model-free learning based on errors in the prediction of reward, including step-by-step explanations of the underlying equations. We then use recent insights from an animal model that highlights individual variation in learning during a Pavlovian conditioning paradigm to describe overlapping aspects of incentive salience attribution and model-free learning. We argue that this provides a computationally coherent account of some features of addiction. © 2014 Elsevier B.V. All rights reserved.
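As a minimal, hypothetical illustration of the kind of prediction-error update the review walks through (a Rescorla-Wagner-style rule; not the authors' exact equations):

```python
# Reward-prediction-error learning in its simplest form.
# V is the learned value of a cue; delta is the phasic, "dopamine-like"
# prediction error; alpha is the learning rate.
alpha = 0.1
V = 0.0
for trial in range(50):
    reward = 1.0            # the cue is always followed by reward
    delta = reward - V      # prediction error
    V += alpha * delta      # value update
print(round(V, 3))          # approaches 1.0 as the reward becomes predicted
```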
Meng, Qingxi; Li, Ming
2012-08-01
Density functional theory (DFT) was used to investigate the Mo-catalyzed intramolecular Pauson-Khand reaction of 3-allyloxy-1-propynylphosphonates. All intermediates and transition states were optimized completely at the B3LYP/6-31G(d,p) level [LANL2DZ(f) for Mo]. In the Mo-catalyzed intramolecular Pauson-Khand reaction, the C–C oxidative cyclization was the chirality-determining step, and the reductive elimination was the rate-determining step. The carbonyl insertion into the Mo–C(sp3) bond was easier than into the Mo–C=C bond, and the dominant product predicted theoretically was of (S)-chirality, which agreed with experimental data. The reaction was solvent dependent, and toluene was the best of the three solvents examined (toluene, CH3CN, and THF).
INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)
NASA Astrophysics Data System (ADS)
Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.
2014-06-01
The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access but also for more interactive use of the resources, in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (such as RooFit and RooStat) implemented on multicore systems. In particular, POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both as an SRM data repository and for interactive POSIX access, is therefore described. Such a common infrastructure allows users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with INFN-AAI (the national INFN infrastructure) to extend site access and use to a geographically distributed community. This infrastructure also serves as a national computing facility for the INFN theoretical community, enabling a synergic use of computing and storage resources. Our Center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now upgrading this facility so that it will provide resources for all the intermediate-level HPC computing needs of the national INFN theoretical community.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed
2011-02-20
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
NASA Astrophysics Data System (ADS)
Wollaber, Allan B.; Larsen, Edward W.
2011-02-01
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
Rastogi, Tushar; Leder, Christoph; Kümmerer, Klaus
2014-09-01
The presence of micro-pollutants (active pharmaceutical ingredients, APIs) is increasingly seen as a challenge for the sustainable management of water resources worldwide, owing to ineffective effluent treatment and other shortcomings in input prevention. Therefore, novel approaches are needed, such as designing greener pharmaceuticals with better biodegradability in the environment. This study addresses a tiered approach to implementing green and sustainable chemistry principles for theoretically designing better biodegradable and pharmacologically improved pharmaceuticals. A photodegradation process coupled with LC-MS(n) analysis and in silico tools such as quantitative structure-activity relationship (QSAR) analysis and molecular docking proved to be a very useful approach for the preliminary stages of designing chemical structures that would fit the "benign by design" concept in the direction of green and sustainable pharmacy. Metoprolol (MTL) was used as an example; it is not readily biodegradable under conditions found in sewage treatment and the aquatic environment. The study provides the theoretical design of new derivatives of MTL which might have the same or improved pharmacological activity and are more degradable in the environment than MTL. However, in silico toxicity prediction by QSAR for the resulting phototransformation products (photo-TPs) indicated that a few of them might be mutagenic and require further testing. This novel approach of theoretically designing 'green' pharmaceuticals can be considered a step forward for the green and sustainable pharmacy field. However, more knowledge and further experience have to be collected on the full scope, opportunities and limitations of this approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
An AIDS risk reduction program for Dutch drug users: an intervention mapping approach to planning.
van Empelen, Pepijn; Kok, Gerjo; Schaalma, Herman P; Bartholomew, L Kay
2003-10-01
This article presents the development of a theory- and evidence-based AIDS prevention program targeting Dutch drug users and aimed at promoting condom use. The emphasis is on the development of the program using a five-step intervention development protocol called intervention mapping (IM). Preceding Step 1 of the IM process, an assessment of the HIV problem among drug users was conducted. The product of IM Step 1 was a series of program objectives specifying what drug users should learn in order to use condoms consistently. In Step 2, theoretical methods for influencing the most important determinants were chosen and translated into practical strategies that fit the program objectives. The main strategy chosen was behavioral journalism. In Step 3, leaflets with role-model stories based on authentic interviews with drug users were developed and pilot tested. Finally, the need for cooperation with program users is discussed in IM Steps 4 and 5.
Hermans, Ive; Jacobs, Pierre; Peeters, Jozef
2008-02-28
Abstraction of hydrogen atoms by phthalimide-N-oxyl radicals is an important step in the N-hydroxyphthalimide catalyzed autoxidation of hydrocarbons. In this contribution, the temperature dependence of this reaction is evaluated by a detailed transition state theory based kinetic analysis for the case of toluene. Tunneling was found to play a very important role, enhancing the rate constant by a factor of 20 at room temperature. As a result, tunneling, in combination with the existence of two distinct rotamers of the transition state, causes a pronounced temperature dependence of the pre-exponential frequency factor and, as a consequence, marked curvature of the Arrhenius plot. This explains why earlier experimental studies over a limited temperature range around 300 K found formal Arrhenius activation energies and pre-factors that are 4 kcal mol(-1) and three orders of magnitude smaller than the actual energy barrier and the corresponding frequency factor, respectively. Also as a consequence of tunneling, substitution of a deuterium atom for a hydrogen atom causes a large decrease in the rate constant, in agreement with the measured kinetic isotope effects. The present theoretical analysis, complementary to the experimental rate coefficient data, allows a reliable prediction of the rate coefficient at higher temperatures, relevant for actual autoxidation processes.
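For orientation, the transition-state-theory rate expression underlying such an analysis can be written with an explicit tunneling factor (this is the generic Eyring form; the paper's specific parameter values are not reproduced here):

k(T) = \kappa(T) \, \frac{k_{B}T}{h} \, \exp\!\left(-\frac{\Delta G^{\ddagger}(T)}{RT}\right)

A strongly temperature-dependent \kappa(T) (about 20 near room temperature, as reported above) multiplies the effective pre-exponential factor, which is why an Arrhenius fit over a narrow range around 300 K returns an apparent activation energy and pre-factor well below the true barrier and frequency factor.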
The ultraviolet morphology of evolved populations
NASA Astrophysics Data System (ADS)
Chávez, Miguel
2009-04-01
In this paper I present a summary of the recent investigations we have developed at the Stellar Atmospheres and Populations Research Group (GrAPEs-for its designation in Spanish) at INAOE and collaborators in Italy. These investigations have aimed at providing updated stellar tools for the analysis of the UV spectra of a variety of stellar aggregates, mainly evolved ones. The sequence of material here presented roughly corresponds to the steps we have identified as mandatory to properly establish the applicability of synthetic populations in the analyses of observational data of globular clusters and more complex aged aggregates. The sequence is composed of four main stages, namely, (a) the creation of a theoretical stellar data base that we have called UVBLUE, (b) the comparison of such data base with observational stellar data, (c) the calculation of a set of synthetic spectral energy distributions (SEDs) of simple stellar populations (SSPs) and their validation through a comparison with observations of a sample of galactic globular clusters (GGCs), (d) construction of models for dating local ellipticals and distant red-envelope galaxies. Most of the work relies on the analysis of absorption line spectroscopic indices. The global results are more than satisfactory in the sense that theoretical indices closely follow the overall trends with chemical composition depicted by their empirical counterparts (stars and GGCs).
Hydrodynamic Properties of Planing Surfaces and Flying Boats
NASA Technical Reports Server (NTRS)
Sokolov, N. A.
1950-01-01
The study of the hydrodynamic properties of the planing bottoms of flying boats and seaplane floats is at the present time based exclusively on the curves of towing tests conducted in tanks. In order to provide a rational basis for the test procedure in tanks and practical design data, a theoretical study must be made of the flow at the step, and relations derived that show not only qualitatively but quantitatively the inter-relations of the various factors involved. The general solution of the problem of the development of hydrodynamic forces during the motion of a seaplane float or flying boat is very difficult, for it is necessary to give a three-dimensional solution, which does not always permit reducing the analysis to the form of workable computation formulas. On the other hand, the problem is complicated by the fact that the analysis is concerned with two fluid mediums, namely, air and water, which have a surface of density discontinuity between them. The theoretical and experimental investigations on the hydrodynamics of a ship cannot be completely carried over to the design of floats and flying-boat hulls, because of the difference in the shape of the contour lines of the bodies and because of the entirely different flow conditions from the hydrodynamic viewpoint.
Translating context to causality in cardiovascular disparities research.
Benn, Emma K T; Goldfeld, Keith S
2016-04-01
Moving from a descriptive focus to a comprehensive analysis grounded in causal inference can be particularly daunting for disparities researchers. However, even a simple model supported by the theoretical underpinnings of causality gives researchers a better chance to make correct inferences about possible interventions that can benefit our most vulnerable populations. This commentary provides a brief description of how race/ethnicity and context relate to questions of causality, and uses a hypothetical scenario to explore how different researchers might analyze the data to estimate causal effects of interest. Perhaps although not entirely removed of bias, these causal estimates will move us a step closer to understanding how to intervene. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Wygant, M.
2015-12-01
As droughts continue to impact businesses and communities throughout the United States, there needs to be a greater emphasis on drought communication through interdisciplinary approaches, risk communication, and digital platforms. The purpose of this research is to provide an overview of the current literature on communicating drought and to suggest areas for further improvement. Specifically, this research focuses on communicating drought through social media platforms such as Facebook, Twitter, and Instagram. It also brings together theoretical frameworks from the realm of risk communication to provide a strong foundation for future drought communication. This research proposal provides a critical step in advocating for paradigmatic shifts within natural hazard communication.
Sørensen, Hans Eibe; Slater, Stanley F
2008-08-01
Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. The analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha for establishing reliable and valid measures.
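As a small, hypothetical illustration of two statistics central to this kind of validation work (Cronbach's alpha computed from item scores, and average variance extracted computed from standardized loadings); the data and loadings below are invented:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def average_variance_extracted(std_loadings):
    """AVE = mean of the squared standardized factor loadings."""
    lam = np.asarray(std_loadings, dtype=float)
    return float(np.mean(lam ** 2))

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
scores = latent + rng.normal(scale=0.5, size=(100, 4))   # four correlated indicators
print(round(cronbach_alpha(scores), 3))
print(round(average_variance_extracted([0.78, 0.82, 0.70, 0.75]), 3))
```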
Naphtyl- and pyrenyl-flavylium dyads: Synthesis, DFT and optical properties
NASA Astrophysics Data System (ADS)
Aguilar-Castillo, Bethsy Adriana; Sánchez-Bojorge, Nora Aydee; Chávez-Flores, David; Camacho-Dávila, Alejandro A.; Pasillas-Ornelas, Eddie; Rodríguez-Valdez, Luz-María; Zaragoza-Galán, Gerardo
2018-03-01
A one-step preparation of flavylium salts containing naphtyl and pyrenyl moieties is described hereafter. The flavylium salts were successfully characterized by 1H NMR spectroscopy and ESI-MS spectrometry. Theoretical calculations were carried out by means of Density Functional Theory in order to simulate the flavylium cation electronic transitions. Molecular simulation of the naphtyl derivatives displayed a coplanar conformation between the naphthalene and benzopyrylium moieties. In contrast, DFT analysis exhibited a non-coplanar arrangement of the pyrene and benzopyrylium units. These findings, consistent with the absorption experiments in which the naphtyl-flavylium dyads show a red-shifted absorption maximum with respect to the pyrenyl dyads, led us to conclude that the bathochromic effects are associated with a more planar conformation.
NASA Astrophysics Data System (ADS)
Lukeš, Vladimír; Škorňa, Peter; Michalík, Martin; Klein, Erik
2017-11-01
Various para-, meta- and ortho-substituted formanilides have been studied theoretically. For the trans- and cis-isomers of the non-substituted formanilide, the calculated B3LYP vibrational normal modes were analyzed. The substituent effect on selected normal modes is described and a comparison with the available experimental data is presented. The calculated B3LYP proton affinities were correlated with Hammett constants, the Fujita-Nishioka equation and the rate constants of hydrolysis in 1 M HCl. The linear dependences found allow prediction of dissociation constants (pKBH+) and hydrolysis rate constants. The results obtained indicate that protonation of the amide group may represent the rate-determining step of the acid-catalyzed hydrolysis.
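A schematic of the kind of Hammett correlation referred to here; the substituent constants and response values below are illustrative only, not the paper's data:

```python
import numpy as np

# Hammett-type linear free-energy relationship: y = rho * sigma + intercept,
# where y could be, e.g., log10(k_X / k_H) or a calculated proton-affinity shift.
sigma = np.array([-0.27, -0.17, 0.00, 0.23, 0.45, 0.78])    # illustrative sigma values
y     = np.array([ 0.31,  0.20, 0.01, -0.25, -0.50, -0.86]) # illustrative responses

rho, intercept = np.polyfit(sigma, y, 1)
r = np.corrcoef(sigma, y)[0, 1]
print(f"rho = {rho:.2f}, intercept = {intercept:.2f}, r = {r:.3f}")
```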
Al-Hashimi, Nessreen A; Hussein, Yasser H A
2010-01-01
The charge transfer (CT) interaction between iodine and 2,3-diaminopyridine (DAPY) has been thoroughly investigated via theoretical calculations. The Hartree-Fock, 3-21G level of theory was used to optimize the structures and to calculate the Mulliken charge distribution scheme as well as the vibrational frequencies of DAPY alone and of both its CT complexes with one and two iodine molecules. Very good agreement was found between experiment and theory. New insights were obtained from a detailed analysis and description of the vibrational frequencies of the formed CT complexes. The two-step CT complex formation mechanism published earlier was supported. Copyright 2009 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Campbell, W.
1981-01-01
A theoretical evaluation of the stability of an explicit finite difference solution of the transient temperature field in a composite medium is presented. The grid points of the field are assumed uniformly spaced, and media interfaces are either vertical or horizontal and pass through grid points. In addition, perfect contact between different media (infinite interfacial conductance) is assumed. A finite difference form of the conduction equation is not valid at media interfaces; therefore, heat balance forms are derived. These equations were subjected to stability analysis, and a computer graphics code was developed that permitted determination of a maximum time step for a given grid spacing.
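A minimal sketch of the kind of time-step bound such a stability analysis produces, using the familiar homogeneous-medium FTCS criterion as a stand-in; the heat-balance forms at media interfaces derived in the paper generally impose different (and typically more restrictive) limits:

```python
def max_stable_dt(dx, dy, thermal_diffusivity):
    """Stability bound for explicit (FTCS) conduction on a uniform 2-D grid:
    dt <= 1 / (2 * alpha * (1/dx**2 + 1/dy**2)) for a homogeneous medium.
    Interface nodes in a composite medium require a bound rederived from
    the heat-balance form of the difference equations."""
    return 1.0 / (2.0 * thermal_diffusivity * (1.0 / dx**2 + 1.0 / dy**2))

print(max_stable_dt(dx=0.01, dy=0.01, thermal_diffusivity=1e-5))  # seconds
```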
An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing
NASA Astrophysics Data System (ADS)
Zhao, Yunji; Pei, Hailong
In the vision-based autonomous landing system of a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm is proposed to resolve the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the region of the target using Camshift; second, detect feature points within the region acquired above using the SURF algorithm; third, match the template target against the target region in the current frame. Experimental results and theoretical analysis testify to the efficiency of the algorithm.
Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva
2010-03-01
This paper investigates the overall theoretical requirements for reducing the times required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system, with and without model reduction, to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace, to reduce the time for system reconstruction, and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was used for this purpose. The impact of convective cooling from blood flow and of sudden increases in perfusion in muscle and tumor was also simulated. By the FAT, partial system reconstruction conducted directly in the full space of the physical variables, such as the phases and magnitudes of the heat sources, cannot guarantee reconstructing the optimal system for determining the globally optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few leading eigenvectors of the true system matrix. By the MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. When more than six sources are present, the number of learning steps required by a nonlinear scheme is theoretically smaller than that of a linear one; however, a finite number of iterative corrections is necessary within each single learning step of a nonlinear algorithm, so the actual computational workload of a nonlinear algorithm is not necessarily less than that of a linear algorithm. Based on the analysis presented herein, obtaining a unique, globally optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with a minimum-norm least-squares method with supplemental equations. One way to supplement equations is the inclusion of a method of model reduction.
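As a generic illustration of the matrix-approximation (Eckart-Young) idea behind choosing the reduced "virtual source" subspace — a plain truncated-SVD sketch, not the paper's treatment-planning code:

```python
import numpy as np

def best_rank_k_approximation(M, k):
    """Eckart-Young: the truncated SVD gives the best rank-k approximation of M
    in the Frobenius and spectral norms; the leading singular vectors span the
    reduced-order subspace, and the discarded singular values bound the error."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k], U[:, :k]

rng = np.random.default_rng(0)
M = rng.normal(size=(40, 10)) @ rng.normal(size=(10, 40))   # a rank-10 "system matrix"
M_k, basis = best_rank_k_approximation(M, k=4)              # keep 4 "virtual sources"
print(np.linalg.norm(M - M_k) / np.linalg.norm(M))          # relative approximation error
```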
NASA Astrophysics Data System (ADS)
Tulbure, Mirela G.; Kininmonth, Stuart; Broich, Mark
2014-11-01
The concept of habitat networks represents an important tool for landscape conservation and management at regional scales. Previous studies simulated degradation of temporally fixed networks but few quantified the change in network connectivity from disintegration of key features that undergo naturally occurring spatiotemporal dynamics. This is particularly of concern for aquatic systems, which typically show high natural spatiotemporal variability. Here we focused on the Swan Coastal Plain, a bioregion that encompasses a global biodiversity hotspot in Australia with over 1500 water bodies of high biodiversity. Using graph theory, we conducted a temporal analysis of water body connectivity over 13 years of variable climate. We derived large networks of surface water bodies using Landsat data (1999-2011). We generated an ensemble of 278 potential networks at three dispersal distances approximating the maximum dispersal distance of different water dependent organisms. We assessed network connectivity through several network topology metrics and quantified the resilience of the network topology during wet and dry phases. We identified ‘stepping stone’ water bodies across time and compared our networks with theoretical network models with known properties. Results showed a highly dynamic seasonal pattern of variability in network topology metrics. A decline in connectivity over the 13 years was noted with potential negative consequences for species with limited dispersal capacity. The networks described here resemble theoretical scale-free models, also known as ‘rich get richer’ algorithm. The ‘stepping stone’ water bodies are located in the area around the Peel-Harvey Estuary, a Ramsar listed site, and some are located in a national park. Our results describe a powerful approach that can be implemented when assessing the connectivity for a particular organism with known dispersal distance. The approach of identifying the surface water bodies that act as ‘stepping stone’ over time may help prioritize surface water bodies that are essential for maintaining regional scale connectivity.
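A schematic of the graph-theoretic step described here, assuming hypothetical water-body centroids and a single fixed dispersal distance (NetworkX is used for the topology metrics; the actual analysis used Landsat-derived water bodies and three dispersal distances):

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 50_000, size=(200, 2))   # hypothetical water-body centroids (m)
dispersal_distance = 5_000                        # assumed maximum dispersal distance (m)

G = nx.Graph()
G.add_nodes_from(range(len(coords)))
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        if np.linalg.norm(coords[i] - coords[j]) <= dispersal_distance:
            G.add_edge(i, j)

# Simple connectivity metrics; high-betweenness nodes are candidate "stepping stones".
n_components = nx.number_connected_components(G)
betweenness = nx.betweenness_centrality(G)
stepping_stones = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
print(n_components, stepping_stones)
```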
Machado, Juliana Pereira; Veiga, Eugenia Velludo; Ferreira, Paulo Alexandre Camargo; Martins, José Carlos Amado; Daniel, Ana Carolina Queiroz Godoy; Oliveira, Amanda dos Santos; da Silva, Patrícia Costa dos Santos
2014-01-01
Objective To determine and analyze the theoretical and practical knowledge of nursing professionals regarding indirect blood pressure measurement. Methods This cross-sectional study included 31 professionals of a coronary care unit (86% of the nursing staff in the unit); 38.7% were nurses and 61.3% nurse technicians. A validated questionnaire was used for the theoretical evaluation, and for the practice assessment the auscultatory technique was applied in a simulation environment under non-participant observation. Results Regarding theoretical knowledge of the stages of patient and environment preparation, 12.9% mentioned the 5-minute rest, 48.4% mentioned checking calibration, and 29.0% chose an adequate cuff width. A total of 64.5% of professionals avoided rounding values, and 22.6% mentioned the 6-month deadline for equipment calibration. On average, 65% of the steps were followed in the practice assessment. Gaps in knowledge primarily concerned failure to check the calibration of the device and stethoscope, to measure arm circumference when choosing the cuff size, and to record the arm used for blood pressure measurement. Conclusion Knowledge was poor, with disparities between theory and practice; there was evidence of steps performed without proper awareness and of important knowledge being disregarded during blood pressure measurement. Educational and operational interventions should be applied systematically, with institutional involvement, to ensure safe care with reliable values. PMID:25295455
A New Approach to Extract Forest Water Use Efficiency from Eddy Covariance Data
NASA Astrophysics Data System (ADS)
Scanlon, T. M.; Sulman, B. N.
2016-12-01
Determination of forest water use efficiency (WUE) from eddy covariance data typically involves the following steps: (a) estimating gross primary productivity (GPP) from direct measurements of net ecosystem exchange (NEE) by extrapolating nighttime ecosystem respiration (ER) to daytime conditions, and (b) assuming direct evaporation (E) is minimal several days after rainfall, meaning that direct measurements of evapotranspiration (ET) are identical to transpiration (T). Both of these steps could lead to errors in the estimation of forest WUE. Here, we present a theoretical approach for estimating WUE through the analysis of standard eddy covariance data, which circumvents these steps. Only five statistics are needed from the high-frequency time series to extract WUE: CO2 flux, water vapor flux, standard deviation in CO2 concentration, standard deviation in water vapor concentration, and the correlation coefficient between CO2 and water vapor concentration for each half-hour period. The approach is based on the assumption that stomatal fluxes (i.e. photosynthesis and transpiration) lead to perfectly negative correlations and non-stomatal fluxes (i.e. ecosystem respiration and direct evaporation) lead to perfectly positive correlations within the CO2 and water vapor high frequency time series measured above forest canopies. A mathematical framework is presented, followed by a proof of concept using eddy covariance data and leaf-level measurements of WUE.
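A minimal sketch of extracting, for one half-hour averaging period, the five statistics the method needs from high-frequency CO2 and water-vapor series; the series below are synthetic, and the partitioning equations themselves are not reproduced:

```python
import numpy as np

def half_hour_statistics(w, c, q):
    """w: vertical wind speed, c: CO2 concentration, q: water vapor concentration,
    all sampled at high frequency (e.g. 10 Hz) over one half-hour period."""
    w, c, q = (np.asarray(x, dtype=float) for x in (w, c, q))
    wp, cp, qp = w - w.mean(), c - c.mean(), q - q.mean()
    return {
        "co2_flux": np.mean(wp * cp),          # eddy-covariance CO2 flux
        "h2o_flux": np.mean(wp * qp),          # eddy-covariance water vapor flux (ET)
        "sigma_c": cp.std(ddof=1),             # standard deviation of CO2
        "sigma_q": qp.std(ddof=1),             # standard deviation of water vapor
        "rho_cq": np.corrcoef(cp, qp)[0, 1],   # CO2-water vapor correlation coefficient
    }

rng = np.random.default_rng(0)
n = 10 * 60 * 30   # 10 Hz for 30 minutes
print(half_hour_statistics(rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)))
```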
Dataset for an analysis of communicative aspects of finance.
Natalya Zavyalova
2017-04-01
The article describes a step-by-step strategy for designing a universal, comprehensive vision of the vast majority of financial research topics. The strategy is centred on analysis of the retrieval results of the word-processing system Serelex, which is based on a semantic similarity measure. When designing a research topic, scientists usually draw on their individual background, relying in most cases on their individual assumptions and hypotheses. The strategy introduced in the article highlights a method for identifying components of semantic maps that can lead to better coverage of any scientific topic under analysis. Using the research field of finance as an example, we show the practical and theoretical value of semantic similarity measurements, i.e., better coverage of the problems that might be included in a scientific analysis of the financial field. At the design stage of any research, scientists are not immune to an insufficient and thus erroneous spectrum of problems under analysis. According to the famous maxim of St. Augustine, 'Fallor ergo sum', researchers' activities are driven along the way from one mistake to another. However, this might not be the case for a 21st-century approach to science. Our strategy offers an innovative methodology by which the number of mistakes at the initial stage of any research may be significantly reduced. The data obtained were used in two articles (N. Zavyalova, 2017) [7], (N. Zavyalova, 2015) [8]. The second stage of our experiment was directed towards analyzing the correlation between the language and income level of the respondents. The article contains information about the data processing.
Theoretical study of water-gas shift reaction on the silver nanocluster
NASA Astrophysics Data System (ADS)
Arab, Ali; Sharafie, Darioush; Fazli, Mostafa
2017-10-01
The kinetics of the water-gas shift reaction (WGSR) on a silver nanocluster was investigated using density functional theory according to the carboxyl associative mechanism. The hybrid B3PW91 functional along with the 6-31+G* and LANL2DZ basis sets was used throughout the calculations. It was observed that CO and H2O molecules adsorb physically on the Ag5 cluster without an energy barrier in the initial steps of the WGSR. The next three steps, H2Oads dissociation, carboxyl (OCOHads) formation, and CO2(ads) formation, were each accompanied by an activation barrier. The transition states and energy profiles of these three steps were determined and analyzed. Our results revealed that carboxyl and CO2(ads) formation were fast steps, whereas H2Oads dissociation was the slowest step of the WGSR.
Gu, Di; Gao, Simeng; Jiang, TingTing; Wang, Baohui
2017-03-15
To match the relentless pursuit of three research hotspots - efficient solar utilization, green and sustainable remediation of wastewater, and advanced oxidation processes - solar-mediated thermo-electrochemical oxidation of a surfactant was proposed and developed for the green remediation of surfactant wastewater. The solar thermal electrochemical process (STEP), fully driven by solar energy converted to electric energy and heat and without the input of any other energy, sustainably serves as an efficient thermo-electrochemical oxidation of a surfactant, exemplified by SDBS, in wastewater with the synergistic production of hydrogen. The electrooxidation-resistant surfactant is thermo-electrochemically oxidized to CO2 while hydrogen gas is generated, by lowering the effective oxidation potential and suppressing the oxidation activation energy, an effect originating from the combination of thermochemical and electrochemical contributions. A clear conclusion on the mechanism of SDBS degradation can be proposed and discussed based on the theoretical analysis of the electrochemical potential by a quantum chemical method and on experimental analysis of the CV, TG, GC, FT-IR, UV-vis, fluorescence spectra and TOC. The degradation data provide a pilot for the treatment of SDBS wastewater; degradation appears to occur via desulfonation followed by aromatic-ring opening. The solar thermal utilization that can initiate the desulfonation and activation of SDBS is one key step in the degradation process.
Gu, Di; Gao, Simeng; Jiang, TingTing; Wang, Baohui
2017-01-01
To match the relentless pursuit of three research hotspots - efficient solar utilization, green and sustainable remediation of wastewater, and advanced oxidation processes - solar-mediated thermo-electrochemical oxidation of a surfactant was proposed and developed for the green remediation of surfactant wastewater. The solar thermal electrochemical process (STEP), fully driven by solar energy converted to electric energy and heat and without the input of any other energy, sustainably serves as an efficient thermo-electrochemical oxidation of a surfactant, exemplified by SDBS, in wastewater with the synergistic production of hydrogen. The electrooxidation-resistant surfactant is thermo-electrochemically oxidized to CO2 while hydrogen gas is generated, by lowering the effective oxidation potential and suppressing the oxidation activation energy, an effect originating from the combination of thermochemical and electrochemical contributions. A clear conclusion on the mechanism of SDBS degradation can be proposed and discussed based on the theoretical analysis of the electrochemical potential by a quantum chemical method and on experimental analysis of the CV, TG, GC, FT-IR, UV-vis, fluorescence spectra and TOC. The degradation data provide a pilot for the treatment of SDBS wastewater; degradation appears to occur via desulfonation followed by aromatic-ring opening. The solar thermal utilization that can initiate the desulfonation and activation of SDBS is one key step in the degradation process. PMID:28294180
NASA Astrophysics Data System (ADS)
Gu, Di; Gao, Simeng; Jiang, Tingting; Wang, Baohui
2017-03-01
To match the relentless pursuit of three research hotspots - efficient solar utilization, green and sustainable remediation of wastewater, and advanced oxidation processes - solar-mediated thermo-electrochemical oxidation of a surfactant was proposed and developed for the green remediation of surfactant wastewater. The solar thermal electrochemical process (STEP), fully driven by solar energy converted to electric energy and heat and without the input of any other energy, sustainably serves as an efficient thermo-electrochemical oxidation of a surfactant, exemplified by SDBS, in wastewater with the synergistic production of hydrogen. The electrooxidation-resistant surfactant is thermo-electrochemically oxidized to CO2 while hydrogen gas is generated, by lowering the effective oxidation potential and suppressing the oxidation activation energy, an effect originating from the combination of thermochemical and electrochemical contributions. A clear conclusion on the mechanism of SDBS degradation can be proposed and discussed based on the theoretical analysis of the electrochemical potential by a quantum chemical method and on experimental analysis of the CV, TG, GC, FT-IR, UV-vis, fluorescence spectra and TOC. The degradation data provide a pilot for the treatment of SDBS wastewater; degradation appears to occur via desulfonation followed by aromatic-ring opening. The solar thermal utilization that can initiate the desulfonation and activation of SDBS is one key step in the degradation process.
Theoretical Analysis on Mechanical Deformation of Membrane-Based Photomask Blanks
NASA Astrophysics Data System (ADS)
Marumoto, Kenji; Aya, Sunao; Yabe, Hedeki; Okada, Tatsunori; Sumitani, Hiroaki
2012-04-01
Membrane-based photomasks are used in proximity X-ray lithography, including in the LIGA (Lithographie, Galvanoformung und Abformung) process, and in near-field photolithography. In this article, the out-of-plane deformation (OPD) and in-plane displacement (IPD) of membrane-based photomask blanks are theoretically analyzed in order to obtain mask blanks with a flat front surface and a low-stress absorber film. First, we derived equations for the OPD and IPD for the processing steps of a membrane-based photomask, such as film deposition, back-etching and bonding, using the theory of symmetrical bending of circular plates with a coaxial circular hole and that of the deformation of a cylinder under hydrostatic pressure. The validity of the equations was proved by comparing the calculated results with experimental ones. Using these equations, we investigated the general relation between the geometry of the mask blanks and the distortions, and gave the criterion for attaining a flat front surface. Moreover, the absorber stress-bias required to obtain zero stress on finished mask blanks was also calculated, and it was found that only a small stress-bias is required for an adequate hole size of the support plate.
Spectral Mass Gauging of Unsettled Liquid with Acoustic Waves
NASA Technical Reports Server (NTRS)
Feller, Jeffrey; Kashani, Ali; Khasin, Michael; Muratov, Cyrill; Osipov, Viatcheslav; Sharma, Surendra
2018-01-01
Propellant mass gauging is one of the key technologies required to enable the next step in NASA's space exploration program. At present, there is no reliable method to accurately measure the amount of unsettled liquid propellant of an unknown configuration in a propellant tank in micro- or zero gravity. We propose a new approach to use sound waves to probe the resonance frequencies of the two-phase liquid-gas mixture and take advantage of the mathematical properties of the high frequency spectral asymptotics to determine the volume fraction of the tank filled with liquid. We report the current progress in exploring the feasibility of this approach, both experimental and theoretical. Excitation and detection procedures using solenoids for excitation and both hydrophones and accelerometers for detection have been developed. A 3% uncertainty for mass-gauging was demonstrated for a 200-liter tank partially filled with water for various unsettled configurations, such as tilts and artificial ullages. A new theoretical formula for the counting function associated with axially symmetric modes was derived. Scaling analysis of the approach has been performed to predict an adequate performance for in-space applications.
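The high-frequency asymptotics invoked here are of the Weyl-law type; as a schematic (the standard single-cavity form, not the paper's axisymmetric-mode formula), the number of acoustic modes of the gas-filled volume below frequency f grows as

N(f) \simeq \frac{4\pi}{3} V_{\mathrm{gas}} \left(\frac{f}{c_{\mathrm{gas}}}\right)^{3} + \text{lower-order terms},

so, under that assumption, fitting the measured mode-counting function against f^3 yields the ullage volume, and the liquid fill fraction follows as 1 - V_gas / V_tank.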
A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
2018-06-12
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
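A compact sketch of the closed-form Kronecker kernel ridge regression solution via eigendecomposition; the notation, shapes and toy kernels are mine, not the paper's:

```python
import numpy as np

def kronecker_krr_fit_predict(K_rows, K_cols, Y, lam):
    """Compute F = reshape[(K_cols ⊗ K_rows)(K_cols ⊗ K_rows + lam*I)^{-1} vec(Y)]
    without forming the Kronecker product.  Y has shape (n_rows, n_cols); K_rows and
    K_cols are the kernel matrices over the row and column objects, respectively."""
    lam_r, U = np.linalg.eigh(K_rows)
    lam_c, V = np.linalg.eigh(K_cols)
    C = U.T @ Y @ V                                       # targets in the eigenbasis
    A = U @ (C / (np.outer(lam_r, lam_c) + lam)) @ V.T    # dual coefficients
    return K_rows @ A @ K_cols                            # fitted pairwise values

rng = np.random.default_rng(0)
X_r, X_c = rng.normal(size=(30, 4)), rng.normal(size=(20, 3))
K_rows = np.exp(-0.5 * np.sum((X_r[:, None] - X_r[None]) ** 2, axis=-1))  # Gaussian kernel
K_cols = np.exp(-0.5 * np.sum((X_c[:, None] - X_c[None]) ** 2, axis=-1))
Y = rng.normal(size=(30, 20))
print(kronecker_krr_fit_predict(K_rows, K_cols, Y, lam=1.0).shape)  # (30, 20)
```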
Enhancement of optical polarization degree of AlGaN quantum wells by using staggered structure.
Wang, Weiying; Lu, Huimin; Fu, Lei; He, Chenguang; Wang, Mingxing; Tang, Ning; Xu, Fujun; Yu, Tongjun; Ge, Weikun; Shen, Bo
2016-08-08
Staggered AlGaN quantum wells (QWs) are designed to enhance the transverse-electric (TE) polarized optical emission in deep ultraviolet (DUV) light- emitting diodes (LED). The optical polarization properties of the conventional and staggered AlGaN QWs are investigated by a theoretical model based on the k·p method as well as polarized photoluminescence (PL) measurements. Based on an analysis of the valence subbands and momentum matrix elements, it is found that AlGaN QWs with step-function-like Al content in QWs offers much stronger TE polarized emission in comparison to that from conventional AlGaN QWs. Experimental results show that the degree of the PL polarization at room temperature can be enhanced from 20.8% of conventional AlGaN QWs to 40.2% of staggered AlGaN QWs grown by MOCVD, which is in good agreement with the theoretical simulation. It suggests that polarization band engineering via staggered AlGaN QWs can be well applied in high efficiency AlGaN-based DUV LEDs.
Information Fluxes as Concept for Categorizations of Life
NASA Astrophysics Data System (ADS)
Hildenbrand, Georg; Hausmann, M.
2012-05-01
Definitions of life are controversially discussed; however, they are mostly depending on bio- evolutionary driven arguments. Here, we propose a systematic, theoretical approach to the question what life is, by categorization and classification of different levels of life. This approach is mainly based on the analysis of information flux occurring in systems being suspicious to be alive, and on the analysis of their power of environmental control. In a first step, we show that all biological definitions of life can be derived from basic physical principles of entropy (number of possible states of a thermodynamic system) and of the energy needed for controlling entropic development. In a next step we discuss how any process where information flux is generated, regardless of its materialization is defined and related to classical definitions of life. In a third step we resume the proposed classification scheme in its most basic way, looking only for existence of data storage, its processing, and its environmental control. We join inhere a short discussion how the materialization of information fluxes can take place depending on the special properties of the four basic physical forces. Having done all this we are able to give everybody a classification catalogue at hand that one can categorize the kind of life one is talking about, thus overcoming the obstacles deriving from the simple appearing question whether something is alive or not. On its most basic level as presented here, our scheme offers a categorization for fire, crystals, prions, viruses, spores, up to cells and even tardigrada and cryostases.
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a very interesting theoretical maximal complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
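A minimal sketch of the first stage only, using a plain (linear) unsharp mask as a stand-in for the paper's nonlinear variant; the clustering-based segmentation and the constrained spline optimization of the later stages are not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """Enhance an image by adding back a scaled high-pass residual:
    sharpened = image + amount * (image - blurred)."""
    img = np.asarray(image, dtype=float)
    blurred = gaussian_filter(img, sigma=sigma)
    return img + amount * (img - blurred)

noisy = np.random.default_rng(0).normal(size=(128, 128))   # stand-in for an image patch
print(unsharp_mask(noisy).shape)
```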
Fast-response free-running dc-to-dc converter employing a state-trajectory control law
NASA Technical Reports Server (NTRS)
Huffman, S. D.; Burns, W. W., III; Wilson, T. G.; Owen, H. A., Jr.
1977-01-01
A recently proposed state-trajectory control law for a family of energy-storage dc-to-dc converters has been implemented for the voltage step-up configuration. Two methods of realization are discussed; one employs a digital processor and the other uses analog computational circuits. Performance characteristics of experimental voltage step-up converters operating under the control of each of these implementations are reported and compared to theoretical predictions and computer simulations.
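For readers unfamiliar with the state plane on which such a control law operates, the following is a minimal sketch of the averaged state equations of a voltage step-up (boost) converter integrated with forward Euler. The component values and constant duty ratio are illustrative assumptions, and the sketch does not implement the paper's state-trajectory control law itself.

```python
# Minimal sketch: averaged state-space model of a voltage step-up (boost)
# converter, integrated with forward Euler. Component values are illustrative
# assumptions; the state-trajectory control law itself is not implemented here.
import numpy as np

L, C, R = 200e-6, 100e-6, 10.0      # inductance [H], capacitance [F], load [ohm]
Vin, d = 12.0, 0.5                  # input voltage [V], duty ratio (held constant)
dt, steps = 1e-6, 20000

iL, vC = 0.0, 0.0                   # state: inductor current, capacitor voltage
traj = np.zeros((steps, 2))
for k in range(steps):
    diL = (Vin - (1.0 - d) * vC) / L          # averaged inductor dynamics
    dvC = ((1.0 - d) * iL - vC / R) / C       # averaged capacitor dynamics
    iL += dt * diL
    vC += dt * dvC
    traj[k] = iL, vC                          # trajectory in the (iL, vC) state plane

print("steady-state output = %.1f V (ideal Vin/(1-d) = %.1f V)" % (vC, Vin / (1 - d)))
```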
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
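A minimal simulation in the spirit of the paper's "random chance" scenario is sketched below: sources are drawn at random until every code in a hypothetical population has been observed at least once. The population size and code-observation probability are assumed, illustrative values.

```python
# Minimal sketch of the "random chance" sampling scenario: draw information
# sources at random until every code in a hypothetical population has been
# observed at least once (theoretical saturation). Population parameters are
# illustrative assumptions, not those of the paper.
import numpy as np

rng = np.random.default_rng(0)
n_codes = 30                 # codes present in the population
p_observe = 0.15             # mean probability of observing any given code in a source
n_runs = 2000

sizes = []
for _ in range(n_runs):
    seen = np.zeros(n_codes, dtype=bool)
    n_sources = 0
    while not seen.all():
        n_sources += 1
        seen |= rng.random(n_codes) < p_observe   # codes held by this source
    sizes.append(n_sources)

print("mean sample size to reach saturation:", np.mean(sizes))
```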
Geometrical control of ionic current rectification in a configurable nanofluidic diode.
Alibakhshi, Mohammad Amin; Liu, Binqi; Xu, Zhiping; Duan, Chuanhua
2016-09-01
Control of ionic current in a nanofluidic system and development of the elements analogous to electrical circuits have been the subject of theoretical and experimental investigations over the past decade. Here, we theoretically and experimentally explore a new technique for rectification of ionic current using asymmetric 2D nanochannels. These nanochannels have a rectangular cross section and a stepped structure consisting of a shallow and a deep side. Control of height and length of each side enables us to obtain optimum rectification at each ionic strength. A 1D model based on the Poisson-Nernst-Planck equation is derived and validated against the full 2D numerical solution, and a nondimensional concentration is presented as a function of nanochannel dimensions, surface charge, and the electrolyte concentration that summarizes the rectification behavior of such geometries. The rectification factor reaches a maximum at certain electrolyte concentration predicted by this nondimensional number and decays away from it. This method of fabrication and control of a nanofluidic diode does not require modification of the surface charge and facilitates the integration with lab-on-a-chip fluidic circuits. Experimental results obtained from the stepped nanochannels are in good agreement with the 1D theoretical model.
NASA Astrophysics Data System (ADS)
Zhu, Qing; Zou, Lianfeng; Zhou, Guangwen; Saidi, Wissam A.; Yang, Judith C.
2016-10-01
Understanding of metal oxidation is critical to corrosion control, catalysis synthesis, and advanced materials engineering. The metal oxidation process is rather complicated: many different, often coupled, processes are involved from the onset of the reaction. Since it was first introduced, there has been great success in applying heteroepitaxial theory to oxide growth on a metal surface, as demonstrated in Cu oxidation experiments. In this paper, we review the recent progress in experimental findings on Cu oxidation as well as the advances in theoretical simulations of the Cu oxidation process. We focus on the effects of defects, such as step edges present on realistic metal surfaces, on the oxide growth dynamics. We show that surface steps can change the mass transport of both Cu and O atoms during oxide growth and ultimately lead to the formation of different oxide morphologies. We also review the oxidation of Cu alloys and explore the effect of a secondary element on the oxide growth on a Cu surface. From this review of the work on Cu oxidation, we demonstrate the correlation of theoretical simulations at multiple scales with various experimental techniques.
A qualitative method for analysing multivoicedness
Aveling, Emma-Louise; Gillespie, Alex; Cornish, Flora
2015-01-01
‘Multivoicedness’ and the ‘multivoiced Self’ have become important theoretical concepts guiding research. Drawing on the tradition of dialogism, the Self is conceptualised as being constituted by a multiplicity of dynamic, interacting voices. Despite the growth in literature and empirical research, there remains a paucity of established methodological tools for analysing the multivoiced Self using qualitative data. In this article, we set out a systematic, practical ‘how-to’ guide for analysing multivoicedness. Using theoretically derived tools, our three-step method comprises: identifying the voices of I-positions within the Self’s talk (or text), identifying the voices of ‘inner-Others’, and examining the dialogue and relationships between the different voices. We elaborate each step and illustrate our method using examples from a published paper in which data were analysed using this method. We conclude by offering more general principles for the use of the method and discussing potential applications. PMID:26664292
Frequency domain analysis of errors in cross-correlations of ambient seismic noise
NASA Astrophysics Data System (ADS)
Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri
2016-12-01
We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, an approach that differs from previous time domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data, corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of noise cross-correlation in the frequency domain, without specifying the filter bandwidth or the signal/noise windows that are needed for time domain SNR estimation. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude and SNR of the stacked cross-spectrum obtained using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (˜35 km) and a dense linear array (˜20 m) across the plate-boundary faults. A block bootstrap resampling method is used to account for temporal correlation of the noise cross-spectrum at low frequencies (0.05-0.2 Hz) near the ocean microseismic peaks.
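A minimal sketch of the windowed stacking described here is given below: cross-spectra of two records are computed over non-overlapping windows of fixed length, stacked, and a frequency-domain signal-to-noise measure is formed from the scatter across windows. The window length and the particular SNR definition are assumptions for illustration, not the paper's exact estimator.

```python
# Minimal sketch: stack cross-spectra of two noise records over non-overlapping
# windows and form a frequency-domain signal-to-noise measure from the scatter
# across windows. Window length and the SNR definition are assumptions.
import numpy as np

def stacked_cross_spectrum(u, v, fs, win_len):
    n_win = min(len(u), len(v)) // win_len
    specs = []
    for k in range(n_win):
        seg_u = u[k * win_len:(k + 1) * win_len]
        seg_v = v[k * win_len:(k + 1) * win_len]
        U, V = np.fft.rfft(seg_u), np.fft.rfft(seg_v)
        specs.append(U * np.conj(V))                 # cross-spectrum of one window
    specs = np.array(specs)
    mean_spec = specs.mean(axis=0)                   # stacked (ensemble-averaged) spectrum
    var_spec = specs.var(axis=0) / n_win             # variance of the stack at each frequency
    snr = np.abs(mean_spec) / np.sqrt(var_spec + 1e-30)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return freqs, mean_spec, snr

# Usage with synthetic stationary noise:
rng = np.random.default_rng(1)
u, v = rng.standard_normal(2 ** 18), rng.standard_normal(2 ** 18)
freqs, spec, snr = stacked_cross_spectrum(u, v, fs=100.0, win_len=4096)
```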
Szalma, James L
2014-12-01
Motivation is a driving force in human-technology interaction. This paper represents an effort to (a) describe a theoretical model of motivation in human-technology interaction, (b) provide design principles and guidelines based on this theory, and (c) describe a sequence of steps for the evaluation of motivational factors in human-technology interaction. Motivation theory has been relatively neglected in human factors/ergonomics (HF/E). In both research and practice, the (implicit) assumption has been that the operator is already motivated or that motivation is an organizational concern and beyond the purview of HF/E. However, technology can induce task-related boredom (e.g., automation) that can be stressful and also increase system vulnerability to performance failures. A theoretical model of motivation in human-technology interaction is proposed, based on extension of the self-determination theory of motivation to HF/E. This model provides the basis both for future research and for development of practical recommendations for design. General principles and guidelines for motivational design are described, as well as a sequence of steps for the design process. Human motivation is an important concern for HF/E research and practice. Procedures in the design of both simple and complex technologies can, and should, include the evaluation of motivational characteristics of the task, interface, or system. In addition, researchers should investigate these factors in specific human-technology domains. The theory, principles, and guidelines described here can be incorporated into existing techniques for task analysis and for interface and system design.
French, Simon D; Green, Sally E; O'Connor, Denise A; McKenzie, Joanne E; Francis, Jill J; Michie, Susan; Buchbinder, Rachelle; Schattner, Peter; Spike, Neil; Grimshaw, Jeremy M
2012-04-24
There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a comprehensive intervention development process.
2012-01-01
Background There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. Methods The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? Results A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. Conclusions We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a comprehensive intervention development process. PMID:22531013
Metal hydride hydrogen compression: recent advances and future prospects
NASA Astrophysics Data System (ADS)
Yartys, Volodymyr A.; Lototskyy, Mykhaylo; Linkov, Vladimir; Grant, David; Stuart, Alastair; Eriksen, Jon; Denys, Roman; Bowman, Robert C.
2016-04-01
Metal hydride (MH) thermal sorption compression is one of the more important applications of the MHs. The present paper reviews recent advances in the field based on the analysis of the fundamental principles of this technology. The performances when boosting hydrogen pressure, along with two- and three-step compression units, are analyzed. The paper also includes theoretical modelling of a two-stage compressor, aimed at describing the performance of the experimentally studied systems, at their optimization, and at the design of more advanced MH compressors. Business developments in the field are reviewed for the Norwegian company HYSTORSYS AS and the South African Institute for Advanced Materials Chemistry. Finally, future prospects are outlined, presenting the role of MH compression in the overall development of hydrogen-driven energy systems. The work is based on the analysis of the development of the technology in Europe, the USA and South Africa.
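The thermodynamic principle behind a single MH compression stage can be sketched with the van't Hoff relation between equilibrium plateau pressure and temperature; the enthalpy and entropy values below are illustrative assumptions, not data for any specific alloy discussed in the review.

```python
# Minimal sketch of the principle behind one MH compression stage: the van't
# Hoff relation ln(Peq/P0) = -dH/(R*T) + dS/R links the equilibrium plateau
# pressure to temperature, so heating the hydride bed raises the delivery
# pressure. dH and dS are illustrative assumptions.
import numpy as np

R = 8.314                    # J/(mol K)
dH = 30e3                    # desorption enthalpy [J/mol H2] (assumed)
dS = 110.0                   # desorption entropy [J/(mol H2 K)] (assumed)

def plateau_pressure(T, p0=1.0):
    """Equilibrium plateau pressure in units of p0 (e.g. bar)."""
    return p0 * np.exp(-dH / (R * T) + dS / R)

T_cold, T_hot = 293.0, 423.0          # absorption and desorption temperatures [K]
ratio = plateau_pressure(T_hot) / plateau_pressure(T_cold)
print("single-stage compression ratio = %.1f" % ratio)
```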
NASA Astrophysics Data System (ADS)
Ercikan, Kadriye; Alper, Naim
2009-03-01
This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they may apply to the adaptation of instructional materials. The authors provide a good theoretical analysis of what took place in two translation instances and make an important contribution by taking the first step in providing a systematic discussion of the adaptation of instructional materials. Our discussion proposes procedures for adapting instructional materials and for examining the equivalence of the source and target versions of adapted instructional materials. We highlight that many of the procedures and criteria used in examining the comparability of educational tests are missing in this emerging area of research.
An oilspill trajectory analysis model with a variable wind deflection angle
Samuels, W.B.; Huang, N.E.; Amstutz, D.E.
1982-01-01
The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
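The drift step described here can be sketched as follows: the surface drift is 3.5% of the wind vector rotated clockwise by a deflection angle and is added to the surface current. The abstract does not give the empirical variable-angle formula, so the deflection_angle() function below is a constant-angle placeholder.

```python
# Minimal sketch of the trajectory step described in the abstract: surface
# drift = 3.5% of the wind vector rotated clockwise by a deflection angle,
# added to the surface current. The empirical variable-angle formula is not
# given in the abstract, so deflection_angle() is a placeholder assumption.
import numpy as np

def rotate_clockwise(vec, angle_deg):
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), np.sin(a)],
                    [-np.sin(a), np.cos(a)]])
    return rot @ vec

def deflection_angle(wind_speed):
    # Placeholder: constant 20 degrees; the model instead uses an empirical,
    # wind-speed-dependent formula evaluated at every time step.
    return 20.0

def spill_velocity(wind, current):
    wind = np.asarray(wind, dtype=float)
    drift = 0.035 * rotate_clockwise(wind, deflection_angle(np.linalg.norm(wind)))
    return drift + np.asarray(current, dtype=float)

# One hourly time step of the trajectory (positions and velocities in SI units):
pos = np.array([0.0, 0.0])
pos += spill_velocity(wind=[10.0, 0.0], current=[0.1, 0.05]) * 3600.0
```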
Drop-on-Demand Single Cell Isolation and Total RNA Analysis
Moon, Sangjun; Kim, Yun-Gon; Dong, Lingsheng; Lombardi, Michael; Haeggstrom, Edward; Jensen, Roderick V.; Hsiao, Li-Li; Demirci, Utkan
2011-01-01
Technologies that rapidly isolate viable single cells from heterogeneous solutions have significantly contributed to the field of medical genomics. Challenges remain both to enable efficient extraction, isolation and patterning of single cells from heterogeneous solutions as well as to keep them alive during the process due to a limited degree of control over single cell manipulation. Here, we present a microdroplet based method to isolate and pattern single cells from heterogeneous cell suspensions (10% target cell mixture), preserve viability of the extracted cells (97.0±0.8%), and obtain genomic information from isolated cells compared to the non-patterned controls. The cell encapsulation process is both experimentally and theoretically analyzed. Using the isolated cells, we identified 11 stem cell markers among 1000 genes and compare to the controls. This automated platform enabling high-throughput cell manipulation for subsequent genomic analysis employs fewer handling steps compared to existing methods. PMID:21412416
Metal hydride hydrogen compression: Recent advances and future prospects
Bowman, Jr., Robert C.; Yartys, Volodymyr A.; Lototskyy, Mykhaylo V.; ...
2016-03-17
Metal hydride (MH) thermal sorption compression is one of the more important applications of the metal hydrides. The present paper reviews recent advances in the field based on the analysis of the fundamental principles of this technology. The performances when boosting hydrogen pressure, along with two- and three-step compression units, are analyzed. The paper also includes theoretical modeling of a two-stage compressor, aimed both at describing the performance of the experimentally studied systems and at their optimization and the design of more advanced MH compressors. Business developments in the field are reviewed for the Norwegian company HYSTORSYS AS and the South African Institute for Advanced Materials Chemistry. Finally, future prospects are outlined, presenting the role of metal hydride compression in the overall development of hydrogen-driven energy systems. The work is based on the analysis of the development of the technology in Europe, the USA and South Africa.
Design and development of the mobile game based on the J2ME technology
NASA Astrophysics Data System (ADS)
He, Junhua
2011-12-01
With the continuous improvement of mobile device performance, the trend in the mobile entertainment applications market has become increasingly clear: mobile entertainment applications will be another important growth area after PC entertainment applications. Through a full analysis of current market demand and trends for mobile entertainment applications, the author has accumulated considerable theoretical knowledge and practical experience. New technologies are applied rationally to the design of a mobile entertainment game; the key technologies required for mobile game development are described, the analysis and design of the game are presented, and a complete game is developed. Taking the specific mobile game project "Battle City" as an example, the paper details the basic steps and key elements of developing a mobile game on the J2ME platform, focusing on how object-oriented thinking is used to abstract and encapsulate the game roles and game animation, with specific explanations of the source code.
Design and development of the mobile game based on the J2ME technology
NASA Astrophysics Data System (ADS)
He, JunHua
2012-01-01
With the continuous improvement of mobile device performance, the trend in the mobile entertainment applications market has become increasingly clear: mobile entertainment applications will be another important growth area after PC entertainment applications. Through a full analysis of current market demand and trends for mobile entertainment applications, the author has accumulated considerable theoretical knowledge and practical experience. New technologies are applied rationally to the design of a mobile entertainment game; the key technologies required for mobile game development are described, the analysis and design of the game are presented, and a complete game is developed. Taking the specific mobile game project "Battle City" as an example, the paper details the basic steps and key elements of developing a mobile game on the J2ME platform, focusing on how object-oriented thinking is used to abstract and encapsulate the game roles and game animation, with specific explanations of the source code.
"Why Should I Tell My Business?": An Emerging Theory of Coping and Disclosure in Teens.
DeFrino, Daniela T; Marko-Holguin, Monika; Cordel, Stephanie; Anker, Lauren; Bansa, Melishia; Van Voorhees, Benjamin
2016-01-01
Disclosing predepression feelings of sadness is difficult for teens. Primary care providers are a potential avenue for teens to disclose these feelings and a bridge to mental health care before becoming more seriously ill. To explore how to more effectively recruit teens into a primary care-based, online depression prevention study, we held 5 focus groups with African American and Latino teens (n = 43) from a large Midwestern city. We conducted constant comparative analysis of the data and a theoretical conceptualization of coping and disclosure emerged. Our analysis revealed an internal coping continuum in reaction to sadness and pivotal elements of trust and judgment that either lead teens to disclose or not disclose these feelings. The teens' perspectives show the necessary characteristics of a relationship and comfortable community and virtual settings that can best allow for teens to take the step of disclosing to receive mental health care services.
Villafranca, Alexander; Hamlin, Colin; Rodebaugh, Thomas L; Robinson, Sandra; Jacobsohn, Eric
2017-09-10
Disruptive intraoperative behavior has detrimental effects to clinicians, institutions, and patients. How clinicians respond to this behavior can either exacerbate or attenuate its effects. Previous investigations of disruptive behavior have used survey scales with significant limitations. The study objective was to develop appropriate scales to measure exposure and responses to disruptive behavior. We obtained ethics approval. The scales were developed in a sequence of steps. They were pretested using expert reviews, computational linguistic analysis, and cognitive interviews. The scales were then piloted on Canadian operating room clinicians. Factor analysis was applied to half of the data set for question reduction and grouping. Item response analysis and theoretical reviews ensured that important questions were not eliminated. Internal consistency was evaluated using Cronbach α. Model fit was examined on the second half of the data set using confirmatory factor analysis. Content validity of the final scales was re-evaluated. Consistency between observed relationships and theoretical predictions was assessed. Temporal stability was evaluated on a subsample of 38 respondents. A total of 1433 and 746 clinicians completed the exposure and response scales, respectively. Content validity indices were excellent (exposure = 0.96, responses = 1.0). Internal consistency was good (exposure = 0.93, responses = 0.87). Correlations between the exposure scale and secondary measures were consistent with expectations based on theory. Temporal stability was acceptable (exposure = 0.77, responses = 0.73). We have developed scales measuring exposure and responses to disruptive behavior. They generate valid and reliable scores when surveying operating room clinicians, and they overcome the limitations of previous tools. These survey scales are freely available.
Spectroscopic, thermal analysis and DFT computational studies of salen-type Schiff base complexes.
Ebrahimi, Hossein Pasha; Hadi, Jabbar S; Abdulnabi, Zuhair A; Bolandnazar, Zeinab
2014-01-03
A new series of metal(II) complexes of Co(II), Ni(II), Cu(II), Zn(II), and Pb(II) have been synthesized from a salen-type Schiff base ligand derived from o-vanillin and 4-methyl-1,2-phenylenediamine and characterized by elemental analysis, spectral (IR, UV-Vis, (1)H NMR, (13)C NMR and EI-mass), molar conductance measurements and thermal analysis techniques. The Coats-Redfern method has been utilized to calculate the kinetic and thermodynamic parameters of the metal complexes. The molecular geometry and Mulliken atomic charges of the studied compounds were investigated theoretically by performing density functional theory (DFT) calculations to obtain results comparable to the experimental values. The theoretical (13)C chemical shifts of the studied compounds have been calculated with the B3LYP, PBEPBE and PW91PW91 methods and the standard 6-311+G(d,p) basis set, starting from the optimized geometry. The comparison of the results indicates that B3LYP/6-311+G(d,p) yields good agreement with the observed chemical shifts. The measured low molar conductance values in DMF indicate that the metal complexes are non-electrolytes. The spectral and thermal analysis reveals that all complexes have octahedral geometry except the Cu(II) complex, which can attain a square planar arrangement. The presence of lattice and coordinated water molecules is indicated by the thermograms of the complexes. The thermogravimetric (TG/DTG) analyses confirm high stability for all complexes, followed by thermal decomposition in different steps. Copyright © 2013 Elsevier B.V. All rights reserved.
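As an illustration of the Coats-Redfern treatment mentioned above, the sketch below linearizes a first-order decomposition step, ln[-ln(1-α)/T²] versus 1/T, whose slope gives -Ea/R. The conversion data are synthetic placeholders generated from assumed kinetic parameters, not the complexes' thermograms.

```python
# Minimal sketch of a Coats-Redfern analysis for a first-order decomposition
# step: ln[-ln(1-alpha)/T^2] is linear in 1/T with slope -Ea/R. The alpha(T)
# curve is synthesized from assumed A, Ea and heating rate, purely for
# illustration.
import numpy as np

R = 8.314                                      # J/(mol K)
Ea_true, A, beta = 120e3, 1e8, 10.0 / 60.0     # J/mol, 1/s, heating rate [K/s] (assumed)

T = np.linspace(550.0, 650.0, 40)              # temperature range of the step [K]
g = (A * R * T ** 2) / (beta * Ea_true) * np.exp(-Ea_true / (R * T))  # = -ln(1-alpha)
alpha = 1.0 - np.exp(-g)                       # synthetic conversion curve

y = np.log(-np.log(1.0 - alpha) / T ** 2)      # Coats-Redfern ordinate for n = 1
slope, intercept = np.polyfit(1.0 / T, y, 1)
print("apparent Ea = %.0f kJ/mol (input %.0f kJ/mol)" % (-slope * R / 1e3, Ea_true / 1e3))
```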
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, progressing from a minimum variance (MV) control law to a minimum variance and pole placement coupled (MVPPC) control law and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses, also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also show in practice that the TP control law is the best in dealing with machining instability and machining efficiency, even though the MVPPC control law already provided much better EDM performance than the MV control law. The TP control law also provided burn-free machining.
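The general idea of adaptively tuning the discharge cycle from on-line parameter estimates can be illustrated with a generic sketch: a simple gap-state model is identified by recursive least squares and the next input is chosen from a one-step prediction. This is only an illustration under an assumed model structure and setpoint; it is not the paper's MV, MVPPC or TP control law.

```python
# Generic adaptive-control sketch (assumed model and setpoint, not the paper's
# control laws): identify a gap-state model y_k = a*y_{k-1} + b*u_{k-1} with
# recursive least squares, then choose the next input from a one-step prediction.
import numpy as np

rng = np.random.default_rng(2)
theta = np.zeros(2)                 # estimated [a, b]
P = np.eye(2) * 100.0               # RLS covariance
lam = 0.98                          # forgetting factor
y_prev, u_prev = 0.0, 0.5
setpoint = 0.1                      # desired arcing-pulse ratio (assumed)

for k in range(200):
    # Stand-in "plant": unknown true parameters plus a disturbance term.
    y = 0.7 * y_prev + 0.4 * u_prev + 0.02 * rng.standard_normal()

    # Recursive least squares update of the model parameters.
    phi = np.array([y_prev, u_prev])
    K = P @ phi / (lam + phi @ P @ phi)
    theta += K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

    # One-step-ahead control: choose u so that a*y + b*u = setpoint.
    a_hat, b_hat = theta
    u = (setpoint - a_hat * y) / b_hat if abs(b_hat) > 1e-3 else u_prev
    u = float(np.clip(u, 0.0, 1.0))

    y_prev, u_prev = y, u
```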
Design, analysis and testing of a new piezoelectric tool actuator for elliptical vibration turning
NASA Astrophysics Data System (ADS)
Lin, Jieqiong; Han, Jinguo; Lu, Mingming; Yu, Baojun; Gu, Yan
2017-08-01
A new piezoelectric tool actuator (PETA) for elliptical vibration turning has been developed based on a hybrid flexure hinge connection. Two double parallel four-bar linkage mechanisms and two right circular flexure hinges were chosen to guide the motion. The stiffness along the two input displacement directions was modeled according to the principle of virtual work, and the kinematic analysis was conducted theoretically. Finite element analysis was used to carry out static and dynamic analyses. To evaluate the performance of the developed PETA, off-line experimental tests were carried out to investigate the step responses, motion strokes, resolutions, parasitic motions, and natural frequencies of the PETA along the two input directions. The relationship between input displacement and output displacement, as well as the tool tip's elliptical trajectory at different phase shifts, was analyzed. By using the developed PETA mechanism, micro-dimple patterns were generated as a preliminary application to demonstrate the feasibility and efficiency of PETA for elliptical vibration turning.
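The relationship between the two phase-shifted input displacements and the tool-tip ellipse can be sketched as below; the amplitudes, frequency and phase shift are assumed, illustrative values.

```python
# Minimal sketch: the tool-tip ellipse in elliptical vibration turning is traced
# by two phase-shifted sinusoidal input displacements. Amplitudes, frequency and
# phase shift below are assumed, illustrative values.
import numpy as np

f = 1000.0                      # vibration frequency [Hz]
Ax, Ay = 5.0, 3.0               # displacement amplitudes along the two inputs [um]
phase = np.pi / 2               # phase shift between the two inputs

t = np.linspace(0.0, 1.0 / f, 500)
x = Ax * np.sin(2 * np.pi * f * t)           # input direction 1
y = Ay * np.sin(2 * np.pi * f * t + phase)   # input direction 2
# (x, y) traces one elliptical cycle of the tool tip; changing `phase`
# changes the ellipse shape, as studied in the paper.
```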
Protein analysis by time-resolved measurements with an electro-switchable DNA chip
Langer, Andreas; Hampel, Paul A.; Kaiser, Wolfgang; Knezevic, Jelena; Welte, Thomas; Villa, Valentina; Maruyama, Makiko; Svejda, Matej; Jähner, Simone; Fischer, Frank; Strasser, Ralf; Rant, Ulrich
2013-01-01
Measurements in stationary or mobile phases are fundamental principles in protein analysis. Although the immobilization of molecules on solid supports allows for the parallel analysis of interactions, properties like size or shape are usually inferred from the molecular mobility under the influence of external forces. However, as these principles are mutually exclusive, a comprehensive characterization of proteins usually involves a multi-step workflow. Here we show how these measurement modalities can be reconciled by tethering proteins to a surface via dynamically actuated nanolevers. Short DNA strands, which are switched by alternating electric fields, are employed as capture probes to bind target proteins. By swaying the proteins over nanometre amplitudes and comparing their motional dynamics to a theoretical model, the protein diameter can be quantified with Angström accuracy. Alterations in the tertiary protein structure (folding) and conformational changes are readily detected, and even post-translational modifications are revealed by time-resolved molecular dynamics measurements. PMID:23839273
Xia, Dunzhu; Kong, Lun; Gao, Haiyu
2015-07-13
We present in this paper a novel fully decoupled silicon micromachined tri-axis linear vibratory gyroscope. The proposed gyroscope structure is highly symmetrical and can be limited to an area of about 8.5 mm × 8.5 mm. It can differentially detect the angular velocities about three axes at the same time. By elaborately arranging different beams, anchors and sensing frames, the drive and sense modes are fully decoupled from each other. Moreover, the quadrature error correction and frequency tuning functions are taken into consideration in the structure design for all the sense modes. Since there exists an unwanted in-plane rotational mode, theoretical analysis is implemented to eliminate it. To accelerate the mode matching process, the particle swarm optimization (PSO) algorithm is adopted and a frequency split of 149 Hz is first achieved by this method. Then, after two steps of manual adjustment of the springs' dimensions, the frequency gap is further decreased to 3 Hz. With the help of the finite element method (FEM) software ANSYS, the natural frequencies of the drive, yaw, and pitch/roll modes are found to be 14,017 Hz, 14,018 Hz and 14,020 Hz, respectively. The cross-axis effect and scale factor of each mode are also simulated. All the simulation results are in good accordance with the theoretical analysis, which means the design is effective and worthy of further investigation on the integration of tri-axis accelerometers on the same single chip to form an inertial measurement unit.
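A minimal particle swarm optimization loop of the kind used for mode matching is sketched below; the two-variable objective is a placeholder standing in for the frequency split as a function of spring dimensions, and all coefficients are assumptions.

```python
# Minimal particle swarm optimization (PSO) loop of the kind used to shrink the
# drive/sense frequency split. The objective below is a placeholder; in the
# paper it would be the frequency split evaluated from the spring dimensions.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Placeholder surrogate for |f_drive - f_sense| as a function of two
    # design variables; replace with an FEM- or formula-based evaluation.
    return (x[0] - 1.2) ** 2 + (x[1] + 0.4) ** 2

n_particles, n_iter, dim = 20, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration coefficients

x = rng.uniform(-2.0, 2.0, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best design variables:", gbest)
```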
Simulation of the MELiSSA closed loop system as a tool to define its integration strategy
NASA Astrophysics Data System (ADS)
Poughon, Laurent; Farges, Berangere; Dussap, Claude-Gilles; Godia, Francesc; Lasseur, Christophe
Inspired by a terrestrial ecosystem, MELiSSA (Micro Ecological Life Support System Alternative) is a project for a closed life support system for future long-term manned missions (Moon and Mars bases). Started by ESA in 1989, this 5-compartment concept has evolved following a mechanistic engineering approach for acquiring both theoretical and technical knowledge. In its current state of development the project can now start to demonstrate the MELiSSA loop concept at a pilot scale. Thus an integration strategy for a MELiSSA Pilot Plant (MPP) was defined, describing the different phases for tests and connections between compartments. The integration steps should start in 2008 and be completed with a fully operational loop in 2015, whose final objective is to achieve a closed liquid and gas loop. Although the integration logic could start with the most advanced processes in terms of knowledge and hardware development, this logic needs to be complemented by extensive simulation. Thanks to this simulation exercise, the effective demonstration of each independent process and its progressive coupling with the others will be performed in operational conditions as close as possible to the final configuration. The theoretical approach described in this paper is based on mass balance models of each of the MELiSSA biological compartments, which are used to simulate each integration step and the complete MPP loop itself. These simulations will help to identify the criticalities of each integration step and to check the consistency between objectives, flows, recycling efficiencies and sizing of the pilot reactors. An MPP scenario compatible with the current knowledge of the operation of the pilot reactors was investigated and the theoretical performances of the system compared to the objectives of the MPP. From this scenario the most important milestone steps in the integration are highlighted and their behaviour can be simulated.
The flexible brain. On mind and brain, neural darwinism and psychiatry.
den Boer, J A
1997-09-01
A theoretical introduction is given in which several theoretical viewpoints concerning the mind-brain problem are discussed. During the last decade philosophers like Searle, Dennett and the Churchlands have taken a more or less pure materialistic position in explaining mental phenomena. Investigators in biological psychiatry have hardly ever taken a clear position in this discussion, whereas we believe it is important that the conclusions drawn from biological research are embedded in a theoretical framework related to the mind-brain problem. In this article the thesis is defended that the theory of neural darwinism represents a major step forward and may bridge previous distinctions between biological, clinical and social psychiatry.
NASA Astrophysics Data System (ADS)
Torregrosa, A. J.; Broatch, A.; Margot, X.; García-Tíscar, J.
2016-08-01
An experimental methodology is proposed to assess the noise emission of centrifugal turbocompressors like those of automotive turbochargers. A step-by-step procedure is detailed, starting from the theoretical considerations of sound measurement in flow ducts and examining specific experimental setup guidelines and signal processing routines. Special care is taken regarding some limiting factors that adversely affect the measuring of sound intensity in ducts, namely calibration, sensor placement and frequency ranges and restrictions. In order to provide illustrative examples of the proposed techniques and results, the methodology has been applied to the acoustic evaluation of a small automotive turbocharger in a flow bench. Samples of raw pressure spectra, decomposed pressure waves, calibration results, accurate surge characterization and final compressor noise maps and estimated spectrograms are provided. The analysis of selected frequency bands successfully shows how different, known noise phenomena of particular interest such as mid-frequency "whoosh noise" and low-frequency surge onset are correlated with operating conditions of the turbocharger. Comparison against external inlet orifice intensity measurements shows good correlation and improvement with respect to alternative wave decomposition techniques.
Constales, Denis; Yablonsky, Gregory S.; Wang, Lucun; ...
2017-04-25
This paper presents a straightforward and user-friendly procedure for extracting a reactivity characterization of catalytic reactions on solid materials under non-steady-state conditions, particularly in temporal analysis of products (TAP) experiments. The kinetic parameters derived by this procedure can help with the development of detailed mechanistic understanding. The procedure consists of the following two major steps: 1) three "Laplace reactivities" are first determined based on the moments of the exit flow pulse response data; 2) depending on the selected kinetic model, kinetic constants of elementary reaction steps can then be expressed as a function of the reactivities and determined accordingly. In particular, we distinguish two calculation methods based on the availability and reliability of reactant and product data. The theoretical results are illustrated using a reverse example with given parameters as well as an experimental example of CO oxidation over a supported Au/SiO2 catalyst. The procedure presented here provides an efficient tool for kinetic characterization of many complex chemical reactions.
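Step 1 of the procedure relies on time moments of the exit-flow pulse response; a minimal sketch of their numerical evaluation is given below, with a synthetic pulse in place of real TAP data.

```python
# Minimal sketch of step 1: compute time moments of an exit-flow pulse response
# F(t), from which the "Laplace reactivities" are derived. The pulse below is
# synthetic; real TAP data would replace it.
import numpy as np

t = np.linspace(0.0, 2.0, 2000)                 # time [s]
F = t * np.exp(-5.0 * t)                        # synthetic exit-flow pulse shape

M0 = np.trapz(F, t)                             # zeroth moment (pulse area)
M1 = np.trapz(t * F, t)                         # first moment
M2 = np.trapz(t ** 2 * F, t)                    # second moment
mean_residence_time = M1 / M0

print("M0=%.4f  M1=%.4f  M2=%.4f  <t>=%.3f s" % (M0, M1, M2, mean_residence_time))
```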
Ye, Xin; Jiang, Xiaodong; Huang, Jin; Geng, Feng; Sun, Laixi; Zu, Xiaotao; Wu, Weidong; Zheng, Wanguo
2015-01-01
Fused silica subwavelength structures (SWSs) with an average period of ~100 nm were fabricated using an efficient approach based on one-step self-masking reactive ion etching. The subwavelength structures exhibited excellent broadband antireflection properties from the ultraviolet to near-infrared wavelength range. These properties are attributable to the graded refractive index for the transition from air to the fused silica substrate that is produced by the ideal nanocone subwavelength structures. The transmittance in the 400–700 nm range increased from approximately 93% for the polished fused silica to greater than 99% for the subwavelength structure layer on fused silica. Achieving broadband antireflection in the visible and near-infrared wavelength range by appropriate matching of the SWS heights on the front and back sides of the fused silica is a novel strategy. The measured antireflection properties are consistent with the results of theoretical analysis using a finite-difference time-domain (FDTD) method. This method is also applicable to diffraction grating fabrication. Moreover, the surface of the subwavelength structures exhibits significant superhydrophilic properties. PMID:26268896
Incorporation of membrane potential into theoretical analysis of electrogenic ion pumps.
Reynolds, J A; Johnson, E A; Tanford, C
1985-01-01
The transport rate of an electrogenic ion pump, and therefore also the current generated by the pump, depends on the potential difference (delta psi) between the two sides of the membrane. This dependence arises from at least three sources: (i) charges carried across the membrane by the transported ions; (ii) protein charges in the ion binding sites that alternate between exposure to (and therefore electrical contact with) the two sides of the membrane; (iii) protein charges or dipoles that move within the domain of the membrane as a result of conformational changes linked to the transport cycle. Quantitative prediction of these separate effects requires presently unavailable molecular information, so that there is great freedom in assigning voltage dependence to individual steps of a transport cycle when one attempts to make theoretical calculations of physiological behavior for an ion pump for which biochemical data (mechanism, rate constants, etc.) are already established. The need to make kinetic behavior consistent with thermodynamic laws, however, limits this freedom, and in most cases two points on a curve of rate versus delta psi will be fixed points independent of how voltage dependence is assigned. Theoretical discussion of these principles is illustrated by reference to ATP-driven Na,K pumps. Physiological data for this system suggest that all three of the possible mechanisms for generating voltage dependence do in fact make significant contributions. PMID:2413447
Detection Identification and Quantification of Keto-Hydroperoxides in Low-Temperature Oxidation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Nils; Moshammer, Kai; Jasper, Ahren W.
2017-07-01
Keto-hydroperoxides are reactive, partially oxidized intermediates that play a central role in chain-branching reactions during the low-temperature oxidation of hydrocarbons. In this Perspective, we outline how these short-lived species can be detected, identified, and quantified using integrated experimental and theoretical approaches. The procedures are based on direct molecular-beam sampling from reactive environments, followed by mass spectrometry with single-photon ionization, identification of fragmentation patterns, and theoretical calculations of ionization thresholds, fragment appearance energies, and photoionization cross sections. Using the oxidation of neo-pentane and tetrahydrofuran as examples, the individual steps of the experimental approaches are described in depth together with a detailed description of the theoretical efforts. For neo-pentane, the experimental data are consistent with the calculated ionization and fragment appearance energies of the keto-hydroperoxide, thus adding confidence to the analysis routines and the employed levels of theory. For tetrahydrofuran, multiple keto-hydroperoxide isomers are possible due to the presence of nonequivalent O2 addition sites. Despite this additional complexity, the experimental data allow for the identification of two to four keto-hydroperoxides. Mole fraction profiles of the keto-hydroperoxides, which are quantified using calculated photoionization cross sections, are provided together with estimated uncertainties as a function of the temperature of the reactive mixture and can serve as validation targets for chemically detailed mechanisms.
Hagger, Martin S; Chan, Derwin K C; Protogerou, Cleo; Chatzisarantis, Nikos L D
2016-08-01
Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs from social cognitive theories is important to test nomological validity, account for mediation effects, and evaluate unique effects of theory constructs independent of past behavior. We illustrate our points by conducting new analyses of two meta-analyses of a popular theory applied to health behaviors, the theory of planned behavior. We conducted meta-analytic path analyses of the theory in two behavioral contexts (alcohol and dietary behaviors) using data from the primary studies included in the original meta-analyses augmented to include intercorrelations among constructs and relations with past behavior missing from the original analysis. Findings supported the nomological validity of the theory and its hypotheses for both behaviors, confirmed important model processes through mediation analysis, demonstrated the attenuating effect of past behavior on theory relations, and provided estimates of the unique effects of theory constructs independent of past behavior. Our analysis illustrates the importance of conducting a simultaneous test of theory-stipulated effects in meta-analyses of social cognitive theories applied to health behavior. We recommend researchers adopt this analytic procedure when synthesizing evidence across primary tests of social cognitive theories in health. Copyright © 2016 Elsevier Inc. All rights reserved.
Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces.
Abu-Alqumsan, Mohammad; Peer, Angelika
2016-06-01
Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
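For context, the CCA baseline that the analysis builds on can be sketched as follows: multi-channel EEG is correlated with sine/cosine reference templates at each candidate stimulus frequency and the best-matching frequency is selected. The sampling rate, channel count and candidate frequencies are synthetic assumptions.

```python
# Minimal sketch of standard CCA-based SSVEP detection: correlate multi-channel
# EEG with sine/cosine reference templates at each candidate stimulus frequency
# and pick the best-matching frequency. Data and parameters are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, duration = 250.0, 2.0
t = np.arange(0.0, duration, 1.0 / fs)

def reference(freq, n_harmonics=2):
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def detect(eeg, candidate_freqs):
    """eeg: (n_samples, n_channels). Returns the frequency with the largest
    first canonical correlation."""
    scores = []
    for f in candidate_freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, reference(f))
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic 4-channel EEG containing a 12 Hz SSVEP plus noise:
rng = np.random.default_rng(4)
eeg = (0.5 * np.sin(2 * np.pi * 12.0 * t)[:, None]
       + rng.standard_normal((t.size, 4)))
print("detected frequency:", detect(eeg, [8.0, 10.0, 12.0, 15.0]))
```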
Finite Volume Method for Pricing European Call Option with Regime-switching Volatility
NASA Astrophysics Data System (ADS)
Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM
2018-03-01
In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme can revert to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.
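Since the scheme is shown to revert to the non-switching Black-Scholes equation, the closed-form European call price is the natural benchmark for that limiting case; a minimal sketch with illustrative parameters follows.

```python
# Minimal sketch of the non-switching benchmark: the closed-form Black-Scholes
# European call price, against which a finite volume scheme can be checked when
# the regime-switching is turned off. Parameters are illustrative.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print("call price:", bs_call(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```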
NASA Astrophysics Data System (ADS)
Cheng, Xin-Bing; Liu, Jin-Liang; Zhang, Hong-Bo; Feng, Jia-Huai; Qian, Bao-Liang
2010-07-01
The Blumlein pulse forming line (BPFL), consisting of an inner coaxial pulse forming line (PFL) and an outer coaxial PFL, is widely used in the field of pulsed power, especially for intense electron-beam accelerators (IEBA). The output voltage waveform determines the quality and characteristics of the output beam current of the IEBA. Compared with the conventional BPFL, an IEBA based on a helical type BPFL can increase the duration of the output voltage in the same geometrical volume. However, for the helical type BPFL, the voltage waveform on a matched load may be distorted, which influences the electron-beam quality. In this paper, an IEBA based on a helical type BPFL is studied theoretically. Based on the telegrapher equations of the BPFL, a formula for the output voltage of the IEBA is obtained when the transition section is taken into account, where the transition section is between the middle cylinder of the BPFL and the load. From the theoretical analysis, it is found that the wave impedance and transit time of the transition section considerably influence the main-pulse voltage waveform at the load: a step is formed in front of the main pulse, and a sharp spike is also formed at the end of the main pulse. In order to obtain a well-shaped square waveform at the load and to improve the electron-beam quality of such an accelerator, the wave impedance of the transition section should be equal to that of the inner PFL of the helical type BPFL and the transit time of the transition section should be designed to be as short as possible. Experiments performed on an IEBA with the helical type BPFL show reasonable agreement with the theoretical analysis.
Okumura, Megumi J; Saunders, Mara; Rehm, Roberta S
2015-01-01
Youth and young adults with special healthcare needs (YASHCN) experience challenges during transition from pediatric to adult care. Prior studies have not examined how community and healthcare resources can work together to assist YASHCN in transitioning from child-focused care and services to adult-oriented providers. The aim of this study was to develop a theoretical understanding of how family, healthcare providers and community supports can assist YASHCN during the transition from pediatric to adult healthcare and services. We conducted 41 semi-structured interviews with YASHCN aged 16-25, their family members and healthcare and community providers. We focused our interviews on support mechanisms, both within the traditional healthcare system, and those available in the community. Using grounded theory methods, we performed a multi-step analysis process. The theoretical code "Transition Advocacy" was developed from the data. This theoretical perspective arose from three major categories, which were developed in the analysis: "Fighting for healthcare", "Obtaining resources", and "Getting ready to transition". Transition Advocacy consists of the presence of, or need for, a healthcare "advocate" who did or can assist the YASHCN with the healthcare transition, particularly to navigate complex health or community services. The "advocate" role was performed by family members, healthcare or agency professionals, or sometimes the YASHCN themselves. If advocates were identified, youth were more likely to obtain needed services. Parents, health providers, and community agencies are potentially well-poised to assist transitioning YASHCN. Efforts to encourage development of strong advocacy skills will facilitate better transitions for YASHCN. Copyright © 2015 Elsevier Inc. All rights reserved.
Okumura, Megumi; Saunders, Mara; Rehm, Roberta S.
2015-01-01
Background Youth and young adults with special healthcare needs (YASHCN) experience challenges during transition from pediatric to adult care. Prior studies have not examined how community and healthcare resources can work together to assist YASHCN in transitioning from child-focused care and services to adult-oriented providers. Objective To develop a theoretical understanding of how family, healthcare providers and community supports can assist YASHCN during the transition from pediatric to adult healthcare and services. Design/Methods We conducted 41 semi-structured interviews with YASHCN aged 16-25, their family members and healthcare and community providers. We focused our interviews on support mechanisms, both within the traditional healthcare system, and those available in the community. Using grounded theory methods, we performed a multi-step analysis process. Results The theoretical code “Transition Advocacy” was developed from the data. This theoretical perspective arose from three major categories, which were developed in the analysis: “Fighting for healthcare”, “Obtaining resources”, and “Getting ready to transition”. Transition Advocacy consists of the presence of, or need for, a healthcare ”advocate”’ who did or can assist the YASHCN with the healthcare transition, particularly to navigate complex health or community services. The ”advocate” role was performed by family members, healthcare or agency professionals, or sometimes the YASHCN themselves. If advocates were identified, youth were more likely to obtain needed services. Conclusions Parents, health providers, and community agencies are potentially well-poised to assist transitioning YASHCN. Efforts to encourage development of strong advocacy skills will facilitate better transitions for YASHCN. PMID:26228309
Escobedo-González, René; Méndez-Albores, Abraham; Villarreal-Barajas, Tania; Aceves-Hernández, Juan Manuel; Miranda-Ruvalcaba, René; Nicolás-Vázquez, Inés
2016-07-21
Theoretical studies of 8-chloro-9-hydroxy-aflatoxin B₁ (2) were carried out by density functional theory (DFT). This molecule is the reaction product of the treatment of aflatoxin B₁ (1) with hypochlorous acid from neutral electrolyzed water. Determination of the structural, electronic and spectroscopic properties of the reaction product allowed its theoretical characterization. In order to elucidate the formation process of 2, two reaction pathways were evaluated: the first one considering only ionic species (Cl⁺ and OH⁻) and the second one taking into account the entire hypochlorous acid molecule (HOCl). Both pathways were studied theoretically in the gas and solution phases. In the first suggested pathway, the reaction involves the addition of a chlorenium ion to 1, forming a non-classical carbocation assisted by the anchimeric effect of the nearest aromatic system, followed by a nucleophilic attack on the intermediate by the hydroxide ion. In the second studied pathway, the first step considered is the attack of the double bond of the furanic moiety of 1 on the hypochlorous acid, giving the same non-classical carbocation, and again, in the second step, a nucleophilic attack by the hydroxide ion. In order to validate both reaction pathways, the atomic charges, the highest occupied molecular orbital and the lowest unoccupied molecular orbital were obtained for both substrate and product. The corresponding data imply that the C₉ atom is the most suitable site of the substrate to interact with the hydroxide ion. It was demonstrated by theoretical calculations that a vicinal, anti chlorohydrin is produced in the terminal furan ring. Data for the studied compound indicate an important reduction in the cytotoxic and genotoxic potential of the target molecule, as demonstrated previously by our research group using different in vitro assays.
NASA Astrophysics Data System (ADS)
Suárez Araujo, Carmen Paz; Barahona da Fonseca, Isabel; Barahona da Fonseca, José; Simões da Fonseca, J.
2004-08-01
A theoretical approach that aims at identifying the information processing that may be responsible for the emotional dimensions of subjective experience is studied as an initial step in the construction of a neural net model of the affective dimensions of psychological experience. In this paper it is suggested that a form of oriented recombination of attributes can be present not only in perceptive processing but also in cognitive processing. We present an analysis of the most important emotion theories, show their neural organization, and propose the neural computation approach as an appropriate framework for generating knowledge about the neural basis of emotional experience. Finally, we present a scheme corresponding to a framework for designing a computational neural multi-system for emotion (CONEMSE).
Phase control of attosecond pulses in a train
NASA Astrophysics Data System (ADS)
Guo, Chen; Harth, Anne; Carlström, Stefanos; Cheng, Yu-Chen; Mikaelsson, Sara; Mårsell, Erik; Heyl, Christoph; Miranda, Miguel; Gisselbrecht, Mathieu; Gaarde, Mette B.; Schafer, Kenneth J.; Mikkelsen, Anders; Mauritsson, Johan; Arnold, Cord L.; L'Huillier, Anne
2018-02-01
Ultrafast processes in matter can be captured and even controlled by using sequences of few-cycle optical pulses, which need to be well characterized, both in amplitude and phase. The same degree of control has not yet been achieved for few-cycle extreme ultraviolet pulses generated by high-order harmonic generation (HHG) in gases, with duration in the attosecond range. Here, we show that by varying the spectral phase and carrier-envelope phase (CEP) of a high-repetition rate laser, using dispersion in glass, we achieve a high degree of control of the relative phase and CEP between consecutive attosecond pulses. The experimental results are supported by a detailed theoretical analysis based upon the semi-classical three-step model for HHG.
Research of flaw image collecting and processing technology based on multi-baseline stereo imaging
NASA Astrophysics Data System (ADS)
Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan
2008-03-01
Aiming at the practical requirements of gun bore flaw image collection, such as accurate optical design, complex algorithms and precise technical demands, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly includes a computer, an electrical control box, a stepping motor and a CCD camera, and it performs image collection, stereo matching, 3-D information reconstruction and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that it can effectively resolve the matching ambiguity produced by uniform or repeated surface textures. At the same time, the system achieves faster measurement speed and higher measurement precision.
NASA Technical Reports Server (NTRS)
Horton, Kent; Huffman, Mitch; Eppic, Brian; White, Harrison
2005-01-01
Path Loss Measurements were obtained on three (3) GPS equipped 757 aircraft. Systems measured were Marker Beacon, LOC, VOR, VHF (3), Glide Slope, ATC (2), DME (2), TCAS, and GPS. This data will provide the basis for assessing the EMI (Electromagnetic Interference) safety margins of comm/nav (communication and navigation) systems to portable electronic device emissions. These Portable Electronic Devices (PEDs) include all devices operated in or around the aircraft by crews, passengers, servicing personnel, as well as the general public in the airport terminals. EMI assessment capability is an important step in determining if one system-wide PED EMI policy is appropriate. This data may also be used comparatively with theoretical analysis and computer modeling data sponsored by NASA Langley Research Center and others.
[10 years of feminist intervention in Quebec: statement and perspectives].
Bourgon, M; Corbeil, C
1990-05-01
The authors start off by reviewing the origins and principal characteristics of feminist therapy as it appeared in the United States at the end of the 1960s. They then analyze the conditions for the emergence of feminist intervention in Québec and its specificity, "feminist intervention" being the term commonly used by Québec practitioners to describe their work with women. Emphasis is placed on the analysis of feminist intervention in institutional settings owing to its remarkable development over the last few years. The article concisely presents the two main theoretical approaches that inspire the intervention, namely the socio-behavioral approach and the awareness approach. Following this brief overview of feminist intervention in Québec, the authors raise questions about its future.
Coupled wave model for large magnet coils
NASA Technical Reports Server (NTRS)
Gabriel, G. J.
1980-01-01
A coupled-wave model based on field theory is developed for the analysis of fast electromagnetic transients on superconducting coils. It is expected to play a useful role in the design of protection methods against damage due to high voltages or any adverse effects that might arise from unintentional transients. The significant parameters of the coil are identified as the turn-to-turn wave coupling coefficients and the travel time of an electromagnetic disturbance around a single turn. Unlike that of a circuit-theoretic inductor, the coil response evolves in discrete steps whose durations equal this travel time; it is during such intervals that high voltages are likely to occur. The model also bridges the gap between the low and high ends of the frequency spectrum.
NASA Astrophysics Data System (ADS)
Voelz, David; Wijerathna, Erandi; Xiao, Xifeng; Muschinski, Andreas
2017-09-01
The analysis of optical propagation through both deterministic and stochastic refractive-index fields may be substantially simplified if diffraction effects can be neglected. With regard to simplification, it is known that certain geometrical-optics predictions often agree well with field observations but it is not always clear why this is so. Here, a new investigation of this issue is presented involving wave optics and geometrical (ray) optics computer simulations of a beam of visible light propagating through fully turbulent, homogeneous and isotropic refractive-index fields. We compare the computationally simulated, aperture-averaged angle-of-arrival variances (for aperture diameters ranging from 0.5 to 13 Fresnel lengths) with theoretical predictions based on the Rytov theory.
Emphasizing language and visualization in teaching linear algebra
NASA Astrophysics Data System (ADS)
Hannah, John; Stewart, Sepideh; Thomas, Mike
2013-06-01
Linear algebra with its rich theoretical nature is a first step towards advanced mathematical thinking for many undergraduate students. In this paper, we consider the teaching approach of an experienced mathematician as he attempts to engage his students with the key ideas embedded in a second-year course in linear algebra. We describe his approach in both lectures and tutorials, and how he employed visualization and an emphasis on language to encourage conceptual thinking. We use Tall's framework of three worlds of mathematical thinking to reflect on the effect of these activities in students' learning. An analysis of students' attitudes to the course and their test and examination results help to answer questions about the value of such an approach, suggesting ways forward in teaching linear algebra.
Sensing a heart infarction marker with surface plasmon resonance spectroscopy
NASA Astrophysics Data System (ADS)
Kunz, Ulrich; Katerkamp, Andreas; Renneberg, Reinhard; Spener, Friedrich; Cammann, Karl
1995-02-01
In this study a direct immunosensor for heart-type fatty acid binding protein (FABP) based on surface plasmon resonance spectroscopy (SPRS) is presented. FABP can be used as a heart infarction marker in clinical diagnostics. The development of a simple and inexpensive direct optical sensor device is reported in this paper, as well as immobilization procedures and optimization of the measuring conditions. The correct operation of the SPRS device was verified by comparing the signals with theoretically calculated values. Two different immunoassay techniques were optimized for sensitive FABP analysis. The competitive immunoassay was superior to the sandwich configuration as it had a lower detection limit (100 ng/ml), required less antibody and could be carried out in one step.
Dynamic analysis and control of lightweight manipulators with flexible parallel link mechanisms
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1991-01-01
The flexible parallel link mechanism is designed for increased rigidity to sustain buckling when carrying a heavy payload. Compared to a one-link flexible manipulator, a two-link flexible manipulator, especially one with a flexible parallel mechanism, has more complicated dynamic and control characteristics. The objective of this research is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model. The step responses of the analytical model and the TREETOPS model match each other well. The nonlinear dynamics is studied using sinusoidal excitation. The effect of actuator dynamics on the flexible robot was investigated and is explained theoretically and experimentally using root loci and Bode plots. As a performance baseline for advanced control schemes, a simple decoupled feedback scheme is applied.
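The SVD-based handling of the constraint Jacobian mentioned above can be illustrated with a short numerical sketch. The snippet below is a generic null-space projection for a holonomically constrained system; it is not the author's code, the matrices M, Q and J are placeholders, and the velocity-dependent constraint term is omitted for brevity.

```python
import numpy as np

def constrained_accelerations(M, Q, J):
    """Null-space projection of constrained dynamics M qdd = Q, J qd = 0.

    The right singular vectors of J associated with (near-)zero singular
    values span the motions admitted by the constraints; the dynamics are
    reduced to that subspace and solved there. The velocity-dependent term
    (Jdot qd) is omitted in this sketch.
    """
    U, s, Vt = np.linalg.svd(J)
    tol = max(J.shape) * np.finfo(float).eps * (s[0] if s.size else 1.0)
    N = Vt[np.count_nonzero(s > tol):].T      # columns span ker(J)
    M_red = N.T @ M @ N                       # reduced mass matrix
    Q_red = N.T @ Q                           # reduced generalized forces
    qdd_independent = np.linalg.solve(M_red, Q_red)
    return N @ qdd_independent                # accelerations in full coordinates
```

Projecting onto the constraint null space keeps the integrated motion consistent with the parallel-link closure at each step, which is the point of using the SVD here.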
NASA Astrophysics Data System (ADS)
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
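For readers unfamiliar with adjoint time-stepping, the toy example below sketches the general pattern: the state equation is marched forward and stored, then a single backward (adjoint) sweep yields the gradient of a scalar objective with respect to a control parameter. The dynamics, objective and dimensions are invented for illustration and are not the cylinder-flow solver used in the paper.

```python
import numpy as np

dt = 0.01

def f(x, u):
    """One explicit step of a toy forced, damped oscillator."""
    A = np.array([[1.0, dt], [-dt, 1.0 - 0.02 * dt]])
    return A @ x + dt * np.array([0.0, u])

def dfdx(x, u):
    return np.array([[1.0, dt], [-dt, 1.0 - 0.02 * dt]])

def dfdu(x, u):
    return dt * np.array([0.0, 1.0])

def g(x):
    """Running contribution to the scalar objective (stand-in for drag)."""
    return x[0] ** 2

def dgdx(x):
    return np.array([2.0 * x[0], 0.0])

def objective_gradient(x0, u, nsteps=500):
    """Forward sweep (states stored), then adjoint marched backwards in time."""
    xs = [x0]
    for _ in range(nsteps):
        xs.append(f(xs[-1], u))
    lam = dgdx(xs[nsteps])                 # terminal adjoint condition
    dJdu = 0.0
    for k in range(nsteps - 1, -1, -1):
        dJdu += dfdu(xs[k], u) @ lam       # control sensitivity via lambda_{k+1}
        if k > 0:
            lam = dgdx(xs[k]) + dfdx(xs[k], u).T @ lam
    return dJdu

grad = objective_gradient(np.array([1.0, 0.0]), u=0.3)
```

The cost of the gradient is one forward and one backward sweep regardless of the number of control parameters, which is what makes adjoint-based sensitivity maps attractive.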
Harmonic analysis of traction power supply system based on wavelet decomposition
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed railways and heavy-haul transport, AC-drive electric locomotives and EMUs operate on a large scale across the country, and the electrified railway has become the main harmonic source in China's power grid. This calls for timely monitoring, assessment and mitigation of the power quality problems of electrified railways. The wavelet transform was developed on the basis of Fourier analysis, and its basic idea comes from harmonic analysis; it rests on a rigorous theoretical model, inherits and develops the local analysis of the Gabor transform, and overcomes disadvantages such as the fixed window and the lack of discrete orthogonality, so that it has become a widely studied spectral analysis tool. Wavelet analysis takes progressively finer time-domain steps in the high-frequency part so as to resolve any detail of the signal being analyzed, thereby allowing a comprehensive analysis of the harmonics of the traction power supply system, while the pyramid algorithm is used to increase the speed of the wavelet decomposition. A MATLAB simulation shows that using wavelet decomposition for harmonic spectrum analysis of the traction power supply system is effective.
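As a concrete illustration of the pyramid (Mallat) decomposition mentioned above, the sketch below decomposes a synthetic traction current containing a 50 Hz fundamental with 3rd and 5th harmonics, using the PyWavelets package rather than MATLAB; the sampling rate, wavelet and signal are assumptions chosen only for the example.

```python
import numpy as np
import pywt  # PyWavelets

fs = 6400                                         # assumed sampling rate, Hz
t = np.arange(0, 0.2, 1.0 / fs)
current = (np.sin(2 * np.pi * 50 * t)             # fundamental
           + 0.20 * np.sin(2 * np.pi * 150 * t)   # 3rd harmonic
           + 0.10 * np.sin(2 * np.pi * 250 * t)   # 5th harmonic
           + 0.02 * np.random.randn(t.size))      # measurement noise

nlev = 5
coeffs = pywt.wavedec(current, 'db4', level=nlev)  # pyramid decomposition
details = coeffs[1:]                               # coarsest detail band first

for pos, d in enumerate(details):
    level = nlev - pos
    lo, hi = fs / 2 ** (level + 1), fs / 2 ** level  # approximate band of this level
    print(f"level {level}: ~{lo:5.0f}-{hi:5.0f} Hz, energy {np.sum(d**2):8.3f}")
```

The energy per detail level gives a coarse harmonic spectrum; finer resolution around individual harmonics would call for a wavelet packet decomposition.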
Analysis of Implicit Uncertain Systems. Part 1: Theoretical Framework
1994-12-07
Analysis of Implicit Uncertain Systems, Part I: Theoretical Framework. Fernando Paganini, John Doyle. December 7, 1994. Abstract (fragment): ... model and a number of constraints relevant to the analysis problem under consideration. In Part I of this paper we propose a theoretical framework which ...
Matsumoto, Atsushi; Tobias, Irwin; Olson, Wilma K
2005-01-01
Fine structural and energetic details embedded in the DNA base sequence, such as intrinsic curvature, are important to the packaging and processing of the genetic material. Here we investigate the internal dynamics of a 200 bp closed circular molecule with natural curvature using a newly developed normal-mode treatment of DNA in terms of neighboring base-pair "step" parameters. The intrinsic curvature of the DNA is described by a 10 bp repeating pattern of bending distortions at successive base-pair steps. We vary the degree of intrinsic curvature and the superhelical stress on the molecule and consider the normal-mode fluctuations of both the circle and the stable figure-8 configuration under conditions where the energies of the two states are similar. To extract the properties due solely to curvature, we ignore other important features of the double helix, such as the extensibility of the chain, the anisotropy of local bending, and the coupling of step parameters. We compare the computed normal modes of the curved DNA model with the corresponding dynamical features of a covalently closed duplex of the same chain length constructed from naturally straight DNA and with the theoretically predicted dynamical properties of a naturally circular, inextensible elastic rod, i.e., an O-ring. The cyclic molecules with intrinsic curvature are found to be more deformable under superhelical stress than rings formed from naturally straight DNA. As superhelical stress is accumulated in the DNA, the frequency, i.e., energy, of the dominant bending mode decreases in value, and if the imposed stress is sufficiently large, a global configurational rearrangement of the circle to the figure-8 form takes place. We combine energy minimization with normal-mode calculations of the two states to decipher the configurational pathway between the two states. We also describe and make use of a general analytical treatment of the thermal fluctuations of an elastic rod to characterize the motions of the minicircle as a whole from knowledge of the full set of normal modes. The remarkable agreement between computed and theoretically predicted values of the average deviation and dispersion of the writhe of the circular configuration adds to the reliability in the computational approach. Application of the new formalism to the computed modes of the figure-8 provides insights into macromolecular motions which are beyond the scope of current theoretical treatments.
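The normal-mode machinery used above can be summarized, in a highly simplified form, by the generic sketch below: at an energy minimum, mode stiffnesses and thermal amplitudes follow from the eigendecomposition of the Hessian of the elastic energy. This is not the authors' base-pair-step code; the Hessian H and the thermal energy kT are placeholders.

```python
import numpy as np

def normal_modes(H, kT=1.0, tol=1e-8):
    """Harmonic (normal-mode) analysis at an energy minimum.

    H  : Hessian of the elastic energy in the chosen coordinates.
    kT : thermal energy in consistent units.
    Returns mode stiffnesses lambda_i, thermal variances <q_i^2> = kT / lambda_i,
    and the mode shapes (columns).
    """
    evals, evecs = np.linalg.eigh(H)
    keep = evals > tol * evals.max()          # drop rigid-body / symmetry modes
    lam = evals[keep]
    return lam, kT / lam, evecs[:, keep]
```

Fluctuations of a collective quantity such as the writhe are then obtained by projecting its gradient onto the mode shapes and summing the per-mode variances, which is the spirit of the comparison with the elastic O-ring theory described above.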
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are not sufficiently accurate and have limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and experimental verification is not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the closed-loop position control of the hydraulic drive unit, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters, working parameters, fluid transmission characteristics and measured friction-velocity curves of the hydraulic drive unit, simulations are carried out on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response curves under different constant loads. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These index values are calculated and shown in histograms under different working conditions, and their patterns are analyzed. The sensitivity index values of four measurable parameters, namely supply pressure, proportional gain, initial position of the servo cylinder piston and load force, are then verified experimentally on a hydraulic drive unit test platform; the experiments show that the sensitivity analysis results obtained through simulation approximate the test results. This research reveals the sensitivity characteristics of each parameter of the hydraulic drive unit and identifies the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
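The two sensitivity indexes described above are simple to compute once baseline and perturbed step responses are available. The sketch below states them explicitly; the signal names are illustrative and the small epsilon guards against division by zero at the start of the step response.

```python
import numpy as np

def sensitivity_indexes(y_base, y_perturbed, eps=1e-9):
    """Two indexes from sampled step responses on the same time grid:
    1) maximum displacement variation percentage,
    2) sum of absolute displacement variations over the sampling time."""
    dy = np.asarray(y_perturbed) - np.asarray(y_base)
    pct = 100.0 * np.abs(dy) / (np.abs(np.asarray(y_base)) + eps)
    return pct.max(), np.sum(np.abs(dy))
```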
A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.
Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing
2017-08-23
Security analysis is a very important issue for digital watermarking. Several years ago, according to Kerckhoffs' principle, the famous four security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of the watermarked-only attack. However, up to now there has been little application of these security-level definitions to the theoretical analysis of the security of SS embedding schemes, owing to the difficulty of the analysis. In this paper, based on the security definition, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes: the classical SS, the improved SS (ISS), the circular extension of ISS, and the nonrobust and robust natural watermarking. The theoretical analysis of these typical SS schemes is successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of the watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.
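To make the schemes concrete, the snippet below sketches the first two of the five embedders analysed, classical additive SS and ISS, in their textbook form; the carrier, strength alpha and host signal are illustrative, and the security analysis itself (deriving the distribution of the watermarked signal) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ss_embed(x, bit, u, alpha=0.5):
    """Classical additive SS: y = x + alpha * b * u, with b in {-1, +1}."""
    return x + alpha * bit * u

def iss_embed(x, bit, u, alpha=0.5, lam=1.0):
    """Improved SS: part of the host projection onto the carrier is removed,
    y = x + (alpha * b - lam * <x,u>/<u,u>) * u."""
    proj = (x @ u) / (u @ u)
    return x + (alpha * bit - lam * proj) * u

n = 1024
host = rng.normal(size=n)                    # host features (e.g. transform coefficients)
carrier = rng.choice([-1.0, 1.0], size=n)    # secret, key-dependent carrier
y_ss = ss_embed(host, +1, carrier)
y_iss = iss_embed(host, +1, carrier)
```

In the watermarked-only attack setting the adversary observes many such y vectors, and the question of what their statistics reveal about the carrier is exactly what the convolution-based analysis above addresses.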
Donelan, J Maxwell; Kram, Rodger; Kuo, Arthur D
2002-12-01
In the single stance phase of walking, center of mass motion resembles that of an inverted pendulum. Theoretically, mechanical work is not necessary for producing the pendular motion, but work is needed to redirect the center of mass velocity from one pendular arc to the next during the transition between steps. A collision model predicts a rate of negative work proportional to the fourth power of step length. Positive work is required to restore the energy lost, potentially exacting a proportional metabolic cost. We tested these predictions with humans (N=9) walking over a range of step lengths (0.4-1.1 m) while keeping step frequency fixed at 1.8 Hz. We measured individual limb external mechanical work using force plates, and metabolic rate using indirect calorimetry. As predicted, average negative and positive external mechanical work rates increased with the fourth power of step length (from 1 W to 38 W; r(2)=0.96). Metabolic rate also increased with the fourth power of step length (from 7 W to 379 W; r(2)=0.95), and linearly with mechanical work rate. Mechanical work for step-to-step transitions, rather than pendular motion itself, appears to be a major determinant of the metabolic cost of walking.
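The fourth-power relation can be checked with a few lines of curve fitting on log-log axes. The numbers below are made up to mimic the reported trend (work rates rising from about 1 W to about 38 W over step lengths of 0.4 to 1.1 m) and are not the study's data.

```python
import numpy as np

step_length = np.array([0.4, 0.55, 0.7, 0.85, 1.0, 1.1])   # m (illustrative)
work_rate = np.array([0.8, 2.8, 7.3, 16.0, 30.0, 44.0])    # W (illustrative)

# Fit W = a * s**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(step_length), np.log(work_rate), 1)
print(f"fitted exponent b = {b:.2f}; the collision model predicts b = 4")
```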
Four Step Model for Experiential Counseling.
ERIC Educational Resources Information Center
Long, Vonda; Scherer, David
As experiential counseling gains wider acceptance, it becomes more important to operate from a structural framework promoting effective and ethical practices. This paper outlines a four-part model of experiential counseling: theoretical foundations, experiential activity and personnel, processing and communication skills, and prerequisites for…
Gedeon, Patrick C; Thomas, James R; Madura, Jeffry D
2015-01-01
Molecular dynamics simulation provides a powerful and accurate method to model protein conformational change, yet timescale limitations often prevent direct assessment of the kinetic properties of interest. A large number of molecular dynamics steps are necessary for rare events to occur, which allow a system to overcome energy barriers and conformationally transition from one potential energy minimum to another. For many proteins, the energy landscape is further complicated by a multitude of potential energy wells, each separated by high free-energy barriers and each potentially representative of a functionally important protein conformation. To overcome these obstacles, accelerated molecular dynamics utilizes a robust bias potential function to simulate the transition between different potential energy minima. This straightforward approach more efficiently samples conformational space in comparison to classical molecular dynamics simulation, does not require advanced knowledge of the potential energy landscape and converges to the proper canonical distribution. Here, we review the theory behind accelerated molecular dynamics and discuss the approach in the context of modeling protein conformational change. As a practical example, we provide a detailed, step-by-step explanation of how to perform an accelerated molecular dynamics simulation using a model neurotransmitter transporter embedded in a lipid cell membrane. Changes in protein conformation of relevance to the substrate transport cycle are then examined using principal component analysis.
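A minimal sketch of the boost potential commonly used in accelerated MD (the form of Hamelberg, Mongan and McCammon) is given below, together with the exponential reweighting needed to recover canonical averages. Whether the chapter's protocol uses exactly this functional form, and the values of E and alpha, are assumptions of the example.

```python
import numpy as np

def amd_boost(V, E, alpha):
    """Accelerated-MD bias: below the threshold E the potential is raised by
    dV = (E - V)^2 / (alpha + E - V), flattening basins and speeding up
    barrier crossings. Returns (biased potential, boost energy)."""
    dV = np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)
    return V + dV, dV

def reweighted_average(observable, dV, kT):
    """Recover a canonical-ensemble average from boosted-ensemble samples."""
    w = np.exp(dV / kT)
    return np.sum(w * observable) / np.sum(w)
```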
Nakayama, Hiroshi; Akiyama, Misaki; Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki
2009-04-01
We present here a method to correlate tandem mass spectra of sample RNA nucleolytic fragments with an RNA nucleotide sequence in a DNA/RNA sequence database, thereby allowing tandem mass spectrometry (MS/MS)-based identification of RNA in biological samples. Ariadne, a unique web-based database search engine, identifies RNA by two probability-based evaluation steps of MS/MS data. In the first step, the software evaluates the matches between the masses of product ions generated by MS/MS of an RNase digest of sample RNA and those calculated from a candidate nucleotide sequence in a DNA/RNA sequence database, which then predicts the nucleotide sequences of these RNase fragments. In the second step, the candidate sequences are mapped for all RNA entries in the database, and each entry is scored for a function of occurrences of the candidate sequences to identify a particular RNA. Ariadne can also predict post-transcriptional modifications of RNA, such as methylation of nucleotide bases and/or ribose, by estimating mass shifts from the theoretical mass values. The method was validated with MS/MS data of RNase T1 digests of in vitro transcripts. It was applied successfully to identify an unknown RNA component in a tRNA mixture and to analyze post-transcriptional modification in yeast tRNA(Phe-1).
NASA Astrophysics Data System (ADS)
Kawai, Kotaro; Sakamoto, Moritsugu; Noda, Kohei; Sasaki, Tomoyuki; Kawatsuki, Nobuhiro; Ono, Hiroshi
2017-01-01
A tunable dichroic polarization beam splitter (tunable DPBS) simultaneously performs the following functions: 1. Separation of a polarized incident beam into multiple pairs of orthogonally polarized beams; 2. Separation of the propagation direction of two wavelength incident beams after passing through the tunable DPBS; and 3. Control of both advanced polarization and wavelength separation capabilities by varying the temperature of the tunable DPBS. This novel complex optical property is realized by diffraction phenomena using a designed three-dimensional periodic structure of aligned liquid crystals in the tunable DPBS, which was fabricated quickly with precision in a one-step photoalignment using four-beam polarization interferometry. In experiments, we demonstrated that these diffraction properties are obtained by directing polarized beams at wavelengths of 532 nm and 633 nm onto the tunable DPBS. These diffraction properties are described using the Jones calculus in a polarization propagation analysis. Of significance is that the aligned liquid crystal structure needed to obtain these diffraction properties was proposed based on a theoretical analysis, and these properties were then demonstrated experimentally. The tunable DPBS can perform several functions of a number of optical elements such as wave plates, polarization beam splitter, dichroic beam splitter, and tunable wavelength filter. Therefore, the tunable DPBS can contribute to greater miniaturization, sophistication, and cost reduction of optical systems used widely in applications, such as optical measurements, communications, and information processing.
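The Jones-calculus bookkeeping referred to above reduces, in its simplest form, to multiplying 2x2 matrices. The sketch below builds the Jones matrix of a linear retarder and applies it to a horizontally polarized beam; it illustrates only the formalism, not the authors' three-dimensional liquid crystal model.

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder: retardance delta, fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    W = np.array([[np.exp(-1j * delta / 2), 0.0],
                  [0.0, np.exp(1j * delta / 2)]])
    return R @ W @ R.T

E_in = np.array([1.0, 0.0])                        # horizontal linear polarization
E_out = retarder(np.pi / 2, np.pi / 4) @ E_in      # quarter-wave plate at 45 degrees
# E_out has equal |Ex| and |Ey| with a 90-degree phase difference: circular light.
```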
Desveaux, Laura; Gagliardi, Anna R
2018-06-04
Post-market surveillance of medical devices is reliant on physician reporting of adverse medical device events (AMDEs). Few studies have examined factors that influence whether and how physicians report AMDEs, an essential step in the development of behaviour change interventions. This study was a secondary analysis comparing application of the Theoretical Domains Framework (TDF) and the Tailored Implementation for Chronic Diseases (TICD) framework to identify potential behaviour change interventions that correspond to determinants of AMDE reporting. A previous study involving qualitative interviews with Canadian physicians that implant medical devices identified themes reflecting AMDE reporting determinants. In this secondary analysis, themes that emerged from the primary analysis were independently mapped to the TDF and TICD. Determinants and corresponding intervention options arising from both frameworks (and both mappers) were compared. Both theoretical frameworks were useful for identifying interventions corresponding to behavioural determinants of AMDE reporting. Information or education strategies that provide evidence about AMDEs, and audit and feedback of AMDE data were identified as interventions to target the theme of physician beliefs; improving information systems, and reminder cues, prompts and awards were identified as interventions to address determinants arising from the organization or systems themes; and modifying financial/non-financial incentives and sharing data on outcomes associated with AMDEs were identified as interventions to target device market themes. Numerous operational challenges were encountered in the application of both frameworks including a lack of clarity about how directly relevant to themes the domains/determinants should be, how many domains/determinants to select, if and how to resolve discrepancies across multiple mappers, and how to choose interventions from among the large number associated with selected domains/determinants. Given discrepancies in mapping themes to determinants/domains and the resulting interventions offered by the two frameworks, uncertainty remains about how to choose interventions that best match behavioural determinants in a given context. Further research is needed to provide more nuanced guidance on the application of TDF and TICD for a broader audience, which is likely to increase the utility and uptake of these frameworks in practice.
Mishori, Ranit; Singh, Lisa Oberoi; Levy, Brendan; Newport, Calvin
2014-04-14
Twitter is becoming an important tool in medicine, but there is little information on Twitter metrics. In order to recommend best practices for information dissemination and diffusion, it is important to first study and analyze the networks. This study describes the characteristics of four medical networks, analyzes their theoretical dissemination potential, their actual dissemination, and the propagation and distribution of tweets. Open Twitter data was used to characterize four networks: the American Medical Association (AMA), the American Academy of Family Physicians (AAFP), the American Academy of Pediatrics (AAP), and the American College of Physicians (ACP). Data were collected between July 2012 and September 2012. Visualization was used to understand the follower overlap between the groups. Actual flow of the tweets for each group was assessed. Tweets were examined using Topsy, a Twitter data aggregator. The theoretical information dissemination potential for the groups is large. A collective community is emerging, where large percentages of individuals are following more than one of the groups. The overlap across groups is small, indicating a limited amount of community cohesion and cross-fertilization. The AMA followers' network is not as active as the other networks. The AMA posted the largest number of tweets while the AAP posted the fewest. The number of retweets for each organization was low indicating dissemination that is far below its potential. To increase the dissemination potential, medical groups should develop a more cohesive community of shared followers. Tweet content must be engaging to provide a hook for retweeting and reaching potential audience. Next steps call for content analysis, assessment of the behavior and actions of the messengers and the recipients, and a larger-scale study that considers other medical groups using Twitter.
Lake, Amelia J; Browne, Jessica L; Abraham, Charles; Tumino, Dee; Hines, Carolyn; Rees, Gwyneth; Speight, Jane
2018-05-31
Young adults (18-39 years) with type 2 diabetes are at risk of early development and rapid progression of diabetic retinopathy, a leading cause of vision loss and blindness in working-age adults. Retinal screening is key to the early detection of diabetic retinopathy, with risk of vision loss significantly reduced by timely treatment thereafter. Despite this, retinal screening rates are low among this at-risk group. The objective of this study was to develop a theoretically-grounded, evidence-based retinal screening promotion leaflet, tailored to young adults with type 2 diabetes. Utilising the six steps of Intervention Mapping, our multidisciplinary planning team conducted a mixed-methods needs assessment (Step 1); identified modifiable behavioural determinants of screening behaviour and constructed a matrix of change objectives (Step 2); designed, reviewed and debriefed leaflet content with stakeholders (Steps 3 and 4); and developed program implementation and evaluation plans (Steps 5 and 6). Step 1 included in-depth qualitative interviews (N = 10) and an online survey that recruited a nationally-representative sample (N = 227), both informed by literature review. The needs assessment highlighted the crucial roles of knowledge (about diabetic retinopathy and screening), perception of personal risk, awareness of the approval of significant others and engagement with healthcare team, on retinal screening intentions and uptake. In Step 2, we selected five modifiable behavioural determinants to be targeted: knowledge, attitudes, normative beliefs, intention, and behavioural skills. In Steps 3 and 4, the "Who is looking after your eyes?" leaflet was developed, containing persuasive messages targeting each determinant and utilising engaging, cohort-appropriate imagery. In Steps 5 and 6, we planned Statewide implementation and designed a randomised controlled trial to evaluate the leaflet. This research provides an example of a systematic, evidence-based approach to the development of a simple health intervention designed to promote uptake of screening in accordance with national guidelines. The methods and findings illustrate how Intervention Mapping can be employed to develop tailored retinal screening promotion materials for specific priority populations. This paper has implications for future program planners and is intended to assist those wishing to use Intervention Mapping to create similar theoretically-driven, tailored resources.
Modeling of X-ray Images and Energy Spectra Produced by Stepping Lightning Leaders
NASA Astrophysics Data System (ADS)
Xu, Wei; Marshall, Robert A.; Celestin, Sebastien; Pasko, Victor P.
2017-11-01
Recent ground-based measurements at the International Center for Lightning Research and Testing (ICLRT) have greatly improved our knowledge of the energetics, fluence, and evolution of X-ray emissions during natural cloud-to-ground (CG) and rocket-triggered lightning flashes. In this paper, using Monte Carlo simulations and the response matrix of unshielded detectors in the Thunderstorm Energetic Radiation Array (TERA), we calculate the energy spectra of X-rays as would be detected by TERA and directly compare with the observational data during event MSE 10-01. The good agreement obtained between TERA measurements and theoretical calculations supports the mechanism of X-ray production by thermal runaway electrons during the negative corona flash stage of stepping lightning leaders. Modeling results also suggest that measurements of X-ray bursts can be used to estimate the approximate range of potential drop of lightning leaders. Moreover, the X-ray images produced during the leader stepping process in natural negative CG discharges, including both the evolution and morphological features, are theoretically quantified. We show that the compact emission pattern as recently observed in X-ray images is likely produced by X-rays originating from the source region, and the diffuse emission pattern can be explained by the Compton scattering effects.
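The comparison with TERA data relies on folding a modeled photon spectrum through the detector response matrix, which is just a matrix-vector product once the matrix is known. The sketch below shows the operation with toy arrays; the real TERA response and the simulated runaway-electron spectrum are not reproduced.

```python
import numpy as np

def fold_spectrum(response, photon_spectrum):
    """response[i, j]: probability that a photon emitted in source-energy
    bin j is recorded in detector-energy bin i. Returns predicted counts."""
    return response @ photon_spectrum

rng = np.random.default_rng(1)
n_src, n_det = 100, 64
R = rng.uniform(size=(n_det, n_src))
R /= 2.0 * R.sum(axis=0, keepdims=True)        # toy response, ~50% total efficiency
E = np.linspace(0.05, 5.0, n_src)              # MeV, illustrative binning
photons = np.exp(-E / 0.5)                     # bremsstrahlung-like toy spectrum
predicted_counts = fold_spectrum(R, photons)
```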
Thermal decomposition pathways of hydroxylamine: theoretical investigation on the initial steps.
Wang, Qingsheng; Wei, Chunyang; Pérez, Lisa M; Rogers, William J; Hall, Michael B; Mannan, M Sam
2010-09-02
Hydroxylamine (NH(2)OH) is an unstable compound at room temperature, and it has been involved in two tragic industrial incidents. Although experimental studies have been carried out to study the thermal stability of hydroxylamine, the detailed decomposition mechanism is still in debate. In this work, several density functional and ab initio methods were used in conjunction with several basis sets to investigate the initial thermal decomposition steps of hydroxylamine, including both unimolecular and bimolecular reaction pathways. The theoretical investigation shows that simple bond dissociations and unimolecular reactions are unlikely to occur. The energetically favorable initial step of decomposition pathways was determined as a bimolecular isomerization of hydroxylamine into ammonia oxide with an activation barrier of approximately 25 kcal/mol at the MPW1K level of theory. Because hydroxylamine is available only in aqueous solutions, solvent effects on the initial decomposition pathways were also studied using water cluster methods and the polarizable continuum model (PCM). In water, the activation barrier of the bimolecular isomerization reaction decreases to approximately 16 kcal/mol. The results indicate that the bimolecular isomerization pathway of hydroxylamine is more favorable in aqueous solutions. However, the bimolecular nature of this reaction means that more dilute aqueous solution will be more stable.
NASA Technical Reports Server (NTRS)
Huffman, S. D.; Burns, W. W., III; Wilson, T. G.; Owen, H. A., Jr.
1976-01-01
Implementations of a state-plane-trajectory control law for energy storage dc-to-dc converters are presented. Performance characteristics of experimental voltage step-up converter systems employing these implementations are reported and compared to theoretical predictions.
Subgroup conflicts? Try the psychodramatic "double triad method".
Verhofstadt-Denève, Leni M F
2012-04-01
The present article suggests the application of a psychodramatic action method for tackling subgroup conflicts in which the direct dialogue between representatives of two opposing subgroups is prepared step by step through an indirect dialogue strategy within two triads, a strategy known as the Double Triad Method (DTM). In order to achieve integration in the group as a whole, it is important that all the members of both subgroups participate actively during the entire process. The first part of the article briefly explores the theoretical background, with a special emphasis on the Phenomenological-Dialectical Personality Model (Phe-Di PModel). In the second part, the DTM procedure is systematically described through its five action stages, each accompanied with 1) a spatial representation of the consecutive actions, 2) some illustrative statements for each stage, and 3) a theoretical interpretation of the dialectically involved personality dimensions in both protagonists. The article concludes with a discussion and suggestions for more extensive applications of the DTM method, including the question of its relationships to Agazarian's functional subgrouping, psychodrama, and sociodrama.
Early Warning Signals of Ecological Transitions: Methods for Spatial Patterns
Brock, William A.; Carpenter, Stephen R.; Ellison, Aaron M.; Livina, Valerie N.; Seekell, David A.; Scheffer, Marten; van Nes, Egbert H.; Dakos, Vasilis
2014-01-01
A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in spatial characteristics of the system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology together with the computer codes will stimulate the application and testing of spatial early warning signals on real spatial data. PMID:24658137
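Two of the simplest spatial indicators discussed above, spatial variance and lag-1 spatial autocorrelation, can be computed directly from gridded snapshots. The sketch below uses rook (nearest-neighbour) pairs for a Moran's-I-like statistic; it is a bare-bones illustration rather than the toolbox released with the paper, and the snapshots are random placeholders.

```python
import numpy as np

def spatial_variance(z):
    """Variance of the field over the grid."""
    return float(np.var(z))

def lag1_autocorrelation(z):
    """Moran's-I-like statistic from horizontally and vertically adjacent pairs."""
    zc = z - z.mean()
    num = (zc[:-1, :] * zc[1:, :]).sum() + (zc[:, :-1] * zc[:, 1:]).sum()
    n_pairs = zc[:-1, :].size + zc[:, :-1].size
    return float(num / n_pairs / zc.var())

# Rising trends of both indicators across a sequence of snapshots are the
# kind of early warning signal discussed above.
snapshots = [np.random.default_rng(k).normal(size=(64, 64)) for k in range(5)]
trend = [(spatial_variance(z), lag1_autocorrelation(z)) for z in snapshots]
```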
Plasmonic hot carrier dynamics in solid-state and chemical systems for energy conversion
Narang, Prineha; Sundararaman, Ravishankar; Atwater, Harry A.
2016-06-11
Surface plasmons provide a pathway to efficiently absorb and confine light in metallic nanostructures, thereby bridging photonics to the nano scale. The decay of surface plasmons generates energetic 'hot' carriers, which can drive chemical reactions or be injected into semiconductors for nano-scale photochemical or photovoltaic energy conversion. Novel plasmonic hot carrier devices and architectures continue to be demonstrated, but the complexity of the underlying processes makes a complete microscopic understanding of all the mechanisms and design considerations for such devices extremely challenging. Here, we review the theoretical and computational efforts to understand and model plasmonic hot carrier devices. We split the problem into three steps: hot carrier generation, transport and collection, and review theoretical approaches with the appropriate level of detail for each step along with their predictions. As a result, we identify the key advances necessary to complete the microscopic mechanistic picture and facilitate the design of the next generation of devices and materials for plasmonic energy conversion.
Step-wise transient method - influence of heat source inertia
NASA Astrophysics Data System (ADS)
Malinarič, Svetozár; Dieška, Peter
2016-07-01
The step-wise transient (SWT) method is an experimental technique for measuring the thermal diffusivity and conductivity of materials. The theoretical models and the experimental apparatus are presented, and the influence of the heat source capacity is investigated using simulation of the experiment. Specimens of low-density polyethylene (LDPE) were measured, yielding a thermal diffusivity of 0.165 mm²/s and a thermal conductivity of 0.351 W/mK with a coefficient of variation of less than 1.4%. The heat source capacity caused a systematic error in the results of less than 1%.
NASA Technical Reports Server (NTRS)
Kao, M. H.; Bodenheimer, R. E.
1976-01-01
The tse computer's capability of achieving image congruence between temporal and multiple images with misregistration due to rotational differences is reported. The coordinate transformations are obtained and a general algorithm is devised to perform image rotation very efficiently using tse operations. The details of this algorithm as well as its theoretical implications are presented. Step-by-step procedures of image registration are described in detail. Numerous examples are also employed to demonstrate the correctness and effectiveness of the algorithm, and conclusions and recommendations are made.
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
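The flavour of an explicit multistage time-stepping update can be conveyed with a scalar model equation. The sketch below applies a five-stage scheme with illustrative coefficients to first-order upwind advection; the coefficients, grid and time step are assumptions for the example, not those of the paper's Navier-Stokes solver.

```python
import numpy as np

def residual(u, c=1.0, dx=0.01):
    """Semi-discrete residual R(u) for du/dt = -R(u): first-order upwind advection."""
    return c * (u - np.roll(u, 1)) / dx

def multistage_step(u, dt, alphas=(0.25, 1.0 / 6.0, 0.375, 0.5, 1.0)):
    """One multistage update: u^(k) = u^n - alpha_k * dt * R(u^(k-1))."""
    u0 = u.copy()
    for a in alphas:
        u = u0 - a * dt * residual(u)
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)       # initial pulse on a periodic grid
for _ in range(50):
    u = multistage_step(u, dt=0.004)      # CFL = 0.4 for c = 1, dx = 0.01
```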
Structural mechanics of 3-D braided preforms for composites. IV - The 4-step tubular braiding
NASA Technical Reports Server (NTRS)
Hammad, M.; El-Messery, M.; El-Shiekh, A.
1991-01-01
This paper presents the fundamentals of the 4-step 3D tubular braiding process and the structure of the preforms produced. Based on an idealized structural model, geometric relations between the structural parameters of the preform are analytically established. The effects of machine arrangement and operating conditions are discussed. Yarn retraction, yarn surface angle, outside diameter, and yarn volume fraction of the preform in terms of the pitch length, the inner diameter, and the machine arrangement are theoretically predicted and experimentally verified.
40 CFR 60.1120 - What steps must I complete for my siting analysis?
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 - Protection of Environment (2010-07-01). Requirements: Siting Analysis, § 60.1120 What steps must I complete for my siting analysis? (a) For your siting analysis, you must complete five steps: (1) Prepare an analysis. (2) Make your analysis available to the...
Joseph, Paul; Tretsiakova-McNally, Svetlana
2015-01-01
Polymeric materials often exhibit complex combustion behaviours encompassing several stages and involving solid phase, gas phase and interphase. A wide range of qualitative, semi-quantitative and quantitative testing techniques are currently available, both at the laboratory scale and for commercial purposes, for evaluating the decomposition and combustion behaviours of polymeric materials. They include, but are not limited to, techniques such as: thermo-gravimetric analysis (TGA), oxygen bomb calorimetry, limiting oxygen index measurements (LOI), Underwriters Laboratory 94 (UL-94) tests, cone calorimetry, etc. However, none of the above mentioned techniques are capable of quantitatively deciphering the underpinning physiochemical processes leading to the melt flow behaviour of thermoplastics. Melt-flow of polymeric materials can constitute a serious secondary hazard in fire scenarios, for example, if they are present as component parts of a ceiling in an enclosure. In recent years, more quantitative attempts to measure the mass loss and melt-drip behaviour of some commercially important chain- and step-growth polymers have been accomplished. The present article focuses, primarily, on the experimental and some theoretical aspects of melt-flow behaviours of thermoplastics under heat/fire conditions. PMID:28793746
Using activity theory to study cultural complexity in medical education.
Frambach, Janneke M; Driessen, Erik W; van der Vleuten, Cees P M
2014-06-01
There is a growing need for research on culture, cultural differences and cultural effects of globalization in medical education, but these are complex phenomena to investigate. Socio-cultural activity theory seems a useful framework to study cultural complexity, because it matches current views on culture as a dynamic process situated in a social context, and has been valued in diverse fields for yielding rich understandings of complex issues and key factors involved. This paper explains how activity theory can be used in (cross-)cultural medical education research. We discuss activity theory's theoretical background and principles, and we show how these can be applied to the cultural research practice by discussing the steps involved in a cross-cultural study that we conducted, from formulating research questions to drawing conclusions. We describe how the activity system, the unit of analysis in activity theory, can serve as an organizing principle to grasp cultural complexity. We end with reflections on the theoretical and practical use of activity theory for cultural research and note that it is not a shortcut to capture cultural complexity: it is a challenge for researchers to determine the boundaries of their study and to analyze and interpret the dynamics of the activity system.
X-ray diffraction from nonuniformly stretched helical molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prodanovic, Momcilo; Irving, Thomas C.; Mijailovich, Srboljub M.
2016-04-18
The fibrous proteins in living cells are exposed to mechanical forces interacting with other subcellular structures. X-ray fiber diffraction is often used to assess deformation and movement of these proteins, but the analysis has been limited to the theory for fibrous molecular systems that exhibit helical symmetry. However, this approach cannot adequately interpret X-ray data from fibrous protein assemblies where the local strain varies along the fiber length owing to interactions of its molecular constituents with their binding partners. To resolve this problem a theoretical formalism has been developed for predicting the diffraction from individual helical molecular structures nonuniformly strained along their lengths. This represents a critical first step towards modeling complex dynamical systems consisting of multiple helical structures using spatially explicit, multi-scale Monte Carlo simulations where predictions are compared with experimental data in a `forward' process to iteratively generate ever more realistic models. Here the effects of nonuniform strains and the helix length on the resulting magnitude and phase of diffraction patterns are quantitatively assessed. Examples of the predicted diffraction patterns of nonuniformly deformed double-stranded DNA and actin filaments in contracting muscle are presented to demonstrate the feasibility of this theoretical approach.
Subject-Adaptive Real-Time Sleep Stage Classification Based on Conditional Random Field
Luo, Gang; Min, Wanli
2007-01-01
Sleep staging is the pattern recognition task of classifying sleep recordings into sleep stages. This task is one of the most important steps in sleep analysis. It is crucial for the diagnosis and treatment of various sleep disorders, and also relates closely to brain-machine interfaces. We report an automatic, online sleep stager using electroencephalogram (EEG) signal based on a recently-developed statistical pattern recognition method, the conditional random field (CRF), and novel potential functions that have explicit physical meanings. Using sleep recordings from human subjects, we show that the average classification accuracy of our sleep stager almost approaches the theoretical limit and is about 8% higher than that of existing systems. Moreover, for a new subject s_new with limited training data D_new, we perform subject adaptation to improve classification accuracy. Our idea is to use the knowledge learned from old subjects to obtain from D_new a regularized estimate of the CRF's parameters. Using sleep recordings from human subjects, we show that even without any D_new, our sleep stager can achieve an average classification accuracy of 70% on s_new. This accuracy increases with the size of D_new and eventually becomes close to the theoretical limit. PMID:18693884
The cell monolayer trajectory from the system state point of view.
Stys, Dalibor; Vanek, Jan; Nahlik, Tomas; Urban, Jan; Cisar, Petr
2011-10-01
Time-lapse microscopic movies are being increasingly utilized for understanding the derivation of cell states and predicting cell future. Often, fluorescence and other types of labeling are not available or desirable, and cell state-definitions based on observable structures must be used. We present the methodology for cell behavior recognition and prediction based on the short term cell recurrent behavior analysis. This approach has theoretical justification in non-linear dynamics theory. The methodology is based on the general stochastic systems theory which allows us to define the cell states, trajectory and the system itself. We introduce the usage of a novel image content descriptor based on information contribution (gain) by each image point for the cell state characterization as the first step. The linkage between the method and the general system theory is presented as a general frame for cell behavior interpretation. We also discuss extended cell description, system theory and methodology for future development. This methodology may be used for many practical purposes, ranging from advanced, medically relevant, precise cell culture diagnostics to very utilitarian cell recognition in a noisy or uneven image background. In addition, the results are theoretically justified.
One- and Two-Equation Models to Simulate Ion Transport in Charged Porous Electrodes
Gabitto, Jorge; Tsouris, Costas
2018-01-19
Energy storage in porous capacitor materials, capacitive deionization (CDI) for water desalination, capacitive energy generation, geophysical applications, and removal of heavy ions from wastewater streams are some examples of processes where understanding of ionic transport processes in charged porous media is very important. In this work, one- and two-equation models are derived to simulate ionic transport processes in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations for multi-ionic systems without any further assumptions, such as thin electrical double layers or Donnan equilibrium. A comparison between both models is presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. An approximate analytical procedure is proposed to solve the closure problems. Numerical and theoretical calculations show that the approximate analytical procedure yields adequate solutions. Lastly, a theoretical analysis shows that the value of interphase pseudo-transport coefficients determines which model to use.
Intermolecular cope-type hydroamination of alkenes and alkynes using hydroxylamines.
Moran, Joseph; Gorelsky, Serge I; Dimitrijevic, Elena; Lebrun, Marie-Eve; Bédard, Anne-Catherine; Séguin, Catherine; Beauchemin, André M
2008-12-31
The development of the Cope-type hydroamination as a method for the metal- and acid-free intermolecular hydroamination of hydroxylamines with alkenes and alkynes is described. Aqueous hydroxylamine reacts efficiently with alkynes in a Markovnikov fashion to give oximes and with strained alkenes to give N-alkylhydroxylamines, while unstrained alkenes are more challenging. N-Alkylhydroxylamines also display similar reactivity with strained alkenes and give modest to good yields with vinylarenes. Electron-rich vinylarenes lead to branched products while electron-deficient vinylarenes give linear products. A beneficial additive effect is observed with sodium cyanoborohydride, the extent of which is dependent on the structure of the hydroxylamine. The reaction conditions are found to be compatible with common protecting groups, free OH and NH bonds, as well as bromoarenes. Both experimental and theoretical results suggest the proton transfer step of the N-oxide intermediate is of vital importance in the intermolecular reactions of alkenes. Details are disclosed concerning optimization, reaction scope, limitations, and theoretical analysis by DFT, which includes a detailed molecular orbital description for the concerted hydroamination process and an exhaustive set of calculated potential energy surfaces for the reactions of various alkenes, alkynes, and hydroxylamines.
Pathways to childhood depressive symptoms: the role of social, cognitive, and genetic risk factors.
Lau, Jennifer Y F; Rijsdijk, Frühling; Gregory, Alice M; McGuffin, Peter; Eley, Thalia C
2007-11-01
Childhood depressive conditions have been explored from multiple theoretical approaches but with few empirical attempts to address the interrelationships among these different domains and their combined effects. In the present study, the authors examined different pathways through which social, cognitive, and genetic risk factors may be expressed to influence depressive symptoms in 300 pairs of child twins from a longitudinal study. Path analysis supported several indirect routes. First, risks associated with living in a step- or single-parent family and punitive parenting did not directly influence depressive outcome but were instead mediated through maternal depressive symptoms and child negative attributional style. Second, the effects of negative attributional style on depressive outcome were greatly exacerbated in the presence of precipitating negative life events. Third, independent of these social and cognitive risk mechanisms, modest genetic effects were also implicated in symptoms, with some indication that these risks are expressed through exposure to negative stressors. Together, these routes accounted for approximately 13% of total phenotypic variance in depressive symptoms. Theoretical and analytical implications of these results are discussed in the context of several design-related caveats. (c) 2007 APA.
Arshad, Muhammad Nadeem; Bibi, Aisha; Mahmood, Tariq; Asiri, Abdullah M; Ayub, Khurshid
2015-04-03
We report here a comparative theoretical and experimental study of four triazine-based hydrazone derivatives. The hydrazones are synthesized by a three-step process from commercially available benzil and thiosemicarbazide. The structures of all compounds were determined using UV-Vis, FT-IR and NMR (1H and 13C) spectroscopic techniques and finally confirmed unequivocally by single-crystal X-ray diffraction analysis. Experimental geometric parameters and spectroscopic properties of the triazine-based hydrazones are compared with those obtained from density functional theory (DFT) studies. The model developed here comprises geometry optimization at the B3LYP/6-31G(d,p) level of DFT. Optimized geometric parameters of all four compounds show excellent correlations with the results obtained from the X-ray diffraction studies. The simulated vibrational spectra correlate well with the experimental IR spectra. Moreover, the simulated absorption spectra also agree well with the experimental results (within 10-20 nm). The molecular electrostatic potential (MEP) mapped over the optimized geometries of the compounds indicates their chemical reactivities. Furthermore, the frontier molecular orbitals (electronic properties) and the first hyperpolarizability (nonlinear optical response) were also computed at the B3LYP/6-31G(d,p) level of theory.
Joseph, Paul; Tretsiakova-McNally, Svetlana
2015-12-15
Polymeric materials often exhibit complex combustion behaviours encompassing several stages and involving the solid phase, the gas phase and the interphase. A wide range of qualitative, semi-quantitative and quantitative testing techniques is currently available, both at the laboratory scale and for commercial purposes, for evaluating the decomposition and combustion behaviours of polymeric materials. They include, but are not limited to, techniques such as thermo-gravimetric analysis (TGA), oxygen bomb calorimetry, limiting oxygen index (LOI) measurements, Underwriters Laboratories 94 (UL-94) tests and cone calorimetry. However, none of the above-mentioned techniques is capable of quantitatively deciphering the underpinning physicochemical processes leading to the melt-flow behaviour of thermoplastics. Melt flow of polymeric materials can constitute a serious secondary hazard in fire scenarios, for example, if they are present as component parts of a ceiling in an enclosure. In recent years, more quantitative attempts to measure the mass loss and melt-drip behaviour of some commercially important chain- and step-growth polymers have been made. The present article focuses primarily on the experimental and some theoretical aspects of the melt-flow behaviour of thermoplastics under heat/fire conditions.
Catalysis-Enhancement via Rotary Fluctuation of F1-ATPase
Watanabe, Rikiya; Hayashi, Kumiko; Ueno, Hiroshi; Noji, Hiroyuki
2013-01-01
Protein conformational fluctuations modulate the catalytic powers of enzymes. The frequency of conformational fluctuations may modulate the catalytic rate at individual reaction steps. In this study, we modulated the rotary fluctuation frequency of F1-ATPase (F1) by attaching probes with different viscous drag coefficients at the rotary shaft of F1. Individual rotation pauses of F1 between rotary steps correspond to the waiting state of a certain elementary reaction step of ATP hydrolysis. This allows us to investigate the impact of the frequency modulation of the rotary fluctuation on the rate of the individual reaction steps by measuring the duration of rotation pauses. Although phosphate release was significantly decelerated, the ATP-binding and hydrolysis steps were less sensitive or insensitive to the viscous drag coefficient of the probe. Brownian dynamics simulation based on a model similar to the Sumi-Marcus theory reproduced the experimental results, providing a theoretical framework for the role of rotational fluctuation in F1 rate enhancement. PMID:24268150
Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.
Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef
2017-01-01
This letter considers the problem of dictionary learning for sparse signal representation where the atoms have low mutual coherence. To learn such dictionaries, at each step we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank-1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the rank reduction problem into a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results showing that the included rank shrinkage step reduces the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison with various other well-known dictionary learning algorithms are also presented.
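A minimal sketch of the two ingredients named above, the MOD dictionary update and the mutual coherence it is meant to lower, is given below; the garrotte-based rank shrinkage itself is specific to the letter and is not reproduced. Array shapes and data are illustrative.

```python
import numpy as np

def mod_update(X, A):
    """Method of Optimal Directions: least-squares dictionary update
    D = X A^T (A A^T)^{-1}, followed by column normalization.
    X: (n, N) training signals, A: (K, N) sparse codes."""
    D = X @ A.T @ np.linalg.pinv(A @ A.T)
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized atoms."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# illustrative usage with random data
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 500))   # signals
A = rng.standard_normal((40, 500))   # (placeholder) sparse codes
D = mod_update(X, A)
print("coherence of MOD-updated dictionary:", mutual_coherence(D))
```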
Physical Violence between Siblings: A Theoretical and Empirical Analysis
ERIC Educational Resources Information Center
Hoffman, Kristi L.; Kiecolt, K. Jill; Edwards, John N.
2005-01-01
This study develops and tests a theoretical model to explain sibling violence based on the feminist, conflict, and social learning theoretical perspectives and research in psychology and sociology. A multivariate analysis of data from 651 young adults generally supports hypotheses from all three theoretical perspectives. Males with brothers have…
Kada, T; Asahi, S; Kaizu, T; Harada, Y; Tamaki, R; Okada, Y; Kita, T
2017-07-19
We studied the effects of the internal electric field on two-step photocarrier generation in InAs/GaAs quantum dot superlattice (QDSL) intermediate-band solar cells (IBSCs). The external quantum efficiency of QDSL-IBSCs was measured as a function of the internal electric field intensity, and compared with theoretical calculations accounting for interband and intersubband photoexcitations. The extra photocurrent caused by the two-step photoexcitation was maximal for a reversely biased electric field, while the current generated by the interband photoexcitation increased monotonically with increasing electric field intensity. The internal electric field in solar cells separated photogenerated electrons and holes in the superlattice (SL) miniband that played the role of an intermediate band, and the electron lifetime was extended to the microsecond scale, which improved the intersubband transition strength, therefore increasing the two-step photocurrent. There was a trade-off relation between the carrier separation enhancing the two-step photoexcitation and the electric-field-induced carrier escape from QDSLs. These results validate that long-lifetime electrons are key to maximising the two-step photocarrier generation in QDSL-IBSCs.
NASA Astrophysics Data System (ADS)
Kopytova, Taisiya
2016-01-01
When studying isolated brown dwarfs and directly imaged exoplanets with insignificant orbital motion, we have to rely on theoretical models to determine basic parameters such as mass, age, effective temperature, and surface gravity. While stellar and atmospheric models are rapidly evolving, we need a powerful tool to test and calibrate them. In my thesis, I focused on comparing interior and atmospheric models with observational data, in an effort to take into account various systematic effects that can significantly influence the data analysis. As a first step, about 460 candidate members of the Hyades were screened for companions using diffraction-limited imaging observations (both our own data and archival data). As a result, I could establish the single-star sequence for the Hyades, comprising about 250 stars (Kopytova et al. 2015, accepted to A&A). Open clusters contain many coeval objects of the same chemical composition and age, spanning a range of masses. We compare the obtained sequence with a set of theoretical isochrones, identifying systematic offsets and revealing probable issues in the models. However, there are many cases where it is impossible to test models before comparing them with observations. As a second step, we apply atmospheric models to constrain the parameters of WISE 0855-07, the coolest known Y dwarf (Kopytova et al. 2014, ApJ 797, 3). We demonstrate the limits of constraining the effective temperature and the presence/absence of water clouds. As a third step, we introduce a novel method to take into account the above-mentioned systematics. We construct a "systematics vector" that allows us to reveal problematic wavelength ranges when fitting atmospheric models to observed near-infrared spectra of brown dwarfs and exoplanets (Kopytova et al., in prep.). This approach plays a crucial role when retrieving abundances for these objects, in particular the C/O ratio. The latter parameter is an important key to the formation scenarios of brown dwarfs and exoplanets. We show how to constrain the C/O ratio while eliminating systematic effects, which significantly improves the reliability of the final results and our conclusions about the formation history of particular exoplanets and brown dwarfs.
Tolerant compressed sensing with partially coherent sensing matrices
NASA Astrophysics Data System (ADS)
Birnbaum, Tobias; Eldar, Yonina C.; Needell, Deanna
2017-08-01
Most compressed sensing (CS) theory to date focuses on incoherent sensing, that is, columns of the sensing matrix are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly; instead, small errors in the support and signal are tolerable. Despite the abundance of work utilizing incoherent sensing matrices, for this type of tolerant recovery we suggest that coherence is actually beneficial. We promote the use of coherent sampling when tolerant support recovery is acceptable, and demonstrate its advantages empirically. In addition, we provide a first step towards theoretical analysis by considering a specific reconstruction method for selected signal classes.
Transient-state kinetic approach to mechanisms of enzymatic catalysis.
Fisher, Harvey F
2005-03-01
Transient-state kinetics by its inherent nature can potentially provide more directly observed detailed resolution of discrete events in the mechanistic time courses of enzyme-catalyzed reactions than its more widely used steady-state counterpart. The use of the transient-state approach, however, has been severely limited by the lack of any theoretically sound and applicable basis of interpreting the virtual cornucopia of time and signal-dependent phenomena that it provides. This Account describes the basic kinetic behavior of the transient state, critically examines some currently used analytic methods, discusses the application of a new and more soundly based "resolved component transient-state time-course method" to the L-glutamate-dehydrogenase reaction, and establishes new approaches for the analysis of both single- and multiple-step substituted transient-state kinetic isotope effects.
NASA Astrophysics Data System (ADS)
Xing, Li; Quan, Wei; Fan, Wenfeng; Li, Rujie; Jiang, Liwei; Fang, Jiancheng
2018-05-01
The frequency response and dynamics of a dual-axis spin-exchange-relaxation-free (SERF) atomic magnetometer are investigated by means of transfer function analysis. The frequency response at different bias magnetic fields is tested to demonstrate the effect of the residual magnetic field. The resonance frequency of the alkali atoms and the magnetic linewidth can be obtained simultaneously through our theoretical model. The coefficient of determination of the fitting results is better than 0.995 with 95% confidence bounds. Additionally, step responses are applied to analyze the dynamics of the control system and the effect of imperfections. Finally, a noise-limited magnetic field resolution of 15 fT/√Hz has been achieved for our dual-axis SERF atomic magnetometer through magnetic field optimization.
Distributed Cognition on the road: Using EAST to explore future road transportation systems.
Banks, Victoria A; Stanton, Neville A; Burnett, Gary; Hermawati, Setia
2018-04-01
Connected and Autonomous Vehicles (CAV) are set to revolutionise the way in which we use our transportation system. However, we do not fully understand how the integration of wireless and autonomous technology into the road transportation network affects overall network dynamism. This paper uses the theoretical principles underlying Distributed Cognition to explore the dependencies and interdependencies that exist between system agents located within the road environment, traffic management centres and other external agencies in both non-connected and connected transportation systems. This represents a significant step forward in modelling complex sociotechnical systems as it shows that the principles underlying Distributed Cognition can be applied to macro-level systems using the visual representations afforded by the Event Analysis of Systemic Teamwork (EAST) method. Copyright © 2017 Elsevier Ltd. All rights reserved.
Inverse transport problems in quantitative PAT for molecular imaging
NASA Astrophysics Data System (ADS)
Ren, Kui; Zhang, Rongting; Zhong, Yimin
2015-12-01
Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT, where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstruction. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.
NASA Astrophysics Data System (ADS)
Peng, Xuefeng; Wu, Pinghui; Han, Yinxia; Hu, Guoqiang
2014-11-01
The properties of amplified spontaneous emission (ASE) in CdSe/ZnS quantum dot (QD) doped step-index polymer optical fibers (POFs) were computationally analyzed in this paper. A theoretical model based on the rate equations between two main energy levels of CdSe/ZnS QD was built in terms of time (t), distance traveled by light (z) and wavelength (λ), which can describe the ASE successfully. Through analyzing the spectral evolution with distance of the pulses propagating along the CdSe/ZnS QD doped POFs, dependences of the ASE threshold and the slope efficiency on the numerical aperture were obtained. Compared to the ASE in common dye-doped POFs, the pump threshold was just about 1/1000, but the slope efficiency was much higher.
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Yang, Runtao; Zhang, Yumin; Dong, Mingli; Lou, Xiaoping
2018-04-01
In this paper, a metallic-packaging fiber Bragg grating temperature sensor characterized by a strain-insensitive design is demonstrated. The sensor is fabricated by a one-step ultrasonic welding technique using a type-II fiber Bragg grating combined with an aluminum alloy substrate. Finite element analysis is used for the theoretical evaluation. The experimental results show that the metallic-packaging temperature sensor is insensitive to longitudinal strain. The sensor's temperature sensitivity is 36 pm/°C over the range of 50-110 °C, with a correlation coefficient (R2) of 0.999. The sensor's temporal response is 40 s for a sudden temperature change from 21 °C to 100 °C. The proposed sensor can be applied to reliable and precise temperature measurement.
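A trivial numerical illustration of how the quoted sensitivity converts a measured Bragg-wavelength shift into a temperature reading; the reference temperature and the measured shift below are hypothetical values, not data from the paper.

```python
# Convert a Bragg wavelength shift to temperature using the reported
# sensitivity of the metallic-packaged FBG sensor (36 pm/°C).
SENSITIVITY_PM_PER_C = 36.0   # pm/°C, from the abstract
T_REF_C = 50.0                # hypothetical calibration temperature, °C
shift_pm = 1080.0             # hypothetical measured wavelength shift, pm

temperature_c = T_REF_C + shift_pm / SENSITIVITY_PM_PER_C
print(f"Estimated temperature: {temperature_c:.1f} °C")   # 80.0 °C
```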
Quilting as a generative activity: Studying those who make quilts for wounded service members.
Cheek, Cheryl; Yaure, Robin G
2017-01-01
A qualitative study of 24 quilters examined their experiences creating and delivering quilts to wounded service members who served in the Iraq and Afghanistan conflicts. Using Erikson's (1963) perspective on generativity and Baumeister and Vohs's (2002) theory of motivation as theoretical frameworks, along with McCracken's (1988) five-step analysis model, we examined the part motivation played in this process. Respondents reported making quilts in response to their own family histories of military involvement, to support friends and acquaintances with family in the military, and to make a difference to those who seemed young and badly wounded. Some respondents described being affected by the reactions of quilt recipients and healing from their own traumas and grief.
TiO2-catalyzed synthesis of sugars from formaldehyde in extraterrestrial impacts on the early Earth.
Civiš, Svatopluk; Szabla, Rafał; Szyja, Bartłomiej M; Smykowski, Daniel; Ivanek, Ondřej; Knížek, Antonín; Kubelík, Petr; Šponer, Jiří; Ferus, Martin; Šponer, Judit E
2016-03-16
Recent synthetic efforts aimed at reconstructing the beginning of life on our planet point at the plausibility of scenarios fueled by extraterrestrial energy sources. In the current work we show that beyond nucleobases the sugar components of the first informational polymers can be synthesized in this way. We demonstrate that a laser-induced high-energy chemistry combined with TiO2 catalysis readily produces a mixture of pentoses, among them ribose, arabinose and xylose. This chemistry might be highly relevant to the Late Heavy Bombardment period of Earth's history about 4-3.85 billion years ago. In addition, we present an in-depth theoretical analysis of the most challenging step of the reaction pathway, i.e., the TiO2-catalyzed dimerization of formaldehyde leading to glycolaldehyde.
Extended use of two crossed Babinet compensators for wavefront sensing in adaptive optics
NASA Astrophysics Data System (ADS)
Paul, Lancelot; Kumar Saxena, Ajay
2010-12-01
An extended use of two crossed Babinet compensators as a wavefront sensor for adaptive optics applications is proposed. This method is based on the lateral shearing interferometry technique in two directions. A single record of the fringes in a pupil plane provides the information about the wavefront. The theoretical simulations based on this approach for various atmospheric conditions and other errors of optical surfaces are provided for better understanding of this method. Derivation of the results from a laboratory experiment using simulated atmospheric conditions demonstrates the steps involved in data analysis and wavefront evaluation. It is shown that this method has a higher degree of freedom in terms of subapertures and on the choice of detectors, and can be suitably adopted for real-time wavefront sensing for adaptive optics.
Autopoiesis and cognition in the game of life.
Beer, Randall D
2004-01-01
Maturana and Varela's notion of autopoiesis has the potential to transform the conceptual foundation of biology as well as the cognitive, behavioral, and brain sciences. In order to fully realize this potential, however, the concept of autopoiesis and its many consequences require significant further theoretical and empirical development. A crucial step in this direction is the formulation and analysis of models of autopoietic systems. This article sketches the beginnings of such a project by examining a glider from Conway's game of life in autopoietic terms. Such analyses can clarify some of the key ideas underlying autopoiesis and draw attention to some of the central open issues. This article also examines the relationship between an autopoietic perspective on cognition and recent work on dynamical approaches to the behavior and cognition of situated, embodied agents.
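For readers who want to reproduce the glider under discussion, a minimal NumPy sketch of Conway's game of life update and the standard five-cell glider is given below; this is an illustrative implementation, not the author's code.

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's game of life on a toroidal grid."""
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # a cell is alive next step if it has 3 neighbors, or 2 and is already alive
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# standard glider on a 10x10 toroidal board
grid = np.zeros((10, 10), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for _ in range(4):   # after 4 generations the glider reappears shifted diagonally
    grid = life_step(grid)
print(grid)
```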
Astronomical Instrumentation Systems Quality Management Planning: AISQMP
NASA Astrophysics Data System (ADS)
Goldbaum, Jesse
2017-06-01
The capability of small aperture astronomical instrumentation systems (AIS) to make meaningful scientific contributions has never been better. The purpose of AIS quality management planning (AISQMP) is to ensure the quality of these contributions such that they are both valid and reliable. The first step involved with AISQMP is to specify objective quality measures not just for the AIS final product, but also for the instrumentation used in its production. The next step is to set up a process to track these measures and control for any unwanted variation. The final step is continual effort applied to reducing variation and obtaining measured values near optimal theoretical performance. This paper provides an overview of AISQMP while focusing on objective quality measures applied to astronomical imaging systems.
Stochastic Surface Mesh Reconstruction
NASA Astrophysics Data System (ADS)
Ozendi, M.; Akca, D.; Topan, H.
2018-05-01
A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure that takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
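A minimal sketch of how a per-point 3x3 covariance matrix can be turned into error-ellipsoid semi-axes and a scalar quality measure; the scaling factor and the choice of the largest semi-axis as the measure are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def error_ellipsoid(cov, scale=1.0):
    """Semi-axis lengths and directions of the error ellipsoid of one point.
    cov: 3x3 covariance matrix of the point's XYZ coordinates."""
    eigval, eigvec = np.linalg.eigh(cov)              # ascending eigenvalues
    semi_axes = scale * np.sqrt(np.clip(eigval, 0.0, None))
    return semi_axes, eigvec

def quality_measure(cov):
    """Illustrative scalar quality: the largest semi-axis of the ellipsoid."""
    semi_axes, _ = error_ellipsoid(cov)
    return semi_axes.max()

# hypothetical anisotropic covariance (range error dominates), units m^2
cov = np.diag([4e-6, 1e-6, 1e-6])
print(quality_measure(cov))   # 0.002 m
```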
Selection of Yeasts as Starter Cultures for Table Olives: A Step-by-Step Procedure
Bevilacqua, Antonio; Corbo, Maria Rosaria; Sinigaglia, Milena
2012-01-01
The selection of yeasts intended as starters for table olives is a complex process, including a characterization step at laboratory level and a validation at lab level and factory-scale. The characterization at lab level deals with the assessment of some technological traits (growth under different temperatures and at alkaline pHs, effect of salt, and for probiotic strains the resistance to preservatives), enzymatic activities, and some new functional properties (probiotic traits, production of vitamin B-complex, biological debittering). The paper reports on these traits, focusing both on their theoretical implications and lab protocols; moreover, there are some details on predictive microbiology for yeasts of table olives and on the use of multivariate approaches to select suitable starters. PMID:22666220
Astronomical Instrumentation Systems Quality Management Planning: AISQMP (Abstract)
NASA Astrophysics Data System (ADS)
Goldbaum, J.
2017-12-01
(Abstract only) The capability of small aperture astronomical instrumentation systems (AIS) to make meaningful scientific contributions has never been better. The purpose of AIS quality management planning (AISQMP) is to ensure the quality of these contributions such that they are both valid and reliable. The first step involved with AISQMP is to specify objective quality measures not just for the AIS final product, but also for the instrumentation used in its production. The next step is to set up a process to track these measures and control for any unwanted variation. The final step is continual effort applied to reducing variation and obtaining measured values near optimal theoretical performance. This paper provides an overview of AISQMP while focusing on objective quality measures applied to astronomical imaging systems.
Watershed Management Optimization Support Tool (WMOST) v1: Theoretical Documentation
The Watershed Management Optimization Support Tool (WMOST) is a screening model that is spatially lumped with options for a daily or monthly time step. It is specifically focused on modeling the effect of management decisions on the watershed. The model considers water flows and ...
Gradl-Dietsch, G; Menon, A K; Gürsel, A; Götzenich, A; Hatam, N; Aljalloud, A; Schrading, S; Hölzl, F; Knobe, M
2018-02-01
The aim of this study was to assess the impact of different teaching interventions in a peer-teaching environment on basic echocardiography skills and to examine the influence of gender on learning outcomes. We randomly assigned 79 second-year medical students (55 women, 24 men) to one of four groups: peer teaching (PT), peer teaching using Peyton's four-step approach (PPT), team-based learning (TBL) and video-based learning (VBL). All groups received theoretical and practical hands-on training according to the different approaches. Using a pre-post design we assessed differences in theoretical knowledge [multiple choice (MC) exam], practical skills (Objective Structured Practical Examination, OSPE) and evaluation results with respect to gender. There was a significant gain in theoretical knowledge for all students. There were no relevant differences between the four groups regarding the MC exam and OSPE results. The majority of students achieved good or very good results. Acceptance of the peer-teaching concept was moderate, and all students preferred medical experts to peer tutors even though the overall rating of the instructors was fairly good. Students in the video group would have preferred a different training method. There was no significant effect of gender on evaluation results. Using different peer-teaching concepts proved to be effective in teaching basic echocardiography. Gender does not seem to have an impact on the effectiveness of the instructional approach. Qualitative analysis revealed limited acceptance of peer teaching and especially of video-based instruction.
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
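A minimal sketch of the speed-up mechanism described above: routine discrete-time consensus iterates x_{k+1} = W x_k with a row-stochastic weight matrix W, while a q-step predictive protocol effectively applies W^(q+1) per sampling instant, raising the convergence factor to the power q + 1. The graph, weights, and initial states below are illustrative; this is not the paper's protocol.

```python
import numpy as np

def consensus_error(W, x0, steps):
    """Disagreement (max-min spread) after a number of consensus iterations."""
    x = x0.copy()
    for _ in range(steps):
        x = W @ x
    return x.max() - x.min()

# row-stochastic weights for a directed ring with self-loops (contains a spanning tree)
n, q = 6, 2
W = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=1))
x0 = np.arange(n, dtype=float)

print("routine   :", consensus_error(W, x0, steps=10))
print("predictive:", consensus_error(np.linalg.matrix_power(W, q + 1), x0, steps=10))
```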
Statistical Modeling of Robotic Random Walks on Different Terrain
NASA Astrophysics Data System (ADS)
Naylor, Austin; Kinnaman, Laura
Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
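A minimal sketch of the kind of correlated random walk and mean square displacement calculation described above, assuming uniform step sizes and Gaussian turning angles; the parameters are illustrative and the LEGO-robot specifics are not modeled.

```python
import numpy as np

def correlated_random_walk(n_steps, step_size=1.0, turn_sigma=0.5, seed=0):
    """2D correlated random walk: each heading equals the previous heading
    plus a Gaussian turning angle (large turn_sigma approaches a plain RW)."""
    rng = np.random.default_rng(seed)
    headings = np.cumsum(rng.normal(0.0, turn_sigma, n_steps))
    steps = step_size * np.column_stack([np.cos(headings), np.sin(headings)])
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def mean_square_displacement(path):
    """Squared displacement from the starting point as a function of step number."""
    return np.sum((path - path[0]) ** 2, axis=1)

path = correlated_random_walk(1000)
msd = mean_square_displacement(path)
print(msd[-1])   # strong persistence (small turn_sigma) gives near-ballistic growth
```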
A local time stepping algorithm for GPU-accelerated 2D shallow water models
NASA Astrophysics Data System (ADS)
Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo
2018-01-01
In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
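A minimal sketch of the bookkeeping behind local time stepping: each cell's allowable time step follows from the CFL condition, and cells are grouped into power-of-two levels so that small cells take several sub-steps per global step. The wave-speed estimate and level cap are illustrative; the GPU kernels and level synchronization of the actual model are not reproduced.

```python
import numpy as np

def lts_levels(dx, wave_speed, cfl=0.9, max_level=3):
    """Assign each cell an LTS level m so that its local step is dt_max / 2**m."""
    dt_cell = cfl * dx / wave_speed                      # per-cell CFL time step
    dt_max = dt_cell.max()                               # step of the coarsest cells
    levels = np.ceil(np.log2(dt_max / dt_cell)).astype(int)
    return np.clip(levels, 0, max_level), dt_max

# non-uniform 1D mesh: refined cells in the middle of the domain (sizes in m)
dx = np.array([8.0, 8.0, 2.0, 1.0, 1.0, 2.0, 8.0])
c = np.full_like(dx, 5.0)                                # illustrative wave speed, m/s
levels, dt_max = lts_levels(dx, c)
print(levels)    # e.g. [0 0 2 3 3 2 0]: small cells sub-step 4x or 8x per global step
print(dt_max)    # global (coarse) time step, s
```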
General principles for the treatment of non-infectious uveitis.
Díaz-Llopis, Manuel; Gallego-Pinazo, Roberto; García-Delpech, Salvador; Salom-Alonso, David
2009-09-01
Ocular inflammatory disorders constitute a sight-threatening group of diseases that may be managed according to their severity. Their treatment guidelines undergo constant change as new agents improve on the results obtained with former drugs. Nowadays we can make use of a five-step protocol in which topical, periocular and systemic corticosteroids remain the main therapy for non-infectious uveitis. In addition, immunosuppressive drugs can be added in order to enhance the anti-inflammatory effects and to act as corticosteroid-sparing agents. These can be organized in four further steps: cyclosporine and methotrexate in the second; azathioprine, mycophenolate mofetil and tacrolimus in the third; biological anti-TNF drugs in the fourth; and a theoretical last step with cyclophosphamide and chlorambucil. In the present review we go through the main characteristics and complications of all these treatments and provide a rationale for this five-step treatment protocol for non-infectious posterior uveitis.
Directivity pattern of the sound radiated from axisymmetric stepped plates.
He, Xiping; Yan, Xiuli; Li, Na
2016-08-01
For the purpose of optimal design and efficient utilization of this kind of stepped-plate radiator in air, an approach for calculating the directivity pattern of the sound radiated from a stepped plate in flexural vibration with a free edge is developed, based on the Kirchhoff-Love hypothesis and the Rayleigh integral principle. Experimental tests of the directivity pattern were carried out for a fabricated flat plate and for two fabricated plates with one and two steps. The measured directivity patterns are similar in shape to those calculated by the proposed analytic approach. Comparison between the calculated directivity pattern of a stepped plate and that of its corresponding theoretical piston shows that the former radiator is equivalent to the latter, and that the diffraction field generated by the unbaffled upper surface may be small. It is also shown that the directivity pattern of a stepped radiator is independent of the metallic material but depends on the thickness of the base plate and on the resonant frequency. The thicker the base plate, the more directive the radiation. The proposed analytic approach may be adopted for any other plates with multiple steps.
NASA Technical Reports Server (NTRS)
Kuhn, Reinhard; Wagner, Horst; Mosher, Richard A.; Thormann, Wolfgang
1987-01-01
Isoelectric focusing in the continuous flow mode can be more quickly and economically performed by admitting a stepwise pH gradient composed of simple buffers instead of uniform mixtures of synthetic carrier ampholytes. The time-consuming formation of the pH gradient by the electric field is thereby omitted. The stability of a three-step system with arginine - morpholinoethanesulfonic acid/glycylglycine - aspartic acid is analyzed theoretically by one-dimensional computer simulation as well as experimentally at various flow rates in a continuous flow apparatus. Excellent agreement between experimental and theoretical data was obtained. This metastable configuration was found to be suitable for focusing of proteins under continuous flow conditions. The influence of various combinations of electrolytes and membranes between electrophoresis chamber and electrode compartments is also discussed.
Spatiotemporal Variables of Able-bodied and Amputee Sprinters in Men's 100-m Sprint.
Hobara, H; Kobayashi, Y; Mochimaru, M
2015-06-01
The difference in world records set by able-bodied sprinters and amputee sprinters in the men's 100-m sprint is still approximately 1 s (as of 28 March 2014). Theoretically, forward velocity in a 100-m sprint is the product of step frequency and step length. The goal of this study was to examine the hypothesis that differences in the sprint performance of able-bodied and amputee sprinters would be due to a shorter step length rather than lower step frequency. Men's elite-level 100-m races with a total of 36 able-bodied, 25 unilateral and 17 bilateral amputee sprinters were analyzed from the publicly available internet broadcasts of 11 races. For each run of each sprinter, the average forward velocity, step frequency and step length over the whole 100-m distance were analyzed. The average forward velocity of able-bodied sprinters was faster than that of the other 2 groups, but there was no significant difference in average step frequency among the 3 groups. However, the average step length of able-bodied sprinters was significantly longer than that of the other 2 groups. These results suggest that the differences in sprint performance between 2 groups would be due to a shorter step length rather than lower step frequency. © Georg Thieme Verlag KG Stuttgart · New York.
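In equation form, with purely illustrative numbers (not values measured in the study):

```latex
% Average forward velocity as the product of step frequency and step length.
\bar{v} \;=\; f_{\mathrm{step}} \times L_{\mathrm{step}},
\qquad\text{e.g.}\quad
\bar{v} \approx 4.6\,\mathrm{Hz} \times 2.25\,\mathrm{m}
        \approx 10.4\,\mathrm{m\,s^{-1}}
\;\;(\text{about } 9.7\,\mathrm{s\ over\ }100\,\mathrm{m}).
```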
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and the stability analysis of the estimator is given. Theoretical analysis and simulation experimental results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and the marzipan are used to build the calibration model using partial least squares (PLS) modeling. The results show that the PLS based on the new estimator can achieve better performance compared with the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
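As a point of reference for the comparison mentioned above, a minimal sketch of the conventional Savitzky-Golay derivative preprocessing step using SciPy's implementation; the SPSE itself is specific to the paper and is not reproduced. The window length, polynomial order, and wavelength grid are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

# synthetic noisy "spectrum": a Gaussian band plus measurement noise
rng = np.random.default_rng(0)
x = np.linspace(1000.0, 1100.0, 501)       # wavelength axis, nm (illustrative)
y = np.exp(-0.5 * ((x - 1050.0) / 8.0) ** 2) + 0.01 * rng.standard_normal(x.size)

# first-derivative spectrum via Savitzky-Golay smoothing differentiation
dy = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=x[1] - x[0])

print(dy[np.argmax(y)])   # close to zero at the band maximum, as expected
```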
Experimental and theoretical study of magnetohydrodynamic ship models.
Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe
2017-01-01
Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiments are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.
Experimental and theoretical study of magnetohydrodynamic ship models
Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe
2017-01-01
Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiments are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques. PMID:28665941
The boundary structure in the analysis of reversibly interacting systems by sedimentation velocity.
Zhao, Huaying; Balbo, Andrea; Brown, Patrick H; Schuck, Peter
2011-05-01
Sedimentation velocity (SV) experiments of heterogeneous interacting systems exhibit characteristic boundary structures that can usually be very easily recognized and quantified. For slowly interacting systems, the boundaries represent concentrations of macromolecular species sedimenting at different rates, and they can be interpreted directly with population models based solely on the mass action law. For fast reactions, migration and chemical reactions are coupled, and different, but equally easily discernable boundary structures appear. However, these features have not been commonly utilized for data analysis, for the lack of an intuitive and computationally simple model. The recently introduced effective particle theory (EPT) provides a suitable framework. Here, we review the motivation and theoretical basis of EPT, and explore practical aspects for its application. We introduce an EPT-based design tool for SV experiments of heterogeneous interactions in the software SEDPHAT. As a practical tool for the first step of data analysis, we describe how the boundary resolution of the sedimentation coefficient distribution c(s) can be further improved with a Bayesian adjustment of maximum entropy regularization to the case of heterogeneous interactions between molecules that have been previously studied separately. This can facilitate extracting the characteristic boundary features by integration of c(s). In a second step, these are assembled into isotherms as a function of total loading concentrations and fitted with EPT. Methods for addressing concentration errors in isotherms are discussed. Finally, in an experimental model system of alpha-chymotrypsin interacting with soybean trypsin inhibitor, we show that EPT provides an excellent description of the experimental sedimentation boundary structure of fast interacting systems. Published by Elsevier Inc.
Kearney, Sinéad M.; Kilcawley, Niamh A.; Early, Philip L.; Glynn, Macdara T.; Ducrée, Jens
2016-01-01
Here we present retrieval of Peripheral Blood Mononuclear Cells by density-gradient medium based centrifugation for subsequent analysis of the leukocytes on an integrated microfluidic “Lab-on-a-Disc” cartridge. Isolation of white blood cells constitutes a critical sample preparation step for many bioassays. Centrifugo-pneumatic siphon valves are particularly suited for blood processing as they function without need of surface treatment and are ‘low-pass’, i.e., holding at high centrifugation speeds and opening upon reduction of the spin rate. Both ‘hydrostatically’ and ‘hydrodynamically’ triggered centrifugo-pneumatic siphon valving schemes are presented. Firstly, the geometry of the pneumatic chamber of hydrostatically primed centrifugo-pneumatic siphon valves is optimised to enable smooth and uniform layering of blood on top of the density-gradient medium; this feature proves to be key for efficient Peripheral Blood Mononuclear Cell extraction. A theoretical analysis of hydrostatically primed valves is also presented which determines the optimum priming pressure for the individual valves. Next, ‘dual siphon’ configurations for both hydrostatically and hydrodynamically primed centrifugo-pneumatic siphon valves are introduced; here plasma and Peripheral Blood Mononuclear Cells are extracted through a distinct siphon valve. This work represents a first step towards enabling on disc multi-parameter analysis. Finally, the efficiency of Peripheral Blood Mononuclear Cells extraction in these structures is characterised using a simplified design. A microfluidic mechanism, which we termed phase switching, is identified which affects the efficiency of Peripheral Blood Mononuclear Cell extraction. PMID:27167376
Dios, Federico; Recolons, Jaume; Rodríguez, Alejandro; Batet, Oscar
2008-02-04
Temporal analysis of the irradiance at the detector plane is intended as the first step in the study of the mean fade time in a free-space optical communication system. In the present work this analysis has been performed for a Gaussian laser beam propagating in atmospheric turbulence by means of computer simulation. To this end, we have adapted a previously known numerical method to the generation of long phase screens. The screens are displaced in a transverse direction as the wave is propagated, in order to simulate the wind effect. The amplitude of the temporal covariance and its power spectrum have been obtained at the optical axis, at the beam centroid and at a certain distance from these two points. Results have been worked out for weak, moderate and strong turbulence regimes and, when possible, they have been compared with theoretical models. These results show a significant contribution of beam wander to the temporal behaviour of the irradiance, even in the case of weak turbulence. We have also found that the spectral bandwidth of the covariance is hardly dependent on the Rytov variance.
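A minimal sketch of the standard FFT-based Kolmogorov phase screen generation that this kind of simulation builds on (white complex Gaussian noise filtered by the square root of the phase power spectrum); the long-screen extension and wind displacement used in the paper are not reproduced, the spectral constant is the commonly quoted value, and normalization conventions vary slightly between implementations.

```python
import numpy as np

def kolmogorov_phase_screen(n, delta, r0, seed=0):
    """Kolmogorov phase screen (rad) on an n x n grid with spacing delta (m).
    r0 is the Fried parameter (m)."""
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * delta)                               # frequency spacing, 1/m
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=delta))
    f = np.hypot(*np.meshgrid(fx, fx))
    f = np.where(f == 0.0, np.inf, f)                    # suppress the undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    spectrum = noise * np.sqrt(psd) * df
    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real * n ** 2

screen = kolmogorov_phase_screen(n=256, delta=0.01, r0=0.1)
print(screen.std())   # phase standard deviation in radians over a 2.56 m screen
```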
Reconsideration of dynamic force spectroscopy analysis of streptavidin-biotin interactions.
Taninaka, Atsushi; Takeuchi, Osamu; Shigekawa, Hidemi
2010-05-13
To understand and design molecular functions on the basis of molecular recognition processes, the microscopic probing of the energy landscapes of individual interactions in a molecular complex and their dependence on the surrounding conditions is of great importance. Dynamic force spectroscopy (DFS) is a technique that enables us to study the interaction between molecules at the single-molecule level. However, the obtained results differ among previous studies, which is considered to be caused by the differences in the measurement conditions. We have developed an atomic force microscopy technique that enables the precise analysis of molecular interactions on the basis of DFS. After verifying the performance of this technique, we carried out measurements to determine the landscapes of streptavidin-biotin interactions. The obtained results showed good agreement with theoretical predictions. Lifetimes were also well analyzed. Using a combination of cross-linkers and the atomic force microscope that we developed, site-selective measurement was carried out, and the steps involved in bonding due to microscopic interactions are discussed using the results obtained by site-selective analysis.
Leadership: validation of a self-report scale.
Dussault, Marc; Frenette, Eric; Fernet, Claude
2013-04-01
The aim of this paper was to propose and test the factor structure of a new self-report questionnaire on leadership. A sample of 373 school principals in the Province of Quebec, Canada, completed the initial 46-item version of the questionnaire. In order to obtain a questionnaire of minimal length, a four-step procedure was adopted. First, item analysis was performed using Classical Test Theory. Second, Rasch analysis was used to identify non-fitting or overlapping items. Third, a confirmatory factor analysis (CFA) using structural equation modelling was performed on the 21 remaining items to verify the factor structure of the scale. Results show that the model with a single third-order dimension (leadership), two second-order dimensions (transactional and transformational leadership), and one first-order dimension (laissez-faire leadership) provides a good fit to the data. Finally, invariance of the factor structure was assessed with a second sample of 222 vice-principals in the Province of Quebec, Canada. This model is in agreement with the theoretical model developed by Bass (1985), upon which the questionnaire is based.
Bibliometric analysis of theses and dissertations on prematurity in the Capes database.
Pizzani, Luciana; Lopes, Juliana de Fátima; Manzini, Mariana Gurian; Martinez, Claudia Maria Simões
2012-01-01
To perform a bibliometric analysis of theses and dissertations on prematurity in the Capes database from 1987 to 2009. This is a descriptive study that used the bibliometric approach to produce indicators of scientific production. Operationally, the methodology was developed in four steps: 1) construction of the theoretical framework; 2) data collection from the abstracts of theses and dissertations available in the Capes Thesis Database that addressed prematurity in the period 1987 to 2009; 3) organization, processing and construction of bibliometric indicators; 4) analysis and interpretation of the results. The scientific literature on prematurity increased during the period 1987 to 2009; the production is represented mostly by dissertations, and the most prominent institution was the Universidade de São Paulo. The studies are directed toward the low-birth-weight and very-low-birth-weight preterm newborn, encompassing the social, biological and multifactorial causes of prematurity. There is a qualified, diverse and substantial scientific literature on prematurity developed in various graduate programs of higher education institutions in Brazil.
Using ICT techniques for improving mechatronic systems' dependability
NASA Astrophysics Data System (ADS)
Miron, Emanuel; Silva, João P. M. A.; Machado, José; Olaru, Dumitru; Prisacaru, Gheorghe
2013-10-01
The use of analysis techniques for industrial controllers, such as simulation and formal verification, is complex in an industrial context. This complexity is due to the fact that such techniques sometimes require a high investment in specifically skilled human resources with sufficient theoretical knowledge in those domains. This paper aims, mainly, to show that it is possible to obtain a timed automata model for formal verification purposes from the CAD model of a mechanical component. This systematic approach can be used by companies for the analysis of industrial controller programs. For this purpose, the paper discusses the best way to systematize these procedures; it describes only the first step of a complex process and promotes a discussion of the main difficulties that can be encountered and one possibility for handling them. A library for formal verification purposes is obtained from original 3D CAD models using a Software as a Service (SaaS) platform, which has nowadays become a common delivery model for many applications because it is typically accessed by users via the internet.
Research on effects of phase error in phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai
2007-12-01
In phase-shifting interferometry, the phase-shifting error from the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and some methods to eliminate these errors are mentioned. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A liquid crystal display (LCD) used as a new phase shifter has the advantage that the phase shift can be controlled digitally without any moving or rotating mechanical elements. By changing the coded image displayed on the LCD, the phase shift in the measuring system is induced. The phase modulation characteristic of the LCD was analyzed theoretically and tested. Based on the Fourier transform, a model of the effect of the phase error coming from the LCD in four-step phase-shifting interferometry was established, and the error range was obtained. In order to reduce the error, a new error compensation algorithm is put forward: the error is obtained by processing the interferograms, the interferograms are compensated, and the measurement results are then obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
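A minimal sketch of the standard four-step phase-shifting calculation the abstract refers to, assuming ideal π/2 phase shifts; the LCD-specific error model and compensation algorithm of the paper are not reproduced.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four interferograms with nominal shifts 0, pi/2, pi, 3*pi/2:
    I_k = A + B*cos(phi + k*pi/2)  ->  phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# synthetic test: a tilted wavefront, no phase-shift error
x = np.linspace(0.0, 1.0, 256)
phi_true = 6 * np.pi * x                   # true (unwrapped) phase
frames = [1.0 + 0.8 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)

# the estimate matches the true phase up to 2*pi wrapping
print(np.allclose(np.angle(np.exp(1j * (phi_est - phi_true))), 0.0, atol=1e-8))
```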
Use of scatterometry for resist process control
NASA Astrophysics Data System (ADS)
Bishop, Kenneth P.; Milner, Lisa-Michelle; Naqvi, S. Sohail H.; McNeil, John R.; Draper, B. L.
1992-06-01
The formation of resist lines having submicron critical dimensions (CDs) is a complex multistep process, requiring precise control of each processing step. Optimization of the parameters for each processing step may be accomplished through theoretical modeling techniques and/or the use of send-ahead wafers followed by scanning electron microscope measurements. Once the optimum parameters for a process have been selected (e.g., duration and temperature for the post-exposure bake process), no in-situ CD measurements are made. In this paper we describe the use of scatterometry to provide this essential metrology capability. It involves focusing a laser beam on a periodic grating and predicting the shape of the grating lines from a measurement of the scattered power in the diffraction orders. The inverse prediction of lineshape from a measurement of the scattered power is based on a vector diffraction analysis used in conjunction with photolithography simulation tools to provide an accurate scatter model for latent image gratings. This diffraction technique has previously been applied to observing latent image grating formation as exposure takes place. We have broadened the scope of the application and consider the problem of determining optimal focus.
Gold nanoparticles with patterned surface monolayers for nanomedicine: current perspectives.
Pengo, Paolo; Şologan, Maria; Pasquato, Lucia; Guida, Filomena; Pacor, Sabrina; Tossi, Alessandro; Stellacci, Francesco; Marson, Domenico; Boccardo, Silvia; Pricl, Sabrina; Posocco, Paola
2017-12-01
Molecular self-assembly is a topic attracting intense scientific interest. Various strategies have been developed for the construction of molecular aggregates with rationally designed properties, geometries, and dimensions that promise to provide solutions to both theoretical and practical problems in areas such as drug delivery, medical diagnostics, and biosensors, to name but a few. In this respect, gold nanoparticles covered with self-assembled monolayers presenting nanoscale surface patterns (typically patched, striped or Janus-like domains) represent an emerging field. These systems are particularly intriguing for use in bio-nanotechnology applications, as the presence of such monolayers with three-dimensional (3D) morphology provides nanoparticles with surface-dependent properties that, in turn, affect their biological behavior. Comprehensive understanding of the physicochemical interactions occurring at the interface between these versatile nanomaterials and biological systems is therefore crucial to fully exploit their potential. This review aims to explore the current state of development of such patterned, self-assembled monolayer-protected gold nanoparticles, through a step-by-step analysis of their conceptual design, synthetic procedures, predicted and determined surface characteristics, interactions with and performance in biological environments, and the experimental and computational methods currently employed for their investigation.
Algorithm Engineering: Concepts and Practice
NASA Astrophysics Data System (ADS)
Chimani, Markus; Klein, Karsten
Over the last years the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: it starts with the design of the algorithm, followed by the analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretical performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories, shortest path problems and text search, where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.
NASA Astrophysics Data System (ADS)
Monascal, Yeljair; Gallardo, Eliana; Cartaya, Loriett; Maldonado, Alexis; Bentarcurt, Yenner; Chuchani, Gabriel
2018-01-01
The keto-enol tautomeric equilibrium and the mechanism of thermal conversion of 2- and 4-hydroxyacetophenone in the gas phase have been studied by means of electronic structure calculations using density functional theory (DFT). A topological analysis of the electron density shows that the keto and enol forms of 2-hydroxyacetophenone are stabilised by a relatively strong intramolecular hydrogen bond. 2- and 4-hydroxyacetophenone undergo deacetylation reactions yielding phenol and ketene. Two possible mechanisms are considered for these eliminations: the process takes place from the keto form (mechanism A), or occurs from the enolic form of the substrate (mechanism B). Quantum chemical calculations support mechanism B, with good agreement found with the experimental activation parameters. These results suggest that the rate-limiting step is the reaction of the enol through a concerted, non-synchronous, semi-polar, four-membered cyclic transition state (TS). The most advanced reaction coordinate in the TS is the rupture of the O1...H1 bond, with an evolution on the order of 79.7%-80.9%. Theoretical results also suggest a three-step mechanism for the formation of phenyl acetate from 2-hydroxyacetophenone.
Entropy and equilibrium via games of complexity
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2004-09-01
It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
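For reference, the standard textbook forms of the two generalized entropies named above are as follows; these definitions come from the general literature, not from this abstract.

```latex
S_q(p) \;=\; \frac{1}{q-1}\Bigl(1-\sum_i p_i^{\,q}\Bigr),
\qquad
S_\kappa(p) \;=\; -\sum_i p_i\,\ln_\kappa p_i,
\qquad
\ln_\kappa x \;=\; \frac{x^{\kappa}-x^{-\kappa}}{2\kappa}.
```

Both reduce to the Shannon entropy $-\sum_i p_i \ln p_i$ in the limits $q \to 1$ and $\kappa \to 0$.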
Theoretical Predictions of Cross-Sections of the Super-Heavy Elements
NASA Astrophysics Data System (ADS)
Bouriquet, B.; Kosenko, G.; Abe, Y.
The evaluation of the residue cross-sections of reactions synthesising superheavy elements has been achieved by the combination of the two-step model for fusion and the evaporation code (KEWPIE) for survival probability. The theoretical scheme of those calculations is presented, and some encouraging results are given, together with some difficulties. With this approach, the measured excitation functions of the 1n reactions producing elements with Z=108, 110, 111 and 112 are well reproduced. Thus, the model has been used to predict the cross-sections of the reactions leading to the formation of the elements with Z=113 and Z=114.
Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bomble, Yannick J; St. John, Peter C; Crowley, Michael F
2017-10-18
A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.
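A minimal sketch of the kind of optimization described above, assuming simple Monod growth kinetics and growth-associated product formation: the feed-rate trajectory is parameterized as a piecewise-constant profile and tuned to maximize end-point productivity. Parameter values, the model equations and the optimizer choice are illustrative, not those of the study.

```python
# Fed-batch productivity sketch: integrate a toy bioreactor model and optimize
# a piecewise-constant feed-rate profile for volumetric productivity.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

MU_MAX, KS, YXS, YPS = 0.4, 0.5, 0.5, 0.4   # Monod/yield parameters (illustrative)
S_FEED, T_END, N_SEG = 200.0, 24.0, 6        # feed concentration, horizon (h), segments

def rhs(t, y, feed):
    V, X, S, P = y
    F = feed[min(int(t / (T_END / N_SEG)), N_SEG - 1)]  # piecewise-constant feed rate
    S = max(S, 0.0)                                      # guard against numerical overshoot
    mu = MU_MAX * S / (KS + S)
    D = F / V
    return [F,
            mu * X - D * X,
            -mu * X / YXS + D * (S_FEED - S),
            (YPS / YXS) * mu * X - D * P]

def neg_productivity(feed):
    sol = solve_ivp(rhs, (0.0, T_END), [1.0, 0.1, 10.0, 0.0],
                    args=(feed,), max_step=0.1)
    V, _, _, P = sol.y[:, -1]
    return -(P * V) / T_END        # total product formed per unit time (maximize)

res = minimize(neg_productivity, x0=np.full(N_SEG, 0.05),
               bounds=[(0.0, 0.5)] * N_SEG, method="L-BFGS-B")
print("optimized feed profile:", np.round(res.x, 3))
print("estimated productivity:", -res.fun)
```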
Theory of the Trojan-Horse Method - From the Original Idea to Actual Applications
NASA Astrophysics Data System (ADS)
Typel, Stefan
2018-01-01
The origin and the main features of the Trojan-horse (TH) method are delineated, starting with the original idea of Gerhard Baur. Basic theoretical considerations, general experimental conditions and possible problems are discussed. Significant steps in experimental studies towards the implementation of the TH method and the development of the theoretical description are presented. This led to the successful application of the TH approach by Claudio Spitaleri and his group to determine low-energy cross sections that are relevant for astrophysics. An outlook on possible future developments is given.
Education and Training for Sustainable Tourism: Problems, Possibilities and Cautious First Steps.
ERIC Educational Resources Information Center
Gough, Stephen; Scott, William
1999-01-01
Advances a possible theoretical approach to education for sustainable tourism and describes a small-scale research project based on this approach. Seeks to integrate education for sustainable tourism into an established management curriculum using an innovative technique based on the idea of an adaptive concept. (Author/CCM)
Addiction, Family Treatment, and Healing Resources: An Interview with David Berenson.
ERIC Educational Resources Information Center
Morgan, Oliver J.
1998-01-01
Interviews Berenson on his distinctive approach to therapy with families and couples affected by addiction and provides references. Considers background and theoretical influences, and changes over time. Discusses the use of "phasing," collaboration with Twelve Step programs, and integration of a spiritual perspective into family and…
Five Secondary Teachers: Creating and Presenting a Teaching Persona
ERIC Educational Resources Information Center
Davis, Janine Schank
2011-01-01
This qualitative study investigates the ways that five secondary teachers developed and presented personae. The researcher collected and analyzed data using a theoretical frame based in social psychology, including Goffman's Presentation of Self in Everyday Life (1959), and Miles and Huberman's (1994) three-step approach to qualitative data…
The Eight-Step Method to Great Group Work
ERIC Educational Resources Information Center
Steward, Sally; Swango, Jill
2004-01-01
Many science teachers already understand the importance of cooperative learning in the classroom and during lab exercises. From a theoretical perspective, students working in groups learn teamwork and discussion techniques as well as how to formulate and ask questions amongst themselves. From a practical standpoint, group work saves precious…
The Teaching of EFL Writing in Indonesia
ERIC Educational Resources Information Center
Ariyanti
2016-01-01
Writing is one of the most important aspects in English language acquisition. Teaching writing has its own challenges since there are some steps and requirements that teachers should prepare to undertake in the classroom. This article is aimed to discuss teaching and learning writing in the classroom based on theoretical conceptualisation. In…
Developing Musical Creativity through Improvisation in the Large Performance Classroom
ERIC Educational Resources Information Center
Norgaard, Martin
2017-01-01
Improvisation is an ideal way to develop musical creativity in ensemble settings. This article describes two prominent theoretical frameworks related to improvisation. Next, based on research with developing and expert improvisers, it discusses how to sequence improvisatory activities so that students feel accomplished at every step. Finally, the…
Collaborative Textbook Selection: A Case Study Leading to Practical and Theoretical Considerations
ERIC Educational Resources Information Center
Czerwionka, Lori; Gorokhovsky, Bridget
2015-01-01
This case study developed a collaborative approach to the selection of a Spanish language textbook. The collaborative process consisted of six steps, detailed in this article: team building, generating evaluation criteria, formulating a meaningful rubric, selecting prospective textbooks, calculating rubric results, and reflectively reviewing…
Assisted Imitation: First Steps in the Seed Model of Language Development
ERIC Educational Resources Information Center
Zukow-Goldring, Patricia
2012-01-01
In this article, I present the theoretical and empirical grounding for the SEED ("situated", culturally "embodied", "emergent", "distributed") model of early language development. A fundamental prerequisite to the emergence of language behavior/communication is a hands-on, active understanding of everyday events. At the heart of this…
Developmental Mathematics and the Lansing Community College Math Lab.
ERIC Educational Resources Information Center
Rotman, Jack W.
Based on an extensive literature search, this paper reviews recent research and theoretical studies and discusses their applicability to Lansing Community College's (LCC's) Mathematics Laboratory. After noting the steps taken in data collection, part I describes LCC and its Math Lab, which offers developmental courses in a self-paced, mastery…
Management Strategies for Promoting Teacher Collective Learning
ERIC Educational Resources Information Center
Cheng, Eric C. K.
2011-01-01
This paper aims to validate a theoretical model for developing teacher collective learning by using a quasi-experimental design, and explores the management strategies that would provide a school administrator practical steps to effectively promote collective learning in the school organization. Twenty aided secondary schools in Hong Kong were…
Networking Theories by Iterative Unpacking
ERIC Educational Resources Information Center
Koichu, Boris
2014-01-01
An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…
Meditative Veil Painting: A Finnish Creative Arts Therapist's Transpersonal Journey
ERIC Educational Resources Information Center
Sky Hiltunen, Sirkku M.
2006-01-01
Anthroposophy has made the spiritual a living experience by producing numerous practical applications, such as veil painting, initially created by Liane Collot d'Herbois (1988). Its theoretical framework has been substantially simplified by the author and crucial meditative and contemplative steps have been added. European and American…
Design and Analysis of Linear Fault-Tolerant Permanent-Magnet Vernier Machines
Xu, Liang; Liu, Guohai; Du, Yi; Liu, Hu
2014-01-01
This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both the PMs and the windings of the proposed machine are on the short mover, while the long stator is manufactured only from iron. Hence, the proposed machine is very suitable for long-stroke system applications. The key feature of this machine is that the magnetizer splits the two movers with modular and complementary structures. Hence, the proposed machine offers an improved symmetrical and sinusoidal back electromotive force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, differing from existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density. This is because neither fault-tolerant teeth nor flux barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis. PMID:24982959
A quasi-Lagrangian finite element method for the Navier-Stokes equations in a time-dependent domain
NASA Astrophysics Data System (ADS)
Lozovskiy, Alexander; Olshanskii, Maxim A.; Vassilevski, Yuri V.
2018-05-01
The paper develops a finite element method for the Navier-Stokes equations of incompressible viscous fluid in a time-dependent domain. The method builds on a quasi-Lagrangian formulation of the problem. The paper provides stability and convergence analysis of the fully discrete (finite-difference in time and finite-element in space) method. The analysis does not assume any CFL time-step restriction; rather, it needs mild conditions of the form $\Delta t \le C$, where $C$ depends only on problem data, and $h^{2m_u+2} \le c\,\Delta t$, where $m_u$ is the polynomial degree of the velocity finite element space. Both conditions result from a numerical treatment of practically important non-homogeneous boundary conditions. The theoretically predicted convergence rate is confirmed by a set of numerical experiments. Further we apply the method to simulate a flow in a simplified model of the left ventricle of a human heart, where the ventricle wall dynamics is reconstructed from a sequence of contrast enhanced Computed Tomography images.
Development and validation of the Cancer Exercise Stereotypes Scale.
Falzon, Charlène; Sabiston, Catherine; Bergamaschi, Alessandro; Corrion, Karine; Chalabaev, Aïna; D'Arripe-Longueville, Fabienne
2014-01-01
The objective of this study was to develop and validate a French-language questionnaire measuring stereotypes related to exercise in cancer patients: The Cancer Exercise Stereotypes Scale (CESS). Four successive steps were carried out with 806 participants. First, a preliminary version was developed on the basis of the relevant literature and qualitative interviews. A test of clarity then led to the reformulation of six of the 30 items. Second, based on the modification indices of the first confirmatory factorial analysis, 11 of the 30 initial items were deleted. A new factorial structure analysis showed a good fit and validated a 19-item instrument with five subscales. Third, the stability of the instrument was tested over time. Last, tests of construct validity were conducted to examine convergent validity and discriminant validity. The French-language CESS appears to have good psychometric qualities and can be used to test theoretical tenets and inform intervention strategies on ways to foster exercise in cancer patients.
NASA Astrophysics Data System (ADS)
de Brito, A. C. F.; Correa, R. S.; Pinto, A. A.; Matos, M. J. S.; Tenorio, J. C.; Taylor, J. G.; Cazati, T.
2018-07-01
Isoxazoles have well-established biological activities but have been underexplored as synthetic intermediates for applications in materials science. The aims of this work are to synthesize a novel isoxazole and to analyze its structural and photophysical properties for applications in organic electronic materials. The novel bis(phenylisoxazolyl)benzene compound was synthesized in four steps and characterized by NMR, high-resolution mass spectrometry, differential thermal analysis, infrared spectroscopy, cyclic voltammetry, ultraviolet-visible spectroscopy, fluorescence spectroscopy, and DFT and TDDFT calculations. The molecule presented optical absorption in the ultraviolet region (from 290 nm to 330 nm), with the absorption maximum centered at 306 nm. The molar extinction coefficients (ε), fluorescence emission spectra and quantum efficiencies in chloroform and dimethylformamide solution were determined. Cyclic voltammetry analysis was carried out to estimate the HOMO energy level, and these properties make it a desirable material for photovoltaic device applications. Finally, the excited-state properties of the present compound were calculated by time-dependent density functional theory (TDDFT).
Analysis of opposed jet hydrogen-air counter flow diffusion flame
NASA Technical Reports Server (NTRS)
Ho, Y. H.; Isaac, K. M.
1989-01-01
A computational simulation of the opposed-jet diffusion flame is performed to study its structure and extinction limits. The present analysis concentrates on the nitrogen-diluted hydrogen-air diffusion flame, which provides basic information for many vehicle designs such as the aerospace plane, for which hydrogen is a candidate fuel. The computer program uses the time-marching technique to solve the energy and species equations coupled with the momentum equation solved by the collocation method. The procedure is implemented in two stages. In the first stage, a one-step forward overall chemical reaction is chosen with the gas-phase chemical reaction rate determined by comparison with experimental data. In the second stage, a complete chemical reaction mechanism is introduced with detailed thermodynamic and transport property calculations. Comparison between experimental extinction data and theoretical predictions is discussed. The effects of thermal diffusion as well as Lewis number and Prandtl number variations on the diffusion flame are also presented.
Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation
NASA Astrophysics Data System (ADS)
Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.
2012-07-01
This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based smartphone, and a compact Hokuyo URG-04LX laser scanner. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the achievable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
A Novel Interdisciplinary Approach to Socio-Technical Complexity
NASA Astrophysics Data System (ADS)
Bassetti, Chiara
The chapter presents a novel interdisciplinary approach that integrates micro-sociological analysis into computer-vision and pattern-recognition modeling and algorithms, the purpose being to tackle socio-technical complexity at a systemic yet micro-grounded level. The approach is empirically grounded and both theoretically and analytically driven, yet systemic and multidimensional, semi-supervised and computable, and oriented towards large-scale applications. The chapter describes the proposed approach especially with regard to its sociological foundations, and as applied to the analysis of a particular setting, i.e. sport-spectator crowds. Crowds, better defined as large gatherings, are almost ever-present in our societies, and capturing their dynamics is crucial. From the social sciences to public safety management and emergency response, modeling and predicting large gatherings' presence and dynamics, thus possibly preventing critical situations and being able to react to them properly, is fundamental. This is where semi-automated technologies can make the difference. The work presented in this chapter is intended as a scientific step towards such an objective.
NASA Technical Reports Server (NTRS)
Johnson, C. B.
1980-01-01
The time-varying effect of nonadiabatic wall conditions on boundary layer properties was studied for a two-dimensional wing section and an axisymmetric fuselage. The wing and fuselage sections are representative of the wing root chord and fuselage of a typical transport model for the National Transonic Facility. The analysis was made with a solid wing and three fuselage configurations (one solid and two hollow with varying skin thicknesses), all made from AISI type 310S stainless steel. The displacement thickness and local skin friction were investigated at a station on the model in terms of the time required for these two boundary layer properties to reach an adiabatic wall condition after a 50 K step change in total temperature. The analysis was made for a free-stream Mach number of 0.85, a total temperature of 117 K, and stagnation pressures of 2, 6, and 9 atm.
Analysis and improvement of the quantum image matching
NASA Astrophysics Data System (ADS)
Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin
2017-11-01
We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area plays a part in the following steps. That is to say, the paper only matched one pixel instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.
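For intuition, a classical exhaustive counterpart of the task discussed above: locating a small image inside a big one by comparing the whole candidate area, not just its upper-left pixel. This is purely illustrative; the quantum networks of the paper are not reproduced here.

```python
# Exhaustive template matching over all placements, scored on the full area.
import numpy as np

def match_area(big, small):
    H, W = big.shape
    h, w = small.shape
    best, best_pos = None, None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            diff = np.abs(big[i:i + h, j:j + w] - small).sum()  # whole-area comparison
            if best is None or diff < best:
                best, best_pos = diff, (i, j)
    return best_pos

big = np.random.randint(0, 256, (64, 64))
small = big[20:28, 33:41].copy()
print(match_area(big, small))   # expected (20, 33)
```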
Trace detection of oxygen--ionic liquids in gas sensor design.
Baltes, N; Beyle, F; Freiner, S; Geier, F; Joos, M; Pinkwart, K; Rabenecker, P
2013-11-15
This paper presents a novel electrochemical membrane sensor based on ionic liquids for trace analysis of oxygen in gaseous atmospheres. The faradaic response currents for the reduction of oxygen, obtained by multiple-potential-step chronoamperometry, could be used for real-time detection of oxygen down to concentrations of 30 ppm. The theoretical limit of detection was 5 ppm. The simple, inexpensive sensors varied in electrolyte composition and demonstrated high sensitivity, a rapid response time and excellent reproducibility at room temperature. Some of them were used continuously for at least one week, and first results promise good long-term stability. Voltammetric, impedance and oxygen detection studies at temperatures up to 200 °C (in the presence and absence of humidity and CO2) also revealed the limitations of certain ionic liquids for some high-temperature electrochemical applications. Application areas of the developed sensors are control and analysis processes for non-oxidative and oxygen-free atmospheres. Copyright © 2013 Elsevier B.V. All rights reserved.
Magnesium replacement in formaldehyde: Theoretical rovibrational analysis of X ∼ 3B1 MgCH2
NASA Astrophysics Data System (ADS)
Bassett, Matthew K.; Fortenberry, Ryan C.
2018-02-01
A full, anharmonic set of fundamental vibrational frequencies as well as spectroscopic constants is provided at a high level of theory for X ∼ 3B1 MgCH2 for the first time. The present data are in line with previous computational and Ar-matrix results, but the anharmonic data show that the two brightest frequencies, ν4 and ν5, are nearly coincident with one another at 560 cm-1. Hence, this is the best spectral region in which to search for signatures of this molecule. The rotational constants are also provided, indicating a near-prolate rotational progression that should aid in microwave/millimeter-wave analysis of this molecule. Magnesium is known to be a significant component of the Earth, and molecules containing it may be more common in the interstellar medium/circumstellar media than previously thought. More spectral characterization of molecules like MgCH2 should be undertaken, and this work is a step in that direction.
Urbanski, John Paul; Levitan, Jeremy A; Burch, Damian N; Thorsen, Todd; Bazant, Martin Z
2007-05-15
Recent numerical and experimental studies have investigated the increase in efficiency of microfluidic ac electro-osmotic pumps by introducing nonplanar geometries with raised steps on the electrodes. In this study, we analyze the effect of the step height on ac electro-osmotic pump performance. AC electro-osmotic pumps with three-dimensional electroplated steps are fabricated on glass substrates and pumping velocities of low ionic strength electrolyte solutions are measured systematically using a custom microfluidic device. Numerical simulations predict an improvement in pump performance with increasing step height, at a given frequency and voltage, up to an optimal step height, which qualitatively matches the trend observed in experiment. For a broad range of step heights near the optimum, the observed flow is much faster than with existing planar pumps (at the same voltage and minimum feature size) and in the theoretically predicted direction of the "fluid conveyor belt" mechanism. For small step heights, the experiments also exhibit significant flow reversal at the optimal frequency, which cannot be explained by the theory, although the simulations predict weak flow reversal at higher frequencies due to incomplete charging. These results provide insight to an important parameter for the design of nonplanar electro-osmotic pumps and clues to improve the fundamental theory of ACEO.
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
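A minimal sketch, loosely in the spirit of the mixed linear/non-linear scheme described above: a non-linear parameter is explored with a random-walk Metropolis sampler while the linearly related parameters are resolved by least squares at each step. The forward model, prior, noise level and data are illustrative placeholders, not those of the paper, and the full marginalization and data-weight estimation are omitted.

```python
# Mixed linear/non-linear Bayesian inversion sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def design_matrix(theta_nl, x):
    # Hypothetical forward model: d = a * exp(-x / theta_nl) + b,
    # linear in (a, b), non-linear in theta_nl.
    return np.column_stack([np.exp(-x / theta_nl), np.ones_like(x)])

x = np.linspace(0.0, 10.0, 50)
d_obs = 2.0 * np.exp(-x / 3.0) + 0.5 + rng.normal(0.0, 0.05, x.size)
sigma = 0.05

def log_likelihood(theta_nl):
    G = design_matrix(theta_nl, x)
    m_lin, *_ = np.linalg.lstsq(G, d_obs, rcond=None)   # analytical linear step
    r = d_obs - G @ m_lin
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis over the single non-linear parameter.
theta, logL = 1.0, log_likelihood(1.0)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.2)
    if prop > 0.0:
        logL_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < logL_prop - logL:
            theta, logL = prop, logL_prop
    samples.append(theta)

print("posterior mean of non-linear parameter:", np.mean(samples[1000:]))
```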
Escobedo-González, René; Méndez-Albores, Abraham; Villarreal-Barajas, Tania; Aceves-Hernández, Juan Manuel; Miranda-Ruvalcaba, René; Nicolás-Vázquez, Inés
2016-01-01
Theoretical studies of 8-chloro-9-hydroxy-aflatoxin B1 (2) were carried out by Density Functional Theory (DFT). This molecule is the reaction product of the treatment of aflatoxin B1 (1) with hypochlorous acid from neutral electrolyzed water. Determination of the structural, electronic and spectroscopic properties of the reaction product allowed its theoretical characterization. In order to elucidate the formation process of 2, two reaction pathways were evaluated: the first considering only ionic species (Cl+ and OH−) and the second taking into account the entire hypochlorous acid molecule (HOCl). Both pathways were studied theoretically in the gas and solution phases. In the first suggested pathway, the reaction involves the addition of a chlorenium ion to 1, forming a non-classical carbocation assisted by the anchimeric effect of the nearest aromatic system, followed by a nucleophilic attack on the intermediate by the hydroxide ion. In the second pathway, the first step is the attack of the double bond of the furanic moiety of 1 on the hypochlorous acid, arriving at the same non-classical carbocation, again followed in the second step by a nucleophilic attack by the hydroxide ion. In order to validate both reaction pathways, the atomic charges, the highest occupied molecular orbital and the lowest unoccupied molecular orbital were obtained for both substrate and product. The corresponding data imply that the C9 atom is the most suitable site of the substrate for interaction with the hydroxide ion. It was demonstrated by theoretical calculations that a vicinal and anti chlorohydrin is produced in the terminal furan ring. Data for the studied compound indicate an important reduction in the cytotoxic and genotoxic potential of the target molecule, as demonstrated previously by our research group using different in vitro assays. PMID:27455324
Mechanisms of sampling interstitial fluid from skin using a microneedle patch.
Samant, Pradnya P; Prausnitz, Mark R
2018-05-01
Although interstitial fluid (ISF) contains biomarkers of physiological significance and medical interest, sampling of ISF for clinical applications has made limited impact due to a lack of simple, clinically useful techniques that collect more than nanoliter volumes of ISF. This study describes experimental and theoretical analysis of ISF transport from skin using microneedle (MN) patches and demonstrates collection of >1 µL of ISF within 20 min in pig cadaver skin and living human subjects using an optimized system. MN patches containing arrays of submillimeter solid, porous, or hollow needles were used to penetrate superficial skin layers and access ISF through micropores (µpores) formed upon insertion. Experimental studies in pig skin found that ISF collection depended on transport mechanism according to the rank order diffusion < capillary action < osmosis < pressure-driven convection, under the conditions studied. These findings were in agreement with independent theoretical modeling that considered transport within skin, across the interface between skin and µpores, and within µpores to the skin surface. This analysis indicated that the rate-limiting step for ISF sampling is transport through the dermis. Based on these studies and other considerations like safety and convenience for future clinical use, we designed an MN patch prototype to sample ISF using suction as the driving force. Using this approach, we collected ISF from human volunteers and identified the presence of biomarkers in the collected ISF. In this way, sampling ISF from skin using an MN patch could enable collection of ISF for use in research and medicine.
González-García, C; Tudela, P; Ruz, M
2014-04-01
The use of functional magnetic resonance imaging (fMRI) has represented an important step forward for the neurosciences. Nevertheless, it has also been subject to considerable criticism. The aim is to review the most widespread criticism levelled against fMRI, so that researchers who are starting to use it know which elements must be taken into account for a sound approach to the technique. The fact that fMRI allows brain activity to be observed makes it a very attractive and useful tool, and its use has grown exponentially since the last decade of the 20th century. At the same time, criticism of its use has become especially fierce. Most of this scepticism can be classified into aspects related to the technique and physiology, the analysis of data, and their theoretical interpretation. In this study we review the main arguments advanced in each of these three areas, as well as whether they are well founded. Additionally, this work is intended as a reference for new researchers when it comes to identifying elements that must be taken into account as they approach fMRI. Although fMRI is one of the most interesting options for observing the brain available today, its correct utilisation requires a great deal of control and knowledge. Even so, most of the criticism it receives today no longer has any solid foundation on which to stand.
Formal linguistics as a cue to demographic history.
Longobardi, Giuseppe; Ceolin, Andrea; Ecay, Aaron; Ghirotto, Silvia; Guardiano, Cristina; Irimia, Monica-Alexandrina; Michelioudakis, Dimitris; Radkevich, Nina; Pettener, Davide; Luiselli, Donata; Barbujani, Guido
2016-06-20
Beyond its theoretical success, the development of molecular genetics has brought about the possibility of extraordinary progress in the study of classification and in the inference of the evolutionary history of many species and populations. A major step forward was represented by the availability of extremely large sets of molecular data suited to quantitative and computational treatments. In this paper, we argue that even in cognitive sciences, purely theoretical progress in a discipline such as linguistics may have analogous impact. Thus, exactly on the model of molecular biology, we propose to unify two traditionally unrelated lines of linguistic investigation: 1) the formal study of syntactic variation (parameter theory) in the biolinguistic program; 2) the reconstruction of relatedness among languages (phylogenetic taxonomy). The results of our linguistic analysis have thus been plotted against data from population genetics and the correlations have turned out to be largely significant: given a non-trivial set of languages/populations, the description of their variation provided by the comparison of systematic parametric analysis and molecular anthropology informatively recapitulates their history and relationships. As a result, we can claim that the reality of some parametric model of the language faculty and language acquisition/transmission (more broadly of generative grammar) receives strong and original support from its historical heuristic power. Then, on these grounds, we can begin testing Darwin's prediction that, when properly generated, the trees of human populations and of their languages should eventually turn out to be significantly parallel.
Ranking and validation of spallation models for isotopic production cross sections of heavy residua
NASA Astrophysics Data System (ADS)
Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef
2017-07-01
The production cross sections of isotopically identified residual nuclei of spallation reactions induced by 136Xe projectiles at 500 AMeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models compared to that obtained from the qualitative inspection of the data reproduction. The disagreement was caused by the sensitivity of the deviation factors to large statistical errors present in some of the data. A new deviation factor, the A-factor, was proposed that is not sensitive to the statistical errors of the cross sections. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions when the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.
Child development: analysis of a new concept
de Souza, Juliana Martins; Veríssimo, Maria de La Ó Ramallo
2015-01-01
Objectives: to perform concept analysis of the term child development (CD) and submit it to review by experts. Method: analysis of concept according to the hybrid model, in three phases: theoretical phase, with literature review; field phase of qualitative research with professionals who care for children; and analytical phase, of articulation of data from previous steps, based on the bioecological theory of development. The new definition was analyzed by experts in a focus group. Project approved by the Research Ethics Committee. Results: we reviewed 256 articles, from 12 databases and books, and interviewed 10 professionals, identifying that: The CD concept has as antecedents aspects of pregnancy, factors of the child, factors of context, highlighting the relationships and child care, and social aspects; its consequences can be positive or negative, impacting on society; its attributes are behaviors and abilities of the child; its definitions are based on maturation, contextual perspectives or both. The new definition elaborated in concept analysis was validated by nine experts in focus group. It expresses the magnitude of the phenomenon and factors not presented in other definitions. Conclusion: the research produced a new definition of CD that can improve nursing classifications for the comprehensive care of the child. PMID:26626001
[Model of Analysis and Prevention of Accidents - MAPA: tool for operational health surveillance].
de Almeida, Ildeberto Muniz; Vilela, Rodolfo Andrade de Gouveia; da Silva, Alessandro José Nunes; Beltran, Sandra Lorena
2014-12-01
The analysis of work-related accidents is important for accident surveillance and prevention. Current methods of analysis seek to overcome reductionist views that see these occurrences as simple events explained by operator error. The objective of this paper is to analyze the Model of Analysis and Prevention of Accidents (MAPA) and its use in monitoring interventions, duly highlighting aspects experienced in the use of the tool. The descriptive analytical method was used, introducing the steps of the model. To illustrate contributions and/or difficulties, cases where the tool was used in the context of service were selected. MAPA integrates theoretical approaches that have already been tried in studies of accidents by providing useful conceptual support from the data collection stage through to the conclusion and intervention stages. Besides revealing weaknesses of the traditional approach, it helps identify organizational determinants, such as management failings, system design and safety management involved in the accident. The main challenges lie in the grasp of concepts by users, in exploring organizational aspects upstream in the chain of decisions or at higher levels of the hierarchy, as well as the intervention to change the determinants of these events.
[Causal analysis approaches in epidemiology].
Dumas, O; Siroux, V; Le Moual, N; Varraso, R
2014-02-01
Epidemiological research is mostly based on observational studies. Whether such studies can provide evidence of causation remains discussed. Several causal analysis methods have been developed in epidemiology. This paper aims at presenting an overview of these methods: graphical models, path analysis and its extensions, and models based on the counterfactual approach, with a special emphasis on marginal structural models. Graphical approaches have been developed to allow synthetic representations of supposed causal relationships in a given problem. They serve as qualitative support in the study of causal relationships. The sufficient-component cause model has been developed to deal with the issue of multicausality raised by the emergence of chronic multifactorial diseases. Directed acyclic graphs are mostly used as a visual tool to identify possible confounding sources in a study. Structural equations models, the main extension of path analysis, combine a system of equations and a path diagram, representing a set of possible causal relationships. They allow quantifying direct and indirect effects in a general model in which several relationships can be tested simultaneously. Dynamic path analysis further takes into account the role of time. The counterfactual approach defines causality by comparing the observed event and the counterfactual event (the event that would have been observed if, contrary to the fact, the subject had received a different exposure than the one he actually received). This theoretical approach has shown limits of traditional methods to address some causality questions. In particular, in longitudinal studies, when there is time-varying confounding, classical methods (regressions) may be biased. Marginal structural models have been developed to address this issue. In conclusion, "causal models", though they were developed partly independently, are based on equivalent logical foundations. A crucial step in the application of these models is the formulation of causal hypotheses, which will be a basis for all methodological choices. Beyond this step, statistical analysis tools recently developed offer new possibilities to delineate complex relationships, in particular in life course epidemiology. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
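A minimal sketch of one of the counterfactual-based approaches mentioned above: a marginal structural model fitted with inverse-probability-of-treatment weights (IPTW), here for a single time point with synthetic data. A real longitudinal MSM with time-varying confounding would use cumulative, time-varying weights; the data-generating values are illustrative only.

```python
# IPTW / marginal structural model sketch on synthetic single-time-point data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
L = rng.normal(size=n)                                  # confounder
p_A = 1.0 / (1.0 + np.exp(-(0.8 * L)))                  # exposure depends on L
A = rng.binomial(1, p_A)
Y = 1.0 * A + 1.5 * L + rng.normal(size=n)              # outcome; true causal effect = 1

# Step 1: model exposure given the confounder to build stabilized weights.
ps_model = sm.Logit(A, sm.add_constant(L)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(L))
sw = np.where(A == 1, A.mean() / ps, (1 - A.mean()) / (1 - ps))

# Step 2: weighted regression of the outcome on exposure alone (the MSM).
msm = sm.WLS(Y, sm.add_constant(A), weights=sw).fit()
print("IPTW estimate of the causal effect:", msm.params[1])
```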
Introduction to bioinformatics.
Can, Tolga
2014-01-01
Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
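A minimal sketch of one classical sequence-analysis task mentioned above: scoring a global pairwise alignment by dynamic programming (Needleman-Wunsch). The match/mismatch/gap scores are illustrative.

```python
# Needleman-Wunsch global alignment score via a dynamic-programming table.
import numpy as np

def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    score = np.zeros((m + 1, n + 1))
    score[:, 0] = gap * np.arange(m + 1)     # leading gaps in b
    score[0, :] = gap * np.arange(n + 1)     # leading gaps in a
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i, j] = max(diag, score[i - 1, j] + gap, score[i, j - 1] + gap)
    return score[m, n]

print(global_alignment_score("GATTACA", "GCATGCU"))
```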
Delaminated graphene at silicon carbide facets: atomic scale imaging and spectroscopy.
Nicotra, Giuseppe; Ramasse, Quentin M; Deretzis, Ioannis; La Magna, Antonino; Spinella, Corrado; Giannazzo, Filippo
2013-04-23
Atomic-resolution structural and spectroscopic characterization techniques (scanning transmission electron microscopy and electron energy loss spectroscopy) are combined with nanoscale electrical measurements (conductive atomic force microscopy) to study at the atomic scale the properties of graphene grown epitaxially through the controlled graphitization of a hexagonal SiC(0001) substrate by high temperature annealing. This growth technique is known to result in a pronounced electron-doping (∼10^13 cm^-2) of graphene, which is thought to originate from an interface carbon buffer layer strongly bound to the substrate. The scanning transmission electron microscopy analysis, carried out at an energy below the knock-on threshold for carbon to ensure no damage is imparted to the film by the electron beam, demonstrates that the buffer layer present on the planar SiC(0001) face delaminates from it on the (112n) facets of SiC surface steps. In addition, electron energy loss spectroscopy reveals that the delaminated layer has a similar electronic configuration to purely sp2-hybridized graphene. These observations are used to explain the local increase of the graphene sheet resistance measured around the surface steps by conductive atomic force microscopy, which we suggest is due to significantly lower substrate-induced doping and a resonant scattering mechanism at the step regions. A first-principles-calibrated theoretical model is proposed to explain the structural instability of the buffer layer on the SiC facets and the resulting delamination.
Dissociative Ionization and Product Distributions of Benzene and Pyridine by Electron Impact
NASA Technical Reports Server (NTRS)
Dateo, Christopher E.; Huo, Winifred M.; Fletcher, Graham D.
2003-01-01
We report a theoretical study of the dissociative ionization (DI) and product distributions of benzene (C6H6) and pyridine (C5H5N) from their low-lying ionization channels. Our approach makes use of the fact that electronic motion is much faster than nuclear motion, allowing DI to be treated as a two-step process. The first step is the electron-impact ionization resulting in an ion with the same nuclear geometry as the neutral molecule. In the second step, the nuclei relax from the initial geometry and undergo unimolecular dissociation. For the ionization process we use the improved binary-encounter dipole (iBED) model [W.M. Huo, Phys. Rev. A64,042719-I (2001)]. For the unimolecular dissociation, we use multiconfigurational self-consistent field (MCSCF) methods to determine the steepest descent pathways to the possible product channels. More accurate methods are then used to obtain better energetics of the paths, which are used to determine unimolecular dissociation probabilities and product distributions. Our analysis of the dissociation products and the thresholds of their production for benzene are compared with the recent dissociative photoionization measurements of benzene by Feng et al. [R. Feng, G. Cooper, C.E. Brion, J. Electron Spectrosc. Relat. Phenom. 123,211 (2002)] and the dissociative photoionization measurements of pyridine by Tixier et al. [S. Tixier, G. Cooper, R. Feng, C.E. Brion, J. Electron Spectrosc. Relat. Phenom. 123,185 (2002)] using dipole (e,e+ion) coincidence spectroscopy.
Physically representative atomistic modeling of atomic-scale friction
NASA Astrophysics Data System (ADS)
Dong, Yalin
Nanotribology is a research field studying friction, adhesion, wear and lubrication occurring between two sliding interfaces at the nanoscale. This study is motivated by the demand for miniaturized mechanical components in Micro Electro Mechanical Systems (MEMS), improved durability in magnetic storage systems, and other industrial applications. Overcoming tribological failure and finding ways to control friction at small scales have become keys to commercializing MEMS with sliding components as well as to stimulating the technological innovation associated with the development of MEMS. In addition to the industrial applications, such research is also scientifically fascinating because it opens a door to understanding macroscopic friction from the atomic level up, and therefore serves as a bridge between science and engineering. This thesis focuses on solid/solid atomic friction and its associated energy dissipation through theoretical analysis, atomistic simulation, transition state theory, and close collaboration with experimentalists. Reduced-order models have many advantages owing to their simplicity and capacity to simulate long-time events. We apply Prandtl-Tomlinson models and their extensions to interpret dry atomic-scale friction. We begin with the fundamental equations and build on them step by step, from the simple quasistatic one-spring, one-mass model for predicting transitions between friction regimes to the two-dimensional and multi-atom models for describing the effect of contact area. Theoretical analysis, numerical implementation, and predicted physical phenomena are all discussed. In the process, we demonstrate the significant potential for this approach to yield new fundamental understanding of atomic-scale friction. Atomistic modeling can hardly be overemphasized in the investigation of atomic friction, in which each single atom could play a significant role that is hard to capture experimentally. In atomic friction, the interesting physical process is buried between the two contacting interfaces, which makes a direct measurement more difficult. Atomistic simulation is able to reproduce the process with the dynamic information of each single atom, and therefore provides valuable interpretations for experiments. Here, we systematically apply Molecular Dynamics (MD) simulation to optimally model the Atomic Force Microscopy (AFM) measurement of atomic friction. Furthermore, we also employ molecular dynamics simulation to correlate the atomic dynamics with the friction behavior observed in experiments. For instance, ParRep dynamics (an accelerated molecular dynamics technique) is introduced to investigate the velocity dependence of atomic friction; we also employ MD simulation to "see" how the reconstruction of the gold surface modulates friction, and the friction enhancement mechanism at a graphite step edge. Atomic stick-slip friction can be treated as a rate process. Instead of running a direct simulation of the process, we can apply transition state theory to predict its properties. We give a rigorous derivation of the velocity and temperature dependence of friction based on the Prandtl-Tomlinson model as well as transition state theory, obtaining a more accurate relation for predicting the velocity and temperature dependence.
Furthermore, we have included the instrumental noise inherent in AFM measurements to interpret two experimental observations: the suppression of friction at low temperature and the discrepancy in attempt frequency between AFM measurements and theoretical predictions. We also discuss the possibility of treating wear as a rate process.
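A minimal sketch of the one-dimensional Prandtl-Tomlinson model discussed above: a tip dragged by a spring over a sinusoidal surface potential, integrated in time to produce stick-slip motion and an average lateral (friction) force. All parameter values are illustrative, not taken from the thesis.

```python
# 1D Prandtl-Tomlinson model: m*x'' = -dU/dx - k*(x - v*t) - m*gamma*x',
# with U(x) = -U0*cos(2*pi*x/a).
import numpy as np
from scipy.integrate import solve_ivp

A_LAT = 0.25e-9        # lattice period (m)
U0 = 0.5 * 1.602e-19   # corrugation amplitude (J), ~0.5 eV
K = 2.0                # lateral spring stiffness (N/m)
M = 1.0e-12            # effective tip mass (kg)
GAMMA = 1.0e6          # damping rate (1/s)
V = 1.0e-6             # support velocity (m/s)

def rhs(t, y):
    x, xdot = y
    f_surface = -(2.0 * np.pi * U0 / A_LAT) * np.sin(2.0 * np.pi * x / A_LAT)
    f_spring = -K * (x - V * t)
    return [xdot, (f_surface + f_spring) / M - GAMMA * xdot]

t_end = 5 * A_LAT / V                        # drag over five lattice periods
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=1e-7)
lateral_force = K * (V * sol.t - sol.y[0])   # instantaneous spring (friction) force
print("mean friction force (nN):", 1e9 * np.mean(lateral_force))
```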
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel-alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
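A minimal sketch of one of the algorithmic ideas mentioned above, an adjustable relaxation coefficient, applied here to a 1D Poisson problem solved by successive over-relaxation (SOR). The adaptation rule and the test problem are illustrative and far simpler than the full 3D Poisson-Nernst-Planck solver.

```python
# SOR for -phi'' = rho on (0,1) with phi(0)=phi(1)=0, with a crude
# residual-driven adjustment of the relaxation coefficient omega.
import numpy as np

n = 200
h = 1.0 / (n + 1)
rho = np.ones(n)                 # source term
phi = np.zeros(n + 2)            # potential including Dirichlet boundaries
omega = 1.5                      # initial relaxation coefficient

prev_res = np.inf
for sweep in range(5000):
    for i in range(1, n + 1):
        gs = 0.5 * (phi[i - 1] + phi[i + 1] + h * h * rho[i - 1])
        phi[i] += omega * (gs - phi[i])      # over-relaxed Gauss-Seidel update
    res = np.max(np.abs(phi[1:n + 1] - 0.5 * (phi[0:n] + phi[2:n + 2] + h * h * rho)))
    # shrink omega if the residual grew, otherwise push it towards its upper bound
    omega = max(1.0, omega * 0.9) if res > prev_res else min(1.95, omega * 1.01)
    prev_res = res
    if res < 1e-10:
        break

print("converged after", sweep + 1, "sweeps, omega =", round(omega, 3))
```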
The challenge of identifying greenhouse gas-induced climatic change
NASA Technical Reports Server (NTRS)
Maccracken, Michael C.
1992-01-01
Meeting the challenge of identifying greenhouse gas-induced climatic change involves three steps. First, observations of critical variables must be assembled, evaluated, and analyzed to determine that there has been a statistically significant change. Second, reliable theoretical (model) calculations must be conducted to provide a definitive set of changes for which to search. Third, a quantitative and statistically significant association must be made between the projected and observed changes to exclude the possibility that the changes are due to natural variability or other factors. This paper provides a qualitative overview of scientific progress in successfully fulfilling these three steps.
NASA Astrophysics Data System (ADS)
Borne, Adrien; Katsura, Tomotaka; Félix, Corinne; Doppagne, Benjamin; Segonds, Patricia; Bencheikh, Kamel; Levenson, Juan Ariel; Boulanger, Benoit
2016-01-01
Several third-harmonic generation processes were performed in a single step-index germanium-doped silica optical fiber under intermodal phase-matching conditions. The nanosecond fundamental beam ranged between 1400 and 1600 nm. The transverse distributions of the energy were successfully modeled in the form of Ince-Gauss modes, pointing out some ellipticity of the fiber core. From these experiments and theoretical calculations, we discuss the implementation of frequency-degenerate triple-photon generation, which shares the same phase-matching condition as third-harmonic generation, its reverse process.
First characterization of a static Fourier transform spectrometer
NASA Astrophysics Data System (ADS)
Lacan, A.; Bréon, F.-M.; Rosak, A.; Pierangelo, C.
2017-11-01
A new instrument concept for a Static Fourier Transform Spectrometer has been developed and characterized by CNES. This spectrometer is based on a Michelson interferometer concept, but a system of stepped mirrors generates all interference path differences simultaneously, without any moving parts. The instrument permits high spectral resolution measurements (≈0.1 cm-1) adapted to the sounding and the monitoring of atmospheric gases. Moreover, its overall dimensions are compatible with a micro-satellite platform. The stepped mirrors are glued using a molecular bonding technique. An interference filter selects a waveband only a few nanometers wide. It limits the number of sampling points (and consequently the number of steps) necessary to achieve the high resolution. The instrument concept can be optimized for the detection and the monitoring of various atmospheric constituents. CNES has developed a version whose measurements are centered on the CO2 absorption lines at 1573 nm (6357 cm-1). This model has a theoretical resolution of 40 pm (0.15 cm-1) within a 5 nm (22.5 cm-1) wide spectral window. It is aimed at the feasibility demonstration for atmospheric CO2 column measurements with a very demanding accuracy of better than 1%. Preliminary measurements indicate that, although high quality spectra are obtained, the theoretical performance is not yet achieved. We discuss the causes of the achieved performance and describe methods foreseen for its improvement.
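A back-of-envelope check of the sampling argument above, assuming the ideal Fourier-transform-spectrometer relations (resolution ~ 1 / (2 × max OPD), Nyquist step ~ 1 / (2 × optical bandwidth)); the numbers are only indicative, not the instrument's actual design values.

```python
# Rough consistency check between resolution, band-limiting filter and step count.
resolution_cm1 = 0.15          # targeted spectral resolution (cm^-1)
bandwidth_cm1 = 22.5           # width of the filtered band (cm^-1)

max_opd_cm = 1.0 / (2.0 * resolution_cm1)   # ~3.3 cm of maximum path difference
step_cm = 1.0 / (2.0 * bandwidth_cm1)       # ~220 um sampling step for the filtered band
n_samples = max_opd_cm / step_cm            # ~150 mirror steps

print(f"max OPD ~ {max_opd_cm:.2f} cm, step ~ {step_cm * 1e4:.0f} um, "
      f"samples ~ {n_samples:.0f}")
```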
Liu, Yue-Hong; Wu, Zheng-Yun; Yang, Jian; Yuan, Yu-Ju; Zhang, Wen-Xue
2014-01-01
Distillers' grains are a co-product of ethanol production. In China, only a small portion of distillers' grains has been used to feed livestock because the quantity produced is so large. It has been reported that distillers' grains have potential for fuel ethanol production because they are composed of lignocelluloses and residual starch. In order to effectively convert distillers' grains to fuel ethanol and other valuable products, sodium hydroxide pretreatment, step-by-step enzymatic hydrolysis, and simultaneous saccharification and fermentation (SSF) were investigated. The residual starch was first recovered from wet distillers' grains (WDG) with glucoamylase to obtain a glucose-rich liquid. The total sugar concentration was 21.3 g/L, and 111.9% of the theoretical starch was hydrolyzed. The starch-depleted dry distillers' grains (RDDG) were then pretreated with NaOH under optimal conditions, and the pretreated dry distillers' grains (PDDG) were used for xylanase hydrolysis. The xylose concentration was 19.4 g/L and 68.6% of the theoretical xylose was hydrolyzed. The cellulose-enriched dry distillers' grains (CDDG) obtained from xylanase hydrolysis were used in SSF for ethanol production. The ethanol concentration was 42.1 g/L and the ethanol productivity was 28.7 g/100 g CDDG. Overall, approximately 80.6% of the fermentable sugars in WDG were converted to ethanol.
Transmission of a detonation across a density interface
NASA Astrophysics Data System (ADS)
Tang Yuk, K. C.; Mi, X. C.; Lee, J. H. S.; Ng, H. D.
2018-05-01
The present study investigates the transmission of a detonation wave across a density interface. The problem is first studied theoretically considering an incident Chapman-Jouguet (CJ) detonation wave, neglecting its detailed reaction-zone structure. It is found that, if there is a density decrease at the interface, a transmitted strong detonation wave and a reflected expansion wave are formed; if there is a density increase, one obtains a transmitted CJ detonation wave followed by an expansion wave and a reflected shock wave. Numerical simulations are then performed considering that the incident detonation has the Zel'dovich-von Neumann-Döring reaction-zone structure. The transient process that follows the detonation-interface interaction is captured by the simulations. The effects of the magnitude of the density change across the interface and of different reaction kinetics (i.e., single-step Arrhenius kinetics vs. two-step induction-reaction kinetics) on the dynamics of the transmission process are explored. After the transient relaxation process, the transmitted wave reaches the final state in the new medium. For the cases with two-step induction-reaction kinetics, the transmitted wave fails to evolve to a steady detonation wave if the magnitude of the density increase is greater than a critical value. For the cases wherein the transmitted wave can evolve to a steady detonation, the numerical results for both reaction models give final propagation states that agree with the theoretical solutions.
He, Shu; Yan, Guozheng; Wang, Zhiwu; Gao, Jinyang; Yang, Kai
2015-07-01
Robotic endoscopes with locomotion ability are among the most promising alternatives to traditional endoscopes; the locomotion ability is an important factor when evaluating the performance of the robot. This article describes the research on the characteristics of an expanding-extending robotic endoscope's locomotion efficiency in real intestine and explores an approach to improve the locomotion ability in this environment. In the article, the robot's locomotion efficiency was first calculated according to its gait in the gut, and the reasons for step losses were analyzed. Next, dynamical models of the robot and the intestine were built to calculate the step losses caused by failed anchoring and intestinal compression/extension. Based on the models and the calculation results, methods for reducing step losses were proposed. Finally, a series of ex vivo experiments were carried out, and the actual locomotion efficiency of the robot was analyzed on the basis of the theoretical models. In the experiment, on a level platform, the locomotion efficiency of the robot varied between 34.2% and 63.7%; the speed of the robot varied between 0.62 and 1.29 mm/s. The robot's efficiency when climbing a sloping intestine was also tested and analyzed. The proposed theoretical models and experimental results provide a good reference for improving the design of robotic endoscopy. © IMechE 2015.
NASA Astrophysics Data System (ADS)
Braun, Jürgen; Minár, Ján; Ebert, Hubert
2018-04-01
Various apparative developments have extended the potential of angle-resolved photoemission spectroscopy tremendously during the last two decades. Modern experimental arrangements consisting of new photon sources, analyzers and detectors supply not only extremely high angle and energy resolution but also spin resolution. This provides an adequate platform to study in detail new materials like low-dimensional magnetic structures, Rashba systems, topological insulator materials or high-Tc superconductors. The interest in such systems has grown enormously not only because of their technological relevance but even more because of exciting new physics. Furthermore, the use of photon energies from a few eV up to several keV makes this experimental technique a rather unique tool to investigate the electronic properties of solids and surfaces. The following article reviews the corresponding recent theoretical developments in the field of angle-resolved photoemission with a special emphasis on correlation effects, temperature and relativistic aspects. The most successful theoretical approach to deal with angle-resolved photoemission is the so-called spectral function or one-step formulation of the photoemission process. Nowadays, the one-step model allows for photocurrent calculations for photon energies ranging from a few eV to more than 10 keV, for arbitrarily ordered and disordered systems, accounting for finite temperatures and, in addition, for strong correlation effects within dynamical mean-field theory or similar advanced approaches.
Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi
2008-10-01
Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥3) in complex occlusion for real-world surveillance scenarios.
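A minimal sketch of the dominant-color-histogram idea follows: quantize the object's pixels into color bins and keep only the most frequent bins as the compact object model. The quantization scheme and selection rule here are our simplified reading of the concept, not the authors' implementation; only the figure of roughly 31 dominant colors is taken from the abstract.

```python
import numpy as np

def dominant_color_histogram(pixels: np.ndarray, bins_per_channel: int = 8,
                             n_dominant: int = 31) -> dict:
    """Quantize RGB pixels and keep only the most frequent color bins.

    `pixels` is an (N, 3) uint8 array.  The bin count of 31 mirrors the average
    number of dominant colors reported in the abstract; the quantization and
    selection rule are illustrative assumptions.
    """
    q = (pixels // (256 // bins_per_channel)).astype(np.int32)
    codes = q[:, 0] * bins_per_channel**2 + q[:, 1] * bins_per_channel + q[:, 2]
    values, counts = np.unique(codes, return_counts=True)
    order = np.argsort(counts)[::-1][:n_dominant]
    total = counts.sum()
    # Normalized weights of the dominant bins form the compact object model.
    return {int(values[i]): counts[i] / total for i in order}

# Usage (hypothetical): hist = dominant_color_histogram(object_region.reshape(-1, 3))
```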
A Finite Element Procedure for Calculating Fluid-Structure Interaction Using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Chargin, Mladen; Gartmeier, Otto
1990-01-01
This report is intended to serve two purposes. The first is to present a survey of the theoretical background of the dynamic interaction between an inviscid, compressible fluid and an elastic structure. Section one presents a short survey of the application of the finite element method (FEM) to the area of fluid-structure interaction (FSI). Section two describes the mathematical foundation of the structure and the fluid, with special emphasis on the fluid. The main steps in establishing the finite element (FE) equations for the fluid-structure coupling are discussed in section three. The second purpose is to demonstrate the application of MSC/NASTRAN to the solution of FSI problems. Some specific topics, such as the fluid-structure analogy, acoustic absorption, and acoustic contribution analysis, are described in section four. Section five deals with the organization of the acoustic procedure flowchart. Section six includes the most important information that a user needs for applying the acoustic procedure to practical FSI problems. Beginning with some rules concerning the FE modeling of the coupled system, the NASTRAN USER DECKs for the different steps are described. The goal of section seven is to demonstrate the use of the acoustic procedure with some examples. This demonstration includes an analytic verification of selected FE results. The analytical description considers only some aspects of FSI and is not intended to be mathematically complete. Finally, section eight presents an application of the acoustic procedure to vehicle interior acoustic analysis with selected results.
A novel algorithm for validating peptide identification from a shotgun proteomics search engine.
Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J
2013-03-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
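A schematic sketch of the middle step (an SVM decision function over a few search-engine scoring attributes) is shown below. The feature names, the toy training data, and the scikit-learn pipeline are placeholders of our own; the published De-Noise algorithm may differ in its features, kernel, and training procedure.

```python
# Schematic sketch of an SVM-based refining step for peptide-spectrum matches.
# Feature values and labels are invented; this is not the published algorithm.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: a peptide-spectrum match described by a few scoring attributes
# (e.g. a correlation score, a delta score, and a rank -- hypothetical here).
X_train = np.array([[3.1, 0.45, 1], [1.2, 0.05, 3], [2.8, 0.30, 1], [0.9, 0.02, 5]])
y_train = np.array([1, 0, 1, 0])  # 1 = correct match, 0 = incorrect (e.g. decoy)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

X_new = np.array([[2.5, 0.25, 2]])
print("accepted" if clf.predict(X_new)[0] == 1 else "rejected")
```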
WE-G-BRA-06: Application of Systems and Control Theory-Based Hazard Analysis to Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawlicki, T; Samost, A; Leveson, N
Purpose: The process of delivering radiation occurs in a complex socio-technical system heavily reliant on human operators. Furthermore, both humans and software are notoriously challenging to account for in traditional hazard analysis models. High reliability industries such as aviation have approached this problem through using hazard analysis techniques grounded in systems and control theory. The purpose of this work is to apply the Systems Theoretic Accident Model and Processes (STAMP) hazard model to radiotherapy. In particular, the System-Theoretic Process Analysis (STPA) approach is used to perform a hazard analysis of a proposed on-line adaptive cranial radiosurgery procedure that omits the CT simulation step and uses only CBCT for planning, localization, and treatment. Methods: The STPA procedure first requires the definition of high-level accidents and hazards leading to those accidents. From there, hierarchical control structures were created, followed by the identification and description of control actions for each control structure. Utilizing these control structures, unsafe states of each control action were created. Scenarios contributing to unsafe control action states were then identified and translated into system requirements to constrain process behavior within safe boundaries. Results: Ten control structures were created for this new CBCT-only process, which covered the areas of hospital and department management, treatment design and delivery, and vendor service. Twenty-three control actions were identified that contributed to over 80 unsafe states of those control actions, resulting in over 220 failure scenarios. Conclusion: The interaction of people, hardware, and software is highlighted through the STPA approach. STPA provides a hierarchical model for understanding the role of management decisions in impacting system safety so that a process design requirement can be traced back to the hazard and accident that it is intended to mitigate. Varian Medical Systems, Inc.
Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces
NASA Astrophysics Data System (ADS)
Abu-Alqumsan, Mohammad; Peer, Angelika
2016-06-01
Objective. Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. Approach. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. Main results. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Significance. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
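A minimal sketch of the standard CCA-based SSVEP detection that the analysis takes as its basis is given below: correlate the multichannel EEG with sine/cosine references at each candidate stimulation frequency and pick the frequency with the largest canonical correlation. The channel count, sampling rate, harmonics, and frequencies are illustrative assumptions; CVARS itself is not reproduced here.

```python
# Minimal sketch of CCA-based SSVEP detection (the common baseline, not CVARS).
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg: np.ndarray, freq: float, fs: float) -> float:
    """Largest canonical correlation between EEG (samples x channels) and
    sine/cosine references at `freq` and its second harmonic."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * h * freq * t)
                           for h in (1, 2) for f in (np.sin, np.cos)])
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Hypothetical usage with 8-channel EEG sampled at 256 Hz:
# detected = max((10.0, 12.0, 15.0), key=lambda f: cca_score(eeg, f, 256.0))
```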
Application of systems and control theory-based hazard analysis to radiation oncology.
Pawlicki, Todd; Samost, Aubrey; Brown, Derek W; Manger, Ryan P; Kim, Gwe-Ya; Leveson, Nancy G
2016-03-01
Both humans and software are notoriously challenging to account for in traditional hazard analysis models. The purpose of this work is to investigate and demonstrate the application of a new, extended accident causality model, called systems theoretic accident model and processes (STAMP), to radiation oncology. Specifically, a hazard analysis technique based on STAMP, system-theoretic process analysis (STPA), is used to perform a hazard analysis. The STPA procedure starts with the definition of high-level accidents for radiation oncology at the medical center and the hazards leading to those accidents. From there, the hierarchical safety control structure of the radiation oncology clinic is modeled, i.e., the controls that are used to prevent accidents and provide effective treatment. Using STPA, unsafe control actions (behaviors) are identified that can lead to the hazards, as well as causal scenarios that can lead to the identified unsafe control. This information can be used to eliminate or mitigate potential hazards. The STPA procedure is demonstrated on a new online adaptive cranial radiosurgery procedure that omits the CT simulation step and uses CBCT for localization and planning, and a surface imaging system during treatment. The STPA procedure generated a comprehensive set of causal scenarios that are traced back to system hazards and accidents. Ten control loops were created for the new SRS procedure, which covered the areas of hospital and department management, treatment design and delivery, and vendor service. Eighty-three unsafe control actions were identified, as well as 472 causal scenarios that could lead to those unsafe control actions. STPA provides a method for understanding the role of management decisions and hospital operations on system safety and for generating process design requirements to prevent hazards and accidents. The interaction of people, hardware, and software is highlighted. The method of STPA produces results that can be used to improve safety and prevent accidents and warrants further investigation.
NASA Astrophysics Data System (ADS)
Liu, Eric; Ko, Akiteru; O'Meara, David; Mohanty, Nihar; Franke, Elliott; Pillai, Karthik; Biolsi, Peter
2017-05-01
Dimension shrinkage has been a major driving force in the development of integrated circuit processing over a number of decades. The Self-Aligned Quadruple Patterning (SAQP) technique is widely adopted for sub-10nm nodes in order to achieve the desired feature dimensions. This technique makes multiple pitch-halving from 193nm immersion lithography theoretically feasible by using various pattern transfer steps. The major concept of this approach is to create a spacer-defined self-aligned pattern from a single lithography print. By repeating the process steps, pitch division by two, four, or eight can theoretically be achieved. In these small architectures, line roughness control becomes extremely important since it may contribute a significant portion of process and device performance variations. In addition, the complexity of the SAQP processing flow makes roughness improvement indirect and ineffective. It is therefore necessary to find a new approach to improve the roughness of the current SAQP technique. In this presentation, we demonstrate a novel method to improve line roughness performance in a 30nm pitch SAQP flow. We find that the line roughness performance is strongly related to stress management. By selecting films of different stress levels to be deposited onto the substrate, we can manipulate the roughness performance in line and space patterns. In addition, the impact of the curvature change induced by the applied film stress on SAQP line roughness performance is also studied. No significant correlation is found between wafer curvature and line roughness performance. We discuss in detail the step-by-step physical performance of each processing step in terms of critical dimension (CD), critical dimension uniformity (CDU), line width roughness (LWR), and line edge roughness (LER). Finally, we summarize the process needed to reach full-wafer performance targets of LWR/LER of 1.07nm/1.13nm on a 30nm pitch line and space pattern.
Scholastic Guide to Balanced Reading 3-6: Making It Work for You.
ERIC Educational Resources Information Center
Baltas, Joyce, Ed.; Shafer, Susan, Ed.
Suggesting the need for a balance between literature and intentional skills instruction, this book provides grade 3-6 teachers and administrators with a theoretical base for creating a balanced reading program and gives educators a chance to step into actual classrooms where teachers have successfully implemented effective programs. Each chapter…
The Case for Sustainable Laboratories: First Steps at Harvard University
ERIC Educational Resources Information Center
Woolliams, Jessica; Lloyd, Matthew; Spengler, John D.
2005-01-01
Purpose: Laboratories typically consume 4-5 times more energy than similarly-sized commercial space. This paper adds to a growing dialogue about how to "green" a laboratory's design and operations. Design/methodology/approach: The paper is divided into three sections. The first section reviews the background and theoretical issues. A…
The Development of a Model of Culturally Responsive Science and Mathematics Teaching
ERIC Educational Resources Information Center
Hernandez, Cecilia M.; Morales, Amanda R.; Shroyer, M. Gail
2013-01-01
This qualitative theoretical study was conducted in response to the current need for an inclusive and comprehensive model to guide the preparation and assessment of teacher candidates for culturally responsive teaching. The process of developing a model of culturally responsive teaching involved three steps: a comprehensive review of the…
ERIC Educational Resources Information Center
Jackowski, Edward M.
1988-01-01
Discusses the role that information resource management (IRM) plays in educational program-oriented budgeting (POB), and presents a theoretical IRM model. Highlights include design considerations for integrated data systems; database management systems (DBMS); and how POB data can be integrated to enhance its value and use within an educational…
Inclusion in Education: A Step towards Social Justice
ERIC Educational Resources Information Center
Polat, Filiz
2011-01-01
This article discusses the theoretical relationships between inclusion in education and social justice. It draws on Martha Nussbaum's use of the capability approach, presented as one of the few philosophical and political theories that place disability/impairment in the social justice debate. The article goes on to present findings from the initial…
Barriers to College Access for Latino/a Adolescents: A Comparison of Theoretical Frameworks
ERIC Educational Resources Information Center
Gonzalez, Laura M.
2015-01-01
A comprehensive description of barriers to college access for Latino/a adolescents is an important step toward improving educational outcomes. However, relevant scholarship on barriers has not been synthesized in a way that promotes coherent formulation of intervention strategies or constructive scholarly discussion. The goal of this article is to…
Exploring and Implementing Participatory Action Research
ERIC Educational Resources Information Center
Savin-Baden, Maggi; Wimpenny, Katherine
2007-01-01
This paper explores the research method of participatory action research, first by examining the roots of this approach and then analysing the shift to using more participatory approaches than in former years. It begins by considering the reasoning and theoretical underpinning for adopting this approach and provides an overview of the steps to be…
Shaping the Educational Policy Field: "Cross-Field Effects" in the Chinese Context
ERIC Educational Resources Information Center
Yu, Hui
2018-01-01
This paper theorises how politics, economy and migrant population policies influence educational policy, utilising Bourdieusian theoretical resources to analyse the Chinese context. It develops the work of Lingard and Rawolle on cross-field effects and produces an updated three-step analytical framework. Taking the policy issue of the schooling of…
Classifying Correlation Matrices into Relatively Homogeneous Subgroups: A Cluster Analytic Approach
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Chan, Wai
2005-01-01
Researchers are becoming interested in combining meta-analytic techniques and structural equation modeling to test theoretical models from a pool of studies. Most existing procedures are based on the assumption that all correlation matrices are homogeneous. Few studies have addressed what the next step should be when studies being analyzed are…
Comparing an annual and daily time-step model for predicting field-scale P loss
USDA-ARS?s Scientific Manuscript database
Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...
Using Excel's Solver Function to Facilitate Reciprocal Service Department Cost Allocations
ERIC Educational Resources Information Center
Leese, Wallace R.
2013-01-01
The reciprocal method of service department cost allocation requires linear equations to be solved simultaneously. These computations are often so complex as to cause the abandonment of the reciprocal method in favor of the less sophisticated and theoretically incorrect direct or step-down methods. This article illustrates how Excel's Solver…
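The same simultaneous equations can be solved with any linear solver. A small illustrative example follows for two service departments; the cost figures and reciprocal usage percentages are invented for the sketch and are not drawn from the article.

```python
# Illustrative reciprocal-method allocation for two service departments,
# solved as simultaneous linear equations (the same system a spreadsheet
# solver would handle).  Costs and usage fractions are invented.
import numpy as np

direct_costs = np.array([100_000.0, 60_000.0])  # S1, S2 direct costs
# Each service department's total cost includes what it receives from the other:
#   S1_total = 100,000 + 0.20 * S2_total
#   S2_total =  60,000 + 0.10 * S1_total
A = np.array([[1.0, -0.20],
              [-0.10, 1.0]])
totals = np.linalg.solve(A, direct_costs)
print(dict(zip(["S1", "S2"], np.round(totals, 2))))  # reciprocal totals to allocate
```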
ERIC Educational Resources Information Center
Nye, Christopher D.; Drasgow, Fritz
2011-01-01
Because of the practical, theoretical, and legal implications of differential item functioning (DIF) for organizational assessments, studies of measurement equivalence are a necessary first step before scores can be compared across individuals from different groups. However, commonly recommended criteria for evaluating results from these analyses…
Continuing Challenges for a Systemic Theory of Gifted Education
ERIC Educational Resources Information Center
Schorer, Jorg; Baker, Joseph
2012-01-01
Ziegler and Phillipson make a strong case for the need to reconsider traditional models of gifted education. Although their evidence and argument are compelling, the reviewers argue that several additional steps are needed to justify the theoretical foundation of the theory in order to facilitate its evaluation by researchers. First, Ziegler and…
Steps to Promote Open and Authentic Dialogue between Teachers and School Management
ERIC Educational Resources Information Center
Klein, Joseph
2017-01-01
Purpose: School principals must determine educational policies and make information-based decisions. Teachers have authentic information that they do not transmit in full to the principals. A theoretical model was tested that explains the factors behind this disconnection in communication. Design: Four hundred and forty-five teachers completed…
First Steps in Character Education
ERIC Educational Resources Information Center
Hill, Patty Smith
2017-01-01
For many centuries, there have been theoretical statements asserting faith in character as the main objective of education, but it is only comparatively recently that enough has been known about character and the scientific conditions for its development to make an honored place for it in the curricula. Today, it is creeping into curricula under new…
Emphasizing Language and Visualization in Teaching Linear Algebra
ERIC Educational Resources Information Center
Hannah, John; Stewart, Sepideh; Thomas, Mike
2013-01-01
Linear algebra with its rich theoretical nature is a first step towards advanced mathematical thinking for many undergraduate students. In this paper, we consider the teaching approach of an experienced mathematician as he attempts to engage his students with the key ideas embedded in a second-year course in linear algebra. We describe his…
Rubrics--Sharing the Rules of the Game
ERIC Educational Resources Information Center
Balch, David; Blanck, Robert; Balch, David Howard
2016-01-01
The topic and purpose of this paper is to explore within the literature the theoretical foundations and applications of rubrics in the process of evaluation of retained learning and mastery of knowledge within the educational environment. The first step of the research was to assemble a definition of the term rubric from a historical perspective.…
How to Design, Analyze, and Write Doctoral or Masters Research. Second Edition.
ERIC Educational Resources Information Center
Balian, Edward S.
A practical guide to conducting and reporting graduate-level research projects outlines each step in the research process and discusses both practical and theoretical considerations. Chapter 1 addresses idea and topic development. Chapter 2 discusses the purposes, procedures, and sources for literature reviews and searches. In chapter 3, the…
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This algorithm makes data available for other computational tasks while processors are otherwise idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
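For reference, the serial Thomas algorithm that the pipelined and reformulated variants build on is sketched below. This is only the underlying single-line tridiagonal solve with its forward elimination and backward substitution sweeps; the parallel scheduling described in the report is not reproduced here.

```python
# Reference (serial) Thomas algorithm for a tridiagonal system
# a_i*x_{i-1} + b_i*x_i + c_i*x_{i+1} = d_i.  The pipelined variant in the
# report reorders these forward/backward sweeps across processors; this
# sketch shows only the basic single-line solve.
def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: 3x3 system with sub/main/super diagonals a=[0,1,1], b=[4,4,4], c=[1,1,0].
print(thomas([0, 1, 1], [4, 4, 4], [1, 1, 0], [5, 5, 5]))
```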
Unifying theory for terrestrial research infrastructures
NASA Astrophysics Data System (ADS)
Mirtl, Michael
2016-04-01
The presentation will elaborate on the basic steps needed for building a common theoretical base between Research Infrastructures focusing on terrestrial ecosystems. This theoretical base is needed for developing better cooperation and integration in the near future. An overview of different theories will be given and ways to a unifying approach explored. In a second step, more practical implications of a theory-guided integration will be developed along the following guiding questions: • How do the existing and planned European environmental RIs map onto a possible unifying theory of terrestrial ecosystems (covered structures and functions, scale; overlaps and gaps)? • Can a unifying theory improve the consistent definition of RIs' scientific scope and focal science questions? • How could a division of tasks between RIs be organized in order to minimize parallel efforts? • Where concretely do existing and planned European environmental RIs need to interact to respond to overarching questions (top-down component)? • What practical fora and mechanisms (across RIs) would be needed to bridge the gap between PI-driven (bottom-up) efforts and centralized RI design and operations?
NASA Astrophysics Data System (ADS)
Olea-Azar, C.; Abarca, B.; Norambuena, E.; Opazo, L.; Jullian, C.; Valencia, S.; Ballesteros, R.; Chadlaoui, M.
2008-11-01
The electron spin resonance (ESR) spectra of free radicals obtained by electrolytic reduction of triazolopyridyl pyridyl ketone and dipyridyl ketone derivatives were measured in dimethylsulfoxide (DMSO). The hyperfine patterns indicate that the spin density delocalization depends on the rings present in the molecule. The electrochemistry of these compounds was characterized using cyclic voltammetry in DMSO as solvent. When one carbonyl group is present in the molecule, one step in the reduction mechanism was observed, while when two carbonyl groups are present, two steps were detected. The first wave was assigned to the generation of the corresponding free radical species, and the second wave was assigned to the dianion derivatives. The phase-solubility measurements indicated an interaction between the selected molecules and cyclodextrins in water. These inclusion complexes are 1:1 with βCD and HP-βCD. The values of Ks showed different kinds of complexes depending on which rings are included. AM1 and DFT calculations were performed to obtain the optimized geometries, theoretical hyperfine constants, and spin distributions. The theoretical results are in complete agreement with the experimental ones.
Ethanol production from lignocellulosic byproducts of olive oil extraction.
Ballesteros, I; Oliva, J M; Saez, F; Ballesteros, M
2001-01-01
The recent implementation of a new two-step centrifugation process for extracting olive oil in Spain has substantially reduced water consumption, thereby eliminating oil mill wastewater. However, a new high sugar content residue is still generated. In this work the two fractions present in the residue (olive pulp and fragmented stones) were assayed as substrate for ethanol production by the simultaneous saccharification and fermentation (SSF) process. Pretreatment of fragmented olive stones by sulfuric acid-catalyzed steam explosion was the most effective treatment for increasing enzymatic digestibility; however, a pretreatment step was not necessary to bioconvert the olive pulp into ethanol. The olive pulp and fragmented olive stones were tested by the SSF process using a fed-batch procedure. By adding the pulp three times at 24-h intervals, 76% of the theoretical SSF yield was obtained. Experiments with fed-batch pretreated olive stones provided SSF yields significantly lower than those obtained with the standard SSF procedure. The preferred SSF conditions to obtain ethanol from olive stones (61% of theoretical yield) were 10% substrate and addition of cellulases at 15 filter paper units/g of substrate.
EEG-Neurofeedback as a Tool to Modulate Cognition and Behavior: A Review Tutorial
Enriquez-Geppert, Stefanie; Huster, René J.; Herrmann, Christoph S.
2017-01-01
Neurofeedback is attracting renewed interest as a method to self-regulate one’s own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed. PMID:28275344
Leong, Frederick T; Lee, Szu-Hui
2006-01-01
As an extension of F. T. L. Leong's (1996) integrative model, this article presents the cultural accommodation model (CAM), an enhanced theoretical guide to effective cross-cultural clinical practice and research. Whereas F. T. L. Leong's model identifies the importance of integrating the universal, group, and individual dimensions, the CAM takes the next step by providing a theoretical guide to effective psychotherapy with culturally different clients by means of a cultural accommodation process. This model argues for the importance of selecting and applying culture-specific constructs when working with culturally diverse groups. The first step of the CAM is to identify cultural disparities that are often ignored and then accommodate them by using current culturally specific concepts. In this article, several different cultural "gaps" or culture-specific constructs of relevance to Asian Americans with strong scientific foundations are selected and discussed as they pertain to providing effective psychotherapy to this ethnic minority group. Finally, a case study is incorporated to illustrate application of the CAM. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
One Giant Leap for Categorizers: One Small Step for Categorization Theory
Smith, J. David; Ell, Shawn W.
2015-01-01
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587
An information theory analysis of spatial decisions in cognitive development
Scott, Nicole M.; Sera, Maria D.; Georgopoulos, Apostolos P.
2015-01-01
Performance in a cognitive task can be considered as the outcome of a decision-making process operating across various knowledge domains or aspects of a single domain. Therefore, an analysis of these decisions in various tasks can shed light on the interplay and integration of these domains (or elements within a single domain) as they are associated with specific task characteristics. In this study, we applied an information theoretic approach to assess quantitatively the gain of knowledge across various elements of the cognitive domain of spatial, relational knowledge, as a function of development. Specifically, we examined changing spatial relational knowledge from ages 5 to 10 years. Our analyses consisted of a two-step process. First, we performed a hierarchical clustering analysis on the decisions made in 16 different tasks of spatial relational knowledge to determine which tasks were performed similarly at each age group as well as to discover how the tasks clustered together. We next used two measures of entropy to capture the gradual emergence of order in the development of relational knowledge. These measures of “cognitive entropy” were defined based on two independent aspects of chunking, namely (1) the number of clusters formed at each age group, and (2) the distribution of tasks across the clusters. We found that both measures of entropy decreased with age in a quadratic fashion and were positively and linearly correlated. The decrease in entropy and, therefore, gain of information during development was accompanied by improved performance. These results document, for the first time, the orderly and progressively structured “chunking” of decisions across the development of spatial relational reasoning and quantify this gain within a formal information-theoretic framework. PMID:25698915
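A short sketch of the two "cognitive entropy" measures described, namely the number of clusters formed and the distribution of tasks across clusters, is given below. Interpreting the second measure as the Shannon entropy of the cluster-size distribution is our own reading; the labels for the 16 tasks are invented for illustration.

```python
# Sketch of entropy measures over a clustering of 16 tasks: the number of
# clusters and the Shannon entropy of the task-per-cluster distribution.
# The study's exact definitions may differ; this is illustrative only.
import math
from collections import Counter

def cluster_entropy(cluster_labels):
    """Shannon entropy (bits) of how tasks are distributed across clusters."""
    counts = Counter(cluster_labels)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical labels: many small clusters at a younger age, fewer and larger
# clusters at an older age (lower entropy, more "chunked" decisions).
age5  = [0, 1, 2, 3, 4, 5, 5, 6, 6, 7, 7, 0, 1, 2, 3, 4]
age10 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2]
print(len(set(age5)),  round(cluster_entropy(age5), 2))
print(len(set(age10)), round(cluster_entropy(age10), 2))
```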
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background: In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating the FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings: The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a compound database with fewer but more complete entries. Conclusions/Significance: High-accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
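The following toy Monte Carlo simulation illustrates the underlying intuition: random ("incorrect") query masses searched against a mass database within a ppm tolerance still produce matches, and that chance-hit rate is what inflates the FDR. The database size, tolerance, and query distribution are invented; the published method uses a more elaborate four-parameter model rather than this direct simulation.

```python
# Toy Monte Carlo illustration of chance hits in a mass-based database search.
# All numbers are invented; this is not the paper's four-parameter FDR model.
import bisect
import random

random.seed(0)
db_masses = sorted(random.uniform(100.0, 1000.0) for _ in range(50_000))
tol_ppm = 5.0

def n_hits(query_mass: float) -> int:
    """Number of database masses within +/- tol_ppm of the query mass."""
    tol = query_mass * tol_ppm * 1e-6
    lo = bisect.bisect_left(db_masses, query_mass - tol)
    hi = bisect.bisect_right(db_masses, query_mass + tol)
    return hi - lo

# Random ("incorrect") query masses: the fraction that still find a match
# approximates the chance-hit rate that feeds the FDR estimate.
queries = [random.uniform(100.0, 1000.0) for _ in range(2_000)]
chance_hit_rate = sum(n_hits(q) > 0 for q in queries) / len(queries)
print(f"chance hit rate ≈ {chance_hit_rate:.2%}")
```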
Søndergaard, Anders Aspegren; Shepperson, Benjamin; Stapelfeldt, Henrik
2017-07-07
We present an efficient, noise-robust method based on Fourier analysis for reconstructing the three-dimensional measure of the alignment degree, ⟨cos²θ⟩, directly from its two-dimensional counterpart, ⟨cos²θ_2D⟩. The method applies to nonadiabatic alignment of linear molecules induced by a linearly polarized, nonresonant laser pulse. Our theoretical analysis shows that the Fourier transform of the time-dependent ⟨cos²θ_2D⟩ trace over one molecular rotational period contains additional frequency components compared to the Fourier transform of ⟨cos²θ⟩. These additional frequency components can be identified and removed from the Fourier spectrum of ⟨cos²θ_2D⟩. By rescaling of the remaining frequency components, the Fourier spectrum of ⟨cos²θ⟩ is obtained and, finally, ⟨cos²θ⟩ is reconstructed through inverse Fourier transformation. The method allows the reconstruction of the ⟨cos²θ⟩ trace from a measured ⟨cos²θ_2D⟩ trace, which is the typical observable of many experiments, and thereby provides direct comparison to calculated ⟨cos²θ⟩ traces, which is the commonly used alignment metric in theoretical descriptions. We illustrate our method by applying it to the measurement of nonadiabatic alignment of I₂ molecules. In addition, we present an efficient algorithm for calculating the matrix elements of cos²θ_2D and any other observable in the symmetric top basis. These matrix elements are required in the rescaling step, and they allow for highly efficient numerical calculation of ⟨cos²θ_2D⟩ and ⟨cos²θ⟩ in general.
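A structural sketch of the reconstruction pipeline is given below: Fourier transform the measured ⟨cos²θ_2D⟩ trace over one rotational period, remove the extra frequency components, rescale the remaining ones, and inverse transform. Which bins to remove and the per-component rescaling factors come from the paper's analysis and are represented here only by placeholder arguments, so this shows the flow of the method rather than its quantitative content.

```python
# Schematic outline only: FFT -> drop extra components -> rescale -> inverse FFT.
# The bins to drop and the rescaling factors are placeholders (they must come
# from the paper's analysis of the rotational Raman couplings).
import numpy as np

def reconstruct_cos2theta(cos2_2d: np.ndarray, extra_bins, scale) -> np.ndarray:
    spec = np.fft.rfft(cos2_2d)
    spec[list(extra_bins)] = 0.0   # remove components absent from <cos^2 theta>
    spec = spec * scale            # rescale surviving components (placeholder)
    return np.fft.irfft(spec, n=len(cos2_2d))

# Hypothetical usage:
# cos2_3d = reconstruct_cos2theta(measured_trace, extra_bins=[3, 7], scale=1.5)
```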
Petri net modelling of gene regulation of the Duchenne muscular dystrophy.
Grunwald, Stefanie; Speer, Astrid; Ackermann, Jörg; Koch, Ina
2008-05-01
Searching for therapeutic strategies for Duchenne muscular dystrophy, it is of great interest to understand completely the responsible molecular pathways downstream of dystrophin. For this reason we have performed real-time PCR experiments to compare mRNA expression levels of relevant genes in tissues of affected patients and controls. To bring experimental data into context with the underlying pathway, theoretical models are needed. Modelling of biological processes in the cell at higher description levels is still an open problem in the field of systems biology. In this paper, a new application of Petri net theory is presented to model gene regulatory processes of Duchenne muscular dystrophy. We have developed a Petri net model, which is based mainly on our own experimental data and literature data. We distinguish between up- and down-regulated states of gene expression. The analysis of the model comprises the computation of structural and dynamic properties with a focus on a thorough T-invariant analysis, including clustering techniques and the decomposition of the network into maximal common transition sets (MCT-sets), which can be interpreted as functionally related building blocks. All possible pathways, which reflect the complex net behaviour in dependence on different gene expression patterns, are discussed. We introduce Mauritius maps of T-invariants, which enable, for example, theoretical knockout analysis. The resulting model serves as a basis for a better understanding of pathological processes, and thereby for planning the next experimental steps in searching for new therapeutic possibilities. The Petri net editor and animator Snoopy and the clustering tool PInA are freely available via http://www-dssz.informatik.tu-cottbus.de/~ wwwdssz/. The Petri net models used can be accessed via http://www.tfh-berlin.de/bi/duchenne/.
Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing
Matochko, Wadim L.; Derda, Ratmir
2013-01-01
Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||n_i||, where n_i is the copy number of the i-th sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (S_a). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of S_a and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = S_a I_N, where I_N is the N × N identity matrix. Any bias in sequencing changes I_N to a non-identity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
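A toy illustration of this operator picture follows: a frequency vector of copy numbers, a stochastic sampling step realized as a multinomial draw, and a diagonal censorship matrix that suppresses specific sequences. The library size, sampling depth, and censored indices are invented, and the multinomial realization of S_a is our own simplification of the framework.

```python
# Toy illustration of the linear-operator picture: frequency vector n,
# a stochastic sampling step (multinomial draw), and a diagonal censorship
# matrix CEN that removes specific sequences.  All sizes are invented.
import numpy as np

rng = np.random.default_rng(1)
N = 10                                              # toy "theoretical diversity"
n = rng.integers(0, 1000, size=N).astype(float)     # copy numbers n_i

def sample(freq: np.ndarray, depth: int) -> np.ndarray:
    """Stochastic sampling of `depth` reads from the library (simplified S_a)."""
    return rng.multinomial(depth, freq / freq.sum()).astype(float)

CEN = np.eye(N)
CEN[[2, 7], [2, 7]] = 0.0                           # sequences 2 and 7 are censored

reads = CEN @ sample(n, depth=5000)                 # censored sequencing output
print(reads)
```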
a Weighted Local-World Evolving Network Model Based on the Edge Weights Preferential Selection
NASA Astrophysics Data System (ADS)
Li, Ping; Zhao, Qingzhen; Wang, Haitang
2013-05-01
In this paper, we use the edge-weight preferential attachment mechanism to build a new local-world evolutionary model for weighted networks. Unlike previous papers, the local world of our model consists of edges instead of nodes. At each time step, we connect a new node to two existing nodes in the local world through edge-weight preferential selection. Theoretical analysis and numerical simulations show that the scale of the local world affects the weight distribution, the strength distribution and the degree distribution. We give simulations of the clustering coefficient and the dynamics of infectious disease spreading. The weight dynamics of our network model can portray the structure of realistic networks such as the neural network of the nematode C. elegans and an online social network.
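One possible reading of the growth rule is sketched below: at each time step a local world of a few existing edges is drawn at random, an edge is selected with probability proportional to its weight, and the new node is connected to that edge's two endpoints. The local-world size, the seed network, and the weight-reinforcement rule after attachment are assumptions made for illustration, not the model's published update equations.

```python
# Sketch of an edge-weight preferential, local-world growth rule.
# Seed network, local-world size M, and the weight update are assumptions.
import random

edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}  # seed triangle, unit weights
M = 2                                             # local-world size (in edges)

def grow(new_node: int) -> None:
    local = random.sample(list(edges.items()), min(M, len(edges)))
    total = sum(w for _, w in local)
    r, acc = random.uniform(0, total), 0.0
    for i, ((u, v), w) in enumerate(local):
        acc += w
        if r <= acc or i == len(local) - 1:       # edge-weight preferential pick
            edges[(u, new_node)] = 1.0            # connect new node to both endpoints
            edges[(v, new_node)] = 1.0
            edges[(u, v)] = w + 1.0               # simplified weight reinforcement
            return

for t in range(3, 50):
    grow(t)
print(len(edges), "edges after growth")
```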
Is Ego Depletion Real? An Analysis of Arguments.
Friese, Malte; Loschelder, David D; Gieseler, Karolin; Frankenbach, Julius; Inzlicht, Michael
2018-03-01
An influential line of research suggests that initial bouts of self-control increase the susceptibility to self-control failure (the ego depletion effect). Despite seemingly abundant evidence, some researchers have suggested that evidence for ego depletion was the sole result of publication bias and p-hacking, with the true effect being indistinguishable from zero. Here, we examine (a) whether the evidence brought forward against ego depletion will convince a proponent that ego depletion does not exist and (b) whether arguments that could be brought forward in defense of ego depletion will convince a skeptic that ego depletion does exist. We conclude that despite several hundred published studies, the available evidence is inconclusive. Additional empirical and theoretical work is needed to make a compelling case for either side of the debate. We discuss necessary steps for future work toward this aim.
[Systematization of nursing care in urgency and emergency services: feasibility of implementation].
Maria, Monica Antonio; Quadros, Fátima Alice Aguiar; Grassi, Maria de Fátima Oliveira
2012-01-01
This study analyzes the feasibility of implementing Nursing Care Systematization (NCS) in a hospital urgency and emergency department. This is a descriptive, qualitative field study structured according to the content analysis described by Bardin (2009). It was performed in a hospital specialized in emergency care. The sample consisted of eight practical nurses, five nurses and two assistants, all of them with at least six months of experience in the emergency room. The difficulties reported regarding the implementation of the NCS are: the complexity of its steps; lack of interest from the institution; theoretical unpreparedness of the nursing staff; its devaluation by other professionals; inadequate staffing levels; and inadequacy of the hospital's physical structure. In this context, it was noted that the nurse loses representation in the health team and that the application of the NCS often turns out to be underestimated.
Electron path control of high-order harmonic generation by a spatially inhomogeneous field
NASA Astrophysics Data System (ADS)
Mohebbi, Masoud; Nazarpoor Malaei, Sakineh
2016-04-01
We theoretically investigate the control of the high-order harmonic cut-off and attosecond pulse generation by a chirped laser field using a metallic bow-tie-shaped nanostructure. The numerical results show that the trajectories of the electron wave packet are strongly modified, the short quantum path is enhanced, the long quantum path is suppressed and the low-modulation spectrum of the harmonics can be remarkably extended. Our calculated results also show that, by confining the electron motion, a broadband supercontinuum with a width of 1670 eV can be produced, which directly generates an isolated 34 as pulse without phase compensation. To explore the underlying mechanism responsible for the cut-off extension and the quantum path selection, we perform time-frequency analysis and a classical simulation based on the three-step model.
Toward a methodology for moral decision making in medicine.
Kushner, T; Belliotti, R A; Buckner, D
1991-12-01
The failure of medical codes to provide adequate guidance for physicians' moral dilemmas points to the fact that some rules of analysis, informed by moral theory, are needed to assist in resolving perplexing ethical problems occurring with increasing frequency as medical technology advances. Initially, deontological and teleological theories appear more helpful, but criticisms can be lodged against both, and neither proves to be sufficient in itself. This paper suggests that to elude the limitations of previous approaches, a method of moral decision making must be developed incorporating both coherence methodology and some independently supported theoretical foundations. Wide Reflective Equilibrium is offered, and its process described along with a theory of the person which is used to animate the process. Steps are outlined to be used in the process, leading to the application of the method to an actual case.
Gravitational lensing by clusters of galaxies - Constraining the mass distribution
NASA Technical Reports Server (NTRS)
Miralda-Escude, Jordi
1991-01-01
The possibility of placing constraints on the mass distribution of a cluster of galaxies by analyzing the cluster's gravitational lensing effect on the images of more distant galaxies is investigated theoretically in the limit of weak distortion. The steps in the proposed analysis are examined in detail, and it is concluded that detectable distortion can be produced by clusters with line-of-sight velocity dispersions of over 500 km/sec. Hence it should be possible to determine (1) the cluster center position (with accuracy equal to the mean separation of the background galaxies), (2) the cluster-potential quadrupole moment (to within about 20 percent of the total potential if velocity dispersion is 1000 km/sec), and (3) the power law for the outer-cluster density profile (if enough background galaxies in the surrounding region are observed).
TiO2-catalyzed synthesis of sugars from formaldehyde in extraterrestrial impacts on the early Earth
Civiš, Svatopluk; Szabla, Rafał; Szyja, Bartłomiej M.; Smykowski, Daniel; Ivanek, Ondřej; Knížek, Antonín; Kubelík, Petr; Šponer, Jiří; Ferus, Martin; Šponer, Judit E.
2016-01-01
Recent synthetic efforts aimed at reconstructing the beginning of life on our planet point at the plausibility of scenarios fueled by extraterrestrial energy sources. In the current work we show that beyond nucleobases the sugar components of the first informational polymers can be synthesized in this way. We demonstrate that a laser-induced high-energy chemistry combined with TiO2 catalysis readily produces a mixture of pentoses, among them ribose, arabinose and xylose. This chemistry might be highly relevant to the Late Heavy Bombardment period of Earth’s history about 4–3.85 billion years ago. In addition, we present an in-depth theoretical analysis of the most challenging step of the reaction pathway, i.e., the TiO2-catalyzed dimerization of formaldehyde leading to glycolaldehyde. PMID:26979666